Sampling for Studying Context: Traditional Surveys and New Directions

Abstract and Keywords

Using the example of Ohio and its media markets, this chapter discusses the geographic distribution of respondents resulting from alternative sampling schemes. Traditional survey research designs for gathering information on voter attitudes and behavior usually ignore variability in context in favor of representation of a target population. When sample sizes are large, these polls also provide reasonably accurate estimates for focal subgroups of the electoral population. As the examples here show, conventional polls frequently lack the variations in geographic context likely to matter most to understanding social environments and the interdependence among voters, limiting variation on such continua as urban and rural, economic equality and inequality, occupational differences, exposure to physical environmental conditions, and a variety of other factors that exhibit spatial variation. The chapter calls for more surveys that represent exposure to a broader range of social and physical environments than researchers have produced up to now.

Keywords: survey research, sampling schemes, sample size, polls, Ohio, social environments, voters

Introduction

Many advances in the quantity and availability of information have social science researchers reconsidering aspects of research design that were once considered either settled or without serious alternatives. Sampling from a population is one of these areas, in which alternatives to simple random sampling and its common variants have emerged, along with the technology to implement them. In this chapter I discuss sampling designs in which subjects’ variable levels of exposure to relatively fixed aspects of geographic space are considered important to the research. In these circumstances a random sample focused solely on representing a target population will not be sufficient to meet the researcher’s goals. Traditional sampling will certainly be faithful to the density of the population distribution, concentrating sampled subjects in highly populated areas. For research that also requires spatial coverage to represent socioeconomic spaces, however, common surveys are not the best option, even though they have been widely used in the absence of better designs (Makse, Minkoff, and Sokhey 2014; Johnston, Harris, and Jones 2007).

Not every survey is well-suited to testing hypotheses about context.1 Not that long ago political scientists and sociologists made creative attempts to use the American National Election Study (ANES) or the General Social Survey (GSS) to reason about context, while knowing that their sample designs did not represent a very broad range of contexts (Giles and Dantico 1982; MacKuen and Brown 1987; Firebaugh and Schroeder 2009). In the design for the ANES, as in the GSS, administrative costs are vastly reduced by adopting sampling strategies clustered in metropolitan areas, largely ignoring lightly populated nonmetro locations. Resulting studies commonly found respondents to be residing in less than one-fifth of the nation’s counties and in a limited range of more granular “neighborhood” areas such as census tracts or block groups (Firebaugh and Schroeder 2009). Because appropriate data drawn from alternative designs were scarce, these surveys were commonly accepted as the best, and sometimes only, data available, and there was no reporting, not even in a note or an appendix, on how well or poorly they captured the diversity of contexts or living conditions. As for results, sometimes context effects appeared and sometimes they did not, but one has to wonder how many Type II errors, or false negatives, occurred because of the paucity of sample points in locations that would have added contextual variability. Publication bias against null findings ensured that many of these investigations never surfaced in journals.

The basic resource deficit social scientists have faced for years is that conventional survey sampling techniques do not yield the number of subjects necessary to estimate effects of exposure to stimuli exhibiting geographic variation. Geographic contexts that are represented are limited to those that underlie the population distribution, which lack important elements of variability. Consequently, the application of random samples, or more typically, random samples modified slightly by miscellaneous strata, has led to errors in the estimates of numerous contextual variables and incorrect conclusions regarding the substantive effect of these variables in regression models. What is called for is a sampling strategy that represents not only the population, but also the variation in the inhabited environments hypothesized to influence the outcomes of interest. The key is to allocate sampling effort so as to provide spatial balance to accommodate the need to estimate exposure to environmental and geographic stimuli even in areas that are less densely populated. Sometimes we need to represent places, in addition to people.

Location-Dependent Nature of Opinion Formation and Socialization

It is not a new idea in social science research that natural environments matter to opinion formation and behavior in significant domains of judgment and decision-making. A person’s exposure to a hazardous waste dump, a nuclear power plant, an extensive wildfire, or a devastating hurricane matters greatly to his or her formation of opinions about it. This is because risk assessment is distance dependent, with subjective levels of concern varying with citizens’ degree of vulnerability to the hazard (Brody et al. 2008; Larson and Santelmann 2007; Lindell and Perry 2004; Lindell and Earle 1983). Communications scholars have likewise found that perceived susceptibility is an important general cue in the processing of threatening news from media sources, and that the proximity of the particular threat is a key component of perceived susceptibility (Wise et al. 2009, 271).

One need not be studying exposure to environmental hazards, weather-related catastrophes, or other location-specific characteristics of the natural environment to see how distance from the stimulus matters greatly to one’s reaction to it. Social and political environments, while highly diverse across space, are also relatively stable, not in the fixed sense in which a mountain range or a hurricane’s path of destruction is, but in the sense that social settings typically change very slowly, over years and even decades (Downey 2006). Political scientists have noted that the socializing forces to which people are regularly exposed typically do not exhibit wild volatility in their climates of political opinion, but maintain stability over long periods (Berelson, Lazarsfeld, and McPhee 1954, 298; Campbell et al. 1960; Huckfeldt and Sprague 1995). In this manner, the same places produce similar political outcomes across several generations, even as conditions elsewhere may change. Genes are inherited, but so also are environments, meanings, and outlooks. Indeed, it would be surprising to learn that socioeconomic environments did not shape opinions and viewpoints to some degree. Patterns of political partisanship and opinion across localities in the 1930s predict partisanship and opinion in those same places in the 2000s and 2010s remarkably well. Habits of allegiance to parties continue for years, long after the original cause of allegiance to those parties has been forgotten. In this manner, the content of partisan and ideological labels may change over time, but the partisan balance of identifiers will stay much the same even though new citizens enter and exit the local electorate through generational replacement and migration (Merriam and Gosnell 1929, 26–27; Miller 1991; Green and Yoon 2002; Kolbe 1975). Apparently exposure to the stable socializing forces abiding in a “neighborhood” or place influences political outlook, whereas distance from them weakens the impression they make.

Although many sources of political learning and socialization are not local in their ultimate origin, they may still be moderated or mediated by contextual forces measured at various levels of geography (Reeves and Gimpel 2012). Through the process of biased information flow and filtering, places exert a socializing influence. But influential communication is not always so direct, as there is also the indirect process of “social absorption” whereby individuals learn what is considered to be normal and appropriate through observation and imitation. This process is also described as a neighborhood influence or referred to as exercising a “neighborhood effect.” In the context of political socialization literature, the idea is that what citizens know and learn about politics is influenced by local settings and the social interactions within them and is reinforced by repetition and routine (Huckfeldt 1984; Jencks and Mayer 1990). Importantly, a neighborhood effect is an independent causal impact of the local context on any number of outcomes, controlling for individual attributes (Oakes 2004). The idea is that all other things being equal, the residents of some locations will behave differently because of the characteristics of their locations (Spielman, Yoo, and Linkletter 2013; Sampson, Morenoff, and Gannon-Rowley 2002). When it comes to surveying populations for conducting studies on neighborhood effects and politics, there is reason to question whether traditional sampling strategies are useful for capturing the variation in environmental exposure theorized to have a causal impact on opinion formation, judgment, and behavior (Cutler 2007; Kumar 2007; Johnston, Harris, and Jones 2007).

For practitioners studying political behavior from the standpoint of campaign politics, the emergent body of experimental research makes clear that local political environments are widely believed to matter even to short-term electoral movements. After all, it is the local political environments that these researchers are attempting to manipulate. Even if a social scientist could measure every individual personal trait, including all of them in an explanatory model violates principles of parsimony and commits the atomistic fallacy by presuming that only individual factors can be causal (Huckfeldt 2014, 47). In addition, it may well be that social and institutional causes of behavior, those originating in communities or environments, are more amenable to “policy” or campaign intervention designed to persuade voters or to stimulate higher turnout. Changing someone’s personality or fundamental psychological orientation toward politics may not be within the capacity of any campaign. But it is certainly possible to alter a voter’s information environment or try other stimuli and communications that might marginally increase turnout or persuade some share of voters to vote differently than they would otherwise.

In summary, traditional survey research designs for gathering information on voter attitudes and behavior usually ignore variability in context in favor of representation of a target population. This is understandable given that the usual goal is to forecast elections, and an accurate measure of the horse race is taken to be the standard for quality polling. Moreover, through some variation of stratified random sampling, survey research has become adept at forecasting elections within a few points. Even the much criticized surveys during the 2012 and 2014 U.S. general elections proved to be accurate when they were combined and averaged to balance out the different information sets derived from slightly varying methods (Graefe et al. 2014). When sample sizes are large, these polls also provide reasonably accurate estimates for focal subgroups of the electoral population. In the very act of achieving those goals, however, scholars frequently eliminate the variations in geographic context that are likely to matter most to understanding social environments and the interdependence among voters, limiting variation on such continua as urban and rural, economic equality and inequality, occupational differences, exposure to physical environmental conditions (e.g., water scarcity, pollution), and a variety of others.

Examining the Spatial Distribution of a Simple Random Sample

Suppose that the frame for social scientific research were the state of Ohio’s registered voter population. What if we were to use a typically sized random sample to study contextual effects on these voters? Just how well would that design work? We might begin by drawing a pollster’s typical random sample of, say, one thousand respondents to survey from the state’s file of registered voters. Of course, to be faithful to the real world, one would start by drawing more than one thousand, since many of those we would attempt to contact would refuse to cooperate or would otherwise fail to respond.2 For purposes of this chapter, we ignore that practical necessity and confine consideration to the initial one thousand cases.
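
To make the setup concrete, the following minimal sketch, written in Python with pandas, shows how such an equal-probability draw might be carried out. The file name oh_voter_file.csv and its columns are hypothetical stand-ins for a flat extract of the state voter file; this is an illustration, not the chapter’s actual code.

```python
import pandas as pd

# Hypothetical flat extract of the Ohio registered-voter file,
# one row per registrant, with county and coordinate columns.
voters = pd.read_csv("oh_voter_file.csv")

# Equal-probability simple random sample of one thousand registrants.
# In practice one would draw many times this number to offset refusals
# and other nonresponse (see note 2); that step is ignored here.
srs = voters.sample(n=1000, random_state=42)
```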

The geographic distribution of the one thousand cases drawn from the Ohio voter file from spring 2013 is shown in figure 5.1, with the state’s major cities displayed in gray outline and the larger counties also labeled. Predictably, the sample concentrates points in exactly the locations we would expect if we were trying to represent the voter population of the state: in the three major metropolitan areas running diagonally from southwest to northeast, Cincinnati, Columbus, and Cleveland, respectively. The black ellipse on the map summarizes the one standard deviation directional dispersion of the sampled points around their geographic center. What the ellipse shows is that this typical random sample achieves very good representation of the geographic distribution of the state’s electorate. Summary tabulations show that 7%, 10.1%, 11.6%, and 3.9% of all registered voters from the state’s voter file reside in Hamilton (Cincinnati), Franklin (Columbus), Cuyahoga (Cleveland), and Lucas (Toledo) Counties, respectively. In turn, 7.8%, 10.4%, 12%, and 4% of the simple random sample from figure 5.1 fell within these four large counties, certainly an acceptably close reflection of the true population proportions.
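
Continuing under the same hypothetical file and column names (county, longitude, latitude), the next sketch shows one common way to compute the summaries just described: the sample-versus-population county shares and a one-standard-deviation directional ellipse derived from the mean center and the principal axes of the coordinate covariance matrix.

```python
import numpy as np
import pandas as pd

voters = pd.read_csv("oh_voter_file.csv")       # as in the previous sketch
srs = voters.sample(n=1000, random_state=42)

# Sample vs. population shares for the four largest counties.
for county in ["HAMILTON", "FRANKLIN", "CUYAHOGA", "LUCAS"]:
    pop_share = (voters["county"] == county).mean()
    smp_share = (srs["county"] == county).mean()
    print(f"{county:>10}: population {pop_share:.1%}, sample {smp_share:.1%}")

# Standard deviational ellipse: mean center plus the eigen-decomposition
# of the coordinate covariance matrix; the eigenvectors give the ellipse
# orientation, the square-rooted eigenvalues its one-sd semi-axis lengths.
xy = srs[["longitude", "latitude"]].to_numpy()
center = xy.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(xy, rowvar=False))
semi_axes = np.sqrt(eigvals)                                    # minor, major
major_axis_angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
```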

Simple random samples are important for undergirding statistical theory but are rarely utilized in the practice of survey research, for well-known reasons detailed elsewhere in this volume and in reputable texts (Levy and Lemeshow 2008; Bradburn and Sudman 1988; Kish 1965). One drawback is that a simple random sample, selected on the equal-probability-of-selection principle, may not provide enough cases with a particular attribute to permit analysis. More commonly, random sampling occurs within some subgroups identified by researchers before the sample is drawn, according to reasonable and compelling strata, and sometimes in more than one stage.

Across the social sciences, usually the strata chosen for surveys are attributes of individuals, such as their race, income, age group, or education level. By first stratifying into educational subgroups, for example, one can ensure that enough high school dropouts, or college graduates with two-year degrees, are included to permit comparison with more common levels of educational attainment. When stratified random sampling is sensitive to location, it is usually in the form of an area probability sample in which selected geographic units are randomly drawn with probabilities proportionate to estimated populations, and then households are drawn from these units on an equal probability basis (Cochran 1963; Kish 1965; Sudman and Blair 1999). Ordinarily the point of such sampling schemes is not to estimate the impact of contextual variation on opinion formation within the structure of localities. The resulting samples are geographically clustered in a limited number of locations to reduce costs. As Johnston and his colleagues (2007) have convincingly demonstrated, stratified samples may ensure a nationally representative survey of voters (after weighting) but do not ensure a representative sample of the varied socioeconomic contexts within a state or nation.
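
The two-stage logic just described can be sketched in the same hypothetical setting: a first-stage draw of counties with probability proportionate to their estimated populations, followed by an equal-probability draw of registrants within each selected county. The unit counts and within-unit sample sizes below are illustrative assumptions, not values from any actual design.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
voters = pd.read_csv("oh_voter_file.csv")        # hypothetical frame, as before

# First stage: select counties with probability proportionate to size (PPS).
county_sizes = voters["county"].value_counts()
selected = rng.choice(
    county_sizes.index.to_numpy(),
    size=10,                                      # illustrative number of first-stage units
    replace=False,
    p=(county_sizes / county_sizes.sum()).to_numpy(),
)

# Second stage: equal-probability draw of registrants within each selected county.
two_stage = pd.concat(
    [voters[voters["county"] == c].sample(n=100, random_state=42) for c in selected]
)
```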

Figure 5.1 Spatial Distribution of Simple Random Sample of Registered Voters from Ohio Voter File, 2013.

Better representation of localities is important. With respect to campaign intensity, it is well-recognized that parties and candidates invest far more effort in some places than in others. One means for capturing some of this variability in resource allocation is to measure spatial variation in exposure to campaign stimuli by media market area. For purposes of purchasing advertising, the A. C. Nielsen Company has divided the nation into designated market areas (DMAs) representing the loci out of which television and radio stations grouped in a region broadcast to a surrounding population. With only a couple of exceptions, Nielsen uses the nation’s counties to segment the country into mutually exclusive and exhaustive market regions. Advertisers, including businesses and political campaigns, use these market boundaries to guide the planning and purchasing of broadcast advertising.3

Ohio is presently divided into twelve DMAs, three of which are centered in other states: two in the southeast emanating from Charleston-Huntington and Parkersburg, West Virginia, and a third in the northwest, centered in Fort Wayne, Indiana, and extending across the border to encompass two rural Ohio counties (see figure 5.2). By using the DMAs as strata, social science researchers can ensure that no media markets go entirely unrepresented in a survey. With simple random sampling, it is possible that no cases would be drawn from the markets that are small in population.

Proportional Allocation of a Sample

To avoid the possibility that some market areas wind up without any cases at all, stratifying the sample allocation by DMA makes sense as an initial step. Allocating the total sample proportionally is then straightforward: if the largest media market contains 50% of a state’s voters, then a total sample of one thousand would allocate five hundred survey respondents to that stratum. In Ohio, about 34% of voters reside in the Cleveland media market, so stratifying by media market and allocating the sample proportionally should position approximately one-third of sample members within that market. One such sample is shown in figure 5.3, which places 336 sample points in the Cleveland area, with the sample populations in other DMAs also closely proportional to their share of the Ohio voter population. The three major media markets, Cleveland, Columbus, and Cincinnati, are home to 67.8% of the total sample shown in figure 5.3.
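
A minimal sketch of that proportional allocation, again assuming a hypothetical dma column in the same voter file, follows; the Cleveland market’s roughly one-third share translates into roughly 340 of the 1,000 interviews.

```python
import pandas as pd

voters = pd.read_csv("oh_voter_file.csv")        # hypothetical frame with a "dma" column
total_n = 1000

# Each DMA's share of registrants, converted into a proportional allocation.
shares = voters["dma"].value_counts(normalize=True)
alloc = (shares * total_n).round().astype(int)   # e.g., roughly 340 for the Cleveland DMA

# Simple random sample within each DMA stratum, of the allocated size.
stratified = pd.concat(
    [grp.sample(n=alloc[dma], random_state=42) for dma, grp in voters.groupby("dma")]
)
```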

In practice, the results of a stratified sample may not look much different from those of a simple random sample, but stratification with proportional allocation ensures that at least a few voters will be drawn from each of the twelve DMAs. The standard deviational ellipse shown in figure 5.3 for the proportionally allocated sample shows slightly more sensitivity to smaller DMAs than the simple random sample in figure 5.1. Note that the proportionally allocated sample is less pulled in the direction of the sizable Cleveland area DMA and is sensitive to the cases in the smaller DMAs in western Ohio. Ideally the greater sensitivity to the smaller DMAs would permit us to obtain estimates from some strata that a simple random sample would ignore. Several of the DMAs are very small, however, and the sample size of one thousand remains too modest to represent them adequately. The Fort Wayne DMA contains only four sample points (figure 5.3), and three others still contain fewer than ten, far too few for adequate analysis. This is a clear reminder that the total sample size would need to be substantially larger than one thousand in order to obtain more confident estimates of the means for these strata under proportional allocation. This helps explain why many polls remain inadequate for testing contextual effects even under conditions of stratified sampling by geographic units, whatever those geographic units happen to be (Johnston, Harris, and Jones 2007).
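
The arithmetic behind that reminder is simple, as the following back-of-the-envelope sketch shows: under proportional allocation, guaranteeing some minimum number of completes in the smallest stratum scales the total sample by the inverse of that stratum’s share. The floor of 100 completes and the 0.4 percent share are illustrative assumptions, the latter roughly in line with the four of one thousand points landing in the Fort Wayne DMA in figure 5.3.

```python
# Illustrative numbers only; both values below are assumptions, not figures
# reported in the chapter.
min_per_stratum = 100         # desired completes in every DMA
smallest_share = 0.004        # smallest DMA's assumed share of registrants
required_total = min_per_stratum / smallest_share
print(round(required_total))  # 25000 interviews overall under proportional allocation
```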

Figure 5.2 Ohio Designated Market Area (DMA) Map.

Proportionally allocating samples to strata is certainly effortless. Researchers also consider it an improvement over simple random sampling because it ensures that a geographic container thought to unify a population, such as a metropolitan area or DMA, guides sample selection for purposes of generating an estimate. For especially small DMAs, however, the resulting sample subpopulations are too small to be useful. Any helpful contextual variation these DMAs might add will remain unaccounted for because they are not well represented.

Figure 5.3 Stratified Random Sample with Proportional Allocation by DMA.

Unless the sample is considerably larger, contextual characteristics that capture features closely associated with lower density environments cannot be suitably tested for their impact. These include measures that benchmark important hypothesized causes of a wide range of attitudes and behaviors that vary greatly by location: equality and inequality, some dimensions of racial and ethnic diversity, longevity, health, environmental protection, crime, self-employment, social capital, and many others.

Across several decades, researchers have borrowed regularly from surveys designed for one purpose, representation of a target population, to evaluate theories and test hypotheses about geographic contexts, without incorporating a proper range of contextual variation (Stipak and Hensler 1982). This has sometimes led researchers to conclude prematurely that context does not matter or has only substantively trivial effects once we have properly controlled for individual-level characteristics (King 1996; Hauser 1970, 1974). Contextual “effects,” by these accounts, are mostly an artifact of specification error. Arguably such conclusions were based on reviewing early studies that had adopted research designs that were ill suited for testing for contextual effects in the first place. The 1984 South Bend Study by Huckfeldt and Sprague (1995) was among the first in political science to randomly sample within neighborhoods purposely chosen to represent socioeconomic diversity. Their sample of fifteen hundred respondents is concentrated within sixteen neighborhoods, containing ninety-four respondents each, and reflects a broad range of living conditions among the population’s residents (Huckfeldt and Sprague 1995; Huckfeldt, Plutzer, and Sprague 1993). Could it have been even more widely sensitive to contextual variation? Yes, perhaps, if it had been “The St. Joseph County Study,” “The Indiana Study,” or even “The Midwestern Study,” but other costly features of their program included a three-wave panel design and a separate survey of nine hundred associates of the primary respondents. Given the multiple foci of the research, narrowing the geographic scope of the work was a practical and realistic step.

In summary, under a stratified sample with proportional allocation, researchers would have to enlarge the sample greatly to obtain the number of cases necessary for confident estimates across the full range of contextual circumstances. A less costly alternative might be to choose some other means of allocating the sample at the start.

Balanced Spatial Allocation of a Sample

As indicated previously, sometimes the research goal is not to generate a forecast of the coming election, but to understand the impact of context, or of changing some aspect of the local political environment, on an outcome. Perhaps the research involves hypotheses about media effects, response to advertising stimuli, some element of locally tailored campaign outreach, or reaction to public policy adoption. These hypotheses may be subject to testing via observation or field experimentation, but the research is carried out within and across particular geographic domains, which should then be sampled and compared accordingly.

Sometimes campaign researchers are at work fielding experimental manipulations of campaign messages, varying the content, duration, and other qualities of broadcast television and radio advertisements (e.g., Gerber et al. 2011). Relying on stratified, proportionally allocated sampling strategies to assess these effects is a poor and potentially costly alternative to designing a spatially balanced survey that will capably estimate response to the market-by-market variation being introduced by the research team.

The Ohio map with a sample of one thousand respondents drawn in equal numbers from each of the state’s twelve media markets is shown in figure 5.4. The standard deviational ellipse identified as “Strata Equal” summarizes the spread of sample points from an equal allocation of a random sample across the dozen DMAs. This ellipse, which strikingly extends outward nearly to the state’s borders, marks a decided contrast with the diagonal-shaped ellipse representing the “Strata Prop” sample points, which were distributed on a population size basis. Quite visibly, the equally allocated sample is a very different one from either the simple random sample shown in figure 5.1 or the proportionally allocated sample shown in figure 5.3. Specifically, figure 5.4 shows that a sample of one thousand, when divided equally among twelve DMAs, results in groups of roughly eighty-three respondents positioned within each market, densely crowding small markets such as Lima and Zanesville, but more sparsely dotting Cleveland and Toledo.
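
A sketch of this equal, spatially balanced allocation, under the same hypothetical voter file and dma column used in the earlier sketches, is shown below; each of the twelve markets receives the same fixed number of interviews regardless of its population.

```python
import pandas as pd

voters = pd.read_csv("oh_voter_file.csv")        # hypothetical frame, as before
n_markets = voters["dma"].nunique()               # twelve DMAs in Ohio
n_per_dma = 1000 // n_markets                     # 1000 // 12 = 83 respondents per market

# Equal-sized simple random sample within every DMA, regardless of population.
equal_alloc = pd.concat(
    [grp.sample(n=n_per_dma, random_state=42) for _, grp in voters.groupby("dma")]
)
```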

Clearly the geographically balanced sampling strategy in figure 5.4 would not pass muster with a traditional pollster aiming for a close geographic representation of the state’s registered voter population. The pollster’s preference would surely be something akin to the sample shown in figure 5.3. But for a strategist testing media messages, having randomized a roll-out schedule with perhaps two advertisements, airing them for variable periods over four weeks’ time across the twelve DMAs, a more spatially sensitive strategy conveys some genuine advantages. For one, it becomes possible to produce context-specific regression estimates for all media markets, regressing an individual opinion (e.g., candidate support) on an individual characteristic (e.g., party identification). The traditional pollster, implementing a sample design concentrated in the largest markets, would only be able to produce an estimate for a few of the state’s media markets, such as Cleveland, Cincinnati, and Columbus, and these happen to be the most costly and urban ones. Additional experimental variations can be tested in the smaller markets at far lower cost than in Cleveland or Cincinnati, but not if those markets have no experimental subjects included in the survey sample. Representation of a state’s population is not everything. Sometimes who is predicted to win an election is one of its less interesting aspects. Researchers are frequently interested in observing geographic differences in the etiology of opinion about the candidates, estimating the influence of the survey respondents’ social environments, gauging the variable impact of issues on candidate support across the state, and evaluating the impact of voters’ social and organizational involvements on their views. These ends are more likely to be met by randomly drawing nearly equal-sized subsamples from each market while including questions about the local venues within which citizens’ lives are organized.

Figure 5.4 Stratified Random Sample with Spatially Balanced Allocation by DMA.

Even if there is no field experimentation underway, past observational research has suggested many ways in which space and place matter to our everyday lives. Economic, social, health, and political outcomes are all hypothesized to be shaped by a multilevel world. Only recently have survey samples become large enough to produce reliable estimates of the impact of contextual variables in areas of relatively sparse population. In other cases, with some forethought and planning, it is possible to sample densely enough to represent even sparsely populated locales and media markets using conventional stratified random sampling. Such samples are costly, requiring multiple thousands and even tens of thousands of cases, but they are more available now than in previous decades thanks to easier forms of outreach to potential respondents. What is not easy to do is to retrospectively extract surveys of eight hundred, one thousand, or twelve hundred cases from the major archives and use them either to reveal or to debunk the existence of neighborhood or contextual effects.

Of consequence to political scientists and campaign professionals alike, it has long been recognized that different locations display very different political habits and outcomes. No one would argue with the notion that variation in rates of voting participation, political party support, and propensity to donate money or to show up at campaign rallies is somehow related to the presence of socializing norms, ecological conditions, and the assemblage of opportunities collected in a locale. Advantaged neighborhoods offer more optimistic, efficacious, and empowering environments than impoverished areas. Moreover, voters have been found to perceive accurately the climate of economic circumstances and opinion in their proximate environments. Awareness of these conditions, in turn, appears to be immediately pertinent to the formation of political judgments (Newman et al. 2015).

Conventional sampling strategies have permitted the accumulation of knowledge about only a limited set of context effects in the social science literature, particularly those involving racial and ethnic context, with a considerably smaller number of studies examining socioeconomic status. Given that variation in racial/ethnic environment is often robust within the major metropolitan areas where low-cost samples are frequently clustered, we should not be surprised to see so many published works addressing the subject. Should social science then conclude that racial/ethnic context is the only one that counts? Probably not, at least not until we field and evaluate more surveys that represent exposure to a far broader range of environments than we have up to now. The social science convention that attributes important behavioral outcomes to only one level of influence, usually the most immediate one, is not only misleading, but increasingly unnecessary in an era of information abundance.

To conclude, the very limited geographic coverage of traditional samples will not move us forward without much larger sample populations. Such large samples are becoming available, and there are also hybrid designs that propose to achieve population representation and spatial coverage at optimal sample size. These developments promise to advance the understanding of space and place effects in the formation of political attitudes and behavior, something that conventionally designed survey samples were ill-equipped to do. Across the social sciences more broadly, new study designs promise to contribute to greater knowledge about the spatial dependency and multilevel causality behind social, economic, health, and political outcomes. They will not do so without well-formulated, multilevel theories of behavior, though. There are legitimate complaints about the ascendancy of data analysis techniques over theory, and these criticisms are surely apt in the study of place effects on behavior. Analysis should be driven not simply by the level of spatial data available, but by theoretical considerations governing the etiology of the behavior. The explosion in the quantity and quality of social and political data dictates that a variety of perspectives and tools be brought to social science subject matter. But more complex and realistic designs for data analysis require more sophisticated conceptualizations of relationships within and across the multiple levels of analysis. Finally, just as the new techniques for sampling and data analysis are shared by many disciplines, so too will the theories of the underlying social processes draw from sources ranging across disciplines. Even as relatively youthful social science fields build their own bodies of knowledge from the rise in information, high-quality work will require awareness of developments in other fields. The answers to substantively important problems are increasingly within the reach of social scientific expertise, broadly construed, but probably out of the reach of those working narrowly within any particular social science field.

References

Berelson, B. R., P. F. Lazarsfeld, and W. N. McPhee. 1954. Voting: A Study of Opinion Formation in a Presidential Election. Chicago: University of Chicago Press.

Bradburn, N. M., and S. Sudman. 1988. Polls and Surveys: Understanding What They Tell Us. San Francisco: Jossey-Bass Publishers.

Brody, S. D., S. Zahran, A. Vedlitz, and H. Grover. 2008. “Examining the Relationship between Physical Vulnerability and Public Perceptions of Global Climate Change in the United States.” Environment and Behavior 40 (1): 72–95.

Campbell, A., P. E. Converse, W. E. Miller, and D. E. Stokes. 1960. The American Voter. New York: John Wiley and Sons.

Cochran, W. G. 1963. Sampling Techniques. New York: John Wiley and Sons.

Cutler, F. 2007. “Context and Attitude Formation: Social Interaction, Default Information or Local Interests.” Political Geography 26 (5): 575–600.

Downey, L. 2006. “Using Geographic Information Systems to Reconceptualize Spatial Relationships and Ecological Context.” American Journal of Sociology 112 (2): 567–612.

Firebaugh, G., and M. B. Schroeder. 2009. “Does Your Neighbor’s Income Affect Your Happiness?” American Journal of Sociology 115 (3): 805.

Gerber, A. S., J. G. Gimpel, D. P. Green, and D. R. Shaw. 2011. “How Large and Long-lasting Are the Persuasive Effects of Televised Campaign Ads? Results from a Randomized Field Experiment.” American Political Science Review 105 (1): 135–150.

Giles, M. W., and M. K. Dantico. 1982. “Political Participation and Neighborhood Social Context Revisited.” American Journal of Political Science 26 (1): 144–150.

Graefe, A., J. S. Armstrong, R. J. Jones, and A. G. Cuzan. 2014. “Accuracy of Combined Forecasts for the 2012 Presidential Election: The PollyVote.” PS: Political Science & Politics 47 (2): 427–431.

Green, D. P., and D. H. Yoon. 2002. “Reconciling Individual and Aggregate Evidence Concerning Partisan Stability: Applying Time-Series Models to Panel Survey Data.” Political Analysis 10 (1): 1–24.

Hauser, R. M. 1970. “Context and Consex: A Cautionary Tale.” American Journal of Sociology 75 (4, pt. 2): 645–664.

Hauser, R. M. 1974. “Contextual Analysis Revisited.” Sociological Methods & Research 2 (3): 365–375.

Huckfeldt, R. R. 1984. “Political Loyalties and Social Class Ties: The Mechanisms of Contextual Influence.” American Journal of Political Science 28 (2): 399–417.

Huckfeldt, R. 2014. “Networks, Contexts, and the Combinatorial Dynamics of Democratic Politics.” Advances in Political Psychology 35 (S1): 43–68.

Huckfeldt, R., E. Plutzer, and J. Sprague. 1993. “Alternative Contexts of Political Behavior: Churches, Neighborhoods, and Individuals.” Journal of Politics 55 (2): 365–381.

Huckfeldt, R., and J. Sprague. 1995. Citizens, Politics and Social Communication: Information and Influence in an Election Campaign. New York: Cambridge University Press.

Jencks, C., and S. E. Mayer. 1990. “The Social Consequences of Growing Up in a Poor Neighborhood.” In Inner-city Poverty in the United States, edited by M. McGeary, 111–186. Washington, DC: National Academy Press.

Johnston, R., R. Harris, and K. Jones. 2007. “Sampling People or People in Places? The BES as an Election Study.” Political Studies 55: 86–112.

King, G. 1996. “Why Context Should Not Count.” Political Geography 15 (2): 159–164.

Kish, L. 1965. Survey Sampling. New York: John Wiley and Sons.

Kolbe, R. L. 1975. “Culture, Political Parties and Voting Behavior: Schuylkill County.” Polity 8 (2): 241–268.

Kumar, N. 2007. “Spatial Sampling Design for a Demographic and Health Survey.” Population Research and Policy Review 26 (3): 581–599.

Larson, K. L., and M. V. Santelmann. 2007. “An Analysis of the Relationship between Residents’ Proximity to Water and Attitudes about Resource Protection.” The Professional Geographer 59 (3): 316–333.

Levy, P. S., and S. Lemeshow. 2008. Sampling of Populations: Methods and Applications. 4th ed. New York: John Wiley and Sons.

Lindell, M. K., and T. C. Earle. 1983. “How Close Is Close Enough: Public Perceptions of the Risks of Industrial Facilities.” Risk Analysis 3 (4): 245–253.

Lindell, M. K., and R. W. Perry. 2004. Communicating Environmental Risk in Multiethnic Communities. Thousand Oaks, CA: Sage Publications.

MacKuen, M., and C. Brown. 1987. “Political Context and Attitude Change.” American Political Science Review 81 (2): 471–490.

Makse, T., S. L. Minkoff, and A. E. Sokhey. 2014. “Networks, Context and the Use of Spatially-Weighted Survey Metrics.” Political Geography 42 (4): 70–91.

Merriam, C. E., and H. F. Gosnell. 1929. The American Party System. New York: Macmillan.

Miller, W. E. 1991. “Party Identification, Realignment, and Party Voting: Back to the Basics.” American Political Science Review 85 (2): 557–568.

Newman, B. J., Y. Velez, T. K. Hartman, and A. Bankert. 2015. “Are Citizens ‘Receiving the Treatment’? Assessing a Key Link in Contextual Theories of Public Opinion and Political Behavior.” Political Psychology 36 (1): 123–131.

Oakes, J. M. 2004. “The (Mis)estimation of Neighborhood Effects: Causal Inference for a Practical Social Epidemiology.” Social Science and Medicine 58 (10): 1929–1952.

Reeves, A., and J. G. Gimpel. 2012. “Ecologies of Unease: Geographic Context and National Economic Evaluations.” Political Behavior 34 (3): 507–534.

Sampson, R. J., J. D. Morenoff, and T. Gannon-Rowley. 2002. “Assessing ‘Neighborhood Effects’: Social Processes and New Directions in Research.” Annual Review of Sociology 28: 443–478.

Spielman, S. E., E.-H. Yoo, and C. Linkletter. 2013. “Neighborhood Contexts, Health, and Behavior: Understanding the Role of Scale and Residential Sorting.” Environment and Planning B: Planning and Design 40 (3): 489–506.

Stipak, B., and C. Hensler. 1982. “Statistical Inference in Contextual Analysis.” American Journal of Political Science 26 (1): 151–175.

Sudman, S., and E. Blair. 1999. “Sampling in the Twenty-First Century.” Journal of the Academy of Marketing Science 27 (2): 269–277.

Wise, K., P. Eckler, A. Kononova, and J. Littau. 2009. “Exploring the Hardwired for News Hypothesis: How Threat Proximity Affects the Cognitive and Emotional Processing of Health-Related Print News.” Communication Studies 60 (3): 268–287.

Notes:

(1.) Because this chapter has self-critical aims, I do not cite the work of others as much as I otherwise would. The criticisms apply as much to my own work as to that of others. Where I do cite the work of others, it should be considered only as a case in point, not as singling out a particular scholar or study.

(2.) Contemporary pollsters commonly suggest drawing as many as fifteen or twenty times the intended number of respondents in order to fulfill the required number of completed surveys. Failures to respond by phone are generally met with repeated efforts to call back the selected respondents, unless and until they flatly refuse to cooperate. Many polling firms are now paying respondents a fee to induce their cooperation.

(3.) These boundaries are not impermeable, of course, and there are many examples of radio and television broadcasts that spill over into neighboring markets.