Aggregating Survey Data to Estimate Subnational Public Opinion
Abstract and Keywords
Public opinion’s role in shaping governmental actions is a central concern of democracy, yet the absence of systematic state-level survey data has inhibited analyses of public opinion at the subnational level. This essay traces the evolution of studies of public opinion at that level, first reviewing studies using surrogates derived from demographic variables. It next considers methodologies that develop state-level opinion from aggregated national samples. Finally, it discusses recent efforts to develop state-level opinion measures using poststratification, integrating limited survey data with demographic variables. There is evidence of significant cross-sectional and temporal variation in public opinion and policy across and within the states. Research on subnational public opinion once hinged on assumptions about opinion surrogates, but is now based on abundant and progressively rigorous opinion data. These studies reveal that public opinion plays an enormous role in subnational politics, with effects varying across issues, contexts, and conditions.
The study of subnational public opinion presents special opportunities. Measuring public opinion at the subnational level affords uncommon opportunities to gauge, on the one hand, how opinions connect to their political and socioeconomic contexts and, on the other, how those opinions link to subnational governmental outcomes. Systematic comparative analyses of the causes and consequences of public opinion across governmental units allow us to focus on the nature of the linkages between mass publics and governmental outcomes.
For many years the study of public opinion by political scientists rested on the unexplored assumption that it influenced government leaders and ultimately public policy (Shapiro 2011). The rise of modern polling techniques gave researchers a way to regularly measure people’s privately held opinions. Pioneered famously by Campbell, Converse, Miller, and Stokes in The American Voter (1960), these surveys were subsequently administered during every presidential election, albeit with modifications, in the American National Election Studies. These new data stimulated intense analytical effort concerning the correlates of participation and vote choice, levels of information respondents exhibited, and the consistency of their answers across questions (see, e.g., Converse 1964; Popkin et al. 1976; Zaller 1992; Popkin 1994).
Questions about linkages between opinion and policy became more salient as mounting research on the content of, and forces operating on, public opinion revealed low levels of information and interest in political issues (Delli Carpini and Keeter 1996). Impressive advances occurred in survey methodologies, but they lacked an important dimension: the basis for systematic comparative analyses across governmental units. For years we have been awash in an ever-expanding sea of national and independent subnational surveys, but no attention has been paid to systematizing these surveys in a manner that would make them analytically comparable. Then and now, subnational surveys were conducted by different polling organizations, at different times, using different question wording (Parry, Kisida, and Langley 2008).
Just as the measurement and analysis of subnational public opinion offers special opportunities for linking opinions to contexts and outcomes, it also presents special and formidable challenges. The question of linkage has largely been ignored or has been approached by using crude surrogates for opinion due to the lack of an analytical infrastructure needed to produce survey-based measures of subnational opinion. While arguably one of the most significant questions in the study of government, linkage simply did not lend itself to rigorous empirical analysis, given the available data and methodologies. Such analyses would require systematic comparative data that could link well-measured opinion to similarly well-measured indicators of government actions.
A notable pioneering strategy was to study how voters’ preferences within particular constituencies (e.g., congressional districts) connect to the behavior of policymakers for that constituency (e.g., roll-call votes). The “dyadic representation” model pioneered by Miller and Stokes (1963) reported modest and variable linkages across policy areas. This study also highlighted a fundamental difficulty with analyzing linkages within subnational domains: the number of survey responses available from national surveys to gauge constituency opinion within subnational units was exceedingly small. The small number of observations within districts, in the face of modest and variable correlations between opinion and politicians’ behavior, left open the question of whether the observed relationships were truly modest or an artifact of the low reliability of opinion estimates derived from so few observations. Using different measures of opinion or methodological assumptions, subsequent studies extended the foundations of Miller and Stokes, reporting stronger evidence of opinion-policy linkages (e.g., Achen 1975, 1977; Erikson 1978; Page, Shapiro, Gronke, and Rosenberg 1984; Bartels 1991; McDonagh 1992).
The Miller and Stokes study has been enormously influential, but also highlights core methodological and inferential challenges embodied in investigating constituency opinion and policy outcomes at the subnational level. These challenges are persistent obstacles that researchers have sought to address by employing new data or leveraging existing data in increasingly sophisticated ways.
Another core debate within this evolving literature concerns how the “quality” of public opinion is tied to the “effects” it has on political outcomes. A continuing question concerns the extent to which citizen opinions shape outcomes or are instead led, manipulated, or informed by political leaders, the mass media, or other forces in the political environment. Burstein (2010) observes that our measures of opinion about specific policies derived from national surveys are generally quite poor. Researchers are commonly forced to use opinions on (arguably) related topics (e.g., self-proclaimed political ideology [Erikson et al. 1993], “policy mood” [Stimson et al. 1995]), but this leaves lingering questions of interpretation: Is the observed relationship, or lack of relationship, “genuine,” or is it an artifact of using surrogate opinion measures? Burstein argues that such measures of public opinion “provide no information at all as to what specifically the public wants” (2010, 69). Moreover, Page (2002) argues that our studies overestimate the impact of opinion on policy because of sampling bias: public opinion polls focus on issues that are important to the public, and it is on such issues that democratic governments are most likely to do what the public wants (2002, 232–235).
In general, researchers studying subnational opinion using national survey data are forced to work with survey items that are less than ideal for gauging linkages on specific policies. Instead, global or general measures have been employed that, while showing substantial evidence of linkage, do not provide insight into how specific opinions translate into specific policies. This is supported by voluminous evidence showing that most of the public does not hold opinions, or maintain consistent opinions, on many specific issues most of the time (see, e.g., Converse 1964; Zaller 1992).
If citizen opinion is absent on specific issues, what could explain the observed linkages between general opinions and specific policy outcomes? It is possible our causal arrows need to be reversed. Gabriel Lenz observes that after decades of research, “[d]etermining whether citizens lead their politicians or follow them turns out to be a lot harder than it sounds. Basic correlations between citizen policy views and their vote choice or policy outcomes does not allow researchers to disentangle which came first, citizen attitudes or electoral or policy outcome. Such correlations derived from cross-sectional research designs cannot tell these two very different outcomes apart because they are observationally equivalent” (Lenz 2012, 7; see also Norrander 2000). To unpack the causal sequence requires that we examine not only differences between units, but also differences within units between cause and effect. Moreover, it requires variation in the magnitude of the causal variable in order to measure the magnitude of its effect, if any, on political outcomes. While correlations between opinion and political outcomes are a necessary condition for inferring democratic responsiveness, they are not sufficient, because such correlations could just as easily result from outcomes driving opinions. Ultimately, the sufficient condition for democratic responsiveness requires that changes of varying magnitude in opinions precede and translate into changes of varying magnitude in political outcomes.
As illustrated in the following review, research on linkages between subnational opinion and political outcomes highlights central and recurring concerns:
• The first concern is the sources of data to measure subnational opinion. In the absence of suitable subnational surveys, researchers are forced to make pragmatic decisions about alternative sources of data to gauge subnational opinion. Studies have employed surrogates or used observations obtained from national surveys, producing ever-improving but still less than ideal measures of specific opinions.
• The second concern is the sufficiency of the number of observations used to estimate subnational opinion. The number of observations available in national surveys for specific subnational constituencies (e.g., states, counties, congressional districts) varies tremendously across subunits. Because some or many subunits yield very few observations, or none at all, reliable comparisons of opinions and their effects are commonly limited to those subunits with sufficient observations. As a consequence, studies of subnational linkage commonly must focus on a subset of the more populous (and thus more heavily sampled) subunits while ignoring those with smaller populations. This becomes particularly problematic when there are relatively few subunits, as with the states: focusing on a handful of highly populated (and sampled) states ignores the considerable variety in politics and policies found across the rest.
• The third and related concern involves the data needed for research designs that can embrace the causal sequences involved in the opinion-policy linkage. Ultimately researchers must consider longitudinal features of opinion within subnational units. If opinion drives policy, changes in opinion must translate into changes in policy, but this requires not only sufficient observations within subnational units in general, but also sufficient observations within subunits over time to measure opinion change.
• Finally, the substance of our measures of opinion commonly derives from pragmatic choices based on available data, and these measures often fall short of the specificity needed to elaborate the processes whereby specific opinions translate into specific policy changes.
Overall, the evolution of the study of subnational opinion has involved progressive improvements, using new data and methodologies to produce more reliable and specific measures of subnational opinion, based on more observations, making comparisons among more subunits possible, and allowing for longitudinal analyses of subnational opinion that will ultimately be necessary for articulating the causal connections between opinion and policy across subnational units. This is a dynamic area of research that has attracted significant and sustained scholarly interest, one that promises to yield impressive dividends in the future.
Data and analytical demands for studying opinion-policy linkages have been major impediments to progress; these same demands have defined the research frontiers that innovations and methodological advances have surmounted. The study of subnational public opinion has been characterized by increasingly sophisticated methodologies for overcoming the absence of subunit-specific survey data by leveraging various sources of available demographic and survey data.
In a democracy, policy is supposed to be linked to the preferences of the public. This linkage has served as the motivation for a wealth of studies. Most typically, studies have illustrated correlations between public opinion, measured in various ways, and public policies across units measured within a constant time period. Although commonly reporting significant opinion-policy linkages, these findings are vulnerable to the criticism that the relationships admit rival causal interpretations.
Notably, elites may be shaping opinion. Jacobs and Shapiro (2000) argue that elected officials have an incentive to convert skeptical constituents to their own position. Alternatively, opinions may come to reflect policy through migration. Studies of subnational taxation and expenditure point to the importance of voting with one’s feet (Tiebout 1956). From this perspective, strong correlations between opinion and policy simply reflect geographic sorting as citizens move to jurisdictions with policies in line with their preferences. In the end, cross-sectional correlations between opinion and policy cannot preclude these rival explanations. Cross-sectional correlations represent opinion-policy congruence, but nothing more.
Ultimately, convincing studies of linkage between opinion and policy require investigating the causal dynamics by which the preferences of constituencies cause the behavior of representatives, independently of elite persuasion, voter mobility, and geographic sorting. This requires exploring the temporal order of opinion and policy data, asking whether current changes in public opinion relate significantly and systematically to future changes in public policy. When (if) such opinion change leads to policy change, this dispels skeptics’ concerns that policy might lead opinion. If current opinion predicts future policy change, independent of current policy that presumably reflects current elite preferences, it is difficult to argue that opinion was not influential. Moreover, such findings render implausible the notion that voter mobility is driving the process, because it would require vast migrations of voters in and out of jurisdictions in advance of policy changes.
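The temporal design described above can be sketched as a lagged regression on simulated panel data. This is an illustrative toy, not a replication of any cited study: the data, coefficients, and noise levels are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 50 states observed over 20 periods (simulated data).
n_states, n_periods = 50, 20
opinion = rng.normal(size=(n_states, n_periods))

# Simulate policy that is sticky and partially follows lagged opinion.
policy = np.zeros((n_states, n_periods))
for t in range(1, n_periods):
    policy[:, t] = (0.6 * policy[:, t - 1]
                    + 0.3 * opinion[:, t - 1]
                    + rng.normal(scale=0.5, size=n_states))

# Regress policy at t+1 on opinion at t, controlling for policy at t.
y = policy[:, 1:].ravel()
X = np.column_stack([
    np.ones(y.size),          # intercept
    opinion[:, :-1].ravel(),  # lagged opinion (the quantity of interest)
    policy[:, :-1].ravel(),   # lagged policy (controls for current policy)
])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"effect of lagged opinion on future policy: {coefs[1]:.2f}")
```

A clearly nonzero coefficient on lagged opinion, net of lagged policy, is the pattern the text treats as evidence that opinion change precedes policy change rather than merely accompanying it.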
An examination of the historical development of studies of public opinion in subnational jurisdictions reveals a progressive research frontier that has advanced only after solving vexing measurement and data issues. Why is this different from other areas of inquiry? Most commonly, where theories point to important questions, data are collected to answer those questions. We could imagine the collection of state-level surveys that were coordinated and archived across states. Ultimately, such data could provide valid and reliable estimates of public opinion within states that were comparable across states. Unfortunately, “[p]ublic opinion data of the subnational sort have proved particularly elusive” (Parry, Kisida, and Langley 2008, 197). The resources and rewards for such systematization and coordination do not exist: the design and execution of common questions across states detract from polling directors’ other duties, while archiving these data in a common repository is typically viewed as too cumbersome (Parry, Kisida, and Langley 2008, 211).
While we might hope that these impasses could somehow be surmounted in the future, the reality is that even if they were, it would be many years before such a coordinated effort could produce enough state-level surveys to answer any but the most preliminary questions. Moreover, such data would not allow us to examine even recent history. Hence, while the spread of electronic data collection and archiving has advanced the study of state politics (see Brace and Jewett 1995), and despite the fact that technologies have created “robust” state polling enterprises, “opportunities for multi-state analysis remain daunting” (Parry, Kisida, and Langley 2008, 210), and these advances have not included state or subnational public opinion.
Given this impasse, creative and methodologically innovative utilization of imperfect or incomplete data to create reliable and valid measures of subnational opinion is more than a stopgap measure; it is the only way forward unless and until we develop the infrastructure to routinely coordinate, collect, organize, and archive genuine state-level polls. Given the practical obstacles involved, and given that historical state-level polling either cannot be recovered or was never coordinated across states, our understanding of the role of comparative public opinion in the subnational domain will necessarily be based on the thoughtful and critical conversion of what we have into what we need.
Early Studies of Opinion and Policy in the States: Surrogates, Electoral Returns, Simulations, and Validity Issues
The comparative study of state politics dates to V. O. Key’s magisterial study, Southern Politics (1949), or earlier. By the 1960s and into the early 1970s, the comparative study of state politics had hit its stride with a stream of influential studies at the leading edge of political science inquiry (e.g., Dawson and Robinson 1963; Dye 1965, 1969a, 1969b; Hofferbert 1966; Sharkansky 1968; Sharkansky and Hofferbert 1969; Cnudde and McCrone 1969; Fry and Winters 1970; Godwin and Shepard 1976; and others). By the late 1970s, however, interest and effort in the area began to fade (see Brace and Jewett 1995). As Cohen (2006) observes, a factor that depressed enthusiasm for comparative state studies was the lack of public opinion data across the states. While scholars had developed many innovative and useful measures of aspects of state politics and policy—including policy outputs, political structures, institutional capacity, electoral competition, as well as state demographic and economic profiles—sound, direct measures of state public opinion remained elusive.
A long tradition exists of using indirect measures to capture state public opinion in lieu of survey responses. For instance, scholars have used demographics (Boehmke and Witmer 2004; Mooney and Lee 2000; Norrander and Wilcox 1999), simulations based on the demographic characteristics of state residents (Weber et al. 1972), and measures based on policy makers who represent a state (Berry et al. 1998, 2007; Holbrook-Provow and Poe 1987). The limitations of these indirect measures have been debated elsewhere (Brace et al. 2004, 2007; Erikson, Wright, and McIver 1993).
Surrogate Demographic Variables.
One of the most common approaches used in studies of policy responsiveness in the U.S. House of Representatives is to measure constituency policy preferences using surrogate demographic variables. Usually this involves estimating a model in which legislative roll-call behavior is depicted as a function of a wide range of district demographic characteristics obtained from the U.S. Census. The demographic variables employed in such studies typically include indicators of racial composition, education, income, age, social class, occupational distribution, urbanization, homeownership, and family composition (Pool, Abelson, and Popkin 1965; Sinclair-Deckard 1976; Weber and Shaffer 1972). In a more general analysis, Peltzman (1984) used six demographic variables measured at the county level to tap politically relevant economic characteristics of senators’ constituencies. Kalt and Zupan (1984) analyzed industry-specific characteristics of members’ constituencies: in their study of Senate voting on strip-mining regulation, they used state-level data on membership in pro-environmental interest groups and on the size of various states’ coal reserves in BTUs, expressed as fractions of state personal income.
Scholars adopting such an approach make some important assumptions about the political meaning of demographic characteristics. In particular, they assume that
(1) individuals’ demographic characteristics are related systematically to their policy preferences,
(2) legislators are aware of the demographic composition of their districts and take those characteristics (or at least how they interpret those characteristics) into account when making roll-call decisions, and
(3) such a relationship holds when one moves across levels of analysis (i.e., from the individual level to the aggregate level).
The first assumption is quite reasonable. Numerous studies document the demographic underpinnings of public opinion and political behavior; citizens’ general ideology and their views on public policy matters are often related to their demographic characteristics. Such a relationship may be due to the degree to which self-interest is reflected in citizens’ demographic characteristics, or else demographic characteristics might represent how different groups in society acquire different sets of symbolic attitudes through the socialization process.
Second, it does not seem unreasonable that legislators are aware of the demographic characteristics of the constituents that they represent and interpret these characteristics in such a way as to permit the demographic flavor of a district to affect their roll-call decisions (e.g., Fenno 1978). The final assumption—that the relationship between aggregate demographic characteristics and aggregate policy preferences is a reflection of the same relationships at the individual level—is less certain, since making such an assumption risks committing the classic ecological fallacy. Simply put, processes that operate at the aggregate level need not operate at the individual level.
Although relationships found at the individual level often persist at the aggregate level, one must clearly take great care in making inferences about political processes across levels of analysis. Ultimately, studies that rely on demographic variables to represent constituency influences are quite limited. There is at best an imperfect relationship between demographic characteristics and policy preferences among individual citizens. Although demographic variables might have a significant impact on individuals’ policy preferences, they typically explain only a small amount of the variance in such preferences, and this means that roll-call models that simply rely on demographic variables are missing a substantial portion of the effect of constituency preferences. Moreover, the uncertainty surrounding the policy implications of demographic variables means that the policy signals directed at legislators by their constituents’ demographic characteristics are somewhat ambiguous. Knowing, for instance, that a district has a high proportion of citizens with a college education does not necessarily give a legislator clear, unambiguous signals about the policy preferences of constituents, since this demographic characteristic, like others, is not perfectly related to policy preferences.
Presidential Election Results.
Other scholars have used election returns to estimate district preferences (e.g., Canes-Wrone, Cogan, and Brady 2002; Erikson and Wright 1980). Explicitly based on electoral behavior and updated with each election, election results have the advantage of being available across all states and districts (Kernell 2009).
Election returns are popular and easily accessed proxies for district partisanship. For instance, Canes-Wrone, Cogan, and Brady (2002), Ansolabehere, Snyder, and Stewart (2001), and Erikson and Wright (1980) all use district-level presidential election returns as a proxy for district partisanship in models of legislative politics. Constituent behavior (vote choices) is the basis for the proxy and links to the partisan or ideological continuum that generally underlies electoral competition. Thus, it is reasonable to assume that a measure of district or state partisanship utilizing vote shares has high validity. Numerous scholars have also relied on presidential election results as a surrogate measure of district ideological orientation (Fleisher 1993; Glazer and Robbins 1985; Johannes 1984; LeoGrande and Jeydel 1997; Nice and Cohen 1983). The logic underlying this is grounded in standard spatial models of electoral choice. Arguably, many citizens cast their votes in presidential elections by comparing their own ideological positions with those of the competing candidates. Insofar as aggregate presidential election results reflect ideological voting in the electorate, scholars should be able to utilize presidential election results at the district level as a proxy measure of district ideology.
Unfortunately, there are shortcomings and trade-offs to this approach. Presidential vote shares in any given election may be products of short-term forces; for instance, different issues are more or less salient in any given election, and particular candidates are more or less popular. Most observers agree that certain presidential elections are highly ideological and that the results from those elections reflect the ideological characteristics of constituencies; the 1964, 1972, and 1988 elections come immediately to mind as elections in which support for the Democratic and Republican presidential candidates was differentiated by ideological considerations. On the other hand, we know some elections are relatively detached from ideology; the 1968 and 1976 elections were somewhat less ideological than others. Clearly, not all presidential elections are equally ideological, and this affects the degree to which scholars can use district-level presidential election results as a surrogate for district ideology. Finally, presidential vote shares do not offer insight into the preferences of constituencies on particular policies, nor can they measure the preferences of district subconstituencies (e.g., Democrats or Latinos).
LeoGrande and Jeydel (1997) explore the possibility of utilizing presidential election results as a surrogate for district ideology. They find only moderate correlations for presidential election results between adjacent elections, suggesting that the reliability of the aggregate presidential vote is not extremely high. Ultimately, presidential vote shares in any given election may be largely the product of short-term forces (Levendusky, Pope, and Jackman 2008).
In referenda elections, voters confront one or more specific policy positions on which they can express their preferences. A number of states hold referenda elections on a regular basis, and scholars have found it possible to utilize district-level data on referenda election results to estimate the policy preferences and/or ideological orientation of a given constituency.
The use of referenda data as a surrogate measure of constituency policy preferences is best represented by the work of Kuklinski (1977) and McCrone and Kuklinski (1979). In both studies, the authors utilize data from California referenda to estimate the positions of district constituencies on three dimensions that emerge from a factor analysis of the referenda data. While these scholars find that referenda data can provide quite reliable measures of district ideology, such data are unfortunately available for only a limited number of states and vary from year to year.
Another innovation in the measurement of district opinion and constituency policy preferences is the use of simulated district opinion, a technique developed by Weber and Shaffer (1972) and subsequently utilized by several legislative scholars (Erikson 1978; Sullivan and Minns 1976; Sullivan and Uslaner 1978; Uslaner and Weber 1979). This approach takes advantage of demographic data that are available at the district level, as well as knowledge concerning the relationship between individuals’ demographic characteristics and their policy positions. In traditional simulations of constituency opinion, scholars utilize what we refer to as a “bottom-up” simulation—that is, using data from a lower level of aggregation (i.e., individual-level surveys) to simulate opinion at a higher level of aggregation (e.g., the district or state level).
In such a simulation, citizen groups are identified based on their combinations of social and economic characteristics: race, income, education level, and so forth. Survey items matching these grouping characteristics are selected from national surveys, and the opinions of members of these combinations or groupings are obtained. Regression is then used to estimate the relationship between socioeconomic and demographic characteristics and opinions. Finally, the mean values of the socioeconomic and demographic characteristics for the district or state are plugged into this model, which simulates estimates of the district’s or state’s opinions based on the sizes of the groups within it.
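The procedure just described can be sketched in a few lines. Everything here is hypothetical: the survey data are simulated, the covariates and coefficients are made up, and the district means are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical national survey: demographics plus one opinion item.
n = 5000
education = rng.uniform(0, 1, n)   # illustrative 0-1 scaled covariates
income = rng.uniform(0, 1, n)
urban = rng.integers(0, 2, n).astype(float)
# Individual opinion with a weak demographic signal, so the regression
# fits the data loosely, as such individual-level models typically do.
opinion = 0.4 * education + 0.3 * income + 0.2 * urban + rng.normal(0, 1, n)

# Step 1: estimate the demographics-to-opinion regression on the survey.
X = np.column_stack([np.ones(n), education, income, urban])
beta, *_ = np.linalg.lstsq(X, opinion, rcond=None)

# Step 2: plug in a district's Census means to simulate its opinion.
district_means = np.array([1.0, 0.55, 0.40, 0.70])  # hypothetical district
simulated_opinion = district_means @ beta
print(f"simulated district opinion: {simulated_opinion:.2f}")
```

The deliberately weak demographic signal mirrors the low individual-level fit these regressions typically exhibit, which is the principal criticism of the approach.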
On the face of it, this approach appears to be quite reasonable. The logic underlying the approach seems to be sensible, and simulated measures of opinion have a stronger association with roll-call behavior than measures based on small-sample estimates (Erikson 1978). Most importantly, the general availability of demographic and political variables with which to simulate public opinion means the approach allows estimating opinion across a wide range of subunits and across time.
Perhaps the most important concern about this approach is that the individual-level regressions from which the simulations derive often exhibit exceedingly low levels of fit to the data. With adjusted R2 levels that often fall below .20, measures of simulated district-level opinion carry a substantial amount of random error. This is not necessarily a surprise, since the level of measurement error in individual-level survey data is often much higher than that found in aggregate-level data. Ultimately, while bottom-up simulated measures may be an improvement over those obtained from other analytical approaches, they remain imprecise indicators of constituency opinion (Seidman 1973).
Disaggregation of National Surveys: Using Survey Data to Map Subnational Differences in Opinion
Can we study subnational linkages using data from national surveys? Famously, Miller and Stokes (1963) were the first to tackle this question. Disaggregating opinion data from national election studies to the congressional district level, they examined the linkages of these district-level opinions with the preferences of members of Congress and with their legislative votes. The survey observations available per district were very small in number and came from the early National Election Studies (NES) samples, which were far from representative cross-sections at the district level; these opinions were paired with the corresponding members’ roll-call votes and with responses to a separate survey of the members’ political attitudes and perceptions of their constituents’ opinions. Miller and Stokes found moderate linkages for opinion, but these relationships varied across issues: stronger connections for civil rights and weaker connections for foreign policy.
Beyond its substantive findings, the Miller and Stokes study also highlights many of the fundamental methodological challenges to studying linkage. It revealed the severe threats to reliability in estimates of subnational opinion using sparse numbers of survey observations in subunits (congressional districts, in this case). Almost all survey-based disaggregation methods suffer from a profound design challenge, sometimes referred to as the “Miller-Stokes” problem. The survey data they had for any individual congressional district were extremely sparse; their study used a national probability sample that had an average of only thirteen respondents per congressional district (see Achen 1977; Erikson 1978).
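The scale of the problem is easy to illustrate with a small simulation. The numbers are assumptions except for the average of thirteen respondents per district, which comes from the text: suppose a binary opinion item with 55 percent true support in every district.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: true district support is 55%, with 13 respondents sampled
# per district (the Miller-Stokes average), repeated over many districts.
true_support, n_respondents, n_districts = 0.55, 13, 10_000
supporters = rng.binomial(n_respondents, true_support, size=n_districts)
estimates = supporters / n_respondents

# The spread of the district estimates around the truth is pure
# sampling error -- here roughly +/- 14 percentage points.
print(f"sampling std. error with n=13: {estimates.std():.3f}")
```

With so few respondents, two districts with identical underlying opinion can easily produce estimates twenty or more points apart, which is exactly the reliability problem the Miller-Stokes critics identified.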
Miller and Stokes, and subsequent studies using disaggregated survey observations at the subnational level, reveal that the success of disaggregation hinges on the representativeness and size of the disaggregated opinion data. James Gibson (1988) made clever use of the large Stouffer survey study of tolerance, finding some correlation between public opinion and the repressiveness of the anticommunist legislation that states adopted.
In Statehouse Democracy (1993), Erikson, Wright, and McIver reinvigorated state politics research on public opinion. They showed that one could combine survey observations from multiple years on opinions that were stable across time and then disaggregate them to the subnational unit (in their case, states). By combining survey observations from the same polling organization across multiple years, they were able to obtain more observations per state and more reliable measures of opinion.
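The pooling-and-disaggregation logic can be sketched in a few lines. The records, coding, and state values below are invented purely for illustration; the actual Erikson, Wright, and McIver measure is built from many years of real CBS/New York Times polls.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical pooled records: (year, state, response), with the response
# coded 1 = liberal, 0 = moderate, -1 = conservative, loosely following the
# ideology item used by Erikson, Wright, and McIver. Data are invented.
pooled_surveys = [
    (1976, "MA", 1), (1976, "MA", 1), (1976, "MA", -1),
    (1978, "MA", 1), (1980, "MA", 0),
    (1976, "UT", -1), (1978, "UT", -1), (1978, "UT", 0),
    (1980, "UT", -1), (1980, "UT", 1),
]

def disaggregate(records):
    """Pool respondents across survey years, then average within each state."""
    by_state = defaultdict(list)
    for _year, state, response in records:
        by_state[state].append(response)
    # The state mean of the (stable) item is the disaggregated opinion estimate.
    return {state: mean(values) for state, values in by_state.items()}

estimates = disaggregate(pooled_surveys)
```

Because every pooled year contributes respondents to each state's mean, the estimates for sparsely sampled states become more reliable as more surveys are added, which is precisely why the method requires the underlying attitude to be stable across the pooled period.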
Erikson, Wright, and McIver gauged state opinion with a question about respondents' self-identified political ideology. Their ideology measure has become widely used in studies of state politics and policymaking. This general measure of opinion is strongly and significantly related to general features of governmental outcomes across the states. These include spending on education, the scope of Medicaid and Aid to Families with Dependent Children, the legalization of gambling, passage of the Equal Rights Amendment, capital punishment, and issues related to state spending and tax effort and progressivity (e.g., Lascher et al. 1996; Camobreco 1998; Mooney and Lee 2000).
The pooling methodology pioneered by Erikson, Wright, and McIver (1993) has also been extended to other surveys to measure specific issue opinions (e.g., Brace et al. 2002). This has allowed scholars to address questions about linkages between specific policies and issues at the subnational level (e.g., Arceneaux 2002; Brace et al. 2002; Brace and Jewett 1995; Burstein 2010; Johnson, Brace, and Arceneaux 2005; Brace and Boyea 2008; Norrander and Wilcox 1999).
Disaggregation of national survey data has advanced the study of subnational linkage by producing more valid and reliable measures of subnational opinion. This approach is not without limitations, however. Notably, a problem with national surveys is that the amount of information per state is directly proportional to state population. Less populous states tend to have inadequate sample sizes. For example, if using CBS/NYT polls from 1977 to 2007 to measure party identification, there are 436 respondents from Illinois (the fifth most populous state), 180 from Kentucky (the median state), and only 32 from Delaware (the fifth least populous state) in a typical year. In addition, some years (e.g., 2005) have less information than others, leading to very small samples for the less populous states in certain years.
The aggregation method also does not address nonrepresentative samples resulting from the survey design. Many national surveys use primary sampling units (PSUs) that are not fully representative subnational sampling frames. The crucial point is that while the design may be unbiased in terms of expected values at the national level, any particular implementation of the sampling design could produce a nonrepresentative selection of PSUs for a particular subunit.
These problems are mitigated to a large extent, however. As Brace et al. (2002) illustrate, more populous states also have more PSUs and thus are less vulnerable to bias. Conversely, less populous states exhibit much less variation in opinion, and in this more homogeneous environment bias is less likely. As Brace et al. (2002) note, the risk of bias is greatest in less populated states (low population coverage) with substantial variation in public opinion (low population homogeneity). Depending on the issue, this situation is rare. In sum, while there are fewer PSUs in less populous states, there is also less diversity of opinion in these states, so even an unrepresentative selection of PSUs can yield representative estimates. In populous states, where there is substantial diversity of opinion across geographical areas, there are more PSUs to capture this diversity.
The disaggregation of national surveys has produced measures of subnational opinion of heightened reliability and validity that have contributed to major advances in our understanding of linkages of opinion and policy in subnational settings. This method, however, has intrinsic limitations. The success of disaggregation across years depends on stable underlying attitudes, which necessarily limits the research focus to survey items that exhibit stable opinions over the period being pooled.
Using disaggregation, scholars have been limited to using attitudes shown to be stable across time to produce cross-sectional measures of opinion. This precludes many issues about which opinion is volatile. It limits the substantive breadth of the types of policies and opinions that are suitable for study. More important, the stability required for suitable disaggregation also means that longitudinal analyses are largely not possible. Disaggregated opinion data are suited to addressing cross-sectional correlations between suitably stable opinions and related measures of state policies.
Cross-sectional research afforded by disaggregated opinion measures has revealed strong and convincing correlations between suitably stable measures of subnational opinion and subnational policies. While these links are quite strong, correlation is not causality. Cross-sectional analyses cannot unravel the many complex temporal patterns embodied in the opinion-policy nexus that produces these correlations.
Multilevel Regression and Post-stratification: Expanding the Scope of Issues and the Longitudinal Analysis of Opinion Change
Disaggregation of national survey observations to subnational units has produced convincing measures of subnational opinion on an array of issues. Measures developed from this methodology have established strong and statistically significant cross-sectional differences in opinions across the states or other subunits that in turn reveal connections to elite behavior and/or policy. These endeavors have established clearly the necessary condition for inferring linkage: opinions vary across states and correlate with state policies.
Without this strong foundation, it would make little sense to explore complex questions about opinion-policy linkages: if opinion, convincingly measured, did not correlate with policy, further analyses would be unwarranted. Given the strong correlations, it then makes sense to “unpack” the causal sequences that underpin the observed correlations between opinions and policies. From this perspective, disaggregation and resulting research form an important building block in pursuit of a cumulative and systematic understanding of the opinion-policy nexus. Disaggregation has its limits, but they do not undermine the utility of the measures derived from this technique. Unlike measures of subnational opinion developed from surrogates or simulations, where the measures suffered from intractable flaws, disaggregated opinion measures suffer limits, but not fundamental flaws.
The fundamental limit of disaggregated measures of subunit opinion is that they are limited to cross-sectional analyses of the opinion-policy linkage. These cross-sectional findings, while important, remain vulnerable to rival causal interpretations. As noted above, elites have an incentive to convert skeptical constituents to their own opinion; if so, elites may be shaping opinion rather than the opposite (Jacobs and Shapiro 2000). In addition, subunit opinions may come to reflect policy through population migration. Strong correlations between opinion and policy could simply reflect the result of geographic sorting as citizens move to jurisdictions with policies in line with their preferences.
Ultimately, the next chapters of exploring the linkage between opinion and policy require investigating the causal dynamics by which the preferences of constituencies shape the behavior of representatives, independent of elite persuasion and geographic sorting. This requires exploring the temporal order of opinion and policy data, testing whether current changes in public opinion relate significantly to future changes in public policy. When (if) such opinion change leads policy change, this dispels skeptics' concerns that policy might lead opinion. If current opinion predicts future policy change, independent of current policy that presumably reflects current elite preferences, it is difficult to argue that opinion was not influential. Moreover, such findings render implausible the notion that voter mobility is driving the process, because it would require vast migrations of voters in and out of jurisdictions in advance of policy changes.
At present, many of the most compelling questions about opinion-policy linkages concern temporal processes and highlight the need for convincing measures of subnational opinion that vary over time. In light of the obstacles described to this point concerning measurement of subnational opinion, this may seem a very tall order. Where once we had no survey-based measures of subnational opinion, extensive effort produced survey-based, cross-sectional measures of subnational opinion. Given that there have been no dramatic changes in the general qualities and quantities of data available to researchers, the question is how we can leverage existing data to produce convincing measures of subnational opinion that can vary between states and within states over time.
The latest advanced technique used to estimate state-level public opinion, as well as public opinion at other levels of aggregation (especially legislative districts but also others), builds on the simulation methods that used national-level survey data in conjunction with state-level census data. This multilevel regression and post-stratification method (MRP), developed by Park, Gelman, and Bafumi (2006), incorporates demographic and geographic information to improve survey-based estimates of each geographic unit’s public opinion on individual issues. It improves upon the estimation of the effects of individual- and state-level predictors by employing recent advances in multilevel modeling, a generalization of linear and generalized linear modeling, in which relationships between grouped variables are themselves modeled and estimated. This partially pools information about respondents across states to learn about what drives individual responses. Whereas the disaggregation method copes with insufficient samples within states by combining surveys, MRP compensates for small within-state samples by using demographic and geographic correlations.
Unlike earlier simulation methods, MRP uses the location of respondents to estimate state-level effects on responses, using state-level predictors such as region or aggregate state demographics (i.e., those not available at the individual level) to model these unit-level effects. In this way, all individuals in the survey, no matter their location, yield information about demographic patterns that can be applied to all state estimates, and residents of a particular state or region yield further information about how much predictions within that state or region differ from others, after controlling for demographics. In the final step, post-stratification weights the estimate for each demographic-geographic respondent type by the percentage of that type in the actual state population.
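The two MRP steps, model the cells, then post-stratify, can be illustrated with a heavily simplified sketch. In place of a full multilevel model, each (state, demographic group) cell is shrunk toward its group's national mean with an ad hoc weight n/(n + k); all survey data, state abbreviations, and census shares below are invented for illustration. Real applications estimate the cell effects with a hierarchical model (Park, Gelman, and Bafumi 2006) rather than this crude shrinkage.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey records: (state, demographic_group, support), with
# support coded 1/0. All values are invented for illustration.
survey = [
    ("DE", "college", 1), ("DE", "no_college", 0),
    ("IL", "college", 1), ("IL", "college", 1), ("IL", "college", 0),
    ("IL", "no_college", 0), ("IL", "no_college", 1),
    ("KY", "college", 1), ("KY", "no_college", 0), ("KY", "no_college", 0),
]

# Invented post-stratification frame: the share of each state's population
# falling in each demographic cell (shares sum to 1 within a state).
census = {
    ("DE", "college"): 0.32, ("DE", "no_college"): 0.68,
    ("IL", "college"): 0.36, ("IL", "no_college"): 0.64,
    ("KY", "college"): 0.25, ("KY", "no_college"): 0.75,
}

def mrp_sketch(records, frame, k=2.0):
    """Crude stand-in for MRP: partially pool each (state, group) cell toward
    the group's national mean, then weight the cells by census shares."""
    cells, groups = defaultdict(list), defaultdict(list)
    for state, group, y in records:
        cells[(state, group)].append(y)
        groups[group].append(y)
    group_mean = {g: mean(ys) for g, ys in groups.items()}

    estimates = defaultdict(float)
    for (state, group), share in frame.items():
        ys = cells.get((state, group), [])
        # The weight n/(n + k) mimics a multilevel model's partial pooling:
        # sparse cells borrow strength from the national demographic pattern.
        cell = (sum(ys) + k * group_mean[group]) / (len(ys) + k)
        estimates[state] += share * cell  # the post-stratification step
    return dict(estimates)

state_opinion = mrp_sketch(survey, census)
```

The key property this sketch preserves is that a state with very few respondents (Delaware here) still receives an estimate, because its cells borrow from respondents everywhere who share the same demographics, and the census shares, not the survey's haphazard composition, determine each cell's weight.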
This multilevel model allows us to use many more respondent types than classical methods can accommodate. This improves accuracy by incorporating more detailed population information. An additional benefit of MRP is that modeling individual responses is itself substantively interesting: one can study the relationship between demographics and opinion and ask what drives differences between states, demographic composition or residual cultural differences.
Recent studies have highlighted the virtues of MRP measures compared to other approaches (Lax and Phillips 2009b; Park, Gelman, and Bafumi 2004, 2006; Pacheco 2011). Lax and Phillips illustrate the trade-offs between disaggregation and MRP to assess whether the latter is worth the additional analytical and implementation costs. MRP offers clear gains when subunit sample sizes are small to medium; for very large samples, the additional implementation costs may outweigh its additional benefits. They also illustrate how additional demographic information improves estimation, and that MRP can be employed successfully even on small samples, such as a single national poll. Most recently, Warshaw and Rodden (2012) show that MRP produces more accurate estimates of district-level public opinion on individual issues than either disaggregation of national surveys or presidential vote shares.
The MRP method has been used on a large scale by Lax and Phillips (2009a), who showed how state policies toward gay rights were responsive to public opinions about these rights—more so than any effect of liberal-conservative ideology. Extending this to thirty-nine policies covering eight issue areas—abortion, education, electoral reform, gambling, gay rights, health care, immigration, and law enforcement—they found that state policies are highly responsive to state publics’ issue-specific preferences, statistically controlling for other variables.
Scholars have only just begun to extend these innovations to other subnational jurisdictions, and these pioneering studies have documented meaningful levels of responsiveness to citizen preferences. In municipal politics research, scholars confronted the same obstacle as others studying subnational politics, namely a lack of suitable surveys with which to gauge public preferences (Palus 2010; Trounstine 2010). Urban politics scholars used crude demographic surrogates for citizen preferences, with the same weaknesses such surrogates exhibited elsewhere. Others narrowed their focus to cities with large survey samples (Palus 2010). While this was useful, questions remain about whether results from these select, typically large cities generalize to smaller ones.
Largely because of the lack of satisfactory measures of citizen opinions in cities, until recently there had been no systematic studies of the responsiveness of city policies to the preferences of their citizens. Tausanovitch and Warshaw (2014) surmounted this obstacle using seven large-scale surveys containing over 275,000 respondents with MRP to produce estimates of citizen opinion for 1,600 cities and towns across the United States. Notably, they found that city governments are responsive to their citizens’ preferences across a wide range of policy areas, with many substantive impacts that are quite large. They also found that liberal cities spend over twice as much per capita as conservative cities, with higher and less regressive tax systems than their conservative counterparts.
At an even more local level, Michael Berkman and Eric Plutzer (2005) have explored the linkages of citizen preferences to school board politics. To surmount the lack of suitable survey data at the school district level, the authors devised small polity inference, a statistical technique that combines elements of the simulation approach, aggregation, and Bayesian hierarchical models with post-stratification. Among many interesting findings, these authors discovered that school funding decisions were most responsive to citizen preferences not where there were independently elected school boards, but instead where these decisions were made by the more professional politicians in city or county government, and where those politicians appoint school board members (2005, 156–157).
This review of the past half century of the study of subnational public opinion has illustrated a progressive research program. In the beginning, students of opinion presumed that opinion influenced politics, but rarely if ever examined the connections. Students of comparative policy sought linkages to opinion, but had no convincing measures of subnational public attitudes. In the absence of subnational opinion data, the comparative study of survey-based measures of opinion and subnational indicators of government action languished. A large reason for this stasis was the daunting research obstacles that questions of linkage entailed: they required examining patterns of opinion alongside patterns of policy, through either comparative analyses of the opinion-policy connection across subunits or longitudinal analyses of opinion change and policy change within single units.
Most generally, the limitations of this early period are quite clear. Convincing opinion measures derived from information at the subnational level were simply not available. Even fifty years later, we do not have a repository of systematic survey observations collected at the state or subnational levels. To break this impasse required the development of innovative approaches capable of using limited data in a convincing manner. The last twenty-five years have witnessed a series of important innovations that have facilitated the development of subnational measures of opinion derived from national survey data.
Disaggregation of national surveys to subnational units produces valid estimates of state opinion. The reliability of these estimates hinges on the numbers of observations available within subunits. Pooling more national surveys can increase reliability if the opinions measured exhibit statistically demonstrable stability across the pooled national samples. While enhancing reliability, particularly in smaller states with typically few observations in single national surveys, the requirement that only stable opinion indicators be pooled also means that this approach is unsuitable for longitudinal analyses on subnational opinion and policy change. As such, disaggregation has been instrumental in establishing strong patterns of cross-sectional correspondence between opinion and policy in subnational units, but is inadequate for moving on to more complex questions concerning the processes that connect opinion and policy. This is the new frontier of the study of subnational opinion and policy.
The new frontier of opinion-policy research focuses on the breadth of linkages across different policies, but also on the forces that promote change in opinion and policy, and on how change in opinion relates to change in policy across subunits. In this latest stage in the evolution of the study of opinion-policy linkage, the data demands should be apparent. We not only need valid and reliable estimates of subnational opinion; we need them over time as well.
To date, MRP has been the most fruitful method for producing valid and reliable longitudinal measures of subnational opinion. Combining the advantages of disaggregating national survey observations to the subnational level, this approach also employs ideas from simulation studies to integrate demographic information to produce valid and reliable estimates of subnational opinion. Where subnational units have large numbers of observations, MRP differs little from simple disaggregation. More important, in the many subunits where there are few observations, MRP has been shown to be demonstrably superior to disaggregation.
These characteristics of MRP offer attractive benefits that will hasten progress. By producing superior estimates for small sample subunits, analyses can better integrate patterns between opinion and policy across more subunits. The MRP method can also produce estimates across a wider array of policies because, unlike disaggregation, MRP does not limit inquiry to opinions that are stable across the period pooled. Finally, and relatedly, just as MRP can produce estimates of opinion across more subunits with fewer data, it can also produce annual estimates of opinion for subunits, also not possible with disaggregation.
MRP, or any future method that can produce reliable and valid measures of subnational opinion on a wide array of issues over time, will advance the study of public opinion generally, and linkage specifically, by providing the means to address important lingering questions. By expanding the breadth of issues available for study, researchers can expand our knowledge of the substantive dimensions of linkage and assay differential levels of public interest and elite responsiveness. By allowing for analyses of longitudinal change in opinion within states, researchers can explore the forces promoting change in subunit opinions and the consequences of those changes for elite behavior and government outcomes. Scholars may explore the conditions in which elites respond to public opinion and those in which they may seek to manipulate it to their ends (Jacobs and Shapiro 2000), or in which policy attenuates public concern (Wlezien 2004; Johnson, Brace, and Arceneaux 2005).
Achen, C. H. 1975. “Mass Political Attitudes and the Survey Response.” American Political Science Review 69 (4): 1218–1231.Find this resource:
Achen, C. H. 1977. “Measuring Representation: Perils of the Correlation Coefficient.” American Journal of Political Science 21 (4): 805–815.Find this resource:
Ansolabehere, S., J. M. Snyder Jr., and C. Stewart. 2001. “Candidate Positioning in U.S. House Elections.” American Journal of Political Science 45 (1): 136–159.Find this resource:
Arceneaux, K. 2002. “Direct Democracy and the Link Between Public Opinion and State Abortion Policy.” State Politics & Policy Quarterly 2 (4): 372–387.Find this resource:
Ardoin, P. J., and J. G. Garand. 2003. “Measuring Constituency Ideology in U.S. House Districts: A Top-Down Simulation Approach.” Journal of Politics 65 (4): 1165–1189.Find this resource:
(p. 333) Bartels, L. M. 1991. “Constituency Opinion and Congressional Policy Making: The Reagan Defense Buildup.” American Political Science Review 85: 457–474.Find this resource:
Beck, P. A., and T. R. Dye. 1982. “Sources of Public Opinion on Taxes: The Florida Case.” Journal of Politics 44 (1): 172–182.Find this resource:
Boehmke, F. J., and R. Witmer. 2004. “Disentangling Diffusion: The Effects of Social Learning and Economic Competition on State Policy Innovation and Expansion.” Political Research Quarterly 57 (1): 39–51.Find this resource:
Berkman, M., and E. Plutzer. 2005. Ten Thousand Democracies: Politics and Public Opinion in America’s School Districts. Washington, DC: Georgetown University Press.Find this resource:
Berry, W. D., E. J. Ringquist, R. C. Fording, and R. L. Hanson. 1998. “Measuring Citizen and Government Ideology in the American States, 1960–93” American Journal of Political Science 42 (1): 327–348.Find this resource:
Berry, W. D., E. J. Ringquist, R. C. Fording, and R. L. Hanson. 2007. “A Rejoinder: The Measurement and Stability of State Citizen Ideology.” State Politics & Policy Quarterly 7 (2): 160–166.Find this resource:
Brace, P., K. Arceneaux, M. Johnson, and S. Ulbig. 2004. “Does State Political Ideology Change over Time?” Political Research Quarterly 57 (4): 529–540.Find this resource:
Brace, P., K. Arceneaux, M. Johnson, and S. Ulbig. 2007. “Reply to ‘The Measurement and Stability of State Citizen Ideology’.” State Politics and Policy Quarterly 7 (2): 133–140.Find this resource:
Brace, P., and B. Boyea. 2008. “State Public Opinion, the Death Penalty and the Practice of Electing Judges.” American Journal of Political Science 52 (2): 360–372.Find this resource:
Brace, P., and A. Jewett. 1995. “The State of State Politics Research.” Political Research Quarterly 48 (3): 643–681.Find this resource:
Brace, P., and M. Johnson. 2006. “Does Familiarity Breed Contempt? Examining the Correlates of State-Level Confidence in the Federal Government.” In Public Opinion in State Politics, edited by J. E. Cohen, 19–37. Stanford, CA: Stanford University Press.Find this resource:
Brace, P., K. Sims-Butler, K. Arceneaux, and M. Johnson. 2002. “Public Opinion in the American States: New Perspectives Using National Survey Data.” American Journal of Political Science 46 (1): 173–189.Find this resource:
Burstein, P. 2010. “Public Opinion, Public Policy, and Democracy.” In Handbook of Politics and Society in Global Perspective, edited by K. T. Leicht and J. C. Jenkins, 63–79. New York: Springer.Find this resource:
Camobreco, J. F. 1998. “Preferences, Fiscal Policies, and the Initiative Process.” Journal of Politics 60 (3): 819–829.Find this resource:
Campbell, A., P. Converse, W. Miller, and D. Stokes. 1960. The American Voter. New York: John Wiley and Sons.Find this resource:
Canes-Wrone, B., J. F. Cogan, and D. W. Brady. 2002. “Out of Step, Out of Office: Electoral Accountability and House Members’ Voting.” American Political Science Review 96 (1): 127–140.Find this resource:
Carsey, T. M., and J. J. Harden. 2010. “New Measures of Partisanship, Ideology, and Policy Mood in the American States.” State Politics & Policy Quarterly 10 (2): 136–156.Find this resource:
Citrin, J. 1979. “Do People Want Something for Nothing: Public Opinion on Taxes and Government.” National Tax Journal Supplement 32 (June): 113–130.Find this resource:
Cnudde, C. F., and D. J. McCrone. 1966. “The Linkage between Constituency Attitudes and Congressional Voting Behavior: A Causal Model.” American Political Science Review 60 (1): 66–72.Find this resource:
Cnudde, C. F., and D. J. McCrone. 1969. “Party Competition and Welfare Policies in the American States.” American Political Science Review 63 (3): 858–866.Find this resource:
(p. 334) Cohen, J. E., ed. 2006. Public Opinion in State Politics. Stanford, CA: Stanford University Press.Find this resource:
Converse, P. E. 1964. “The Nature of Belief Systems in Mass Publics.” In Ideology and Discontent, edited by D. Apter, 206–261. New York: Free Press.Find this resource:
Dawson, R. E., and J. A. Robinson. 1963. “Inter-Party Competition, Economic Variables, and Welfare Policies in the American States.” Journal of Politics 25 (2): 265–289.Find this resource:
Della Karpini, Michael X. and Scott Keeter. 1996. What Americans Know About Politics and Why It Matters. New Haven, CT: Yale University Press.Find this resource:
Dye, T. R. 1965. “Malaportionment and Public Policy in the States.” Journal of Politics 27 (3): 586–601.Find this resource:
Dye, T. R. 1969a. “Income Inequality and American State Politics.” American Political Science Review 63 (1): 157–162.Find this resource:
Dye, T. R. 1969b. “Executive Power and Public Policy in the States.” Western Political Quarterly 22 (4): 926–939.Find this resource:
Erikson, R. S. 1976. “The Relationship Between Public Opinion and State Policy: A New Look Based on Some Forgotten Data.” American Journal of Political Science 20 (1): 25–36.Find this resource:
Erikson, R. S. 1978. “Constituency Opinion and Congressional Behavior: A Reexamination of the Miller-Stokes Representation Data.” American Journal of Political Science 22 (3): 511–535.Find this resource:
Erikson, R. S. 1981. “Measuring Constituency Opinion: The 1978 Congressional Election Study.” Legislative Studies Quarterly 6 (2): 235–545.Find this resource:
Erikson, R. S., and G. C. Wright. 1980. “Policy Representation of Constituency Interests.” Political Behavior 2 (1): 91–106.Find this resource:
Erikson, R. S., G. C. Wright, and J. P. McIver. 1993. Statehouse Democracy: Public Opinion and Policy in the American States. New York: Cambridge University Press.Find this resource:
Erikson, R. S., G. C. Wright, and J. P. McIver. 2006. “Public Opinion in the States: A Quarter Century of Change and Stability.” In Public Opinion in State Politics, edited by J. E. Cohen, 228–253. Stanford, CA: Stanford University Press.Find this resource:
Fenno, R. F. 1978. Homestyle: House Members in Their Districts. Boston: Little, Brown.Find this resource:
Fleisher, R. 1993. “Explaining the Change in Roll-Call Voting Behavior of Southern Democrats.” Journal of Politics 55 (2): 327–341.Find this resource:
Fry, B. R., and R. F. Winters. 1970. “The Politics of Redistribution.” American Political Science Review 64 (2): 508–522.Find this resource:
Gelman, A., and J. Hill. 2007. Data Analysis Using Regression and Multilevel Hierarchical Models. Cambridge, UK: Cambridge University Press.Find this resource:
Gelman, A., and T. C. Little. 1997. “Poststratification into Many Categories Using Hierarchical Logistic Regression.” Survey Methodology 23 (2): 127–135.Find this resource:
Gibson, J. 1988. “Political Intolerance and Political Repression During the McCarthy Red Scare.” American Political Science Review 82 (2): 511–529.Find this resource:
Glazer, A., and M. Robbins. 1985. “Congressional Responsiveness to Constituency Change.” American Journal of Political Science 29 (2): 259–273.Find this resource:
Godwin, R. K., and W. B. Shepard. 1976. “Political Processes and Public Expenditures: A Re-examination Based on Theories of Representative Government.” American Political Science Review 70 (4): 1127–1135.Find this resource:
Green, D. P., and A. E. Gerken. 1989. “Self-Interest and Public Opinion Toward Smoking Restrictions and Cigarette Taxes.” Public Opinion Quarterly 53 (1): (Spring): 1–16.Find this resource:
Hofferbert, R. I. 1966. “The Relation between Public Policy and Some Structural and Environmental Variables in the American States.” American Political Science Review 60 (1): 73–82.Find this resource:
(p. 335) Holbrook-Provow, T. M., and S. C. Poe. 1987. “Measuring State Political Ideology.” American Politics Quarterly 15 (3): 399–416.Find this resource:
Jennings, E. T., Jr. 1979. “Competition, Constituencies, and Welfare Policies in the American States.” American Political Science Review 73 (2): 414–429.Find this resource:
Jacobs, L. R., and R. Y. Shapiro. 1994. “Studying Substantive Democracy.” PS: Political Science and Politics 27 (1): 9–17.Find this resource:
Jacobs, L. R., and R. Y. Shapiro. 2000. Politicians Don’t Pander: Political Manipulation and the Loss of Democratic Responsiveness. Chicago: University of Chicago Press.Find this resource:
Johannes, J. R. 1984. To Serve the People: Congress and Constituency Service. Lincoln: University of Nebraska Press.Find this resource:
Johnson, M., P. Brace, and K. Arceneaux. 2005. “Public Opinion and Dynamic Representation in the American States: The Case of Environmental Attitudes.” Social Science Quarterly 86 (1): 87–108.Find this resource:
Jones, R. S., and W. E. Miller. 1984. “State Polls: Promising Data Sources for Political Research.” Journal of Politics 46 (4): 1182–1192.Find this resource:
Joslyn, R. A. 1980. “Manifestations of Elazar’s Political Subcultures: State Public Opinion and the Content of Political Campaign Advertising.” Publius 10 (2): 37–58.Find this resource:
Kalt, J. P., and M. A. Zupan. 1984. “Capture and Ideology in the Economic Theory of Politics.” American Economic Review 74 (3): 279–300.Find this resource:
Kastellec, J. P., J. R. Lax, and J. H. Phillips. 2010. “Public Opinion and Senate Confirmation of Supreme Court Nominees.” Journal of Politics 72 (3): 767–784.Find this resource:
Kernell, G. 2009. “Giving Order to Districts: Estimating Voter Distributions with National Election Returns.” Political Analysis 17(3): 215–235.Find this resource:
Key, V. O. 1949. Southern Politics in State and Nation. New York: Knopf.Find this resource:
Kuklinski, J. H. 1977. “Constituency Opinion: A Test of the Surrogate Model.” Public Opinion Quarterly 41 (1): 34–40.Find this resource:
Lascher, E. L., M. G. Hagen, and S. A. Rochlin. 1996. “Gun Behind the Door? Ballot Initiatives, State Policies, and Public Opinion.” Journal of Politics 58 (3): 760–775.
Lax, J. R., and J. H. Phillips. 2009a. “Gay Rights in the States: Public Opinion and Policy Responsiveness.” American Political Science Review 103 (3): 367–386.
Lax, J. R., and J. H. Phillips. 2009b. “How Should We Estimate Public Opinion in the States?” American Journal of Political Science 53 (1): 107–121.
Lenz, G. S. 2012. Follow the Leader? How Voters Respond to Politicians’ Policies and Performance. Chicago: University of Chicago Press.
LeoGrande, W., and A. S. Jeydel. 1997. “Using Presidential Election Returns to Measure Constituency Ideology: A Research Note.” American Politics Quarterly 25 (1): 3–19.
Levendusky, M. S., J. C. Pope, and S. D. Jackman. 2008. “Measuring District-Level Partisanship with Implications for the Analysis of U.S. Elections.” Journal of Politics 70 (3): 736–753.
McCrone, D. J., and J. H. Kuklinski. 1979. “The Delegate Theory of Representation.” American Journal of Political Science 23 (2): 278–300.
McDonagh, E. L. 1992. “Representative Democracy and State Building in the Progressive Era.” American Political Science Review 86: 938–950.
Miller, W. E., and D. E. Stokes. 1963. “Constituency Influence in Congress.” American Political Science Review 57 (1): 45–56.
Mooney, C. Z., and M.-H. Lee. 2000. “The Influence of Values on Consensus and Contentious Morality Policy: U.S. Death Penalty Reform, 1956–1982.” Journal of Politics 62 (1): 223–239.
Nice, D., and J. Cohen. 1983. “Ideological Consistency among State Party Delegations to the U.S. House, Senate, and National Conventions.” Social Science Quarterly 64 (4): 871–879.
Nicholson, S. P. 2003. “The Political Environment and Ballot Proposition Awareness.” American Journal of Political Science 47 (3): 403–410.
Norrander, B., and C. Wilcox. 1999. “Public Opinion and Policymaking in the States: The Case of Post-Roe Abortion Policy.” Policy Studies Journal 27 (4): 707–722.
Norrander, B. 2000. “The Multi-Layered Impact of Public Opinion on Capital Punishment Implementation in the American States.” Political Research Quarterly 53 (4): 771–793.
Norrander, B., and C. Wilcox. 2001. “Measuring State Public Opinion with the Senate National Election Study.” State Politics & Policy Quarterly 1 (1): 111–125.
Pacheco, J. 2011. “Using National Surveys to Measure Dynamic U.S. State Public Opinion: A Guideline for Scholars and an Application.” State Politics & Policy Quarterly 11 (4): 415–539.
Page, B. 2002. “The Semi-Sovereign Public.” In Navigating Public Opinion, edited by J. Manza, F. L. Cook, and B. I. Page, 325–344. New York: Oxford University Press.
Page, B. I., and R. Y. Shapiro. 1992. The Rational Public: Fifty Years of Trends in Americans’ Policy Preferences. Chicago: University of Chicago Press.
Page, B. I., R. Y. Shapiro, P. W. Gronke, and R. M. Rosenberg. 1984. “Constituency, Party and Representation in Congress.” Public Opinion Quarterly 48 (4): 741–756.
Palus, C. K. 2010. “Responsiveness in American Local Governments.” State and Local Government Review 42 (2): 133–150.
Park, D. K., A. Gelman, and J. Bafumi. 2004. “Bayesian Multilevel Estimation with Poststratification: State-Level Estimates from National Polls.” Political Analysis 12 (4): 375–385.
Park, D. K., A. Gelman, and J. Bafumi. 2006. “State-Level Opinions from National Surveys: Poststratification Using Multilevel Logistic Regression.” In Public Opinion in State Politics, edited by J. Cohen, 209–228. Palo Alto, CA: Stanford University Press.
Parry, J. A., B. Kisida, and R. E. Langley. 2008. “The State of State Polls: Old Challenges, New Opportunities.” State Politics & Policy Quarterly 8 (2): 198–216.
Peltzman, S. 1984. “Constituent Interest and Congressional Voting.” Journal of Law and Economics 27 (1): 181–210.
Percival, G. L., M. Johnson, and M. Neiman. 2009. “Representation and Local Policy: Relating County-Level Public Opinion to Policy Outputs.” Political Research Quarterly 62 (1): 164–177.
Pool, I. D. S., and R. Abelson. 1961. “The Simulmatics Project.” Public Opinion Quarterly 25 (2): 167–183.
Pool, I. D. S., R. P. Abelson, and S. Popkin. 1965. Candidates, Issues and Strategies. Cambridge, MA: MIT Press.
Popkin, S. 1994. The Reasoning Voter: Communication and Persuasion in Presidential Elections. Chicago: University of Chicago Press.
Popkin, S., J. Gorman, C. Phillips, and J. Smith. 1976. “Comment: What Have You Done for Me Lately? Toward a Theory of Voting.” American Political Science Review 70 (September): 779–805.
Seidman, D. 1973. “Simulation of Public Opinion: A Caveat.” Public Opinion Quarterly 39 (3): 331–342.
Shapiro, R. Y. 2011. “Public Opinion and American Democracy.” Public Opinion Quarterly 75 (5): 982–1017.
Sharkansky, I. 1968. Spending in the American States. Chicago: Rand McNally.
Sharkansky, I., and R. I. Hofferbert. 1969. “Dimensions of State Politics, Economics, and Public Policy.” American Political Science Review 63 (3): 867–880.
Sinclair-Deckard, B. 1976. “Electoral Marginality and Party Loyalty in the House.” American Journal of Political Science 20 (3): 469–481.
Stimson, J. A., M. B. MacKuen, and R. S. Erikson. 1995. “Dynamic Representation.” American Political Science Review 89 (3): 543–565.
Stouffer, S. A. 1955. Communism, Conformity, and Civil Liberties: A Cross-Section of the Nation Speaks. Garden City, NY: Doubleday.
Sullivan, J. L., and D. R. Minns. 1976. “Ideological Distance between Candidates: An Empirical Examination.” American Journal of Political Science 20 (3): 439–469.
Sullivan, J. L., and E. M. Uslaner. 1978. “Congressional Behavior and Electoral Marginality.” American Journal of Political Science 22 (3): 536–553.
Tausanovitch, C., and C. Warshaw. 2013. “Measuring Constituent Policy Preferences in Congress, State Legislatures and Cities.” Journal of Politics 75 (2): 330–342.
Tausanovitch, C., and C. Warshaw. 2014. “Representation in Municipal Government.” American Political Science Review 108 (3): 605–641.
Tiebout, C. 1956. “A Pure Theory of Local Expenditures.” Journal of Political Economy 64 (5): 416–424.
Trounstine, J. 2010. “Representation and Accountability in Cities.” Annual Review of Political Science 13: 407–423.
Uslaner, E. M., and R. E. Weber. 1979. “U.S. State Legislators’ Opinions and Perceptions of Constituency Attitudes.” Legislative Studies Quarterly 4 (4): 563–585.
Warshaw, C., and J. Rodden. 2012. “How Should We Measure District-Level Public Opinion on Individual Issues?” Journal of Politics 74 (1): 203–219.
Weber, R. E., A. H. Hopkins, M. L. Mezey, and F. J. Munger. 1972. “Computer Simulation of State Electorates.” Public Opinion Quarterly 36 (4): 549–565.
Weber, R. E., and W. R. Shaffer. 1972. “Public Opinion and American State Policymaking.” Midwest Journal of Political Science 16 (4): 683–699.
Whittaker, M., G. M. Segura, and S. Bowler. 2005. “Racial/Ethnic Group Attitudes toward Environmental Protection in California: Is ‘Environmentalism’ Still a White Phenomenon?” Political Research Quarterly 58 (3): 435–447.
Wlezien, C. 1995. “The Public as Thermostat: Dynamics of Preferences for Spending.” American Journal of Political Science 39 (4): 981–1000.
Wlezien, C. 2004. “Patterns of Representation: Dynamics of Public Preferences and Policy.” Journal of Politics 66 (1): 1–24.
Wlezien, C. 2011. Public Opinion and Public Policy in Advanced Democracies. Oxford Bibliographies Online. Oxford, UK: Oxford University Press.
Wright, G., R. S. Erikson, and J. P. McIver. 1985. “Measuring State Partisanship and Ideology Using Survey Data.” Journal of Politics 47 (2): 469–489.
Wright, G., and J. P. McIver. 2007. “Measuring the Public’s Ideological Preferences in the 50 States: Survey Responses versus Roll Call Data.” State Politics & Policy Quarterly 7 (2): 141–151.
Zaller, J. R. 1992. The Nature and Origins of Mass Opinion. New York: Cambridge University Press.