Latent Constructs in Public Opinion

Abstract and Keywords

Many of the most important constructs in public opinion research are abstract, latent quantities that cannot be directly observed from individual questions on surveys. Examples include ideology, political knowledge, racial prejudice, and consumer confidence. In each of these examples, individual survey questions are merely noisy indicators of the theoretical quantities that scholars are interested in measuring. This chapter describes a number of approaches for measuring latent constructs such as these at both the individual and group levels. It also discusses a number of substantive applications of latent constructs in public opinion research. Finally, it discusses methodological frontiers in the measurement of latent constructs.

Keywords: public opinion, latent variables, ideology, representation, measurement

Introduction

Many of the most important constructs in public opinion research are abstract, latent quantities that cannot be directly observed from individual questions on surveys. The accurate measurement of these concepts “is a cornerstone of successful scientific inquiry” (Delli Carpini and Keeter 1993, 1203). Some prominent examples of latent constructs in public opinion research are policy mood, political knowledge, racial resentment, consumer confidence, political activism, and trust in government. In each instance the available data on surveys are merely noisy indicators of the theoretical quantities that scholars are interested in measuring. Thus, multiple indicators are necessary to construct a holistic measure of the latent quantity (Jackman 2008). For example, imagine that scholars wanted to measure religiosity (e.g., McAndrew and Voas 2011; Margolis 2013). It is self-evident that self-reports of church attendance on a survey are merely noisy indicators of respondents’ underlying religiosity. Moreover, they capture only one aspect of religiosity. Scholars could construct a more holistic measure of citizens’ underlying religiosity by averaging across multiple indicators of religiosity, such as church attendance, membership in religious organizations, belief in God, donations to a church, and so forth.

There are a number of reasons to believe that survey items are often best viewed as noisy indicators of underlying latent attitudes (see Jackman 2008 for a review).1 One plausible view is that individual survey questions have measurement error due to vague or confusing question wording (Achen 1975). Another view is that survey respondents sample from a set of mentally accessible considerations when they provide their responses to individual questions (Zaller and Feldman 1992). If a respondent answered the same survey question multiple times, he or she would provide slightly different responses each time even though the underlying latent trait is stable. Measurement error on surveys could also be driven by the conditions of the interview (mode, location, time of day, etc.), the respondents’ level of attentiveness on the survey (Berinsky, Margolis, and Sances 2014), or characteristics of the interviewer (e.g., attentiveness, race, ethnicity, gender, education level) (e.g., Anderson, Silver, and Abramson 1988).

Overall, this perspective suggests that the use of multiple indicators almost always reduces measurement error and improves estimates of the underlying latent construct (Ansolabehere, Rodden, and Snyder 2008). As more indicators become available, the measurement of the latent construct of interest will generally become more accurate. In addition, recent work shows how survey designers can use computerized adaptive testing (CAT) to further improve measurement accuracy and precision (Montgomery and Cutler 2013).

Examples of Latent Public Opinion Constructs

There are a number of prominent examples of latent constructs in public opinion research. One is policy liberalism or mood. Surveys typically include many questions about respondents’ preferences on individual policies. They might include questions about universal healthcare, abortion, welfare, tax cuts, and environmental policy. One approach is to analyze these questions separately (e.g., Lax and Phillips 2009a; Broockman 2016). However, in practice survey respondents’ views on these individual questions are generally highly correlated with one another. If respondents have liberal views on universal healthcare, they probably also have liberal views on other policy issues. This is because responses on individual policy questions largely stem from respondents’ underlying ideological attitudes. Thus, their views on many policy questions can be mapped onto a one- or two-dimensional policy liberalism scale (Ansolabehere, Rodden, and Snyder 2008; Treier and Hillygus 2009; Bafumi and Herron 2010; Tausanovitch and Warshaw 2013).2 Moreover, when individuals’ responses are averaged across many issue questions, their latent policy liberalism tends to be very stable over time (Ansolabehere, Rodden, and Snyder 2008).

Another prominent latent construct is political knowledge. A variety of theories suggest that variation in political knowledge influences political behavior. Like other latent constructs, political knowledge cannot be directly measured with a single survey question (Delli Carpini and Keeter 1993). At best, individual survey questions capture a subset of citizens’ knowledge about politics. Instead, political knowledge is best thought of as an agglomeration of citizens’ knowledge of many aspects of the political process. Indeed, researchers have found that one (Delli Carpini and Keeter 1993) or two (Barabas et al. 2014) latent dimensions capture the bulk of the variation in citizens’ political knowledge.

Racial prejudice and resentment are core concepts in the field of political behavior. Indeed, racial resentment has been shown to influence a variety of political attitudes and actions. But there is no way to capture racial prejudice or resentment through a single survey question. Instead, researchers typically ask respondents many questions that serve as indicators of prejudice. Responses to all of these questions are then aggregated to produce a summary measure of prejudice (e.g., Kinder and Sanders 1996; Tarman and Sears 2005; Carmines, Sniderman, and Easter 2011).

One of the most important metrics of the health of the U.S. economy is consumer confidence. The University of Michigan has used public opinion surveys to track consumer confidence since the late 1940s (Mueller 1963; Ludvigson 2004). Consumer confidence is measured using an index of multiple survey questions that all tap into consumers’ underlying, latent views about the economy. This index has been used in a huge literature in economics, finance, and political economy (e.g., De Boef and Kellstedt 2004; Ludvigson 2004; Lemmon and Portniaguina 2006).

Measuring Latent Opinion at the Individual Level

Scholars have used a variety of models to measure latent variables at the individual level. The objective of each of these models is to measure a continuous latent variable using responses to a set of survey questions that are assumed to be a function of that latent variable. In this section I discuss the four most common measurement techniques: additive scales, factor analysis, item response models, and mixed models for combinations of continuous, ordinal, and binary data.

Additive Models

The simplest way to measure latent opinion is to just take the average of the responses to survey items that are thought to represent a particular latent variable (e.g., Abramowitz and Saunders 1998). For instance, imagine that a survey has four questions that tap into respondents’ political knowledge, including a question about the number of justices on the Supreme Court, one that asks respondents to name the current vice president, one that asks the percentage required for Congress to override a presidential veto, and one that asks the length of a president’s term. One way to measure political knowledge is to simply add up the number of correct answers to these four questions.
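A minimal sketch of this additive scale in R, assuming a hypothetical data frame of graded responses (1 = correct, 0 = incorrect):

    # Additive knowledge scale: count of correct answers across four items.
    # `knowledge` is a hypothetical data frame of 0/1 graded responses.
    knowledge <- data.frame(
      justices = c(1, 0, 1),  # number of Supreme Court justices
      veep     = c(1, 1, 0),  # name of the current vice president
      override = c(0, 0, 1),  # share of Congress needed to override a veto
      term     = c(1, 1, 1)   # length of a president's term
    )
    knowledge$score <- rowSums(knowledge)  # additive scale running from 0 to 4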

In some cases, this simple approach may work well. However, additive scales have several major weaknesses vis-à-vis the more complex approaches discussed below. First, they treat all survey items identically and assume that every item contributes equally to the underlying latent dimension (Treier and Hillygus 2009). Second, it is difficult to determine the appropriate dimensionality of the latent scale using additive models. In the case of political knowledge, for example, Barabas et al. (2014) actually identify several theoretically important dimensions. Third, it is necessary to determine the correct polarity of each question in advance (e.g., which response is the “correct” or “liberal” answer). This is often infeasible for larger sets of questions or for complicated latent variables. Fourth, additive models are ill-suited to polytomous or continuous response data. Finally, additive models do not enable the characterization of measurement error or uncertainty.

Factor Analysis

Factor analysis is the most common latent variable model used in applied research (Jackman 2008). It has been used in a large number of studies to estimate the public’s latent policy liberalism (e.g., Ansolabehere, Rodden, and Snyder 2006, 2008; Carsey and Harden 2010), political knowledge (e.g., Delli Carpini and Keeter 1993), or racial prejudice (e.g., Tarman and Sears 2005). Factor analysis is based on the observed relationships among individual items on a survey. For instance, imagine a Bayesian model of citizens’ policy liberalism with a single latent factor, $\theta_i$. For each individual $i$, we observe $J$ continuous survey questions, denoted $\mathbf{y}_i = (y_{1i}, \ldots, y_{ji}, \ldots, y_{Ji})$. We can model $\mathbf{y}_i$ as a function of citizens’ policy liberalism ($\theta_i$) and item-specific factor loadings $\lambda = (\lambda_1, \ldots, \lambda_j, \ldots, \lambda_J)$,

$$\mathbf{y}_i \sim \mathcal{N}_J(\lambda \theta_i, \Psi), \qquad (1)$$

where $\mathcal{N}_J$ denotes a $J$-dimensional multivariate normal distribution and $\Psi$ is a $J \times J$ covariance matrix (Quinn 2004).

Factor analysis models have a number of advantages over simple additive scales. They enable each survey item to differentially contribute to the latent construct. They also enable the construction of complex multidimensional scales. Finally, they enable the model to determine the polarity of each item. Factor analysis models can be run in the statistical program R using the psych or MCMCpack packages. They can also be easily estimated in other software packages such as Stata.
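As a minimal sketch, a one-factor model could be fit with the psych package as follows; the data frame policy_items and its contents are hypothetical:

    # One-factor model for continuous policy items via psych::fa().
    # `policy_items` is a hypothetical respondents-by-items data frame.
    library(psych)
    fa_fit <- fa(policy_items, nfactors = 1, fm = "ml", scores = "regression")
    fa_fit$loadings             # item-specific factor loadings (the lambdas)
    theta_hat <- fa_fit$scores  # factor scores: estimated policy liberalism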

Item Response Models for Dichotomous and Ordinal Data

Factor-analytic models assume that the observed indicators are continuous. Thus, conventional factor analysis can produce biased estimates of latent variables with binary indicators (Kaplan 2004). For binary variables, therefore, we need a different measurement model. The most common class of measurement models for binary survey items comes from item response theory (IRT) (see Johnson and Albert 2006). These models are also well-suited for Bayesian inference, which makes it possible to characterize the uncertainty in the latent scale. In addition, Bayesian IRT models can easily deal with missing data and survey items where respondents answer “Don’t know.”

The conventional two-parameter IRT model introduced to political science by Clinton, Jackman, and Rivers (2004) characterizes each policy response $y_{ij} \in \{0,1\}$ as a function of subject $i$’s latent ideology ($\theta_i$) and the difficulty ($\alpha_j$) and discrimination ($\beta_j$) of item $j$:

$$\Pr[y_{ij} = 1] = \Phi(\beta_j \theta_i - \alpha_j), \qquad (2)$$

where $\Phi$ is the standard normal cumulative distribution function (CDF) (Jackman 2009, 455; Fox 2010, 10). $\beta_j$ is referred to as the “discrimination” parameter because it captures the degree to which the latent trait affects the probability of a yes answer. If $\beta_j$ is 0, then question $j$ tells us nothing about the latent variable being measured. We would expect $\beta_j$ to be close to 0 if we ask a completely irrelevant question, such as one about the respondent’s favorite color. The “cut point” $\alpha_j/\beta_j$ is the value of the latent trait at which a respondent is equally likely to answer yes or no to the question.

Scholars can run Bayesian IRT models using off-the-shelf software such as MCMCpack (Martin et al. 2011) or the ideal function in the R package pscl (Jackman 2012). They can also run fast approximations of some types of IRT models using the R package emIRT (Imai, Lo, and Olmsted Forthcoming).3 For more complicated IRT models, they can use fully Bayesian software such as JAGS or Stan.
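As an illustration, the sketch below fits a one-dimensional two-parameter model with the ideal function in pscl; the response matrix and its name are hypothetical:

    # Two-parameter Bayesian IRT model via pscl::ideal().
    # `responses` is a hypothetical respondents-by-items matrix of 1/0 answers.
    library(pscl)
    rc <- rollcall(responses, yea = 1, nay = 0, missing = NA, notInLegis = NULL)
    fit <- ideal(rc, d = 1,           # one latent dimension
                 normalize = TRUE,    # identify the scale (mean 0, sd 1)
                 store.item = TRUE)   # keep draws of the item parameters
    theta_hat <- fit$xbar             # posterior means of the latent trait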

Models for Mixtures of Continuous, Ordinal, and Binary Data

Factor analytic models are best suited for continuous data, while IRT models are best suited for binary and ordinal data (Treier and Hillygus 2009). To measure latent variables that are characterized by a variety of different types of indicators (continuous, ordinal, binary), it is necessary to use a model appropriate for mixed measurement responses (Quinn 2004). Such a model characterizes a latent variable using a mixture of link functions tailored to each type of indicator. The R package MCMCpack implements a Bayesian mixed-data factor analysis model that can be used with survey data (Martin et al. 2011). It is also possible to develop more complicated models for mixed data using fully Bayesian software such as JAGS or Stan.
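A minimal sketch with MCMCpack’s mixed-data factor analysis function, assuming a hypothetical data frame in which ordered factors serve as ordinal indicators and numeric columns as continuous ones:

    # Mixed-data factor analysis via MCMCpack::MCMCmixfactanal().
    # `dat` is a hypothetical data frame; ordered factors are treated as
    # ordinal indicators and numeric columns as continuous indicators.
    library(MCMCpack)
    fit <- MCMCmixfactanal(~ attend + belief + donate, factors = 1,
                           data = dat, store.scores = TRUE,
                           burnin = 1000, mcmc = 20000)
    summary(fit)  # posterior summaries of loadings and factor scores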

Evaluating the Success of a Latent Variable Model

The quality of the inferences about a latent variable is usually assessed with reference to two key concepts: validity and reliability (Jackman 2008). The concept of validity taps the idea that a latent variable model should generate unbiased measures of the concept that “it is supposed to measure” (Bollen 1989, 184). The concept of reliability taps into the amount of measurement error in a given set of estimates.

Adcock and Collier (2001) suggest a useful framework for evaluating the validity of a measurement model. First, they suggest that models should be evaluated for their content validity. Are the indicators of the latent variable operationalizing the full substantive content of the latent construct? To assess this, they suggest examining whether “key elements are omitted from the indicator,” as well as whether “inappropriate elements are included in the indicator” (538). For example, indicators for respondents’ latent opinion about climate change should be substantively related to climate change rather than some other policy area. Moreover, they should include all relevant substantive areas related to citizens’ views on climate change.

Next, Adcock and Collier (2001) suggest that models should be evaluated for their convergent validity. Are the estimates of a latent variable closely related to other measures known to be valid measures of the latent construct? For example, estimates of respondents’ policy liberalism should be highly correlated with their symbolic ideology.

Third, they suggest that models should be evaluated for their construct validity. Do the estimates of a latent variable correspond to theoretically related concepts? This form of validation is particularly useful when there is a well-understood causal relationship between two related concepts. For example, estimates of policy liberalism should be closely related to respondents’ voting behavior and partisan identification.

The concept of reliability assesses the amount of measurement error in a set of estimates. A measurement would be unreliable if it contained large amounts of random error (Adcock and Collier 2001). The reliability of a measure is crucial for determining its usefulness for applied research. Indeed, measurement error in latent variables used as regression predictors leads to severely biased estimates in substantive analyses (Jackman 2008; Treier and Jackman 2008).

Depending on the data sources available, there are a number of ways to assess the reliability of a measurement. One of the most popular approaches is to use “test-retest” reliability (Jackman 2008). Under the assumption that the latent variable does not change, the correlation between the measure of the latent variable in two time periods is an estimate of the reliability of the measures. Ansolabehere, Rodden, and Snyder (2008) use this approach to assess the stability of the mass public’s policy liberalism across panel waves of the American National Election Study (ANES). They find that measures of individuals’ policy liberalism in one wave are strongly correlated with a measure of their latent policy liberalism two or four years later. Another approach is to examine inter-item reliability with a statistic, such as Cronbach’s alpha, that combines the average level of correlation among the survey items used to generate a latent construct with the number of items.
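Both checks are straightforward to compute. The sketch below assumes hypothetical individual-level estimates from two panel waves and a hypothetical data frame of raw items:

    # Two reliability checks on hypothetical data.
    library(psych)
    # Test-retest: correlation of the same measure across two panel waves,
    # assuming numeric vectors theta_wave1 and theta_wave2.
    cor(theta_wave1, theta_wave2, use = "pairwise.complete.obs")
    # Inter-item: Cronbach's alpha over the raw survey items in the scale,
    # assuming a respondents-by-items data frame `items`.
    alpha(items)$total$raw_alpha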

Jackman (2008) points out that there is often a “bias-variance trade-off” in latent variable estimation. Increasing the number of indicators used in a latent variable model may increase the reliability of the resulting estimates at the cost of less content validity. For example, imagine that a researcher wanted to measure the public’s latent views about abortion policy. Given the low-dimensional structure of the mass public’s policy liberalism, the researcher would probably be able to increase the reliability of her measure by including survey items about other issue areas in her measurement model. However, this approach would violate Adcock and Collier’s (2001) dictum that indicators for a particular latent construct should be substantively related to the construct being measured rather than to some other policy area.

Individual-Level Applications

Latent public opinion constructs have been used for a wide variety of substantive applications in political science. In this section I briefly discuss two of these applications.

Polarization

It is widely agreed that the latent ideology of members of Congress and other elites has grown increasingly polarized in recent decades (Poole and Rosenthal 2007). Are the changes in elite polarization caused by increasing polarization at the mass level (Barber and McCarty 2015)? To address this question we need holistic measures of the individual-level policy liberalism of the American public at a variety of points in time. Hill and Tausanovitch (2015) do this using data from the ANES. They find little increase in the polarization of the mass public’s policy liberalism between 1956 and 2012. Their results strongly suggest that elite polarization is not caused by changes in mass polarization (see Barber and McCarty 2015 for more on this debate).

Outside of the United States there has been less work on the structure of the mass public’s preferences. One recent exception is China, where several papers have examined the mass public’s policy preferences along one or more dimensions (e.g., Lu, Chu, and Shen 2016; Pan and Xu 2015). For example, Pan and Xu (2015) identify a single, dominant ideological dimension to public opinion in China. They find that individuals expressing preferences associated with political liberalism, favoring constitutional democracy and individual liberty, are also more likely to express preferences associated with economic liberalism, such as endorsement of market-oriented policies, and preferences for social liberalism, such as the value of sexual freedom. Notably, they also find little evidence of polarization in the Chinese public’s policy preferences.

Political Knowledge

The causes and consequences of variation in citizens’ political knowledge are core questions in the literature on political behavior (e.g., Mondak 2001). A large literature uses scaled measures of latent political knowledge in the American context. For example, many studies examine the consequences of variation in political knowledge for political accountability and representation. Jessee (2009) and Shor and Rogowski (2010) find that higher-knowledge individuals are more likely to hold legislators accountable for their roll-call positions. Bartels (1996) finds that variation in political knowledge has important consequences for the outcomes of elections.

There is a smaller literature that focuses on the causes and consequences of variation in political knowledge outside of the United States. For example, Pereira (2015) measures cross-national variation in political knowledge in Latin America based on a Bayesian item response model that explicitly accounts for differences in the questions across countries. Using surveys from Latin America and the Caribbean, he demonstrates that contextual factors such as level of democracy, investments in telecommunications, ethnolinguistic diversity, and type of electoral system have substantial effects on knowledge.

Measuring Latent Opinion at the Group Level

While many research questions require individual-level estimates of latent opinion, a number of other research questions focus on the effect of variation in group-level opinion on salient political outcomes. For example, scholars often seek to characterize changes in the policy mood of the electorate (e.g., Stimson 1991; Erikson, MacKuen, and Stimson 2002; Bartle, Dellepiane-Avellaneda, and Stimson 2011). Another important question in American politics is the dyadic link between constituents’ policy views and the roll-call votes of their legislators (Miller and Stokes 1963). To evaluate dyadic representation, scholars need measures of the public’s average policy preferences in each state or legislative district. Moreover, a variety of studies have gone even further and sought to examine whether some groups are represented better than others. Do legislators skew their roll-call votes toward the views of co-partisans (Kastellec et al. 2015; Hill 2015)? Are legislators more responsive to voters than nonvoters (Griffin and Newman 2005)? Do the wealthy get better representation than the poor (Bartels 2009; Gilens 2012; Erikson 2015)? To address these sorts of questions, scholars need accurate measures of the average latent preferences for each group.

Disaggregation

The simplest way to estimate group-level opinion is to measure latent opinion at the individual level and then take the mean in each group. For example, Carsey and Harden (2010) use a factor analytic model to measure the public’s policy liberalism in the United States in 2010. Then they measure state-level opinion by taking the average opinion in each state. Lax and Phillips (2009b) call this approach “disaggregation.” The primary advantage of disaggregation is that scholars can estimate latent opinion with a set of individual-level survey questions, an appropriate individual-level measurement model (e.g., a factor analytic or IRT model), and the respondent’s place of residence (e.g., Erikson, Wright, and McIver 1993; Brace et al. 2002). Thus, it is very straightforward for applied researchers to generate estimates of public opinion in each geographic unit. However, there are rarely enough respondents to generate precise estimates of the preferences of people in small geographic areas using simple disaggregation. Most surveys have only a handful of respondents in each state and even fewer in particular legislative districts or cities.
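In code, disaggregation amounts to a group average; the data frame and column names below are hypothetical:

    # Disaggregation: average individual-level latent estimates by state.
    # `survey` is a hypothetical data frame with columns theta_hat and state.
    state_means <- tapply(survey$theta_hat, survey$state, mean, na.rm = TRUE)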

Smoothing Opinion Using Multilevel Regression and Post-stratification (MRP)

A more nuanced approach is to combine individual-level estimates of latent opinion with a measurement model that smooths opinion across geographic space (e.g., Tausanovitch and Warshaw 2013). Indeed, even very large sample surveys can contain small or even empty samples for many geographic units. In such cases, opinion estimates for subnational units can be improved through the use of multilevel regression and post-stratification (MRP) (Park, Gelman, and Bafumi 2004). The idea behind MRP is to model respondents’ opinion hierarchically based on demographic and geographic predictors, partially pooling respondents in different geographic areas to an extent determined by the data. The smoothed estimates of opinion in each geographic-demographic cell (e.g., Hispanic women with a high school education in Georgia) are then weighted to match the cells’ proportion in the population, yielding estimates of average opinion in each area. These weights are generally built from census-based population targets, though more complicated weighting designs are sometimes used (Ghitza and Gelman 2013). Subnational opinion estimates derived from MRP models have been shown to be more accurate than ones based on alternative methods, even with survey samples of only a few thousand people (Park, Gelman, and Bafumi 2004; Lax and Phillips 2009b; Warshaw and Rodden 2012; but see Buttice and Highton 2013 for a cautionary note).

Scholars can build state-level MRP models in R using the mrp (Malecki et al. 2014) or dgo (Dunham, Caughey, and Warshaw 2016) packages. They can program customized MRP models using the glmer function in the lme4 package.4 More complicated MRP models can be built using fully Bayesian software such as JAGS or Stan.
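The sketch below illustrates a bare-bones MRP workflow with glmer; the survey and post-stratification data frames, and all variable names, are hypothetical:

    # Bare-bones MRP: multilevel model plus post-stratification.
    # `survey` holds respondent-level data; `cells` is a hypothetical
    # post-stratification frame with one row per demographic-geographic
    # cell and its population count n.
    library(lme4)
    fit <- glmer(support ~ (1 | state) + (1 | race) + (1 | educ),
                 data = survey, family = binomial(link = "logit"))
    cells$pred <- predict(fit, newdata = cells, type = "response",
                          allow.new.levels = TRUE)
    # Post-stratify: population-weighted average of cell predictions by state.
    state_est <- tapply(cells$pred * cells$n, cells$state, sum) /
      tapply(cells$n, cells$state, sum)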

Hierarchical Group-Level IRT Model

Most public opinion surveys only contain a handful of questions about any particular latent construct. For example, most surveys only contain a few questions about policy. Moreover, they might only contain one question about other latent constructs such as trust in government or political activism. The sparseness of questions in most surveys largely precludes the use of respondent-level dimension-reduction techniques on the vast majority of available public opinion data. To overcome this problem, scholars have developed a variety of measurement models that are estimated at the level of groups rather than individuals (Stimson 1991; Lewis 2001; McGann 2014). This enables scholars to measure latent constructs using data from surveys that only ask one or two questions about the construct of interest, which would be impossible with models that are estimated at the individual level. For example, Caughey and Warshaw (2015) develop a group-level IRT model that estimates latent group opinion as a function of demographic and geographic characteristics, smoothing the hierarchical parameters over time via a dynamic linear model. They reparameterize equation (2) as

$$p_{ij} = \Phi[(\theta_i - \kappa_j)/\sigma_j], \qquad (3)$$

where $\kappa_j = \alpha_j/\beta_j$ and $\sigma_j = \beta_j^{-1}$ (Fox 2010, 11). In this formulation, the item threshold $\kappa_j$ represents the ability level at which a respondent has a 50% probability of answering question $j$ correctly.5 The dispersion $\sigma_j$, which is the inverse of the discrimination $\beta_j$, represents the magnitude of the measurement error for item $j$. Given the normal ogive IRT model and normally distributed group abilities, the probability that a randomly sampled member of group $g$ correctly answers item $j$ is

$$p_{gj} = \Phi\left[\frac{\bar{\theta}_g - \kappa_j}{\sqrt{\sigma_\theta^2 + \sigma_j^2}}\right], \qquad (4)$$

where $\bar{\theta}_g$ is the mean of the $\theta_i$ in group $g$, $\sigma_\theta$ is the within-group standard deviation of abilities, and $\kappa_j$ and $\sigma_j$ are the threshold and dispersion of item $j$ (Mislevy 1983, 278).

Rather than modeling the individual responses $y_{ij}$, as in a typical IRT model, Caughey and Warshaw (2015) instead model $s_{gj} = \sum_{i=1}^{n_{gj}} y_{i[g]j}$, the total number of correct answers to question $j$ among the $n_{gj}$ responses of individuals in group $g$ (e.g., Ghitza and Gelman 2013). Assuming that each respondent answers one question and each response is independent conditional on $\theta_i$, $\kappa_j$, and $\sigma_j$, the number of correct answers to item $j$ in each group, $s_{gj}$, is distributed $\text{Binomial}(n_{gj}, p_{gj})$, where $n_{gj}$ is the number of nonmissing responses. Caughey and Warshaw (2015) then smooth the group estimates using a hierarchical model in which group means are a function of each group’s demographic and geographic characteristics (Park, Gelman, and Bafumi 2004).
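Equation (4) and the implied binomial likelihood are simple to compute directly, as in the sketch below, which uses illustrative (hypothetical) parameter values:

    # Equation (4): probability that a randomly sampled member of group g
    # answers item j correctly, for hypothetical group and item parameters.
    p_gj <- function(theta_bar, kappa, sigma_theta, sigma_j) {
      pnorm((theta_bar - kappa) / sqrt(sigma_theta^2 + sigma_j^2))
    }
    p <- p_gj(theta_bar = 0.2, kappa = 0, sigma_theta = 1, sigma_j = 0.5)
    dbinom(35, size = 50, prob = p)  # likelihood of s_gj = 35 of n_gj = 50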

This group-level IRT model enables the use of data from hundreds of individual surveys, which may contain only one or two policy questions. As with the MRP models discussed above, the group-level estimates from this model can be weighted to generate estimates for geographic units. This approach enables scholars to measure policy liberalism and other latent variables across geographic space and over time in a unified framework. Scholars can run group-level IRT models using the R package dgo (Dunham, Caughey, and Warshaw 2016).
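A minimal dgo sketch follows; the item-level data frame and all variable names are hypothetical placeholders:

    # Group-level dynamic IRT via the dgo package.
    # `surveys` is a hypothetical data frame of pooled item-level responses.
    library(dgo)
    shaped <- shape(surveys,
                    item_names  = c("abortion", "gun_control"),  # item columns
                    time_name   = "year",    # repeated cross-section indicator
                    geo_name    = "state",   # geographic unit
                    group_names = "race")    # demographic grouping variable
    fit <- dgirt(shaped, iter = 2000, chains = 4)  # estimation runs in Stan
    summary(fit)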

Group-Level Applications

Latent public opinion constructs that are measured at the group level have been used for a wide variety of substantive applications in political science. In this section I briefly discuss three of these applications.

Describing Variation in Ideology Across Time and Space

One of the most basic tasks of public opinion research is to describe variation in the mass public’s views across time or geographic space. To this end, a large body of work in the American politics literature has focused on longitudinal variation in latent policy liberalism at the national level. For example, Stimson (1991) measures variation in the public’s policy mood at the national level in the United States over the past fifty years. Likewise, Bartle, Dellepiane-Avellaneda, and Stimson (2011) and McGann (2014) measure policy mood in Britain from 1950 to 2005; Stimson, Thiébaut, and Tiberj (2012) measure policy mood in France; and Munzert and Bauer (2013) measure changes in the public’s policy preferences in Germany.

Another large body of work focuses on measuring variation in latent policy liberalism across geography. For example, Carsey and Harden (2010) use an IRT model to measure variation in the public’s policy liberalism across the American states. However, their approach generates unstable estimates below the state level. To address this problem, Tausanovitch and Warshaw (2013) combine an IRT and MRP model to generate cross-sectional estimates of the public’s policy liberalism in every state, legislative district, and city in the country during the period 2000–2012. More recent work in the American politics literature has sought to measure variation in the public’s policy liberalism across both geographic space and time on a common scale. Enns and Koch (2013) measure state-level variation in policy mood between 1956 and 2010, while Caughey and Warshaw (2015) measure variation in policy liberalism in the American states between 1972 and 2012. Both studies produce estimates in every state-year during these periods.

There is also a growing literature that examines variation in latent opinion cross-nationally. Caughey, O’Grady, and Warshaw (2015) use a Bayesian group-level IRT model to develop measures of policy liberalism in Europe. They find that countries within Europe have become more polarized over time, and that patterns of ideology are starkly different across economic and cultural issues. Sumaktoyo (2015) measures religious conservatism in twenty-six Islamic countries. He finds that Afghanistan and Pakistan, along with several Arab countries, are the most conservative Islamic countries. In contrast, Turkey is relatively moderate. The only Muslim-majority countries that are less religiously conservative than Turkey are post-Soviet countries.

Representation in the United States

One of the foundations of representative democracy is the assumption that citizens’ preferences should correspond with, and inform, elected officials’ behavior. This form of representation is typically called dyadic representation (Miller and Stokes 1963; Weissberg 1978; Converse and Pierce 1986). Most of the literature in American politics on dyadic representation focuses on the association between the latent policy liberalism of constituents and the roll-call behavior of legislators. These studies generally find that legislators’ roll-call positions are correlated with the general ideological preferences of their districts (e.g., Clinton 2006). However, there is little evidence that candidates’ positions converge on the median voter (Ansolabehere, Snyder, and Stewart 2001; Lee, Moretti, and Butler 2004).

If legislators’ positions are not converging on the median voter, perhaps they are responding to the positions of other subconstituencies in each district, such as primary voters or other activists. Of course, this question is impossible to examine without good estimates of each subconstituency’s opinion in every legislative district. As a result, a variety of recent studies have used variants of the measurement models discussed above to examine the link between the policy liberalism of primary voters (Bafumi and Herron 2010; Hill 2015), donors (Barber 2016), and other subconstituencies and the roll-call behavior of legislators.

A growing body of work in American politics is moving beyond the study of dyadic representation in Congress to examine the links between public opinion and political outcomes at the state and local levels. Erikson, Wright, and McIver (1993) and many subsequent studies have examined representation at the state level. More recently, Tausanovitch and Warshaw (2014) extend the study of representation to the municipal level, where they find a strong link between public opinion and city policy outputs.

Racial Prejudice

Section 5 of the Voting Rights Act (VRA; 1965) targeted states that were purported to have high levels of racial prejudice. To evaluate the validity of the VRA’s coverage formula, it would be useful to have a measure of the level of racial prejudice in every state. To this end, Elmendorf and Spencer (2014) use an individual-level IRT model to scale the racial prejudice levels of approximately fifty thousand respondents to two large surveys in 2008. Then they use MRP to estimate the average level of racial prejudice in every state and county in the country. They find the highest levels of racial prejudice in southern states such as Mississippi and South Carolina. However, they also find high levels of racial prejudice in several other states, such as Wyoming, Pennsylvania, and Ohio. Their findings provide policymakers with information about contemporary levels of racial prejudice in the United States that could be useful for future revisions to the VRA and other federal laws protecting minorities.

Substantive Frontiers

Public opinion work utilizing latent variables is likely to pursue a variety of exciting, substantive directions in coming years. In this section I focus on three types of research that investigate the consequences of citizens’ latent policy liberalism for political outcomes. First, scholars are likely to focus more attention on spatial voting and electoral accountability. Second, the availability of new techniques for measuring changes in latent opinion over time will facilitate more attention on the dynamic responsiveness of elected officials and public policies to changes in the public’s views. Third, there is likely to be more focus on representation and dyadic responsiveness in comparative politics.

Spatial Voting

The theory of spatial or proximity voting (Black 1948; Downs 1957; Enelow and Hinich 1984) is one of the central ideas in scholarship on voting and elections. Its most important prediction is that the ideological positions of candidates and parties should influence voters’ decisions at the ballot box. This electoral connection helps ensure that legislators are responsive to the views of their constituents (Mayhew 1974).

In recent years a number of prominent papers in the American politics literature have examined whether citizens vote for the most spatially proximate congressional candidate (e.g., Jessee 2009; Joesten and Stone 2014; Shor and Rogowski 2010; Simas 2013). These studies all proceed by estimating the policy preferences of citizens and legislators on a common scale. This enables them to examine whether citizens vote for the most spatially proximate candidate. However, it is important to note that there are three major limitations of this literature. First, Lewis and Tausanovitch (2013) and Jessee (2016) show that joint scaling models rely on strong assumptions that undermine their plausibility. These studies suggest that scholars should exercise caution in using estimates that jointly scale legislators and the mass public into the same latent space. Second, Tausanovitch and Warshaw (2015) show that existing measures of candidates’ ideology only improve marginally on the widely available heuristic of party identification. As a result, they conclude that these measures fall short when it comes to testing theories of representation and spatial voting in Congress. Third, there is little attention to causal identification in the literature on spatial voting. Most studies in this literature use cross-sectional regressions that do not clearly differentiate spatial proximity between voters and candidates from other factors that may influence voters’ decisions at the ballot box.

Future studies on spatial voting in congressional elections are likely to use new advances in measurement and causal inference to overcome these limitations. There are likely to continue to be rapid advances in our ability to measure the ideology of political candidates. Moreover, Jessee (2016) points the way toward several promising approaches to improve the plausibility of models that jointly scale the policy liberalism of candidates and the mass public into the same space.

There is also a growing amount of work on spatial voting in comparative perspective. For example, Saiegh (2015) jointly scales voters, parties, and politicians from different Latin American countries in a common ideological space. This study’s findings indicate that ideology is a significant determinant of vote choice in Latin America. However, many of the challenges discussed above in the American context also face scholars of spatial voting in comparative politics.

Dynamic Representation in the United States

A limitation of virtually all of the existing studies on representation is that they use cross-sectional research designs. This makes it impossible to examine policy change, which is both theoretically limiting and problematic for strong causal inference since the temporal order of the variables cannot be established (Lowery, Gray, and Hager 1989; Ringquist and Garand 1999). Indeed, most existing studies cannot rule out reverse causation. For example, cross-sectional studies of dyadic representation in Congress could be confounded if legislators’ actions are causing changes in district-level public opinion (Lenz 2013; Grose, Malhotra, and Parks Van Houweling 2015). To address these concerns, the next generation of studies in this area is likely to focus on whether changes in public opinion lead to changes in political outcomes (e.g., Page and Shapiro 1983; Erikson, MacKuen, and Stimson 2002; Caughey and Warshaw 2016).

Representation in Comparative Politics

Compared to the United States, there has been much less attention to the study of mass-elite linkages in other advanced democracies (Powell 2004, 283–284). One of the primary barriers to research on representation in comparative politics has been the lack of good measures of constituency preferences. However, the availability of new models to scale latent opinion and of new methods to smooth the estimates of opinion across geography and over time has the potential to facilitate a new generation of research on representation in comparative politics (e.g., Lupu and Warner Forthcoming).

Hanretty, Lauderdale, and Vivyan (2016) examine the dyadic association between members of the British parliament and their constituencies. They use an IRT model to estimate the British public’s policy liberalism on economic issues and an MRP model to estimate the preferences of each constituency. They find a strong association between constituency opinion and members’ behavior on a variety of left-right issues.

The next generation of work on representation in comparative politics is likely to focus on whether public policies are responsive to public opinion and what institutional conditions facilitate responsiveness. Do changes in levels of government spending reflect dynamics in the mass public’s policy liberalism on economic issues (Soroka and Wlezien 2005)? Are the immigration policies of European countries responsive to the policy preferences of their citizens on immigration issues? Do countries’ decisions about war and peace reflect the latent preferences of citizens for retribution (Stein 2015)? Do changes in religious conservatism affect democratic stability or the onset of civil war (Sumaktoyo 2015)?

Methodological Frontiers

There are also a variety of important methodological frontiers in research on latent constructs in public opinion. An important one is the question of how to properly assess the appropriate number of dimensions required to summarize public opinion. Indeed, there is little agreement in the literature about how to assess the dimensionality of public opinion data. Another important frontier is the development of better computational methods to work with large public opinion data sets. Computational challenges are one of the main barriers facing scholars who wish to develop complicated latent variable models for large public opinion data sets. A third frontier is the continued development of better statistical methods to summarize latent opinion at the subnational level. Finally, there has recently been an explosion of work that examines public opinion using non-survey-based data. This work is likely to continue to grow in the years to come.

Assessing Dimensionality

The question of whether a particular latent construct is best modeled with one or multiple dimensions is not easily resolved. For example, a variety of studies find that the main dimension of latent policy liberalism or ideology in the United States is dominated by economic policy items (e.g., Ansolabehere, Rodden, and Snyder 2006). However, there is a vigorous, ongoing debate about whether social issues map to the main dimension or constitute a second dimension of latent policy liberalism. Some studies find that social issues constitute a second dimension of latent policy liberalism (Ansolabehere, Rodden, and Snyder 2006; Treier and Hillygus 2009), while others find that social issues map to the main dimension of policy liberalism (Jessee 2009; Tausanovitch and Warshaw 2013), at least in the modern era. One of the challenges in this literature has been that there is little agreement about how to assess the dimensionality of public opinion data. Another challenge is that existing computational approaches are often ill-suited to estimating multidimensional models.

Future studies should seek to rigorously examine the appropriate number of dimensions required to summarize public opinion. At a theoretical level, scholars should offer clear criteria for assessing the appropriate number of dimensions. At an empirical level, scholars should examine whether the dimensionality of the mass public’s policy liberalism, as well as other latent constructs, varies across geography or over time. For example, it is possible that the public’s policy liberalism was multidimensional during the mid-twentieth century but has gradually collapsed to a single dimension along similar lines to the increasingly one-dimensional roll-call voting in Congress.

Computational Challenges

Computational challenges are one of the main barriers facing scholars who wish to develop complicated latent variable models for large public opinion data sets. Standard Bayesian Markov chain Monte Carlo (MCMC) algorithms can be quite slow when applied to large data sets. As a result, researchers are often unable to estimate their models using all the data and are forced to make various shortcuts and compromises (Imai, Lo, and Olmsted 2015). Since a massive data set implies a large number of parameters under these models, the convergence of MCMC algorithms also becomes difficult to assess.

Fortunately there is a large body of ongoing work seeking to address the computational challenges in large-scale latent variable models. Andrew Gelman and his collaborators have recently developed the software package Stan to perform fully Bayesian inference (Gelman, Lee, and Guo 2015).6 While Stan is an improvement on earlier MCMC algorithms, it is still relatively slow with large data sets. An alternative approach is to utilize expectation-maximization (EM) algorithms that approximately maximize the posterior distribution under various ideal point models (Imai, Lo, and Olmsted 2015). The main advantage of EM algorithms is that they can dramatically reduce computational time. They can estimate an extremely large number of ideal points on a laptop within a few hours. However, they generally do not produce accurate estimates of uncertainty, which can reduce their usefulness for many empirical applications (Jackman 2008).7

Measuring Subnational Latent Opinion

Which groups are better represented? Are the rich better represented than the poor (Erikson 2015)? Do voters receive better representation than nonvoters (Griffin and Newman 2005)? Are whites better represented than racial minorities? To answer questions such as these, we need to develop accurate estimates of the latent opinion of demographic subgroups within individual states and other geographic units.

Most existing smoothing models are ill-suited to examine questions such as these, because they assume that differences in the opinions of various demographic groups, such as blacks and whites, are constant across geography.8 To address these complications, new smoothing models should incorporate more complicated interactions between demographics and geography (e.g., Leemann and Wasserfallen 2016). For example, they might allow the relationship between income and latent opinion to vary across geography, using racial diversity as a hierarchical predictor for this relationship (Hersh and Nall 2015). In the best example of recent work in this area, Ghitza and Gelman (2013) build an MRP model with a complicated set of interactions that enables them to model the voting behavior of different income and racial groups in each state. They find that swings in turnout between the 2004 and 2008 presidential elections were primarily confined to African Americans and young minorities.

Scholars should be aware, however, that there is a trade-off between bias and variance when they are developing more complicated smoothing models. More complicated models will inevitably reduce bias in estimates of subgroups’ opinion. But more complicated models will generally have less shrinkage across geography than simpler models, which is likely to lead to greater error in the estimates for any particular group. Indeed, Lax and Phillips (2013) find that more complicated interactions between demographic categories often lead to substantially less accurate estimates of mean opinion in each geographic unit.

Beyond Surveys

In recent years, there has been an explosion of work that examines public opinion using non-survey-based data. For example, Bonica (2013, 2014) scales millions of campaign contributions to measure the latent campaign finance preferences of millions of Americans. Bond and Messing (2015) demonstrate that social media data represent a useful resource for testing models of legislative and individual-level political behavior and attitudes. They develop a model to estimate the ideology of politicians and their supporters using social media data on individual citizens’ endorsements of political figures. Their measure places politicians and more than six million socially active citizens on the same scale. Similarly, Barberá (2015) develops a model to measure the political ideology of Twitter users based on the assumption that their ideology can be inferred from which political actors each user follows. He applies this method to estimate ideal points for a large sample of both elite and mass-public Twitter users in the United States and five European countries.

While these new methods are very promising, scholars still need to carefully define the target population of interest. For example, Bond and Messing’s (2015) estimates of the ideology of Facebook users are not necessarily representative of the United States as a whole since not everyone uses Facebook. Another limitation of these sources of data is that they are generally only available for recent time periods. Thus, they are unsuitable for extending our knowledge of public opinion back in time using dynamic measurement models. Finally, it is often unclear what theoretical construct these new models are capturing. For example, are campaign finance data capturing donors’ ideology, partisanship, or some other latent construct? To evaluate this question, scholars could compare a given set of individuals’ campaign finance preferences with their Twitter ideal points, with their Facebook ideal points, or with policy liberalism from survey data (see, e.g., Hill and Huber Forthcoming).

Conclusion

This is an exciting time to be doing research that utilizes latent constructs in public opinion. The development of new and improved methods for summarizing latent constructs in public opinion has led to a wide variety of substantive advances, including work on polarization, representation, political knowledge, and racial resentment. The next generation of work in American politics is likely to focus on areas such as assessing changes in mass polarization over time at the subnational level, dynamic representation at the state and local levels, and spatial voting in elections. There is also a growing body of work in comparative politics that utilizes latent constructs in public opinion to examine important questions such as the causes and consequences of political knowledge, dyadic representation in Westminster democracies, and the effect of changes in religious conservatism on democratic stability.

Data and Example Code

Public Opinion Data Sources

Models for Measuring Latent Opinion at the Individual Level

  • Bayesian factor analytic and IRT models can be run using off-the-shelf software such as MCMCpack (Martin et al. 2011) or the ideal function in the R package pscl (Jackman 2012).

  • A variety of EM IRT models can be run using the R package emIRT (Imai, Lo, and Olmsted 2015).

  • For more complicated IRT models, researchers can use fully Bayesian software such as BUGS, JAGS, or Stan.

Model for Measuring Latent Opinion at the Group Level
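  • Group-level IRT models can be run using the R package dgo (Dunham, Caughey, and Warshaw 2016).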

References

Abramowitz, A. I., and K. L. Saunders. 1998. “Ideological Realignment in the US Electorate.” Journal of Politics 60 (3): 634–652.

Achen, C. H. 1975. “Mass Political Attitudes and the Survey Response.” American Political Science Review 69 (4): 1218–1231.

Adcock, R., and D. Collier. 2001. “Measurement Validity: A Shared Standard for Qualitative and Quantitative Research.” American Political Science Review 95 (3): 529–546.

Anderson, B. A., B. D. Silver, and P. R. Abramson. 1988. “The Effects of the Race of the Interviewer on Race-Related Attitudes of Black Respondents in SRC/CPS National Election Studies.” Public Opinion Quarterly 52 (3): 289–324.

Ansolabehere, S., J. M. Snyder Jr., and C. Stewart III. 2001. “Candidate Positioning in U.S. House Elections.” American Journal of Political Science 45 (1): 136–159.

Ansolabehere, S., J. Rodden, and J. M. Snyder Jr. 2006. “Purple America.” Journal of Economic Perspectives 20 (2): 97–118.

Ansolabehere, S., J. Rodden, and J. M. Snyder Jr. 2008. “The Strength of Issues: Using Multiple Measures to Gauge Preference Stability, Ideological Constraint, and Issue Voting.” American Political Science Review 102 (2): 215–232.

Bafumi, J., and M. C. Herron. 2010. “Leapfrog Representation and Extremism: A Study of American Voters and Their Members in Congress.” American Political Science Review 104 (3): 519–542.

Barabas, J., J. Jerit, W. Pollock, and C. Rainey. 2014. “The Question(s) of Political Knowledge.” American Political Science Review 108 (4): 840–855.

Barber, M. J. 2016. “Representing the Preferences of Donors, Partisans, and Voters in the US Senate.” Public Opinion Quarterly 80 (S1): 225–249.

Barber, M., and N. McCarty. 2015. “Causes and Consequences of Polarization.” In Solutions to Polarization in America, edited by Nathaniel Persily, 15–58. Cambridge University Press.

Barberá, P. 2015. “Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data.” Political Analysis 23 (1): 76–91.

Bartels, L. M. 1996. “Uninformed Votes: Information Effects in Presidential Elections.” American Journal of Political Science 40 (1): 194–230.

Bartels, L. M. 2009. “Economic Inequality and Political Representation.” In The Unsustainable American State, edited by Lawrence Jacobs and Desmond King, 167–196. Oxford University Press.

Bartle, J., S. Dellepiane-Avellaneda, and J. Stimson. 2011. “The Moving Centre: Preferences for Government Activity in Britain, 1950–2005.” British Journal of Political Science 41 (2): 259–285.

Berinsky, A. J., M. F. Margolis, and M. W. Sances. 2014. “Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys.” American Journal of Political Science 58 (3): 739–753.

Biemer, P. P., R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, and S. Sudman. 2011. Measurement Errors in Surveys. Vol. 173. New York: John Wiley & Sons.

Black, D. 1948. “On the Rationale of Group Decision-Making.” Journal of Political Economy 56 (1): 23–34.

Bollen, K. A. 1989. Structural Equations with Latent Variables. Series in Probability and Mathematical Statistics. New York: John Wiley & Sons.

Bond, R., and S. Messing. 2015. “Quantifying Social Media’s Political Space: Estimating Ideology from Publicly Revealed Preferences on Facebook.” American Political Science Review 109 (1): 62–78.

Bonica, A. 2013. “Ideology and Interests in the Political Marketplace.” American Journal of Political Science 57 (2): 294–311.

Bonica, A. 2014. “Mapping the Ideological Marketplace.” American Journal of Political Science 58 (2): 367–386.

Brace, P., K. Sims-Butler, K. Arceneaux, and M. Johnson. 2002. “Public Opinion in the American States: New Perspectives Using National Survey Data.” American Journal of Political Science 46 (1): 173–189.

Broockman, D. E. 2016. “Approaches to Studying Policy Representation.” Legislative Studies Quarterly 41 (1): 181–215.

Buttice, M. K., and B. Highton. 2013. “How Does Multilevel Regression and Poststratification Perform with Conventional National Surveys?” Political Analysis 21 (4): 449–467.

Carmines, E. G., P. M. Sniderman, and B. C. Easter. 2011. “On the Meaning, Measurement, and Implications of Racial Resentment.” Annals of the American Academy of Political and Social Science 634 (1): 98–116.

Carsey, T. M., and J. J. Harden. 2010. “New Measures of Partisanship, Ideology, and Policy Mood in the American States.” State Politics & Policy Quarterly 10 (2): 136–156.

Caughey, D., T. O’Grady, and C. Warshaw. 2015. “Ideology in the European Mass Public: A Dynamic Perspective.” Paper presented at the 2015 ECPR General Conference, Montreal, Canada.

Caughey, D., and C. Warshaw. 2015. “Dynamic Estimation of Latent Public Opinion Using a Hierarchical Group-Level IRT Model.” Political Analysis 23 (2): 197–211.

Caughey, D., and C. Warshaw. 2016. “Dynamic Responsiveness in the American States, 1936–2012.” Paper presented at the 2014 American Political Science Association Conference.

Clinton, J. D. 2006. “Representation in Congress: Constituents and Roll Calls in the 106th House.” Journal of Politics 68 (2): 397–409.

Clinton, J., S. Jackman, and D. Rivers. 2004. “The Statistical Analysis of Roll Call Data.” American Political Science Review 98 (2): 355–370.

Converse, P. E., and R. Pierce. 1986. Political Representation in France. Cambridge, MA: Harvard University Press.

De Boef, S., and P. M. Kellstedt. 2004. “The Political (and Economic) Origins of Consumer Confidence.” American Journal of Political Science 48 (4): 633–649.

Delli Carpini, M. X., and S. Keeter. 1993. “Measuring Political Knowledge: Putting First Things First.” American Journal of Political Science 37 (4): 1179–1206.

Downs, A. 1957. An Economic Theory of Democracy. New York: Harper and Row.

Dunham, J., D. Caughey, and C. Warshaw. 2016. “dgo: Dynamic Estimation of Group-Level Opinion.” R package version 0.2.3. https://jamesdunham.github.io/dgo/.

Elmendorf, C. S., and D. M. Spencer. 2014. “The Geography of Racial Stereotyping: Evidence and Implications for VRA Preclearance After Shelby County.” California Law Review 102: 1123–1180.

Enelow, J. M., and M. J. Hinich. 1984. The Spatial Theory of Voting: An Introduction. Cambridge: Cambridge University Press.

Enns, P. K., and J. Koch. 2013. “Public Opinion in the U.S. States: 1956 to 2010.” State Politics & Policy Quarterly 13 (3): 349–372.

Erikson, R. S. 2015. “Income Inequality and Policy Responsiveness.” Annual Review of Political Science 18: 11–29.

Erikson, R. S., M. B. MacKuen, and J. A. Stimson. 2002. The Macro Polity. New York: Cambridge University Press.

Erikson, R. S., G. C. Wright, and J. P. McIver. 1993. Statehouse Democracy: Public Opinion and Policy in the American States. New York: Cambridge University Press.

Fox, J.-P. 2010. Bayesian Item Response Modeling: Theory and Applications. New York: Springer.

Gelman, A., B. Shor, D. Park, and J. Cortina. 2009. Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do. Princeton, NJ: Princeton University Press.

                                                                                          Gelman, A., D. Lee, and J. Guo. 2015. “Stan: A probabilistic Programming Language for Bayesian Inference and Optimization.” Journal of Educational and Behavioral Statistics 40 (5): 530–543.Find this resource:

                                                                                            Ghitza, Y., and A. Gelman. 2013. “Deep Interactions with MRP: Election Turnout and Voting Patterns among Small Electoral Subgroups.” American Journal of Political Science 57 (3): 762–776.Find this resource:

                                                                                              Gilens, M. 2012. Affluence and Influence: Economic Inequality and Political Power in America. Princeton, NJ: Princeton University Press.Find this resource:

                                                                                                Griffin, J. D, and B. Newman. 2005. “Are Voters Better Represented?” Journal of Politics 67 (4): 1206–1227.Find this resource:

                                                                                                  Grose, C. R., N. Malhotra, and R. P. Van Houweling. 2015. “Explaining Explanations: How Legislators Explain Their Policy Positions and How Citizens React.” American Journal of Political Science 59 (3): 724–743.Find this resource:

                                                                                                    Hanretty, C., B. E. Lauderdale, and N. Vivyan. 2016. “Dyadic Representation in a Westminster System.” Legislative Studies Quarterly. In Press.Find this resource:

                                                                                                      Hersh, E. D., and C. Nall. 2015. “The Primacy of Race in the Geography of Income-Based Voting: New Evidence from Public Voting Records.” American Journal of Political Science 60 (2): 289–303.Find this resource:

                                                                                                        Hill, S J. 2015. “Institution of Nomination and the Policy Ideology of Primary Electorates.” Quarterly Journal of Political Science 10 (4): 461–487.Find this resource:

                                                                                                          Hill, S., and G. Huber. Forthcoming. “Representativeness and Motivations of Contemporary Contributors to Political Campaigns: Results from Merged Survey and Administrative Records.” Political Behavior.Find this resource:

                                                                                                            Hill, S. J, and C. Tausanovitch. 2015. “A Disconnect in Representation? Comparison of Trends in Congressional and Public Polarization.” Journal of Politics 77 (4): 1058–1075.Find this resource:

                                                                                                              Hoffman, M. D., and A. Gelman. 2014. “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.” Journal of Machine Learning Research 15 (1): 1593–1623.Find this resource:

                                                                                                                Imai, K., J. Lo, and J. Olmsted. Forthcoming. “Fast Estimation of Ideal Points with Massive Data.” American Political Science Review.Find this resource:

                                                                                                                  Jackman, S. 2008. “Measurement.” In The Oxford Handbook of Political Methodology, edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier, 119–151. Oxford: Oxford University Press.Find this resource:

                                                                                                                    Jackman, S. 2009. Bayesian Analysis for the Social Sciences. Hoboken, NJ: John Wiley and Sons.Find this resource:

                                                                                                                      Jackman, S. 2012. “pscl: Classes and Methods for R Developed in the Political Science Computational Laboratory, Stanford University.” Department of Political Science, Stanford University. R package version 1.04.4.Find this resource:

                                                                                                                        Jessee, S. A. 2009. “Spatial Voting in the 2004 Presidential Election.” American Political Science Review 103 (1): 59–81.Find this resource:

                                                                                                                          Jessee, S. 2016. “(How) Can We Estimate the Ideology of Citizens and Political Elites on the Same Scale?” American Journal of Political Science 60 (4): 1108–1124.Find this resource:

                                                                                                                            Joesten, D. A., and W. J. Stone. 2014. “Reassessing Proximity Voting: Expertise, Party, and Choice in Congressional Elections.” Journal of Politics 76 (3): 740–753.Find this resource:

                                                                                                                              Johnson, V. E., and J. H. Albert. 2006. Ordinal Data Modeling. New York, NY: Springer Science & Business Media.Find this resource:

                                                                                                                                Kaplan, D. 2004. The Sage Handbook of Quantitative Methodology for the Social Sciences. Thousand Oaks, CA: Sage Publications Inc.Find this resource:

                                                                                                                                  Kastellec, J. P., J. R. Lax, M. Malecki, and J. H. Phillips. 2015. “Polarizing the Electoral Connection: Partisan Representation in Supreme Court Confirmation Politics.” Journal of Politics 77 (3): 787–804.Find this resource:

                                                                                                                                    Kastellec, J. P., J. R. Lax, and J. Phillips. 2010. “Estimating State Public Opinion with Multi-level Regression and Poststratification Using R.” Unpublished manuscript.Find this resource:

                                                                                                                                      Kinder, D. R., and L. M. Sanders. 1996. Divided by Color: Racial Politics and Democratic Ideals. Chicago: University of Chicago Press.Find this resource:

                                                                                                                                        Lax, J. R., and J. H. Phillips. 2009a. “Gay Rights in the States: Public Opinion and Policy Responsiveness.” American Political Science Review 103 (3): 367–386.Find this resource:

                                                                                                                                          Lax, J. R., and J. H. Phillips. 2009b. “How Should We Estimate Public Opinion in the States?” American Journal of Political Science 53 (1): 107–121.Find this resource:

                                                                                                                                            Lax, J. R, and J. H. Phillips. 2013. “How Should We Estimate Sub-national Opinion using MRP? Preliminary Findings and Recommendations.” Working paper.Find this resource:

                                                                                                                                              Lee, D. S., E. Moretti, and M. J. Butler. 2004. “Do Voters Affect or Elect Policies? Evidence from the U. S. House.” Quarterly Journal of Economics 119 (3): 807–859.Find this resource:

                                                                                                                                                Leemann, Lucas and Fabio Wasserfallen. 2016. Extending the Use and Prediction Precision of Subnational Public Opinion Estimation. American Journal of Political Science. In Press.Find this resource:

                                                                                                                                                  Lemmon, M., and E. Portniaguina. 2006. “Consumer Confidence and Asset Prices: Some Empirical Evidence.” Review of Financial Studies 19 (4): 1499–1529.Find this resource:

                                                                                                                                                    Lenz, G. S. 2013. Follow the Leader? How Voters Respond to Politicians’ Policies and Performance. Chicago: University of Chicago Press.Find this resource:

                                                                                                                                                      Lewis, J. B. 2001. “Estimating Voter Preference Distributions from Individual-Level Voting Data.” Political Analysis 9 (3): 275–297.Find this resource:

                                                                                                                                                        Lewis, J. B., and C. Tausanovitch. 2013. “Has Joint Scaling Solved the Achen Objection to Miller and Stokes.” Paper presented at Political Representation: Fifty Years after Miller & Stokes, Center for the Study of Democratic Institutions, Vanderbilt University, Nashville, TN, March 1–2.Find this resource:

                                                                                                                                                          Lowery, D., V. Gray, and G. Hager. 1989. “Public Opinion and Policy Change in the American States.” American Politics Research 17 (1): 3–31.Find this resource:

                                                                                                                                                            Lu, Y., Y. Chu, and F. Shen. 2016. “Mass Media, New Technology, and Ideology An Analysis of Political Trends in China.” Global Media and China. In Press.Find this resource:

                                                                                                                                                              Ludvigson, S. C. 2004. “Consumer Confidence and Consumer Spending.” Journal of Economic Perspectives 18 (2): 29–50.Find this resource:

                                                                                                                                                                Lupu, N., and Z. Warner. Forthcoming. “Mass–Elite Congruence and Representation in Argentina.” In Malaise in Representation in Latin American Countries: Chile, Argentina, Uruguay, edited by Alfredo Joignant, Mauricio Morales, and Claudio Fuentes. New York: Palgrave Macmillan.Find this resource:

                                                                                                                                                                  Malecki, M., J. Lax, A. Gelman, and W. Wang. 2014. “mrp: Multilevel Regression and Poststratification.” R package version 1.0-1. https://github.com/malecki/mrp.

                                                                                                                                                                  Margolis, M. 2013. “The Reciprocal Relationship between Religion and Politics: A Test of the Life Cycle Theory.” MIT Political Science Department research paper.Find this resource:

                                                                                                                                                                    Martin, A. D., K. M. Quinn, and J. H. Park 2011. “Mcmcpack: Markov Chain Monte Carlo in R.” Journal of Statistical Software 42 (9): 1–21.Find this resource:

                                                                                                                                                                      Mayhew, D. 1974. The Electoral Connection. New Haven, CT: Yale University Press.Find this resource:

                                                                                                                                                                        McAndrew, S., and D. Voas. 2011. “Measuring Religiosity using Surveys.” Survey Question Bank: Topic Overview 4 (2): 1–15.Find this resource:

                                                                                                                                                                          McGann, A. J. 2014. “Estimating the Political Center from Aggregate Data: An Item Response Theory Alternative to the Stimson Dyad Ratios Algorithm.” Political Analysis 22 (1): 115–129.Find this resource:

                                                                                                                                                                            Miller, W. E., and D. E. Stokes. 1963. “Constituency Influence in Congress.” American Political Science Review 57 (1): 45–56.Find this resource:

                                                                                                                                                                              Mislevy, R. J. 1983. “Item Response Models for Grouped Data.” Journal of Educational Statistics 8 (4): 271–288.Find this resource:

                                                                                                                                                                                Mondak, J. J. 2001. “Developing Valid Knowledge Scales.” American Journal of Political Science 45 (1): 224–238.Find this resource:

                                                                                                                                                                                  Montgomery, J. M., and J. Cutler. 2013. “Computerized Adaptive Testing for Public Opinion Surveys.” Political Analysis 21 (2): 172–192.Find this resource:

                                                                                                                                                                                    Mueller, E. 1963. “Ten Years of Consumer Attitude Surveys: Their Forecasting Record.” Journal of the American Statistical Association 58 (304): 899–917.Find this resource:

                                                                                                                                                                                      Munzert, S., and P. C. Bauer. 2013. “Political depolarization in German public opinion, 1980–2010.” Political Science Research and Methods 1 (1): 67–89.Find this resource:

                                                                                                                                                                                        Page, B. I., and R. Y. Shapiro. 1983. “Effects of Public Opinion on Policy.” American Political Science Review 77 (1): 175–190.Find this resource:

                                                                                                                                                                                          Pan, J., and Y. Xu. 2015. “China’s Ideological Spectrum.” MIT Political Science Department research paper.Find this resource:

                                                                                                                                                                                            Park, D. K., A. Gelman, and J. Bafumi. 2004. “Bayesian Multilevel Estimation with Poststratification: State-Level Estimates from National Polls.” Political Analysis 12 (4): 375–385.Find this resource:

                                                                                                                                                                                              Pereira, F. B. 2015. “Measuring Political Knowledge Across Countries.” Paper presented at the 2015 Midwest Political Science Association conference.Find this resource:

                                                                                                                                                                                                Poole, K. T., and H. Rosenthal. 2007. Ideology & Congress. New Brunswick, NJ: Transaction Publishers.Find this resource:

                                                                                                                                                                                                  Powell, G. B. 2004. “Political Representation in Comparative Politics.” Annual Review of Political Science 7: 273–296.Find this resource:

                                                                                                                                                                                                    Quinn, K. M. 2004. “Bayesian Factor Analysis for Mixed Ordinal and Continuous Responses.” Political Analysis 12 (4): 338–353.Find this resource:

                                                                                                                                                                                                      Ringquist, E. J., and J. C. Garand. 1999. “Policy Change in the American States.” In American State and Local Politics: Directions for the 21st Century, edited by Ronald E. Weber, and Paul Brace, 268–299. New York: Chatham House/Seven Bridges Press.Find this resource:

                                                                                                                                                                                                        Saiegh, S. M. 2015. “Using Joint Scaling Methods to Study Ideology and Representation: Evidence from Latin America.” Political Analysis 23 (3): 363–384.Find this resource:

                                                                                                                                                                                                          Shor, B., and J. C. Rogowski. 2010. “Congressional Voting by Spatial Reasoning.” Presented at the annual meeting of the Midwest Political Science Association, Chicago.Find this resource:

                                                                                                                                                                                                            Simas, E. N. 2013. “Proximity Voting in the 2010 US House Elections.” Electoral Studies 32 (4): 708–717.Find this resource:

                                                                                                                                                                                                              Soroka, S. N., and C. Wlezien. 2005. “Opinion–Policy Dynamics: Public Preferences and Public Expenditure in the United Kingdom.” British Journal of Political Science 35 (4): 665–689.Find this resource:

                                                                                                                                                                                                                Stein, R. 2015. “War and Revenge: Explaining Conflict Initiation by Democracies.” American Political Science Review 109 (3): 556–573.Find this resource:

                                                                                                                                                                                                                  Stimson, J. A. 1991. Public Opinion in America: Moods, Cycles, and Swings. Boulder, CO: Westview.Find this resource:

                                                                                                                                                                                                                    Stimson, J. A., C. Thiébaut, and V. Tiberj. 2012. “The Evolution of Policy Attitudes in France.” European Union Politics 13 (2): 293–316.Find this resource:

                                                                                                                                                                                                                      Sumaktoyo, N. G. 2015. “Islamic Conservatism and Support for Religious Freedom.” Working paper presented at the 2016 Southern Political Science Association Conference.Find this resource:

                                                                                                                                                                                                                        Tarman, C., and D. O. Sears. 2005. “The Conceptualization and Measurement of Symbolic Racism.” Journal of Politics 67 (3): 731–761.Find this resource:

                                                                                                                                                                                                                          Tausanovitch, C., and C. Warshaw. 2013. “Measuring Constituent Policy Preferences in Congress, State Legislatures and Cities.” Journal of Politics 75 (2): 330–342.Find this resource:

                                                                                                                                                                                                                            Tausanovitch, C., and C. Warshaw. 2014. “Representation in Municipal Government.” American Political Science Review 108 (3): 605–641.Find this resource:

                                                                                                                                                                                                                              Tausanovitch, C., and C. Warshaw. 2015. “Estimating Candidate Positions in a Polarized Congress.” Working paper presented at the 2015 American Political Science Association Conference.Find this resource:

                                                                                                                                                                                                                                Treier, S., and D. S. Hillygus. 2009. “The Nature of Political Ideology in the Contemporary Electorate.” Public Opinion Quarterly 73 (4): 679–703.Find this resource:

                                                                                                                                                                                                                                  Treier, S., and S. Jackman. 2008. “Democracy as a Latent Variable.” American Journal of Political Science 52 (1): 201–217.Find this resource:

                                                                                                                                                                                                                                    Warshaw, C., and J. Rodden. 2012. “How Should We Measure District-Level Public Opinion on Individual Issues?” Journal of Politics 74 (1): 203–219.Find this resource:

                                                                                                                                                                                                                                      Weissberg, R. 1978. “Collective vs. Dyadic Representation in Congress.” American Political Science Review 72 (2): 535–547.Find this resource:

                                                                                                                                                                                                                                        Zaller, J., and S. Feldman. 1992. “A Simple Theory of the Survey Response: Answering Questions Versus Revealing Preferences.” American Journal of Political Science 36 (3): 579–616.Find this resource:

Notes:

(1) For a more general overview of the sources of measurement error on surveys, see Biemer et al. (2011).

(2) Some studies call this latent construct “mood” (Stimson 1991), others call it “ideology” (Hill and Tausanovitch 2015), and others call it a measure of citizens’ “ideal points” (Bafumi and Herron 2010), while still others call it “policy preferences” (Treier and Hillygus 2009; Tausanovitch and Warshaw 2013) or “policy liberalism” (Caughey and Warshaw 2015). In the balance of this chapter I generally call this latent construct “policy liberalism” to distinguish it from symbolic ideology and other related concepts.

(3) See below for more discussion of the advantages and disadvantages of emIRT.

(4) See Kastellec, Lax, and Phillips (2010) for a primer about estimating MRP models in R.
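As a concrete starting point, here is a minimal sketch of the MRP workflow in R using the lme4 package. The variable names (support, race, age, state, and the census cell count n) and the data frames survey and cells are hypothetical placeholders, not any particular study’s data; see Kastellec, Lax, and Phillips (2010) for a complete treatment.

library(lme4)

# Step 1: Fit a multilevel logistic regression to individual-level survey
# responses, with random intercepts for demographic and geographic groups.
fit <- glmer(support ~ (1 | race) + (1 | age) + (1 | state),
             data = survey, family = binomial(link = "logit"))

# Step 2: Predict support for every demographic-geographic cell in a census
# poststratification frame ('cells' has one row per cell).
cells$pred <- predict(fit, newdata = cells, type = "response",
                      allow.new.levels = TRUE)

# Step 3: Poststratify - average the cell predictions, weighted by each
# cell's census count 'n', to obtain state-level opinion estimates.
state_est <- with(cells, tapply(pred * n, state, sum) / tapply(n, state, sum))

The key design idea is partial pooling: the multilevel model in step 1 shrinks estimates for sparse cells toward the overall mean, which is what allows reliable subnational estimates from national samples.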

(5) In terms of a spatial model, κ_j is the cutpoint, or point of indifference between two choices.
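To make the cutpoint explicit, the following short derivation assumes the conventional two-parameter probit IRT parameterization (as in Clinton, Jackman, and Rivers 2004), which may differ in notation from the exact model used earlier in this chapter:

\Pr(y_{ij} = 1 \mid \theta_i) = \Phi\left(\beta_j \theta_i - \alpha_j\right),
\qquad
\beta_j \theta_i - \alpha_j = 0
\;\Longleftrightarrow\;
\theta_i = \frac{\alpha_j}{\beta_j} \equiv \kappa_j .

A respondent located exactly at θ_i = κ_j is equally likely to give either response; respondents on either side of κ_j lean toward one choice or the other.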

(6) Stan uses the no-U-turn sampler (NUTS; Hoffman and Gelman 2014), an adaptive variant of Hamiltonian Monte Carlo, which is itself a generalization of the familiar Metropolis algorithm. NUTS takes multiple steps per iteration, allowing it to move efficiently through the posterior distribution.
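In practice, analysts rarely interact with the sampler directly. The snippet below is an illustrative sketch of fitting a model from R via the rstan package; the model file irt.stan and the data list stan_data are hypothetical placeholders. Stan selects NUTS by default for models with continuous parameters.

library(rstan)

# Compile the (hypothetical) model and draw posterior samples. The warmup
# iterations are used to adapt the sampler's step size and mass matrix.
fit <- stan(file = "irt.stan", data = stan_data,
            chains = 4, iter = 2000, warmup = 1000)

print(fit)  # posterior summaries and convergence diagnostics (e.g., Rhat)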

(7) See Hill and Tausanovitch (2015) for an example in which an inaccurate characterization of the uncertainty from a latent variable model would change the conclusions of an important substantive analysis.

(8) Several studies have shown that opinion differences across demographic groups are far from constant across geographic contexts. For instance, Gelman et al. (2009) and Hersh and Nall (2015) show that income is more strongly correlated with opinion in poorer, racially diverse areas; in richer, less diverse areas, there is little relationship between income and opinion.