
Election Forecasting: The Long View

Abstract and Keywords

This article offers a new way to evaluate the pros and cons of predicting US presidential elections: the long view versus the short view. Election forecasters who take the long view stress electoral theory and lead time, examining model performance over several contests. For this view, an overarching goal remains knowledge of how the electoral process works. In contrast, forecasters who take the short view stress accuracy exclusively. Forecasts are made repeatedly, especially near the election. The short view depends increasingly on polls until nothing else matters. The short view also risks setting back the study of elections, for example, fostering the idea of an unstable, even volatile electorate, even though American voters have shown great stability. Finally, the short view forgets the lesson that most variation in national election outcomes can be predicted, even explained, by established rules of political behavior.

Keywords: political science, election forecasting, voting, elections, forecasting, voters

Oct. 1–7 … Obama and Romney were tied at 48% among likely voters. After that, Romney moved ahead in mid-October during the presidential debate period, holding a three- to five-point lead … before Superstorm Sandy devastated many areas on the East Coast Oct. 29–30… . Between Oct. 22–28 and Nov. 1–4, voter support for Obama increased by six points in the East, to 58% from 52%. (Gallup 2012)

We know that about 90% of Americans have, and have had, a party identification, and vote on it.

(Lewis-Beck et al. 2008, 114)

Casual readers of the polls during election season will get the impression that American voters are fickle. The quotes above are supporting evidence from the Obama-Romney race. Election forecasting models based on polls, often updated until election day itself, also reflect this impression. These “short-view” forecasts have received considerable attention in the media in recent years. In contrast, political scientists have been surveying and studying American voters for well over a half century, arriving at conclusions that make voters look anything but fickle. In addition, political scientists have been forecasting elections for at least a quarter century. These forecasting models apply what has been learned about voting and elections via scientific research and can generally make very accurate forecasts months before election day. These “long-view” forecasts receive more attention from and have more credibility among academics. If the long-view forecasts are able to predict the outcome accurately well in advance, why all the short-view updating? In short, everyone wants to know how the campaign is going, newspapers need to sell news, and pollsters need to take polls. It turns out that much of the action surrounding these short-view forecasts is just noise, which the long view tries to tune out to get a clearer signal of the election. This article explains the differences between long-view and short-view forecasting and contemplates the pros and cons of each.1 Both views have value, but they have different and important consequences, as discussed below.

Election forecasting has become popular, moving from the ivory tower to the front page. Before an election, there are conversations aplenty among pundits, politicians, and the people, each backing their favorite forecaster. These conversations start earlier and earlier, as is appropriate for forecasting, which aims to predict well before the election day itself. The election forecasting world can be divided up in various ways. (For a current division, consult Lewis-Beck and Stegmaier [2014a].) In the recent past we have separated these forecasters into modelers, poll users, marketers, and experts (Lewis-Beck and Tien 2012). To oversimplify, the modelers traditionally use single-equation statistical models derived from election theories, the poll users follow single- or aggregated-survey estimates, the marketers study candidate trading prices in political stock markets, and the experts (such as Charlie Cook and Stu Rothenberg) employ judgments from the campaign trail. This quadrille makes for a good way to begin understanding the forecasting game.

Election Forecasting: The Long View

Election forecasters who take the long view tend to stress electoral theory in their models and frame their predictions a good distance in time from the election. (Evaluation criteria for election forecasting instruments are discussed in Lewis-Beck [2005] and Lewis-Beck and Tien [2011].) Accuracy, of course, is important, but not at the sacrifice of lead, the number of calendar days before the election at which the forecast is issued. It is easier to forecast an election the day before it than 180 days in advance. But by the day before, the forecast itself has little intrinsic interest, since the real outcome will be known the next day. Accuracy comes from better theory and an optimal lead time before the election itself.

Strong theory emerges from parsimony—that is, a few carefully selected explanatory variables, which also help prevent the model from looking good simply by chance. Such prudence has merit, for not much knowledge about political behavior can be stated with confidence. As Occam’s razor reminds us, explanations with fewer independent variables are likely to be better, all else being equal. Further, with these parsimonious models, mistakes of measurement are more easily exposed. Finally, the quest for parsimony pushes toward generalization, accounting for all cases rather than any one particular election outcome.

Thus, in the long view, prediction focuses on the overall outcome. In addition to theoretical parsimony, empirical parsimony holds sway. The forecast targets the main political event: the national result on Election Day. Following is a stylized example of well-known long-view approaches, in equation form:

White House Party Vote Share_t = f(Politics_{t-6}, Economics_{t-6}, Cycles_{t-1})
(1)

where the three independent (explanatory) variables are scored some distance (e.g., six months) from the dependent variable (the election outcome to be explained), and all are measured on nationwide time-series data. A concrete example of the long-view approach comes from our Jobs Model (Lewis-Beck and Tien 2012), with the dependent variable of White House party share of the two-party vote, and the independent variables of presidential popularity, economic growth × incumbent interaction, jobs creation, and incumbency advantage, all measured in advance.

In our 2012 article, the details of these measures are spelled out. Presidential popularity is the first Gallup poll in July of the election year. Economic growth is the percentage change in nonannualized Gross National Product (GNP) (constant dollars) from the fourth quarter of the year prior to the election to the second quarter of the election year, using data from the Survey of Current Business; the incumbency variable by which economic growth is multiplied is scored 1 when the elected president is running and 0.5 when it is an open-seat race. Jobs creation is the percentage change in jobs over the first 3.5 years of the president’s term, calculated as follows: ((number employed in June of the election year − number employed in January of the inauguration year) / number employed in January of the inauguration year) × 100. The employment numbers are for the civilian labor force (sixteen years and older), reported by the Bureau of Labor Statistics in its Current Population Survey of households (not seasonally adjusted). Incumbency advantage is scored 1 if the incumbent party candidate is the elected president (1956, 1972, 1980, 1984, 1992, 1996, 2004, 2012) or is following a president who died in office (1948, 1964), scored 0 if the incumbent party candidate has a tolerable relationship with the previous president (1952, 1976, 1988, 2008), and scored –1 if the incumbent party candidate and the president are not united (1960, 1968, 2000).
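To make these measurement rules concrete, the following is a minimal sketch, in Python, of how the Jobs Model inputs could be assembled and combined. All numeric inputs and the regression coefficients are invented placeholders, not the published figures or estimates from Lewis-Beck and Tien (2012).

```python
# A minimal sketch of assembling the Jobs Model inputs (after Lewis-Beck and
# Tien 2012). Every number below is a placeholder; the published article
# supplies the actual Gallup, GNP, and BLS figures and the estimated weights.

def jobs_growth(employed_june_election_yr: float, employed_jan_inaug_yr: float) -> float:
    """Percentage change in jobs over the first 3.5 years of the term."""
    return ((employed_june_election_yr - employed_jan_inaug_yr)
            / employed_jan_inaug_yr) * 100

def growth_incumbency(gnp_growth: float, elected_president_running: bool) -> float:
    """Economic growth weighted by incumbency: 1 if the elected president runs, 0.5 if open seat."""
    return gnp_growth * (1.0 if elected_president_running else 0.5)

def incumbency_advantage(is_elected_president: bool, tolerable_relationship: bool) -> int:
    """Scored 1, 0, or -1 per the coding rules described in the text."""
    if is_elected_president:
        return 1
    return 0 if tolerable_relationship else -1

# Hypothetical inputs for one election year (invented numbers):
popularity = 46.0  # first Gallup approval poll of July
growth = growth_incumbency(gnp_growth=1.2, elected_president_running=True)
jobs = jobs_growth(employed_june_election_yr=142_000, employed_jan_inaug_yr=139_000)
incumbency = incumbency_advantage(is_elected_president=True, tolerable_relationship=True)

# With estimated coefficients b0..b4 (placeholders, NOT the published values):
b0, b1, b2, b3, b4 = 36.0, 0.25, 1.5, 0.8, 2.0
vote_share = b0 + b1 * popularity + b2 * growth + b3 * jobs + b4 * incumbency
print(f"Forecast White House party two-party vote share: {vote_share:.1f}%")
```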

The view, or perspective, of the Jobs Model can be regarded as “long” in the sense of adequate lead time before the specific contest (a precise forecast can be issued at the end of August of the election year), but also “long” in the sense of examining model performance over several contests. In other words, the perspective attends to history, the record over past elections. For the long view, an overarching goal remains accumulation of knowledge about how the electoral process works, as assessed by the predictive power of models built from leading electoral theories. Success, then, becomes more of a long-run phenomenon, with different forecasters competing over time (Lewis-Beck and Rice 1992, 102). As the competition among models unfolds, some models should advance in status and others should retreat, a consequence of scientific trial and error. (For evidence on this question, see Holbrook [2010].) To move this systematic enterprise forward, the ability to replicate each and every method is paramount.

Replication means the methods must be transparent; for example, Forecaster A can read what Forecaster B did and reproduce the results. Transparency allows certain steps (e.g., polls included, measures taken, data used, estimation methods) to be ruled in or out. One physical sciences metaphor for long-view forecasting comes from climatology. The day-to-day temperatures gathered over a longer period of time permit climatologists to predict weather conditions for specific geographical areas, even the entire globe. In like manner, by taking into account central laws of electoral behavior in democracies, we can foresee, within calculable limits, national election outcomes. Most analysts who hold to the long view are academic forecasters, such as professors who publish scholarly papers on the subject.

Election Forecasting: The Short View

Election forecasters who take the short view tend to stress accuracy and to downplay theory and lead. The accuracy here tends to rest on the most recent poll(s), which can be poor predictors of the final outcome. Since accuracy ranks as the sine qua non, forecasts are made repeatedly, especially as the election itself draws closer. Accuracy does not come from better vote modeling or more parsimony. Rather, it comes from frequent estimates of the vote itself, as proxied by preference questions in polls, in a continual updating through Election Day. The underlying equation, although not always expressed, is simple enough:

White House Party Share_t = f(Vote Intentions_{t-x})
(2)

where vote intentions are measured in national or state polls. A concrete example of the short-view method comes from the Princeton Election Consortium, which began forecasting US presidential elections in 2012 after giving snapshot results for the 2004 and 2008 elections. The consortium forecast employs an automated meta-analysis, using only state-level polls to give state-by-state forecasts of the election, which can be added up for a national electoral vote total (Wang 2015).
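The general logic of such state-level aggregation can be sketched briefly: take a central estimate of the candidate margin from recent polls in each state, call the state for whichever side leads, and sum the electoral votes. The Python sketch below illustrates only that logic; it is not the Consortium’s actual meta-analysis (see Wang 2015), and the poll margins are invented.

```python
from statistics import median

# A simplified illustration of state-poll aggregation: the median
# Democratic-minus-Republican margin of recent polls decides each state,
# and winning states' electoral votes are summed. Margins are invented.
state_polls = {  # state: (electoral votes, [recent D-R margins, in points])
    "OH": (18, [1.0, -2.0, 3.0]),
    "FL": (29, [-1.0, 0.5, -0.5]),
    "VA": (13, [2.0, 4.0, 1.5]),
}

dem_ev = sum(ev for ev, margins in state_polls.values() if median(margins) > 0)
print(f"Democratic electoral votes from these states: {dem_ev}")
```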

In the short view, an overarching goal is up-to-date reporting of the horse race between the candidates. Accumulation of knowledge about how the electoral process works ceases to be relevant. For example, we know that retrospective economic voting has greater strength in elections with an incumbent running, a regularity that many short-view models ignore. Success exists in the present, rather than as a long-run phenomenon. In a word, this forecasting is ahistorical. Scientific trial and error disappears, as do attempts to replicate the forecasts of others. That means a neglect of transparency. Still, since every polling house may not be equal, efforts are made to reduce noise. Analysts select out suspect polls and/or combine others, to arrive at some sort of average prediction. This process can become complicated and opaque to public view, especially when the analyst combines state-level, as opposed to national-level, polls. (See the excellent discussions of these and other problems with using polls to forecast in Blumenthal [2014] and Traugott [2014].) It should go without saying that pollsters themselves repeatedly state that the vote intention estimates they produce are not forecasts, but a “snapshot in time.” This admonition tends to be ignored by many forecasters and most news outlets, especially those with a short view.

The physical sciences metaphor for short-view forecasting comes from applied meteorology. The local weather forecaster, every day on the nightly news, makes a prediction about tomorrow’s temperature. There is an elaborate scientific model behind that prediction, but the viewers do not see that. Instead, they see a series of highs and lows reported for the next five to ten days. This tracking of temperature holds interest for them as consumers of weather, just as the tracking of vote intentions holds interest for consumers of elections. A major difference for the latter, of course, is that those consumers only actually experience the vote forecast on election day, whereas the former experience the temperature forecast every day. This problem—that elections are rare events, chronologically speaking—seriously undercuts the value of short-view forecasting. Most analysts who hold to this short view are media forecasters, such as professionals who publish newspaper articles on the subject.

The Long View versus the Short View: Differences and Consequences

The views on forecasting discussed above are stylized, which means no one forecaster may be perfectly represented in the descriptions. (They are what used to be called Weberian “ideal types,” if you will.) Nevertheless, in the real world of election forecasters, one readily recognizes whether their source of inspiration comes from the long view or the short view. To summarize, the major differences between long-view and short-view forecasting models are the following. First are the inputs or predictor variables. In long-view models the predictor variables are usually measured at the national level and selected by strong theory learned from voting behavior studies. Short-view models, in contrast, use vote intention survey questions from multiple surveys gathered at the national or state levels, and these are updated frequently. Second are transparency and replication. Long-view models are explicit about how the forecast is derived, and the data are made available to others for replication. Short-view models often do not reveal exactly how their inputs are measured, making replication difficult if not impossible. Complicating the matter is that survey houses often do not spell out how they determine “likely voters” in their polls, a filter the short-view forecasters rely on.

Let us highlight the consequences of stressing one view over the other. We begin with influences on election forecasting itself. It seems fair to say that, these days, the long view plays second fiddle to the short view. An obvious reason has to do with the delivery of the short view: its forecasts appear in news outlets with a large following, whereas long-view forecasts appear in scholarly journals with a narrow readership. Media attention has the great value of bringing the scientific idea of election forecasting to the people. However, its ahistorical quality, while perhaps necessary for a newspaper (which must say something “new” every day), distorts what we know historically about the vagaries of election forecasting.

Take a case in point. In September 2014 the blog Monkey Cage carried an article with the following title: “The Secret Truths of Elections Forecasting: Psst! The Forecasters Mostly Agree” (Sides 2014). The author argued that the “models are very similar.” Consider such claims as applied to the 2014 congressional contests by certain long-view forecast models. Abramowitz (2014), Bafumi, Erikson, and Wlezien (2014), Campbell (2014), King (2014), and Lewis-Beck and Tien (2014a) offered structural models forecasting the House results. The independent variables of each model clearly differ from one another (see Table 1 for a list). Moreover, their forecasts differ, with predictions of Democratic House losses ranging from 4 to 39 seats. Contrary to the headline, then, these House forecasters did not agree. The Abramowitz (2014, 296) forecast, which underpredicted the Republican gains, was most likely thrown off by its use of the generic ballot question based on registered voters: in 2014 the electorate that actually turned out was more Republican than the pool of registered voters. The King forecast of –39, which overpredicted the Republican gains, was likely affected by the use of the Gallup approval question in two of its four variables. The other three models were all within three seats of the actual thirteen-seat pickup for the Republicans.

Table 1. 2014 Midterm House Election Forecasting Models of Democratic Party Seat Change*

Abramowitz 2014
  Predictor variables: generic ballot question; seats held in previous election; presidential election victory margin
  Forecast: –4

Bafumi, Erikson, and Wlezien 2014
  Predictor variables: 1st step, generic ballot question and president’s party; 2nd step, district-level information
  Forecast: –14

Campbell 2014
  Predictor variables: seats-in-trouble index
  Forecast: –16

Highton, McGhee, and Sides 2014
  Predictor variables: real GDP change; presidential approval; midterm or not; presidential vote in district; incumbency; candidate quality differential; campaign spending; party of president
  Forecast: –11

King 2014
  Predictor variables: number of seats held by president’s party; real disposable income change; positive event during campaign or not; presidential referendum or not
  Forecast: –39

Lewis-Beck and Tien 2014a
  Predictor variables: real disposable income change; presidential approval; midterm or not; seats-in-play
  Forecast: –15

(*) For descriptions of how each variable is measured, consult the original articles.

However, as Sides (2014) notes, his immediate concern is the Senate forecasts: “Three models—Pollster’s, Sam Wang’s, and Drew Linzer’s at Daily Kos—rely exclusively on polls.” First, it is worth noting that the use of polls alone makes them decidedly different from Senate models that include substantive variables as predictors. Second, about these Senate forecasts themselves, Sides concludes: “Most everyone has a similar forecast.” Yes and no, we would say. He offers data from four Senate forecasters, giving the Democrats the following seat outcomes: 48, 48, 49, 51. Yes, they are close. But no, they are not all similar; the first three call for a Republican Senate, while the last calls for a Democratic one. Finally, as Sides concedes, a quarrel exists over the certainty levels of these forecasts, which depend on the forecast margin of victory and the amount of error expected around that margin. Sam Wang, whose Politico article (Wang 2014) touched off the Sides piece, reported certainty numbers ranging from 41% to 77%.
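To see why certainty levels can diverge even when point forecasts sit close together, consider a standard translation of a forecast margin into a win probability under a normal approximation. The sketch below illustrates that translation only; the forecasters cited may use different distributions and error assumptions, and the numbers here are illustrative.

```python
from math import erf, sqrt

def win_probability(forecast_margin: float, forecast_sd: float) -> float:
    """P(true margin > 0) under a normal approximation: Phi(margin / sd)."""
    return 0.5 * (1 + erf(forecast_margin / (forecast_sd * sqrt(2))))

# The same one-point expected margin yields very different certainty levels
# depending on the assumed forecast error (values are illustrative):
for sd in (1.0, 2.0, 4.0):
    print(f"margin=1.0, sd={sd}: P(win) = {win_probability(1.0, sd):.0%}")
```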

Our message is not that Sides is “right” and Wang is “wrong.” Rather, we wish to show that, in just one cross-section of time, from an election year when races looked close, noteworthy disagreements can be found among even a small sample of forecasters. If we look back over the series of post–World War II American national elections, we can usually find disagreement, even considerable disagreement, among the forecasting community. Take, for example, the differences among the forecasting teams published in the regular pre-election PS: Political Science and Politics collation of presidential predictions. Here are the ranges, in percentages of the two-party vote, for the White House party (most of these forecasts were made around Labor Day of the election year): 1992 (44.8% to 55.7%); 1996 (54.8% to 58.1%); 2000 (53% to 62%); 2004 (49.9% to 57.6%); 2008 (41.8% to 52.7%); 2012 (46.9% to 53.8%). These disagreements are not trivial; the endpoints of these ranges stretch from an incumbent loss to an incumbent landslide. (See more discussion of such numbers in Lewis-Beck [2005].) Nor has this wide spread among forecasters really diminished as the number of forecasters has increased. In the 2012 presidential election, the incumbent point spread among the thirteen forecasting teams from this same source ran from 46.9% to 53.8% (Campbell 2012); five of these point estimates gave a loss to Obama, while eight gave a win. These numbers largely came from forecasters practicing in the long-view tradition, and they show that forecasters can, and do, disagree widely and routinely.

Another consequence of the short view for the art of forecasting is increased reliance on polls. The need for a current estimate, particularly as the election nears, requires that the polls be utilized increasingly, at the expense of other approaches (especially the structural one, resting on theory). Sides (2014) remarks on relying “exclusively on polls” and offers a telling quotation from expert insider Stuart Rothenberg: “During the final six weeks or so of an election, my assessments of races are based almost entirely on state-level and district-level survey data.” The trouble with exclusive reliance on polls, of course, is that they can be wrong.2

Consider further the track record of the onetime gold standard, the Gallup poll. Let us set aside the 1948 debacle, when Dewey was called over Truman, on the grounds of Gallup’s clear methodological improvement since then. We still find that the final Gallup poll before the 1992 presidential election was considerably off, by 5.2 percentage points. Gallup recovered in 1996, with a final error of just 1.9 points. But again, in 2000, its final poll gave Bush 48% and Gore 46%. This two-percentage-point difference was so small that the organization declared the race “too close to call.” Note also that it incorrectly put Bush ahead of Gore in the popular vote (a mistake the modelers did not make). Furthermore, Gallup’s final poll estimate lacked any lead time, as it was released on election day itself (Tuesday, November 7), making it no longer a forecast. (For more discussion of these numbers, see Lewis-Beck [2001].)

To avoid such errors, forecasters wedded to polls follow different tactics. One eschews polls from houses regarded as biased and follows a high-reputation poll, such as Pew. Another favors one type of poll over another, for example, private polls over public polls, or the polls of firm X over those of firm Y. Yet another eliminates polls that seem to be outliers. One approach to handling outliers, and the noise problem generally, involves combining virtually all available polls from, say, a day or a week (Graefe 2014). Such averaging may “work” if the combination obeys Condorcet’s jury theorem, which gives the probability that a majority of the polls delivers the right answer; the theorem requires, at bottom, that each of the joined polls be more likely right than wrong (Murr 2015). But it may be that, in a given combination, the bad apples overwhelm the good.
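The arithmetic of the jury theorem can be sketched directly: with n independent polls, each right with probability p, the chance that a majority is right grows with n only when p exceeds one-half, and shrinks when it does not. The probabilities below are illustrative, not drawn from any actual polls.

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent polls is right,
    each poll being right with probability p (Condorcet's jury theorem)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Above p = 0.5, aggregation helps; below it, adding polls makes things worse.
for p in (0.6, 0.4):
    print(p, [round(majority_correct(n, p), 3) for n in (1, 5, 15)])
```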

Even supposing a sound combinatory strategy, including proper weighting, all comes to naught if insufficient polls are available. Such a condition—absence of enough polls—may actually occur. According to Blumenthal (2014, 297), Gallup’s editor in chief speculated that aggregation could lead some polling firms to conclude that it would be more effective to just aggregate and analyze other polls, rather than run their own. Then individual firms, lacking sufficient incentive, may simply stop producing, or publicly releasing, their results. In sum, forecasters run high risks when they are driven to rely solely on polling data.

In addition to negative consequences for the forecasting enterprise, adherence to the short view poses negative consequences for the understanding of voting and elections. First, its stress on continuous updating of the horse race conveys the impression to students of politics, and to citizens generally, that election outcomes are fickle. That is, they can and do change on a dime, on an almost daily basis. There are many examples. Look at the tremendous fluctuations in the trial-heat polls going into the notorious 2000 presidential contest. Gallup polls in late September (the fourteen reported September 15–30) had Gore leading seven times, Bush five times, with ties twice. And the swings were wild. Within the week of September 18–24, Gore went from a ten-point lead to a three-point lag. October showed much the same. In the week covering October 2–7, Gore’s nine-point positive spread became a negative eight-point spread. (See the discussion in Lewis-Beck [2001, 12].) Taken at face value, the 2000 presidential forecast here changed overnight from a Democratic landslide to a Democratic debacle.

It is tempting to say these dramatic changes are the exception, until we recall data from recent elections. For example, in the last days of the 2012 Obama-Romney race, some polls found Romney suddenly ahead. Of course, poll averaging (or other types of adjusting), if done with care, can smooth out these fluctuations. But such averaging does not serve as a panacea for all the many sources of polling error. With respect to the sample itself, it may be too small, or simply biased. Virtually all polls now must be weighted after the fact, in order to be made “representative.” And these weighting decisions, often confidential, vary from house to house. Even assuming sound “probability-like” samples, there are serious instrumentation questions. What should be done with “undecideds”? How can real voters be identified? To what extent are online responses usable? Should candidate names or parties be mentioned? What about question-order effects?
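As an illustration of such after-the-fact weighting, the sketch below computes simple post-stratification weights so that a sample’s party-identification shares match assumed population shares. Real polling-house schemes weight on many more variables and are, as noted, often confidential; all shares here are invented.

```python
# A minimal post-stratification sketch: reweight respondents so the sample's
# party-ID composition matches assumed population targets. All shares below
# are invented for illustration.

population_shares = {"Dem": 0.33, "Rep": 0.30, "Ind": 0.37}  # assumed targets
sample = ["Dem"] * 40 + ["Rep"] * 25 + ["Ind"] * 35          # 100 respondents

sample_shares = {g: sample.count(g) / len(sample) for g in population_shares}
weights = {g: population_shares[g] / sample_shares[g] for g in population_shares}
print(weights)  # over-sampled groups weighted down, under-sampled ones up
```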

Thus, while the “total survey error” can be reduced, significant error will remain. (See Weisberg’s [2005] excellent text on total survey error.) These imperfect vote intention estimates, regularly reported in the press, feed the illusion that voting preferences are quite unstable. This notion seems journalistically prevalent, despite the fact that we know, as noted initially, that about 90% of Americans have, and have had, a party identification, and vote on it (Lewis-Beck et al. 2008, 114). Indeed, in the 2008 Obama-McCain campaign, from something like January to November, a leading panel survey showed that only about 5% of respondents actually changed their vote intention (Jackman 2012). Thus, the American electorate exhibits considerable underlying stability, despite what the polls suggest (Jackman 2014).

This underlying stability can actually be found in some of the short-view forecasts themselves. For example, the Princeton Election Consortium, which uses state-level polling, consistently had Obama winning in 2012, from the earliest forecast we found (August 25, 2012) until its forecast the day before the election itself (November 6, 2012). Such was the case for its popular two-party vote margin forecast, as well as its Electoral College vote forecast, which never had Obama with fewer than 300 electoral votes. The FiveThirtyEight blog in 2012 also had Obama winning in each of its forecasts. Twenty-one weeks before the election, FiveThirtyEight issued its first forecast for 2012 and had Obama winning 50.4% of the vote. Its final forecast the day before the election had Obama winning 50.6% of the vote, and the range of forecasts between these two dates was 50.3% to 51.4%. With such underlying stability among the electorate, why all the updating? As we have said, candidates, voters, pundits, and campaign staff all want to know how the campaign is going; newspapers need to sell content; and pollsters need to take polls. It turns out that much of the action surrounding these short-view forecasts is just noise, which the long view tries to tune out to get a clearer signal of the election.

An additional negative consequence for elections scholars, stemming from the domination of the short view, concerns the atrophy of cumulative knowledge about how elections work at the macro level. After all, in looking at democratic political systems and how they distribute voting power, what counts are aggregate electoral outcomes. Which party will control the White House? The House? The Senate? The forecasting modelers with the long view offer scientific answers to these queries, derived from theory-driven macro models. These simple models, resting heavily on the fundamentals (i.e., core political-economic variables), manage to do a stellar job of predicting almost all the variation in US national electoral outcomes. (See, for example, the statistical fit of such models in Erikson and Wlezien [2012].) Thus, contrary to the impression left by short-term forecasters, election rhythms have a rhyme. That is to say, election outcomes can largely be explained, and in straightforward ways. Forecasts grounded in strong theory (or the fundamentals) should result in more accurate results over the long haul.

Why do these long-view models work so well? Because they are based on things we know about how voters behave. The variables identified in the specifications of the leading models, while measured in the aggregate, tap into individual research on the importance of issues, partisan identification, and candidate leadership qualities. In a recent essay, William Mayer (2014) evaluates what we have learned, as political scientists, from presidential election forecasting. He lists several things, including the following:

  1. Voters seek change, after a party has been in the White House for a number of terms.

  2. Voters are heavily retrospective in their evaluations, especially regarding the economy.

  3. Economic evaluations are very important to voters, especially trends in the recent past.

  4. Campaigns matter, especially when they are unskillful.

On this last point, we have shown directly that the forecasting error from leading structural models can be increased (or decreased) by taking into account strategic campaigning with regard to the economy (Lewis-Beck and Nadeau 2012). Clearly, forecasting efforts inspired by the long view make noteworthy contributions to our knowledge of American political behavior.

The Long View versus the Short View: Synthetic Models as a Way Out?

Thus far, we have relied on a stylized characterization of election forecasting models, grouping them into two ideal types, those of the long view and those of the short view. But outside these idealizations exist models that combine elements of both perspectives. Specifically, some forecasting models have used vote intention as one variable in a regression equation, alongside fundamental variables. Good early examples are Campbell and Wink (1990) and Wlezien and Erikson (2004). These models share characteristics of both the short view and the long view, while leaning toward the latter. In some ways, too, they offer a prelude to the “media modeling” that became very popular in 2012, for example, with the blog FiveThirtyEight at the New York Times. The most developed scholarly version of such a model appears in Linzer (2013) and his blog, Votamatic. Essentially, these models, which Lewis-Beck and Stegmaier (2014b) dubbed “synthetic,” begin by looking at long-term structural factors but rely more on polls as the election approaches. In their extended investigation of US presidential election campaigns and outcomes, Erikson and Wlezien (2012) have contributed considerably to this notion of combining as predictors fundamental factors and polling results.

Lewis-Beck and Dassonneville (2015) have developed the idea further in their comparative investigation of synthetic models, applied across a sample of countries. The synthetic model takes the following general form:

Incumbent Vote Share_t = f(Government Popularity_{t-6}, Economic Growth_{t-6}, Vote Intention_{t-x})
(3)

That is, the national vote share of the government party (coalition) can be forecast from two fundamentals: the popularity of the government (as measured in the polls) and the growth of the economy, measured six months before the election, plus one “omitted variable” (standing for absent explanatory variables) represented by government vote intention (as measured by the polls, at monthly intervals from t-6 to t-1).
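A bare-bones version of estimating such a synthetic model by ordinary least squares might look as follows. The data arrays are invented placeholders, and the actual specification, data, and estimation details are in Lewis-Beck and Dassonneville (2015); this is only a sketch of the general form of equation (3).

```python
import numpy as np

# A bare-bones OLS sketch of the synthetic model in equation (3): incumbent
# vote share regressed on government popularity and economic growth (both at
# t-6) plus vote intention at some month t-x. All data below are invented.
popularity = np.array([42.0, 55.0, 38.0, 48.0, 51.0, 44.0])  # t-6
growth     = np.array([1.1, 2.5, -0.4, 1.8, 2.2, 0.6])       # t-6
intention  = np.array([40.0, 49.0, 36.0, 45.0, 47.0, 41.0])  # t-x poll share
vote       = np.array([41.5, 50.2, 37.1, 46.0, 48.3, 42.0])  # actual share

X = np.column_stack([np.ones_like(vote), popularity, growth, intention])
coef, *_ = np.linalg.lstsq(X, vote, rcond=None)

# Forecast a new election from the fundamentals plus the latest poll reading:
new_x = np.array([1.0, 46.0, 1.5, 43.0])
print(f"Synthetic forecast of incumbent vote share: {new_x @ coef:.1f}%")
```

As the election approaches, x shrinks and the vote intention term carries more of the load, which is how the synthetic form delivers an orderly update across the campaign.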

This formulation combines the long view with the short view, offering a dynamic, but orderly, update across a campaign, from six months out until the eve of the contest. Comparative empirical test results (for Germany, Ireland, and Great Britain) are promising, although they are ex post facto. Still more promising is the contemporary application, ex ante, to the 2015 British election (Lewis-Beck et al. 2015). Forecasts for that election were generally far off, especially those from short-view models driven by vote intention data. However, the above synthetic model, whose forecast was released on data available four months before the contest, did rather well when compared with the polls. Indeed, it did better than polls taken the day before the election. It pointed to the Conservative win of seats and votes. It also foresaw the collapse of the Liberal Democrats and was close in its estimates for the smaller parties (e.g., UKIP, SNP, Greens). Thus the synthetic approach, for the UK as well as the US case, not to mention others, seems to offer a way out of the long-view versus short-view dilemma, combining the best of both worlds.

Summary and Conclusions

Election forecasting approaches and perspectives can be organized in different ways. In this article we organize forecasters according to whether they take the long view or the short view. The former tends to rely on substantive but parsimonious national models with considerable lead time and only occasional updating. The latter relies mostly on vote intention, measured repeatedly in polls across the campaign, with no premium placed on lead time. The short view of forecasting has gained favor, but not without negative consequences. Because it is ahistorical, the picture of how election forecasters and forecasts compare becomes distorted, in the direction of seeing too much homogeneity; disagreements and difficulties are thus underestimated. Further, because it must be ever newsworthy, short-view forecasting depends more and more on the polls until, in the end, nothing else seems to matter. And this holds danger, for polls can go wrong, and do.

The latest case of polls “going wrong” as forecasts comes from across the Atlantic, in the 2015 British election. As we have shown, tempering those polling results with the structural factors of the long view, in the form of a synthetic model, can set things right. Beyond the difficulties that arise when the long view is neglected, the short view imposes costs on the study of voting and elections, as the American case shows. The idea of an unstable electorate takes hold in the public mind, despite the fact that the American voter, from Campbell et al. (1960) onward, has shown great stability. Finally, we forget the lesson of the structural models, which reveal that most of the swing in national election outcomes can be predicted, and even explained, by a handful of established rules of political behavior.

References

Abramowitz, Alan I. 2014. “Forecasting the 2014 Midterm Elections with the Generic Ballot Model.” PS: Political Science and Politics 47 (4): 772–774.

Bafumi, Joseph, Robert S. Erikson, and Christopher Wlezien. 2014. “National Polls, District Information, and House Seats: Forecasting the 2014 Midterm Election.” PS: Political Science and Politics 47 (4): 775–778.

Blumenthal, Mark. 2014. “Polls, Forecasts, Aggregators.” PS: Political Science and Politics 47 (2): 297–300.

Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald Stokes. 1960. The American Voter. Chicago: University of Chicago Press.

Campbell, James E. 2012. “Forecasting the 2012 American National Elections.” PS: Political Science and Politics 45 (4): 610–613.

Campbell, James E. 2014. “The Seats-in-Trouble Forecast of the 2014 Midterm Congressional Elections.” PS: Political Science and Politics 47 (4): 779–781.

Campbell, James E., and Kenneth A. Wink. 1990. “Trial Heat Forecasts of the Presidential Vote.” American Politics Research 18: 251–269.

Erikson, Robert S., and Christopher Wlezien. 2012. The Timeline of Presidential Elections: How Campaigns Do (and Do Not) Matter. Chicago: University of Chicago Press.

Gallup. 2012. “Romney 49%, Obama 48% in Gallup’s Final Election Survey.” November 5. http://www.gallup.com/poll/158519/romney-obama-gallup-final-election-survey.aspx (October 28, 2015).

Graefe, Andreas. 2014. “Accuracy of Vote Expectation Surveys in Forecasting Elections.” Public Opinion Quarterly 78 (S1): 204–232.

Highton, Benjamin, Eric McGhee, and John Sides. 2014. “Election Fundamentals and Polls Favor the Republicans.” PS: Political Science and Politics 47 (4): 786–788.

Holbrook, Thomas. 2010. “Forecasting U.S. Presidential Elections.” In The Oxford Handbook of American Elections and Political Behavior, edited by Jan E. Leighley, 346–374. Oxford: Oxford University Press.

Jackman, Simon. 2012. “Movers and Stayers in the 2008 U.S. Presidential Election Campaign.” Unpublished manuscript (cited with permission).

Jackman, Simon. 2014. “The Predictive Power of Uniform Swing.” PS: Political Science and Politics 47 (2): 317–321.

King, James D. 2014. “Midterm Congressional Elections as Designed Presidential Referenda: Predicting the 2014 House of Representatives Election.” Paper presented at the Annual Meeting of the American Political Science Association, Washington, DC.

Lewis-Beck, Michael S. 2001. “Modelers v. Pollsters: The Election Forecasts Debate.” Harvard International Journal of Press/Politics 6 (2): 10–14.

Lewis-Beck, Michael S. 2005. “Election Forecasting: Principles and Practice.” British Journal of Politics and International Relations 7 (2): 145–164.

Lewis-Beck, Michael S., and Ruth Dassonneville. 2015. “Forecasting Elections in Europe: Synthetic Models.” Research and Politics 2 (1). doi: 10.1177/2053168014565128.

Lewis-Beck, Michael S., William Jacoby, Helmut Norpoth, and Herbert Weisberg. 2008. The American Voter Revisited. Ann Arbor: University of Michigan Press.

Lewis-Beck, Michael S., and Richard Nadeau. 2012. “Does a Presidential Candidate’s Campaign Affect the Election Outcome?” Foresight 24: 15–18.

Lewis-Beck, Michael S., Richard Nadeau, and Éric Bélanger. 2015. “The British General Election: Synthetic Forecasts.” Paper presented at the conference “Forecasting the 2015 British General Election,” New Theatre, London School of Economics, March 27.

Lewis-Beck, Michael S., and Tom W. Rice. 1992. Forecasting Elections. Washington, DC: CQ Press.

Lewis-Beck, Michael S., and Mary Stegmaier. 2014a. “Weather, Forecasts, Elections: After Richardson.” PS: Political Science and Politics 47 (2): 322–325.

Lewis-Beck, Michael S., and Mary Stegmaier. 2014b. “United States Presidential Election Forecasting: An Introduction.” PS: Political Science and Politics 47 (2): 284–288.

Lewis-Beck, Michael S., and Charles Tien. 2011. “Election Forecasting.” In The Oxford Handbook of Economic Forecasting, edited by Michael Clements and David Hendry, 655–671. Oxford: Oxford University Press.

Lewis-Beck, Michael S., and Charles Tien. 2012. “Election Forecasting in Turbulent Times.” PS: Political Science and Politics 45: 625–629.

Lewis-Beck, Michael S., and Charles Tien. 2014a. “Congressional Election Forecasting: Structure-X Models for 2014.” PS: Political Science and Politics 47 (4): 782–785.

Lewis-Beck, Michael S., and Charles Tien. 2014b. “Proxy Models and Nowcasting: US Presidential Elections in the Future.” Presidential Studies Quarterly 44 (3): 506–521.

Linzer, Drew A. 2013. “Dynamic Bayesian Forecasting of Presidential Elections in the States.” Journal of the American Statistical Association 108 (501): 124–134.

Mayer, William G. 2014. “What, If Anything, Have We Learned from Presidential Election Forecasting?” PS: Political Science and Politics 47 (2): 329–331.

Murr, Andreas E. 2015. “The Wisdom of Crowds: Applying Condorcet’s Jury Theorem to Forecasting U.S. Presidential Elections.” International Journal of Forecasting 31 (3): 895–1007.

Sides, John. 2014. “The Secret Truths of Elections Forecasting: Psst! The Forecasters Mostly Agree.” Monkey Cage (blog), Washington Post, September 22. https://www.washingtonpost.com/news/monkey-cage/wp/2014/09/22/the-secret-truths-of-election-forecasting/ (October 28, 2015).

Traugott, Michael W. 2014. “Public Opinion Polls and Election Forecasts.” PS: Political Science and Politics 47 (2): 342–344.

Wang, Sam. 2014. “The War of the Senate Models: Handicapping the Handicapping of the 2014 Elections.” Politico Magazine, May 27. http://www.politico.com/magazine/story/2014/05/the-war-of-the-senate-models-107132_full.html?print#.VjERTzZdH99 (October 28, 2015).

Wang, Samuel S.-H. 2015. “Origins of Presidential Poll Aggregation: A Perspective from 2004 to 2012.” International Journal of Forecasting 31 (2): 898–909.

Weisberg, Herb. 2005. The Total Survey Error Approach: A Guide to the New Science of Survey Research. Chicago: University of Chicago Press.

Wlezien, Christopher, and Robert S. Erikson. 2004. “The Fundamentals, the Polls and the Presidential Vote.” PS: Political Science and Politics 37: 747–751.

Notes:

(1) The idea of the importance of the “long view” comes from Lee Sigelman, in discussion with Robert Erikson. A founder of the Monkey Cage blog, Sigelman remarked that a key contribution political scientists could make to the national political debate was their ability to “take the long view,” rather than simply to focus on the day-to-day issues and controversies.

(2) Long-term forecast models certainly can be wrong for any given election. Differences in predictions among long-term forecasts are about differing theories of voting and elections. However, forecast models that rely on strong theory should be more accurate than not over the long run.