# Economic Methods in Positive Political Theory

## Abstract and Keywords

This article focuses on economic methods in political science, specifically on positive political theory. It provides a sketch of the two canonical approaches to developing a positive political theory: collective preference theory and game theory. It is argued that these two techniques are distinguished by their trade-offs, despite having some clear formal differences. The article also considers other specific techniques within the game-theoretic approach, which are designed to accommodate two important analytical characteristics that are distinctive to political science.

Keywords: economic methods, positive political theory, canonical approaches, collective preference theory, game theory, game-theoretic approach, analytical characteristics

## 1 Introduction

Economics and political science share a common ancestry in
“political economy” and both are concerned with the decisions of people facing
constraints, at the individual level and in the aggregate. But while rational choice
theory in some form or other has been a cornerstone of economic reasoning for over a
century, with the mathematical development of this theory beginning in the middle of
the nineteenth century, its introduction to political science is relatively recent
and far from generally accepted within the discipline.^{1} Three books
proved seminal with respect to the application
of economic
methods in political science, the first of which is Kenneth Arrow’s *Social Choice
and Individual Values* (1951, 1963).^{2}

Although mathematical models of voting can be found at least as far back as the thirteenth century (McLean 1990) and despite Paul Samuelson’s claim, in the foreword to the second edition of the book, that the subject of Arrow’s contribution was “mathematical politics,” an appreciation within political science (as opposed to economics) of the significance both of Arrow’s possibility theorem itself and, more importantly for this chapter, of the axiomatic method with which it is established was slow in coming. An exception was William Riker, who quickly understood the depth of Arrow’s insight and the significance of an axiomatic theory of preference aggregation, both for normative democratic theory and for the positive analysis of agenda-setting and voting.

The remaining two of the three seminal books are Anthony Downs’s *An Economic
Theory of Democracy* (1957) and William Riker’s *The Theory of Political
Coalitions* (1962). These books were distinguished for political science by
their use of rational choice theory and distinguished for rational choice theory by
their explicit concern with politics.

Downs’s
1957 volume covers a wide set of issues but is perhaps most noted
for his development of the spatial model of electoral competition and for the
decision-theoretic argument suggesting rational individuals are unlikely to vote.
The spatial model builds on an economic model of retail location due to
Hotelling
(1929) and Smithies (1941).
Approximately a decade after the publication of Downs’s book, Davis and
Hinich (1966, 1967) and
Davis, Hinich, and
Ordeshook (1970) described the multidimensional version of the
(political) spatial model, the mathematics of which has given rise to a remarkable
series of results, exposing the deep structure of a variety of preference
aggregation rules, most notably, simple plurality rule (e.g. Plott 1967;
McKelvey
1979; Schofield 1983; McKelvey and Schofield
1987; Saari 1997). Similarly, Downs’s decision-theoretic approach to
turnout elicited a variety of innovations as authors sought variations on the theme
to provide a better account of participation in large elections (e.g. Riker and Ordeshook
1968; Ferejohn and Fiorina 1975). But although treating voters as
taking decisions independently of any consideration of others’ behaviour (as the
Downsian decision-theoretic approach surely does) yields some insight, the character
of most if not all political behavior is intrinsically strategic, for which the
appropriate model is game theoretic.^{3}

Riker understood not only the importance of Arrow’s theorem and a mathematical
theory of preference aggregation; he also recognized that game theory, the
quintessential theory of strategic interaction between rational agents, was the
natural tool with which to analyze political behaviour. In his 1962 book, Riker
exploited a cooperative game-theoretic model, due to von Neumann and Morgenstern
(1944), to develop an
understanding of
coalition structure and provide the first thoroughgoing effort to apply game theory
to understand politics. Cooperative game theory is distinguished essentially by the
presumption that if gains from cooperation or collusion were available to a group of
agents, then those gains would surely be realized. As such, it is closely tied to
the Arrovian approach to preference aggregation and much of the early work
stimulated by Riker’s contribution reflected concerns similar to those addressed in
the possibility theorem. An important concept here is that of the *core* of a
cooperative game.

Loosely speaking, if an alternative *x* is in the core, then no coalition of
individuals who all strictly prefer some distinct alternative *y* is in a
position to replace *x* with *y*. For example, the
majority rule core contains only alternatives that cannot be defeated under majority
voting. Similarly, if we imagine that a group uses a supramajority rule requiring at
least 2/3 of the group to approve any change, then *x* is in the core if there
is no alternative *y* such that at least 2/3 of the group strictly prefer
*y* to *x*. So if a group involves nine individuals, five of whom
strictly prefer *x* to *y* and the remaining four strictly prefer *y*
to *x*, then the majority rule core (when the choice is between *x* and
*y*) is *x* alone whereas both *x* and *y* are in the 2/3 rule
core.
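The two-alternative example above can be checked mechanically. The following sketch (illustrative code, not from the original text) computes the core of a *q*-rule, where *q* is the number of votes a coalition needs to overturn an alternative; with nine voters, simple majority corresponds to q = 5 and the 2/3 rule to q = 6:

```python
def q_rule_core(alternatives, preferences, q):
    """Return the alternatives x such that no alternative y is strictly
    preferred to x by at least q individuals (the q-rule core)."""
    core = []
    for x in alternatives:
        beaten = False
        for y in alternatives:
            if y == x:
                continue
            # Count individuals who strictly prefer y to x.
            supporters = sum(1 for pref in preferences if pref[y] > pref[x])
            if supporters >= q:
                beaten = True
                break
        if not beaten:
            core.append(x)
    return core

# Nine individuals: five strictly prefer x to y, four prefer y to x.
# Higher number = more preferred.
profile = [{"x": 1, "y": 0}] * 5 + [{"x": 0, "y": 1}] * 4

majority_core = q_rule_core(["x", "y"], profile, q=5)       # -> ["x"]
supermajority_core = q_rule_core(["x", "y"], profile, q=6)  # -> ["x", "y"]
```

As in the text, only *x* survives majority rule, while both alternatives sit in the 2/3-rule core because neither commands six votes against the other.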

The concept of the core is intuitively appealing as a predictor of what might
happen. Given the actions available to individuals under the rules governing any
social interaction, and assuming that coalitions can freely form and coordinate on
mutually advantageous courses of action, the core describes those outcomes that
cannot be overturned: even if a coalition does not like a particular core outcome,
the very fact that the outcome is in the core means that the coalition is powerless
to overturn it. On the other hand, when the core is empty (that is, fails to contain
any alternatives) then its use as a solution concept for a cooperative
game-theoretic model is suspect. For example, suppose a group of three persons has
to use majority rule to decide how to share a dollar and suppose every individual
cares exclusively about their own share. Then every possible outcome (that is,
division of the dollar) can be upset by a majority coalition. To see this, suppose a
fair division of $1/3 to each individual is proposed; then two individuals (say, A and
B) can propose and vote to share the dollar evenly between themselves and give
nothing to individual C; but then A and C can propose and vote to give $2/3 to A
and $1/3 to C. Because A and C care only about their own shares, this proposal
upsets the proposal favoring A and B. But by the same token, a division that shares
the dollar equally between C and B, giving nothing to A, upsets the outcome that
gives B nothing; and so on. In this example, the core offers no guidance about what
to expect as a final outcome. Furthermore, it does not follow that the core being
empty implies instability or continued change. Rather, core emptiness means only
that every possible outcome can in principle be overturned; as such, the model
offers *no* prediction at all. It is an unfortunate fact, therefore, that, save
in constrained environments, the core of any cooperative game-theoretic model of
political behavior is typically empty, attenuating the predictive or explanatory
content of the model.
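The divide-the-dollar instability can be verified directly. The sketch below is a hypothetical illustration (working in whole cents for simplicity): for any division of 100 cents among three players, the two players excluded from the largest share can always propose a division they both strictly prefer, so no division is in the majority rule core.

```python
def majority_upset(division):
    """Given a division (a, b, c) of 100 cents among players 0, 1, 2,
    return a division that two players strictly prefer: take everything
    from the player holding the largest share (always at least 34 cents)
    and split it between the other two."""
    k = max(range(3), key=lambda p: division[p])  # largest share holder
    i, j = [p for p in range(3) if p != k]
    gain = division[k]
    new = list(division)
    new[i] += gain - gain // 2
    new[j] += gain // 2
    new[k] = 0
    return tuple(new)

# Even the "fair" division is upset by a coalition of players 1 and 2.
upset = majority_upset((34, 33, 33))  # -> (0, 50, 50)
```

Looping over every whole-cent division confirms that each one is upset by some majority coalition, which is exactly the sense in which the core is empty here.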

Discovering the extent to which the core failed to exist was disappointing and
induced at least some pessimism about the general value of formal economic reasoning
as a tool for political science. Things changed with the development of
techniques within economics and game theory that greatly
extended the scope and power of *non*-cooperative game theory, in which there
is no presumption that available gains from cooperation are necessarily realized. And at
least at the time of writing this chapter, it is non-cooperative game theory that
dominates contemporary positive political theory.

This chapter concerns economic methods in political science. It is confined
exclusively to positive (formal) political theory, paying no attention to
econometric methods for empirical political science. Furthermore, I adopt the
perspective that a central task for positive political theory is to understand the
relationship between the preferences of individuals comprising a polity and the
collective choices from a set of possible alternatives over which the individuals’
preferences are defined.^{4} The next
section sketches the two canonical approaches to developing a positive political
theory, collective preference theory and game theory. I briefly argue that despite
some clear formal differences, these two techniques are essentially distinguished by
the trade-off each makes with respect to a minimal democracy constraint and a demand
that well-defined predictions are generally guaranteed. Moreover, the attempt to
develop collective preference theory as an explanatory framework for political
science reveals two important analytical characteristics, distinctive to political
science rather than economics. The subsequent section, therefore, considers some
more specific techniques within the game-theoretic approach designed to accommodate
these characteristics. A third section concludes.

## 2 Two Approaches from Economics

Economics is rooted in the choices of individuals, albeit with a broad
notion of what counts as an “individual” when useful, as in the theory of markets
where firms are often treated as individuals. And the basic economic model of
individual choice is decision theoretic: in its simplest variant, individuals are
assumed to have preferences over a set of feasible alternatives that are complete
(every pair of alternatives can be ranked) and transitive (for any three
alternatives, say *x*, *y*, *z*, if *x* is preferred to *y*
and *y* is preferred to *z*, then *x* must be preferred to *z*)
and to choose an alternative (e.g. purchasing bundles of groceries, cars, education,
…) that maximizes their preferences, or payoffs, over this set. The predictions of
the model, therefore, are given
by studying how the set of
maximal elements varies with changes in the feasible set. Now it is certainly true
that individuals make political decisions but those decisions of interest to
political science are not primarily individual consumption or investment decisions;
rather, they are decisions to vote, to participate in collective action, to adopt a
platform on which to run for elected office, and so forth. In contrast to canonical
decision-making in economics, therefore, what an individual chooses in politics is
not always what an individual obtains (e.g. voting for some electoral candidate does
not ensure that the candidate is elected). Thus, the link between an individual’s
decisions in politics and the consequent payoffs to the individual is attenuated
relative to that for economic decisions: the basic political model of individual
choice is game theoretic.

The preceding observations suggest two approaches to understanding how individual
preferences connect to political, or collective, choices and both are pursued: a
*direct* approach through extending the individual decision-theoretic model
to the collectivity as a whole, an approach essentially begun with Arrow
(1951, 1963) and
Black
(1958); and an *indirect* approach through exploring the
consequences of mutually consistent sets of strategic decisions by instrumentally
rational agents, an approach with roots in von Neumann and Morgenstern
(1944) and Nash (1951).

Under the direct (*collective preference*) approach to social choice,
individuals’ *preferences* are directly aggregated into a “social preference”
which, as in individual decision theory, is then maximized to yield a set of best
(relative to the maximand) alternatives, the collective choices. But although
individual preferences surely influence individual decisions such as voting, there
is no guarantee that individuals’ preferences are revealed by their decisions (for
example, an individual may have strict preferences over candidates for electoral
office, yet choose to vote strategically or to abstain). Under the indirect
(*game-theoretic*) approach to social choice, therefore, it is individuals’
*actions* that are aggregated to arrive at collective choices. Faced with a
particular decision problem, individuals rarely have to declare their preferences
directly but instead have to take some action. For example, in a multicandidate
election under plurality rule, individuals must choose the candidate for whom to
vote and may abstain; the collective choice from the election is then decided by
counting the recorded votes and not by direct observation of all individuals’
preferences over the entire list of candidates. It is useful to be a little more
precise.

A preference profile is a list of preferences, one for each individual in the
society, over a set of alternatives for that society. An abstract collective choice
rule is a rule that assigns collective choices to each and every profile; that is,
for any list of preferences, a collective choice rule identifies the set of outcomes
chosen by society. Similarly, a preference aggregation rule is a rule that
aggregates individuals’ preferences into a single, complete, “social preference”
relation over the set of alternatives; that is, for any profile, the preference
aggregation rule collects individual preferences into a social preference relation
over alternatives. It is important to note that while the theory (following
economics) presumes individual preferences are complete and transitive, the only
requirement at this point of a social preference relation is that it is complete.
For any profile and preference aggregation rule, we can identify those
alternatives (if any) that are ranked best by the social
preference relation derived from the profile by the rule. With a slight abuse of the
language, this set is known as the *core* of the preference aggregation rule at
the particular profile of concern.^{5} Taken
together, therefore, a preference aggregation rule and its associated core for all
possible preference profiles is an instance of an abstract collective choice rule.
Thus, the extension of the classical economic decision-theoretic model of individual
choice to the problem of collective decision-making, the direct approach mentioned
above, can be described as the analysis of the abstract collective choice rules
defined by the core of various preference aggregation rules.
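As an illustrative sketch of the direct approach (the profiles below are invented for the example), one can aggregate a preference profile by simple majority and compute the resulting core, that is, the alternatives not strictly majority-beaten by any other:

```python
def majority_core(alternatives, profile):
    """profile: list of rankings (best alternative first). Return the
    alternatives that no other alternative beats by strict majority."""
    def prefers(ranking, x, y):
        return ranking.index(x) < ranking.index(y)

    n = len(profile)
    core = []
    for x in alternatives:
        beaten = any(
            sum(prefers(r, y, x) for r in profile) > n / 2
            for y in alternatives if y != x
        )
        if not beaten:
            core.append(x)
    return core

alts = ["x", "y", "z"]
# A profile with a Condorcet winner: y beats both x and z by majority.
winner_profile = [["y", "x", "z"], ["y", "z", "x"], ["x", "y", "z"]]
# The Condorcet-paradox profile: the majority relation cycles.
cycle_profile = [["x", "y", "z"], ["y", "z", "x"], ["z", "x", "y"]]
```

For the first profile the core is the singleton {y}; for the cycling profile the core is empty, previewing the existence problem discussed below.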

The analytical challenge confronted by the direct approach is to find conditions
under which preference aggregation relations exist and yield well-defined, that is,
non-empty, cores. This approach has focused on two complementary issues: delineating
classes of preference aggregation rule that are consistent with various sets of
*desiderata* (for instance, Arrow’s possibility theorem (1951, 1963) and
May’s theorem (May
1952) characterizing majority rule) and describing the
properties of particular preference aggregation rules in various
environments^{6} (for instance, Plott’s characterization of
majority cores (Plott
1967) in the spatial model and the chaos theorems of
McKelvey 1976, 1979, and
Schofield 1978,
1983).
Contributions to the first issue rely heavily on axiomatic methods whereas
contributions to the second have, for the most part, exploited the spatial voting
model in which the feasible set of alternatives is some subset of (typically)
*k*-dimensional Euclidean space and individuals’ preferences can be described
by continuous quasi-concave (loosely, single peaked in every direction) utility
functions.

From the perspective of developing a decision-theoretic approach to prediction and
explanation at the collective level, the results from collective preference theory
are a little disappointing. There exist aggregation rules that justify treating
collective choice in a straightforward decision-theoretic way only if the
environment is very simple, having a minimal number of alternatives from which to
choose or satisfying severe restrictions on the sorts of preference profiles that
can exist (for instance, profiles of single-peaked preferences over a fixed ordering
of the alternatives), or if the preferences of all but a very few are ignored in the
aggregation (as in dictatorships). Moreover, in the context of the spatial model,
most of the aggregation procedures observed in the world are, at least in principle,
subject to chronic instability unless politics concerns only a single issue.^{7} Nevertheless, a great deal has been learned from the
collective preference approach to political decision-making about the properties and
implications of preference aggregation and voting rules,
and
about the normative and descriptive trade-offs inherent in choosing one rule over
another.^{8}

Unlike the direct collective preference approach, the indirect approach to
collective choice through the aggregation of the strategic decisions of
instrumentally rational individuals begins by specifying the collection of possible
decision, or strategy, profiles that could arise. A strategy for an individual
specifies what the individual would do in every possible contingency that could
arise in the given setting. A strategy profile is then a list of individual
strategies, one for each member of the polity. An outcome function is a rule that
identifies a unique alternative in the set of possible social alternatives with
every feasible strategy profile. A specification of all possible strategy profiles
along with an outcome function is called a *mechanism*. An abstract theory of
how individuals make their respective decisions under any mechanism is a rule that
associates a strategy profile with every preference profile; that is, for any given
preference profile, an abstract decision theory assigns a set of possible strategy
profiles consistent with the theory when individuals’ preferences are described by
the given list of preferences.^{9} For any
preference profile, mechanism, and decision theory, we can identify those
alternatives that could arise as outcomes from strategy profiles consistent with the
decision theory at that preference profile. This is the set of *equilibrium
outcomes* under the mechanism at the preference profile. Then the indirect
approach to preference aggregation can be described as the analysis of the
collective choice rules defined by the sets of equilibrium outcomes of various
mechanisms and theories of individual decision-making.

There is no effort under the indirect approach to treat collective decision-making as in any way analogous to individual decision-making. Instead, individuals make choices (vote, contribute to collective action, and so forth) taking account of the choices of others and the likely consequences of various combinations of the individuals’ decisions. Although such choices are expected to reflect individual preferences, there is no presumption that they do so in any immediately transparent or literal fashion. The approach therefore requires both a theory of how individuals make their decisions (the abstract theory of decision-making considered above) and a description of how the resulting decisions are mapped into collective choices (a specification of the outcome function). Putting these two components together with preferences then yields a model of collective choice through the aggregation of individual decisions.

Unlike the collective preference approach, there is little difficulty with developing coherent predictive models of collective choice within the game-theoretic framework. That is, while the social choice mapping derived from a preference aggregation rule rarely yields maximal elements, the mapping derived from a mechanism and theory of individual behavior is typically well defined. This fact, coupled with the flexibility of the approach with respect to modeling institutional details, uncertainty, and incomplete information, has led to game theory dominating contemporary formal theory. Indeed, it has been argued that the adoption of (in particular) non-cooperative game-theoretic techniques represents a fundamental shift in methodology from those of collective preference theory (e.g. Baron 1994; Diermeier 1997). Yet, at least from a formal perspective, the difference between the collective preference and the game-theoretic approaches is not so stark.

Both approaches to collective choice, the direct and the indirect, yield social
choice rules, taking preference profiles into collective choices. Thus any result
concerning such rules must apply equally to both. In particular, it is true that a
social choice rule is generally guaranteed to yield a non-empty core only if it
violates a “minimal democracy” property, where “minimal democracy” means that if all
individuals, with at most one exception, strictly prefer some alternative *x* to another
*y*, then *y* should not be ranked strictly better than *x* under
the choice rule (Austen-Smith and Banks 1998). Whence it follows that the
indirect approach ensures existence of well-defined solutions by violating minimal
democracy, whereas the direct approach insists on minimal democracy at the expense
of ensuring non-empty cores in any but the simplest settings. On this account, the
direct and the indirect approaches to understanding collective decision-making are
complementary rather than competitive. Which sort of model is most appropriate
depends on the problem at hand and, in some important cases, their respective
predictions are intimately related (Austen-Smith and Banks 1998, 2004). Moreover, the
collective preference approach has revealed two general analytical characteristics
of collective decision-making peculiar to political science relative to economics,
characteristics that have stimulated important methodological and substantive
innovations in the game-theoretic approach. It is to these characteristics and the
innovations they have induced that I now turn.

## 3 Two Analytical Characteristics

The first characteristic exposed by collective preference theory involves the role of opportunities for trade. In economics, it is typically the case that, for any given society, the greater are the opportunities for trade the more likely it is that welfare-improving trade takes place. The analogue to increasing opportunities for trade in politics is increasing the dimensionality of the policy space in the spatial model or the number of alternatives in the finite-alternative model. As the number of alternatives or issue dimensions on which the preferences of a given population can differ grows, so too does the number of opportunities for winning coalitions to agree on a change from any policy; with one dimension, for example, coalitions must agree either to “move policy to the left” or to “move policy to the right” but, with two dimensions, there are uncountably many directions in which to change policy, and preferences can be distributed over the plane, permitting more coalitions to form against any given policy. But it is precisely in such complex settings that preference aggregation rules are most poorly behaved: with one dimension the median voter theorem ensures a well-defined collective choice under majority rule (Black 1958) but, with two dimensions, the existence of such a choice is an extremely rare event and virtually any pair of alternatives can be connected by a finite sequence of majority-preferred steps (McKelvey 1979). Thus increasing “opportunities for trade” in the political setting exacerbates the problems of reaching a collective choice rather than ameliorating them.
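The one-dimensional half of this contrast is easy to exhibit. The following minimal sketch (with invented ideal points, and assuming single-peaked, distance-based preferences) checks that the median voter's ideal point cannot be majority-beaten by any challenger on a fine grid:

```python
def majority_prefers(voters, a, b):
    """True if a strict majority of voters is closer to policy a than
    to policy b (single-peaked, distance-based preferences)."""
    return sum(abs(v - a) < abs(v - b) for v in voters) > len(voters) / 2

voters = [0.1, 0.3, 0.5, 0.8, 0.9]          # ideal points on [0, 1]
median = sorted(voters)[len(voters) // 2]   # the median ideal point

# No alternative on a fine grid defeats the median position.
challengers = [i / 100 for i in range(101)]
unbeaten = all(not majority_prefers(voters, c, median)
               for c in challengers if c != median)
```

With two or more dimensions no analogous point generally exists, which is the instability the McKelvey result formalizes.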

The second characteristic concerns large populations. In economics it is the
market that aggregates individual decisions into a collective outcome. As the
number of individuals grows, the influence of any single agent becomes negligible
and, in the limit, instrumentally rational individuals act as price-takers;
moreover, large populations tend to smooth over non-convexities and irregularities
at the individual level, justifying an approximation that all members of the
population act as canonical economic theory presumes. These nice properties do not
hold in political settings. Individual decisions are aggregated through voting and
although the likelihood that any individual is pivotal vanishes as the electorate
grows, for any finite society that likelihood is not zero: under majority rule in
the classical Downsian spatial model, the median voter is pivotal whether there
are three voters or three billion and three. So not only can the collective choice
depend critically on a single person’s decision, it is unjustified to treat each
agent analogously to a “price-taker” and non-convexities and irregularities can
matter a great deal depending on precisely where they are located in the
population. The “correct” model of decision-making here is therefore to presume
individuals condition their choices on being pivotal and act as if their vote or
contribution or whatever tips the balance in favour of one or other collective
choice: either they are not in fact pivotal in which case their decision is
irrelevant, or they are pivotal in which case their decision determines the
collective choice.^{10}
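The vanishing-but-positive pivot probability can be made concrete. Assuming, purely for illustration, that each of the other 2n voters independently votes for one of two alternatives with probability p, the chance of an exact tie (the event in which one's own vote decides) is a binomial probability, computed in logs below to avoid overflow for large electorates:

```python
from math import exp, lgamma, log

def pivot_probability(n, p):
    """Probability of an exact tie among 2n other voters, each voting
    for the first alternative independently with probability p."""
    log_tie = (lgamma(2 * n + 1) - 2 * lgamma(n + 1)
               + n * log(p) + n * log(1 - p))
    return exp(log_tie)

p_small = pivot_probability(1, 0.5)          # two other voters: 0.5
p_large = pivot_probability(5_000_000, 0.5)  # ten million other voters
```

The probability falls roughly like 1/sqrt(n) but never reaches zero in a finite electorate, which is why conditioning on the pivotal event remains the appropriate calculation.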

The conclusion that rational individuals condition their vote decisions on the event that they are pivotal is a strategic, game-theoretic, perspective and it is within this framework that efforts to tackle the problems raised by each characteristic have been undertaken.

## 3.1 Institutions and Explanation

A virtue of the collective preference methodology (and, to a large extent, the cooperative game-theoretic methodology exploited by Riker 1962 and others) is that it is essentially “institution free,” focusing exclusively on how domains of preference profiles are mapped into collective choices without attention to how the profiles might be recorded, from where the alternatives might arise, and so on. The idea underlying the axiomatic method of collective preference theory is to abstract from empirical and detailed institutional complications and study whole classes of possible institutions satisfying particular properties. A limitation of this method for an explanatory theory, however, is the typical emptiness of the core in complex settings, that is, those with many issues or alternatives over which to choose. And although non-cooperative game theory typically requires an exhaustive description of the relevant institutional details in any application, it is rarely hampered by questions of the existence of solutions. This observation prompted a shift in emphasis away from a collective preference methodology tailored to avoid concerns with the details of any application, to a non-cooperative game-theoretic approach that embraces such details as intrinsic to the analysis.

Two illustrations of the role of institutional detail in finessing problems of existence in complex political environments are provided by the use of particular sorts of agenda in committee decision-making from finite sets of alternatives (see Miller 1995 for an overview) and the citizen-candidate approach to electoral competition in the spatial model, whereby the candidates contesting an election are themselves voters who strategically choose whether or not to run for office at some cost (Osborne and Slivinski 1996; Besley and Coate 1997).

In the classical preference profile to illustrate the instability of majority
rule (the Condorcet paradox), three committee members have strict preferences over
three alternatives such that each alternative is best in one person’s ordering,
middle ranked in a second person’s ordering, and worst in a third person’s
ordering. There is no majority core in this example, with every alternative being
beaten by one of the others under majority *preference*. However, committee
decisions are often governed by rules, such as the amendment agenda. Under the
amendment agenda, one alternative is first voted against another and the majority
winner of the *vote* (not preference) is then put against the residual
alternative in a final majority vote to determine the outcome. It is well known
that the unique subgame perfect Nash equilibrium (an instance of a theory of
individual decision-making in the earlier language) prescribes that individuals
vote with their immediate (sincere) preferences at the final division to yield two
conditional outcomes, one for each possible winner at the first division, and then
vote sincerely at the first division with respect to these two conditional
outcomes. Assuming majority preference is always strict, this backwards induction
procedure invariably produces a unique prediction which in general depends on the
ordering of the alternatives as well as the distribution of individual preferences
per se.^{11} Moreover, the set of possible outcomes from
amendment agendas (on any given finite set of alternatives and any finite
committee) as a function of the preference profiles inducing a strict majority
preference relation is now completely characterized (Banks 1985).
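The backward induction just described is easy to mechanize. The sketch below is illustrative rather than a full equilibrium analysis: voters are assumed to vote sincerely at the final division and, at the first, to compare the conditional outcomes each first-round winner would induce.

```python
def majority_winner(a, b, profile):
    """Sincere majority vote between alternatives a and b; profile is a
    list of rankings, best alternative first."""
    votes_a = sum(r.index(a) < r.index(b) for r in profile)
    return a if votes_a > len(profile) / 2 else b

def amendment_outcome(first, second, last, profile):
    """Sophisticated outcome of the amendment agenda: first vs second,
    the winner against last, solved by backward induction."""
    out_if_first = majority_winner(first, last, profile)    # if first wins
    out_if_second = majority_winner(second, last, profile)  # if second wins
    # At the first division, voters compare the two conditional outcomes.
    return majority_winner(out_if_first, out_if_second, profile)

# Condorcet-paradox profile: x beats y, y beats z, z beats x by majority.
cycle = [["x", "y", "z"], ["y", "z", "x"], ["z", "x", "y"]]
```

With this profile the sophisticated outcome is always the alternative that majority-beats the last-introduced one (agenda x, y, z yields y; agenda z, x, y yields x), illustrating the text's point that the prediction depends on the ordering of the alternatives as well as on preferences.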

Similar to the difficulty with many alternatives illustrated by the Condorcet
paradox, the majority rule core in the multidimensional spatial model is typically
empty, and core emptiness means the model offers *no* positive predictions
beyond the claim that for every policy, there exists an alternative policy and a
majority that strictly
prefers that alternative. But policies
are offered by candidates and candidates are themselves members of the electorate.
It is natural, therefore, to treat the set of potential candidates as being
exactly the set of citizens. Furthermore, since citizens are endowed with policy
preferences, other things equal and conditional on being elected, a successful
candidate has no incentive to implement any platform other than his or her most
preferred policy. In turn, rational voters recognize that whatever a candidate
drawn from the electorate might promise in the campaign, should the candidate be
elected then that person’s ideal policy is the final outcome. When the set of
potential candidates coincides with the set of voters, therefore, there is no
essential difference between the problem of electoral platform selection and the
problem of candidate entry: explaining the distribution of electoral policy
platforms in the citizen-candidate model is equivalent to explaining the
distribution of citizens who choose to run for electoral office. And assuming that
it is costly to run for office, individuals weigh the expected gains (which
depend, *inter alia*, on who else is running) from entering an election
against this cost when deciding whether to run for office. It follows that
alternatives are costly to place on the agenda in the citizen-candidate model and
it is not hard to see, then, that these institutional details introduce sufficient
stickiness to ensure the existence of equilibria. Moreover, by varying parameters
such as the cost of entry, the electoral rule of concern, and so forth, various
comparative predictions concerning policy outcomes and electoral system are
available.^{12}

## 3.2 Information and Large Populations

Beyond questions of core existence, the application of the collective
preference model to environments in which individuals face considerable
uncertainty, either about the implications of any collective decision (imperfect
information) or about the preferences of others (incomplete information), is
awkward. Non-cooperative game theory, however, can readily accommodate uncertainty
and informational variations.^{13} And whereas
uncertainty is unnecessary for developing a coherent theory of economic behavior
among large populations (in particular, the theory of perfect competition), it is
uncertainty that provides a hook on which to develop a coherent theory of
political behavior among large populations.

As remarked above, a peculiarity of political decision-making relative to economic decision-making is that consideration of large populations greatly complicates rather than simplifies the analysis of individual decisions. In markets, each consumer becomes negligible with respect to influencing price as the number of consumers grows and, therefore, is properly conceived as taking prices as given; in electorates, however, while it remains true that the likelihood that any single vote tips the outcome becomes vanishingly small as the number of voters grows, it is not true (at least for finite electorates) that the behavior of any given voter should be conditioned on the almost sure event that the voter’s decision is consequentially irrelevant. This is not usually a problem for classical collective preference theory which, as the name suggests, focuses on aggregating given preference profiles, not vote profiles. Nor is it any problem for Nash equilibrium theory insofar as there are a huge number of equilibrium patterns of voting in any large election (other than with unanimity rule), most of which look empirically silly. But empirical voting patterns are not arbitrary. And once account is taken of the fact that preferring one candidate to another in an election does not imply voting for that candidate (individuals can abstain or vote strategically), there is clearly a severe methodological problem with respect to analysing equilibrium behavior in large electorates.

One line of attack has been by brute force, using combinatorial techniques to compute the probability that a particular vote is pivotal, conditional on the specified (undominated) votes of others (Ledyard 1984; Cox 1994; Palfrey 1989). But this is cumbersome and places considerable demands on exactly what it is that individuals know about the behavior of others. In particular, individuals are assumed to know the exact size of the population. Myerson (1998, 2000, 2002) relaxes this assumption and develops a novel theory of Poisson games to analyse strategic behavior in large populations.
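
The object these combinatorial techniques compute can be illustrated with a minimal binomial sketch (my own toy calculation, which assumes the voter knows the exact number of other voters and that each votes independently — precisely the knowledge assumption Myerson relaxes):

```python
from math import comb

def pivot_probability(n_others: int, p: float) -> float:
    """Probability that n_others voters, each independently voting for
    candidate A with probability p, split exactly evenly -- the event in
    which one further vote decides a two-candidate majority election."""
    if n_others % 2:            # an odd number of others cannot tie
        return 0.0
    k = n_others // 2
    return comb(n_others, k) * p**k * (1 - p)**k

# The chance of being pivotal vanishes as the electorate grows:
for n in (10, 100, 1000):
    print(n, pivot_probability(n, 0.5))
```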

Rather than assume the size of the electorate is known, suppose that the actual
number of potential voters is a random variable distributed according to a Poisson
distribution with mean *n*, where *n* is large. Then the probability
that there is any particular number of voters in the society is easily calculated.
As a statistical model underlying the true size of any electorate, the Poisson
distribution uniquely exhibits a very useful technical property, *environmental
equivalence*: under the Poisson distribution, any individual in the realized
electorate believes that the number of other individuals in the electorate is also
a random variable distributed according to a Poisson distribution with the same
mean. And because the number and identity of realized individuals in the
electorate is a random variable, it is enough to identify voters by *type*
rather than their names, where an individual’s type describes all of the
strategically relevant characteristics of the individual (for example the
individual’s preferences over the candidates seeking office in any election). If
the list of possible individual types is fixed and known, then the distribution of
each type in a realized population of any size is itself given by a Poisson
distribution.
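
This decomposition (and, with it, environmental equivalence) is a standard "thinning" property of the Poisson distribution and is easy to confirm numerically. A small sketch with illustrative parameter values (the numbers n = 20, r = 0.3, k = 4 are chosen for the example, not taken from Myerson):

```python
import math

def poisson_pmf(mean: float, k: int) -> float:
    """P(X = k) for X ~ Poisson(mean)."""
    return math.exp(-mean) * mean**k / math.factorial(k)

# Population size N ~ Poisson(n); each realized individual is independently
# of type 1 with probability r.  The decomposition property says the number
# of type-1 individuals is itself Poisson with mean n * r.  Verify by
# summing over population sizes (truncated at 100, where the Poisson(20)
# tail is negligible).
n, r, k = 20.0, 0.3, 4
direct = poisson_pmf(n * r, k)
summed = sum(
    poisson_pmf(n, m) * math.comb(m, k) * r**k * (1 - r)**(m - k)
    for m in range(k, 100)
)
print(direct, summed)   # the two calculations agree
```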

The preceding implications of modeling population size as an unobserved draw from
a Poisson distribution allow a relatively tractable and appealing strategic theory
of elections. Because only types are relevant, individual strategies are
appropriately defined as depending only on voter type rather than on voter
identity. Thus individuals know only their own types, the distribution of possible
types in the population, that the population size is a random draw from a Poisson
distribution, and that all individuals of the same type behave in the same way.
Call such a strategic model a *Poisson game*. An
equilibrium of a Poisson game is then a specification of strategies, one for each
type, such that, for any individual of any type, the individual takes a best
decision given the strategies of all other types, conditional on his or
her beliefs regarding the numbers of individuals of each type in the electorate.
Such equilibria exist and have well-defined limits with strictly positive turnout
as the mean population size increases. Myerson proposes using these limiting
equilibria as the basis of predictions about political behavior in large
populations. And to illustrate the relative elegance of the method over the usual
combinatorial approach, Myerson (2000) provides a version of a theorem on turnout
and candidate platform convergence due to Ledyard (1984), and Myerson (1998)
establishes a Condorcet jury theorem (see also Myerson 2002 for a comparative
analysis of three-candidate elections under scoring rules using a Poisson game
framework).^{14}

In economics, markets also serve to aggregate information through relative prices. There are no relative prices explicit in elections, yet it is not only implausible to presume voters know the true size of the electorate, it is also implausible that they know the full implications of electing one candidate rather than another. Because the likelihood that any single vote is pivotal in a large election is negligible, the incentives for any one voter in a large electorate to invest in becoming better informed regarding the candidates for election are likewise negligible. Thus Downs (1957) argued that voters in large populations would be “rationally ignorant.” But just as is the case with his theory of participation, Downs’s argument is decision-theoretic and does not necessarily apply once the strategic character of political behavior is made explicit. In particular, an instrumentally rational voter conditions her vote on the event that she is pivotal; and in the presence of asymmetric information throughout the electorate, conditioning on the event of being pivotal can yield a great deal of information about what others know. To see this, consider an example in which two candidates are competing for a majority of votes in a three-person electorate. Suppose each voter receives a noisy private signal correlated with which of the two candidates would be best and (for simplicity) suppose further that all voters share identical full-information preferences. Now if the first two voters are voting sincerely relative to their signals and the third voter is pivotal, it must be the case that the first two voters have received conflicting information about the candidates, in which case the third voter can base her vote on all of the available information distributed through the electorate, even though that distribution was not publicly known.
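
The arithmetic of this example can be made explicit. In the short sketch below (the signal accuracy q = 0.7 and the uniform prior over states are illustrative assumptions), conditioning on being pivotal under sincere voting by the other two voters reduces the third voter's posterior to the one implied by her own signal alone, because the pivotal event reveals that the other two signals conflict and cancel:

```python
from itertools import product

# Illustrative numbers: two equally likely states ("A" better / "B" better);
# each voter's private signal matches the true state with probability q.
q = 0.7
states = ["A", "B"]

def signal_prob(signal: str, state: str) -> float:
    return q if signal == state else 1 - q

# Voter 3 has signal "A".  Compute her posterior that the state is A,
# conditional on voters 1 and 2 voting sincerely and splitting (i.e. on
# voter 3 being pivotal).
num = den = 0.0
for state in states:
    for s1, s2 in product(states, repeat=2):
        if s1 == s2:        # sincere votes agree: voter 3 is not pivotal
            continue
        p = (0.5 * signal_prob(s1, state) * signal_prob(s2, state)
                 * signal_prob("A", state))
        den += p
        if state == "A":
            num += p

# The posterior equals q: the two conflicting signals cancel, so the
# pivotal voter can rely on her own signal alone.
print(num / den)
```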

The information aggregation properties of various electoral schemes are currently
the subject of much research. In some settings, the logic
sketched above can yield quite perverse results; for example, Ordeshook and Palfrey
(1988) provide an example in which an almost sure Condorcet
winner (that is, an alternative against which no alternative is preferred by a
strict majority) is surely defeated in an amendment agenda with incomplete
information. And in other settings, it turns out that
elections are remarkably efficient at aggregating information; Feddersen and Pesendorfer
(1996) prove a striking full-information equivalence theorem
for two-candidate elections with costly voting under any majority or
super-majority rule shy of unanimity: despite the fact that a significant
proportion of the electorate might abstain, the limiting outcome as the population
grows is almost surely the outcome that would arise if all individuals voted under
complete information.^{15}

## 4 Conclusion

To all intents and purposes, the methods of contemporary positive political theory coincide with the methods of contemporary economic theory. The most widespread framework for models of campaign contributions at present derives from the common agency problem introduced by Bernheim and Whinston (1986); Rubinstein’s model of alternating offer bargaining (Rubinstein 1982) has been developed and extended to ground a general theory of legislative decision-making and coalition formation; Spence’s theory of costly signaling games (Spence 1974) and Crawford and Sobel’s (1982) extension of this theory to costless (cheap-talk) signaling provide the tools for a theory of legislative committees, delegation, informational lobbying, debate, and so on. More recently, the growth of interest in behavioral economics, experimental research, and so forth is beginning to appear in the political science literature. Rather than sketch these and other applications of economic methods to political science, this chapter attempts to articulate a broader (likely idiosyncratic) view of positive political theory since the importation of formal rational choice theory to politics. After all, political decision-making has at least as much of a claim to being subject to rational choice as economic decision-making; political agents make purposive decisions to promote their interests subject to constraints. It would be odd, then, to discover that the methods of economics are of no value to the study of politics.

## References

Arrow, K. J. 1951. *Social Choice and Individual Values*. New Haven, Conn.: Yale University Press.

——1963. *Social Choice and Individual Values*, 2nd edn. New Haven, Conn.: Yale University Press.

Austen-Smith, D. and Banks, J. S. 1988. Elections, coalitions and legislative outcomes. *American Political Science Review*, 82: 405–22.

————1998. Social choice theory, game theory and positive political theory. In *Annual Review of Political Science*, vol. i, ed. N. Polsby. Palo Alto, Calif.: Annual Reviews.

————1999. *Positive Political Theory*, i: Collective Preference. Ann Arbor: University of Michigan Press.

————2004. *Positive Political Theory*, ii: Strategy and Structure. Ann Arbor: University of Michigan Press.

Banks, J. S. 1985. Sophisticated voting outcomes and agenda control. *Social Choice and Welfare*, 1: 295–306.

Baron, D. 1994. A sequential choice perspective on legislative organization. *Legislative Studies Quarterly*, 19: 267–96.

Bernheim, D. and Whinston, M. 1986. Common agency. *Econometrica*, 54: 923–42.

Besley, T. and Coate, S. 1997. An economic model of representative democracy. *Quarterly Journal of Economics*, 112: 85–114.

Black, D. 1958. *The Theory of Committees and Elections*. Cambridge: Cambridge University Press.

Cox, G. W. 1990. Centripetal and centrifugal incentives in electoral systems. *American Journal of Political Science*, 34: 903–35.

——1994. Strategic voting equilibria under the single nontransferable vote. *American Political Science Review*, 88: 608–21.

Crawford, V. and Sobel, J. 1982. Strategic information transmission. *Econometrica*, 50: 1431–51.

Davis, O. A. and Hinich, M. J. 1966. A mathematical model of policy formation in a democratic society. In *Mathematical Applications in Political Science*, ii, ed. J. Bernd. Dallas, Tex.: Southern Methodist University Press.

————1967. Some results related to a mathematical model of policy formation in a democratic society. In *Mathematical Applications in Political Science*, iii, ed. J. Bernd. Dallas, Tex.: Southern Methodist University Press.

————and Ordeshook, P. C. 1970. An expository development of a mathematical model of the electoral process. *American Political Science Review*, 64: 426–48.

Diermeier, D. 1997. Explanatory concepts in formal political theory. Mimeo, Stanford University.

——Eraslan, H., and Merlo, A. 2003. A structural model of government formation. *Econometrica*, 71: 27–70.

Downs, A. 1957. *An Economic Theory of Democracy*. New York: Harper.

Feddersen, T. J. and Pesendorfer, W. 1996. The swing voter’s curse. *American Economic Review*, 86: 408–24.

Ferejohn, J. A. and Fiorina, M. P. 1975. Closeness counts only in horseshoes and dancing. *American Political Science Review*, 69: 920–5.

Fudenberg, D. and Tirole, J. 1991. *Game Theory*. Cambridge, Mass.: MIT Press.

Harsanyi, J. 1967–8. Games with incomplete information played by “Bayesian” players, parts I, II and III. *Management Science*, 14: 159–82, 320–34, 486–502.

Hotelling, H. 1929. Stability in competition. *Economic Journal*, 39: 41–57.

Ledyard, J. 1984. The pure theory of large two-candidate elections. *Public Choice*, 44: 7–43.

McLean, I. 1990. The Borda and Condorcet principles: three medieval applications. *Social Choice and Welfare*, 7: 99–108.

McKelvey, R. D. 1976. Intransitivities in multidimensional voting models and some implications for agenda control. *Journal of Economic Theory*, 12: 472–82.

——1979. General conditions for global intransitivities in formal voting models. *Econometrica*, 47: 1086–112.

——1986. Covering, dominance and institution-free properties of social choice. *American Journal of Political Science*, 30: 283–314.

——and Schofield, N. J. 1987. Generalized symmetry conditions at a core point. *Econometrica*, 55: 923–34.

May, K. O. 1952. A set of independent necessary and sufficient conditions for simple majority decision. *Econometrica*, 20: 680–4.

Miller, N. R. 1980. A new solution set for tournaments and majority voting. *American Journal of Political Science*, 24: 68–96.

——1995. *Committees, Agendas and Voting*. Chur: Harwood Academic.

Myerson, R. B. 1991. *Game Theory: Analysis of Conflict*. Cambridge, Mass.: Harvard University Press.

——1998. Population uncertainty and Poisson games. *International Journal of Game Theory*, 27: 375–92.

——1999. Theoretical comparisons of electoral systems. *European Economic Review*, 43: 671–97.

——2000. Large Poisson games. *Journal of Economic Theory*, 94: 7–45.

——2002. Comparison of scoring rules in Poisson voting games. *Journal of Economic Theory*, 103: 217–51.

Nash, J. F. 1951. Noncooperative games. *Annals of Mathematics*, 54: 289–95.

Ordeshook, P. and Palfrey, T. 1988. Agendas, strategic voting and signaling with incomplete information. *American Journal of Political Science*, 32: 441–66.

Osborne, M. J. and Slivinski, A. 1996. A model of political competition with citizen-candidates. *Quarterly Journal of Economics*, 111: 65–96.

Palfrey, T. R. 1989. A mathematical proof of Duverger’s Law. In *Models of Strategic Choice in Politics*, ed. P. C. Ordeshook. Ann Arbor: University of Michigan Press.

——and Rosenthal, H. 1983. A strategic calculus of voting. *Public Choice*, 41: 7–53.

Persson, T., Roland, G., and Tabellini, G. 1997. Separation of powers and political accountability. *Quarterly Journal of Economics*, 112: 310–27.

Plott, C. R. 1967. A notion of equilibrium and its possibility under majority rule. *American Economic Review*, 57: 787–806.

Riker, W. H. 1962. *The Theory of Political Coalitions*. New Haven, Conn.: Yale University Press.

——and Ordeshook, P. C. 1968. A theory of the calculus of voting. *American Political Science Review*, 62: 25–43.

Rubinstein, A. 1982. Perfect equilibrium in a bargaining model. *Econometrica*, 50: 97–109.

Saari, D. G. 1997. The generic existence of a core for *q*-rules. *Economic Theory*, 9: 219–60.

Schofield, N. J. 1978. Instability of simple dynamic games. *Review of Economic Studies*, 45: 575–94.

——1983. Generic instability of majority rule. *Review of Economic Studies*, 50: 695–705.

Schwartz, T. 1972. Rationality and the myth of the maximum. *Noûs*, 7: 97–117.

Smithies, A. 1941. Optimum location in spatial competition. *Journal of Political Economy*, 49: 423–39.

Spence, A. M. 1974. *Market Signaling: Informational Transfer in Hiring and Related Screening Processes*. Cambridge, Mass.: Harvard University Press.

Spence, M. 1973. Job market signaling. *Quarterly Journal of Economics*, 87: 355–79.

Von Neumann, J. and Morgenstern, O. 1944. *Theory of Games and Economic Behavior*. Princeton, NJ: Princeton University Press.

## Notes:

(^{1})
A suggestion (first made to me in conversation
many years ago by Barry Weingast) as to why the two disciplines differ so markedly
with respect to the use of mathematical modeling is that political science has no
analogous concept to that of the *margin* in economics. And the importance of the
margin in this respect lies less with its substantive content than with the
amenability of its logic to elementary diagrammatic representation. Economic
theorizing evolved into its contemporary mathematical form through a diagrammatic
development of the logic of the margin, whereas positive political theory, almost of
necessity, bypassed any such graphical development and jumped directly to applied game
theory.

(^{2})
Duncan Black’s *The Theory of Committees and
Elections* (1958) has some claim to be included as a fourth such book. However,
although Black considers similar issues to those taxing Arrow, his concern was more
limited than that of Arrow and his particular contribution to political science was
generally recognized only after the importance of Arrow’s work had begun to be
appreciated.

(^{3})
Palfrey and Rosenthal
1983 and Ledyard 1984 provide the earliest fully strategic models of
turnout.

(^{4})
Of course, this understanding itself reflects a
largely consequentialist perspective intrinsic to economics. Insofar as there is
concern with any economic process, it is rarely with the process per se but rather
with the outcomes it supports or induces. This remains true for
normative analysis. For example, axiomatic characterizations of procedures for dispute
resolution (such as bargaining or bankruptcy) rarely exclude all references to the
consequences of using such procedures: Pareto efficiency and individual rationality
are common instances of such consequentialist properties. In contrast, a
consequentialist perspective is less well accepted within political science at large,
where (*inter alia*) there is widespread concern with, say, the legitimacy of
procedures independent of the outcomes they might induce.

(^{5})
The abuse arises since, strictly speaking, the
core is defined with respect to a given family of coalitions. To the extent that a
preference aggregation rule can be defined in terms of so-called decisive, or winning
coalitions, the use of the term is standard. But not all rules can be so defined in
which case the set of best elements induced by such a rule is not a core in the strict
sense (see, for example, Austen-Smith and Banks 1999, ch. 3). The terminology in these
instances is therefore an abuse but a useful and harmless one nevertheless.

(^{6})
That is, various admissible classes of preference
profiles and sorts of feasible sets of alternatives.

(^{7})
See Austen-Smith and Banks
1999 for an elaboration of these claims.

(^{8})
It is worth pointing out here, too, that the
typical emptiness of the core has stimulated work on solution concepts other than the
core for collective preference theory (e.g. Schwartz 1972;
Miller
1980; McKelvey 1986).

(^{9})
Examples of such decision theories include Nash
equilibrium and its refinements. See Fudenberg and Tirole
1991 or Myerson 1991.

(^{10})
While this applies to any large finite
electorate, proceeding to the limit in which each voter is infinitesimally small
removes even this prescription regarding how strategically rational agents may behave,
on which more below.

(^{11})
It is worth noting in this example that the
equilibrium outcome surely violates minimal democracy because, for every possible
decision, there is an alternative that is strictly preferred by two of the three
individuals.

(^{12})
The contemporary literature on game-theoretic
models of comparative institutions is large and growing. Examples include
Austen-Smith and Banks
1988; Cox 1990; Persson, Roland, and Tabellini
1997; Myerson 1999; and Diermeier, Eraslan, and Merlo
2003.

(^{13})
The theoretical foundations were laid by
Harsanyi
1967–8. Two particularly important papers since then for political
science are Spence
1973, who introduced the class of signaling games, and
Crawford and Sobel
1982, who extended this class to include cheap-talk (costless)
signaling.

(^{14})
Condorcet jury theorems address the problem of
choosing one of two alternatives when voters are uncertain about which is most in
their interests. Typically, the theorems connect the size of the electorate (jury) to
the probability that majority voting outcomes coincide with the majority choice that
would be made under no uncertainty.

(^{15})
The logic of this result is that while, as
Downs suggested, the *relative* number of voters voting informatively declines as
the electorate grows, owing to the diminishing likelihood of being pivotal, the
*absolute* number of voters voting informatively nonetheless increases as the
electorate grows, and it is this latter effect that dominates the information
aggregation.