Introduction: The New Philosophy of Economics

Abstract and Keywords

This article outlines a vision of what philosophy of economics should be. Its focus is on economic science, as opposed to the somewhat broader field taken by the philosophy of economics in general. In drawing this distinction, it follows Lionel Robbins, whose Essay on the Nature and Significance of Economic Science had the seventy-fifth anniversary of its publication celebrated in 2007. Economists are far less preoccupied with abstract, grand, unified theories than they once were, and philosophers of economics should accordingly be less interested in such theories too. Led by Nancy Cartwright and others, many philosophers of science had arrived at a similar conclusion independently. Nevertheless, many have continued to criticize economics because they did not quickly pick up on the changes taking place in economics graduate curricula and circulating working papers. The new, situational, and pragmatic economics beautifully exemplifies the themes that philosophers of science had recently gathered from other disciplines.

Keywords: philosophy, economics, Lionel Robbins, Nancy Cartwright, science

This volume—its selection of topics, authors, and approaches—reflects a specific vision of what philosophy of economics should be. In this chapter, we outline and defend that vision. Past philosophy of economics was guided in large part by assumptions derived from both philosophy of science and economics that have been displaced by more recent developments in both fields. It is these developments that motivate the orientation of this book.

Our focus is on economic science, as opposed to the somewhat broader field taken by the philosophy of economics in general. In drawing this distinction we follow Lionel Robbins, whose Essay on the Nature and Significance of Economic Science had its seventy‐fifth anniversary of publication celebrated last year (2007). Robbins carefully distinguished economic science, by which he meant the systematic search for objective economic relationships, from the many purely practical aims to which economic reasoning is put. The manager of a warehouse deciding how much floor space to devote to aisles, thus trading off between storage capacity and ease of access, is engaged in economics but not, Robbins would say, in economic science. Similarly, the ruler who decides that every family should have a free weekly chicken in their pot hands over to economic technicians the problem of selecting a means of funding the redistribution. Everyday, practical economics (p. 4) is arguably more important than economic science, and certainly far more widespread. It is also less shrouded in philosophical confusion—populist obtuseness about trade and other matters being dire confusion but not philosophical confusion—and so is best left in the hands of outstanding commentators such as Kay (2005) and Harford (2005). The topics taken up in this book are more arcane than theirs, and harder to get unambiguously right.

Old and New Philosophy of Science

Philosophy of economics through the 1980s had an intimate relationship both to developments in economics itself and to the predominant approaches in philosophy of science, though with some lag in responding to developments in both areas. The technical innovations that dominated economics from the 1950s to the 1970s combined with some general philosophy of science assumptions in the 1950s to produce a philosophy of economics with a distinct character. Understanding that history is important for seeing how and why the philosophy of economics is undergoing a major change in character.

Recent historical and sociological studies of science have deepened our understanding of the extent to which “official” paradigms—that is, graduate textbook presentations—of scientific disciplines can hide much heterogeneity in actual practice and interpretation among researchers. This was true of the economics of the 1950s and 1960s, where practical empirical and policy work went on in parallel with the development of advanced mathematical theory. However, technically extending the neoclassical paradigm certainly garnered the greatest prestige in the profession. In the hands of theorists such as Samuelson, Arrow, Debreu, and McKenzie, microeconomics was preoccupied with theorem proving, much of it clearly motivated by ambitions to achieve maximum generality. Formalization was thus a paramount consideration. Consistent marginalist models of consumer and producer behavior were developed. This work culminated in the Arrow‐Debreu‐McKenzie model, which showed that, under certain assumptions about consumers and producers, such as convexity of preferences and the absence of increasing returns to scale, an equilibrium exists in that there is a set of prices such that aggregate supply will equal aggregate demand for every commodity in the economy. The First Theorem of Welfare Economics shows that every such general equilibrium is Pareto efficient, and the Second Theorem establishes that every Pareto‐efficient allocation can be supported as a general equilibrium by some set of prices, given an appropriate redistribution of initial endowments. Though these achievements occurred before the 1970s, they furnished the image of economics on which leading philosophers of economics were still mainly focused in the 1980s and 1990s; see, for example, Hausman (1992) and Rosenberg (1993), which were arguably the most influential publications in philosophy of economics during those decades.
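Stated schematically (in standard textbook notation, which is ours rather than that of the works just cited), an equilibrium of the Arrow‐Debreu‐McKenzie kind is a price vector \(p^\ast\) at which every agent optimizes and all markets clear:

\[
\sum_i x_i^\ast(p^\ast) \;=\; \sum_i \omega_i \;+\; \sum_j y_j^\ast(p^\ast),
\]

where each consumer's demand \(x_i^\ast\) maximizes her utility subject to her budget at \(p^\ast\), each firm's supply \(y_j^\ast\) maximizes its profit at \(p^\ast\), and \(\omega_i\) denotes initial endowments. The First Welfare Theorem then says that any such equilibrium allocation is Pareto efficient; the Second says that any Pareto‐efficient allocation can be supported as an equilibrium for some \(p^\ast\), given an appropriate lump‐sum redistribution of the \(\omega_i\).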

(p. 5)

Parallel influences from philosophy of science that informed thinking about economics in the 1970s and 1980s generally involved a core set of assumptions that do not fall under any standard label. Those assumptions included the following:

  1. Theories are the central content of science. A mature science ideally produces one clearly identifiable theory that explains all the phenomena in its domain. In practice, a science may produce different theories for different subdomains, but the overarching scientific goal is to unify those theories by subsuming them under one encompassing account.

  2. Theories are composed of universal laws relating and ascribing properties to natural kinds and are best understood when they are described as formalized systems. Philosophy of science can aid in producing such formalizations by the application of formal logic.

  3. The fundamental concepts of science should have clear definitions in terms of necessary and sufficient conditions. General philosophy of science is in large part about clarifying general scientific concepts, especially explanation and confirmation. The goal is to produce a set of necessary and sufficient conditions for application of these concepts. These definitions are largely tested against linguistic intuitions about what we would and would not count as cases of explanation and confirmation.

  4. Explanation and confirmation have a logic—they conform to universal general principles that apply to all domains and do not rest on contingent empirical knowledge. A central goal of philosophy of science is to describe the logic of science. Explanation involves, in some sense still to be clarified, deductions from laws of the phenomena to be explained. Whether a science is well supported by evidence can be determined by asking whether the theory bears the right logical relationship to the data cited in support of it.

  5. Holism: Theories are wholes that are evaluated as units against the evidence. Evidence always bears on theories as totalities, not on single hypotheses considered one at a time. This thesis was a common meeting ground of the descendants of logical empiricism, such as W. V. Quine, and philosophers usually perceived as the leaders of logical empiricism's overthrow, Thomas Kuhn and Imre Lakatos. Lakatos's specific incorporation of holism through his conception of the units of scientific testing as entire research programs, which include abstract ‘hard‐core’ assumptions buffered from direct confrontation with observation and experiment, seemed to many philosophers to fit economics especially closely.

  6. The criteria for explanation and confirmation allow us to demarcate properly scientific theories from pseudoscientific accounts, which tend to sacrifice due attention to confirmation in favor of apparent explanation, and in so doing fail to be genuinely explanatory.

  7. (p. 6) It is a serious open question to what extent any of the social sciences are real sciences. This question is best explored by comparing their logical structures with those characteristic of physics and, to a lesser extent, chemistry, geology, and biology.

These were some of the key ideas in the philosophy of science in the 1960s and 1970s that shaped how philosophy of economics was done. Of course, not all tenets were equally influential in every application nor were they always explicitly advocated. However, a significant subset of them were implicit in most philosophical accounts of economics.

We may note an important feature that general equilibrium theory and the kind of philosophy of science just described have in common: both emphasize highly general, abstract, unifying structures that are independent of and prior to specific empirical research projects. Neoclassical general equilibrium theory was precisely the kind of scientific product that philosophy of science saw as its target, for reasons internal to its own dynamic that had emerged from the logical positivism and logical empiricism of the earlier twentieth century. Understanding economics meant understanding the status of general equilibrium theory. That understanding would come from identifying the fundamental commitments of the theory and clarifying their cognitive status. What were the theoretical laws? Did they have the proper logical form of laws? What were their truth conditions? What was their connection to observation, especially given the simplifying assumptions that seemed essential to all theory in the neoclassical vein?

As we noted, the two most influential philosophers of economics of the last three decades of the previous century, Alex Rosenberg and Daniel Hausman, were centrally concerned with these questions. Rosenberg's (1976, 1992) main preoccupation was the question of whether the laws of microeconomics have the right form to be laws, while Hausman (1981, 1992) primarily investigated the truth conditions for ceteris paribus generalizations and sought to identify the fundamental general assumptions of contemporary neoclassical economics. Economic methodologists took similar approaches in assessing current theory. For Blaug (1980) the key question was the alleged need for falsifiability of the neoclassical paradigm and the extent to which stringent tests as defined by Popper were present in economics. His judgment was largely negative. Caldwell (1982) shared the emphasis on assessing neoclassical theory and on the importance and possibility of determining its scientific status by means of general criteria drawn from the philosophy of science. Others asked similar questions through the lenses of Lakatos (Hands 1993) or Kuhn. Is neoclassical theory a progressive or degenerating research program? What were the paradigm shifts in the history of economic thought? Is the abstract character of general equilibrium theory defensible on grounds that every research program includes a sheltered hard core that does not make definite empirical predictions on its own, or is its protective belt so resistant to efforts at falsification as to render the approach pseudoscientific, as Blaug has repeatedly alleged? To repeat, not all these authors affirmed all seven of the assumptions listed earlier, (p. 7) and they did not uniformly interpret the ones they did affirm. Yet they shared a commitment to using general philosophy of science criteria to assess the epistemic status of neoclassical theory as the fundamental preoccupation of philosophy of economics.

Useful work was done using the seven tenets, and we do not deny that there is some truth in each of them, provided that enough (relatively massive) qualifications and escape clauses are added. However, as philosophers and philosophically inclined economists began turning their attention to economics, philosophy of science was moving beyond the tenets toward a more subtle understanding of science. All the tenets are, to a significant extent, misleading as descriptions of actual scientific practice. Let us consider them again one at a time, now with an eye for their problems.

  1. Theories as central: “The” theory in a given discipline is typically not a single determinate set of propositions. What we find instead are common elements that are given different interpretations according to context. For example, genes play a central role in biological explanation, but what exactly a gene is taken to be varies considerably depending on the biological phenomena being explained (Moss 2004). Often we find no one uniform theory in a research domain, but rather a variety of models that overlap in various ways but that are not fully intertranslatable. Cartwright (1980) gives us the example of models of quantum damping, in which physicists maintain a toolkit of six different mathematical theories. Because these aren't strictly compatible with one another, a traditional perspective in the philosophy of science would predict that physicists should be trying to eliminate all but one. However, because each theory is better than the others for some contexts of experimental design and interpretation, but all are reasonable in light of physicists' consensual informal conception of the basic cause of the phenomenon, physicists enjoy their embarrassment of riches as a practical boon. There is much more to science than theories: experimental setup and instrument calibration skills, modeling ingenuity to facilitate statistical testing, mathematical insight, experimental and data analysis paradigms and traditions, social norms and social organization, and much else—and these other elements are important to understanding the content of theories.

  2. Theories, laws, and formalization: Laws in some sense play a crucial role in scientific theories. Absent any trace of what philosophers call modal structure, it is impossible to see how scientists can be said to rationally learn from induction (Ladyman & Ross 2007, Chapter 2). However, some of our best science does not emphasize laws in the philosopher's sense as elegant, context‐free, universal generalizations, but instead provides accounts of temporally and spatially restricted context‐sensitive causal processes as its end product. Molecular biology is a prime example in this regard, with its emphasis on the causal mechanisms behind cell (p. 8) functioning that form a complex patchwork of relations that cannot be aggregated into an elegant framework. Expression in a clear language, quantitative where possible, is crucial to good science, but the ideal of a full deductive system of axioms and theorems is often unattainable, and not, as far as one can see, actually sought by many scientific subcommunities that are nevertheless thriving.

  3. Conceptual analysis: Some important scientific concepts are not definable in terms of necessary and sufficient conditions but are instead much closer to the prototypes that, according to cognitive science, form the basis for our everyday concepts of kinds of entities and processes. The concept of the gene is again a good example. There is no definition of gene in terms of its essential characteristics that covers every important scientific use of the concept. Cartwright (2007) has argued recently that the same holds even for so general and philosophical an idea as cause: there are different senses of cause with different relevant formalizations and evidence conditions. Equally important, the traditional philosophical project of testing definitions against what we find it appropriate to say is of doubtful significance. Who is the relevant reference group? The intuitive judgments of philosophers, whose grasp of science is often out of date and who are frequently captured by highly specific metaphysical presuppositions, do not and should not govern scientific usage at all (Ladyman & Ross 2007, Chapter 1). Questions about the usage of scientists are certainly more relevant, but this also may not be the best guide to the content of scientific results; when scientists pronounce on the limitations of concepts, they, in effect, step outside their professional roles and don philosophical mantles. The most important aspect of science's relationship to everyday or philosophical conceptual order is its relentless opportunism in pursuit of new observations and new ways of modeling data, which leads it to resist all attempts to restrict it within comfortable and familiar metaphysical or epistemological boundaries.

  4. The logic of confirmation and explanation: Confirmation and explanation are complex practices that do not admit of a uniform, purely logical analysis. Explanations often have a contextual component set by the background knowledge of the field in question that determines the question to be answered and the kind of answer that is appropriate (van Fraassen 1981). Sometimes, that context may invoke laws, but often it does not, at least not in any explicit way. Confirmation likewise depends strongly on domain‐specific background knowledge in ways that make a purely logical and quantitatively specifiable assessment of the degree to which specified evidence supports a hypothesis unlikely. Such general things as can be said about confirmation are sufficiently abstract that they are unhelpful on their own. The statements “A hypothesis is well supported if all sources of error have been ruled out” and “A hypothesis is (p. 9) well supported by the evidence if it is more consistent with the evidence than any other existing hypothesis” are hard to argue with. Yet to make any use of these standards in practice requires fleshing out how error is ruled out in the specific instance or what consistency with the evidence comes to in that case. Other all‐purpose criteria such as “X is confirmed if and only if X predicts novel evidence” or “X is confirmed if and only if X is the only hypothesis that has not been falsified” are subject to well‐known counterexamples and difficulties of interpretation.

  5. Holism: It is a fallacy to infer from the fact that every hypothesis is tested in conjunction with background theory that evidence only bears on theories as wholes (Glymour 1980). By embedding hypotheses in differing background theoretical and experimental setups, it is possible to attribute blame and credit to individual hypotheses. Indeed, this is how the overwhelming majority of scientists view the overwhelming majority of their own research results. Judged on the basis of considerations that scientists typically introduce into actual debates about what to regard as accepted results, the relationships between theories, applications, and tests propagated by Quine and Lakatos look like philosophers' fantasies. In particular, they make science seem like unusually empirically pedantic philosophy. It is no such thing.

  6. Science and pseudoscience: Several of the insights about science already discussed suggest that judging theories to be scientific or pseudoscientific is a misplaced enterprise. Scientific theories and their evidence form complexes of claims that involve diverse relations of dependence and independence and, as a result, are not subject to uniform or generic assessment. Any general criteria of scientific adequacy that might be used to distinguish science from pseudoscience are either too abstract on their own to decide what is scientific and what is not, or they are contentious (Kitcher 1983). This is not to deny that astrology, creation “science,” and explicitly racialist sociobiology are clearly quackery or disguised ideology; it is merely to point out that these judgments must be supported case by case, based on specific empirical knowledge.

  7. Scientific social science: The foregoing discussion of science and pseudoscience should make it obvious that questions about the “genuine” scientific status of all, or some particular, social science are sensible only if (1) they are posed as questions about specific bodies of social research, and (2) they are approached as concrete inquiries into the evidential and explanatory success of that body of work. Assessing scientific standing is continuous with the practice of science itself.

From the perspective of this more nuanced philosophy of science, traditional philosophy of economics was often engaged in unfruitful projects. Adherence to the seven tenets has encouraged engagement in discussions conducted at levels (p. 10) of abstraction too elevated and remote from empirical research to contribute to understanding the practice or content of economics. Economic methodology, following McCloskey (1985), comes in more than one form: Methodology is abstract philosophical commentary on science, whereas methodology informs the day‐to‐day concerns of practicing economists and is taught, for example, in econometrics courses. Contra McCloskey, however, it is more reasonable to think of this as a continuum than as a simple dichotomy. For example, is one teaching Methodology or methodology if one explores the difference between so‐called Granger causation in Markov process models of financial systems and more general relationships of asymmetric influence transmission between variables that a philosopher would be more likely to regard as “real” causation? Notice that an economist might be interested in this issue for reasons that have nothing to do with philosophical curiosity; consulting clients are often looking for levers they can pull that will have predictable effects, something that merely Granger‐causal relations don't, in general, deliver. However, we agree with McCloskey that much philosophy of economics has been conducted toward the extreme Methodology end of the spectrum and, as a result, is of little consequence for economics. Asking whether neoclassical economics meets Popperian falsificationist standards or whether its statements have the right logical form to be laws is unlikely to be of much relevance, because these questions presuppose the simplistic picture of science that we reject. Philosophical commentary, to be useful, must engage much more closely at the other end of the spectrum, where methodology influences the practice of economics.
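To make the distinction concrete, here is a minimal sketch of what a Granger‐causality test amounts to, run on simulated data (the data‐generating process, variable names, and lag length are our illustrative choices; only numpy and scipy are assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated series in which lagged x helps predict y.
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def granger_f_test(y, x, lags=1):
    """F-test: do lags of x improve prediction of y beyond y's own lags?"""
    n = len(y)
    Y = y[lags:]
    # Restricted model: y on its own lags; unrestricted model adds lags of x.
    X_r = np.column_stack([np.ones(n - lags)] +
                          [y[lags - k:n - k] for k in range(1, lags + 1)])
    X_u = np.column_stack([X_r] +
                          [x[lags - k:n - k] for k in range(1, lags + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df1, df2 = lags, n - lags - X_u.shape[1]
    F = ((rss(X_r) - rss(X_u)) / df1) / (rss(X_u) / df2)
    return F, stats.f.sf(F, df1, df2)

print(granger_f_test(y, x))   # large F, tiny p: x "Granger-causes" y
```

Passing such a test licenses only a claim about incremental predictability; whether manipulating x would move y (the "lever" a consulting client wants) is a further, causal question of exactly the kind at issue in the text.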

Philosophy of economics also needs to change in order to give appropriate weight to major innovations that have transformed practice in economics since the midcentury heyday of general equilibrium theory, which was the prime focus of most philosophy of economics in the recent past. To these innovations, and to the bases of their significance, we now turn.

Old and New Economics

As noted earlier, from the 1950s through the 1970s scientific economists were preoccupied with highly abstract models. These included not only general equilibrium models, taken as foundational for microeconomics, but also the abstract models of macroeconomics, a field philosophers at the time largely ignored; there, the period was dominated first by technical debates between Keynesians and monetarists over the character of the Phillips curve, and later (then on into the 1980s) by the rise of rational expectations modeling. Since the 1980s, however, there has been a dramatic expansion of the range of modeling activities and forms of model testing. This expansion can mostly be attributed to four developments. In order of importance as we see them, they are: (1) the development of massive computing power, facilitating the testing of highly specified causal (p. 11) models against ever larger data sets; (2) the rise of game theory, which made economists' outputs useful to more agents than just government planners, and provided a rigorous theoretical framework for relaxing unrealistic informational symmetries; (3) the increasing integration of economic research with work from other disciplines; and (4) the increasing turn among economists to empirical experimentation.

We will briefly discuss each of these exciting developments. However, we want to make it clear that we do not think of these developments as paradigm shifts in the Kuhnian sense. We are indeed suspicious of the value of that hoary idea to any science, at least to any science once it establishes a track record of techniques for modeling relatively clean empirical data. The view that whole disciplines undergo revolutions—that is, transformations in their fundamental ontologies—crucially rests on the assumptions about holism and the putative logic of confirmation in philosophy of science we questioned earlier. It is far from clear that scientists make implicit metaphysical reference to so‐called natural kinds, as many philosophers imagine. In general, we doubt that there is typically a precise fact of the matter about what terms used in scientific theories refer to; we deny that philosophers have sound motivation for supposing that such reference is intended, given that most scientists find the suggestion strange; and we think the main epistemic import of science is missed if too much attention is paid to the semantic formulation of theories. Ladyman and Ross (2007) argue that sciences appear far more progressive than Kuhnians imagine if one attends, not to statements of theories, but to mathematical structures that guide the design of experiments and tests of observations. Like physics, economics has accumulated and refined mathematical structures for use in modeling, rather than replaced earlier structures with novel ones. Computable general equilibrium models differ in their detailed structure and in their purpose from the general equilibrium models of the 1950s and 1960s—they are about detailed empirical investigation rather than proving uniqueness and stability. However, they are in their abstract structures clearly related to, and successors of, those models from the 1950s and 1960s.

(1) Number Crunching

As Paul Humphreys (2004 and this volume) argues, it is probable that the sciences will turn out never to have experienced a technological development more significant in the long run than the staggering expansion in computational capacity that has followed the innovation of microprocessing, and which is still far from complete.

Critics of economics have long scoffed at economists' defense of heroically simplified models on mere grounds of tractability. (See, for one of many examples, Addleson 1997.) Such critics have not generally had in mind, one hopes, that economists ought to use intractable models instead. Rather, the intended view (unusually explicit in Addleson) has been that economists should give up on attempting quantitative representation in favor of qualitative and empirically richer understanding.

(p. 12)

What formerly made richer quantitative models intractable was not economists' inability to write them down or imagine what operations it would take to solve them. What generated most intractability was the physical incapacity of teams of human brains to compute solutions to models with as many variables as we can often see should appear in ideal pictures of real systems of interest. Computational capacity began to grow slowly in the 1940s and has multiplied on an annual basis since the late 1980s; that constraint is now triumphantly shattered. Ever‐greater complexity is worth modeling because ever‐more complex models can be solved.

What is relevant here is not literally the limiting case of the representation of complex reality, a single supermodel (so to speak) literally isomorphic to the world itself. Such a model would be epistemically pointless. Understanding consists in generalization, which is a species of simplification. On the other hand, there is no doubt that, for most of its history, computational restrictions forced economists to make do with the frustrating method of asking what circumstances would be like if they were based upon interaction of far fewer variables than clearly made real differences to parametric outcomes. This formerly nonoptional method puts a huge premium on the economist's power of imagination—a factor that critics of “toy” modeling seldom credit—and is indeed the source of a good deal of the intellectual thrill that economists appreciate. Every economist, for example, at some point in her or his early training saw in a rush the stunning implications of extending Adam Smith's trivial model of the pin factory to the whole interlocked network of global production. The lucky economist's whole career is a series of such leaps of imagination.

It is, thus, to be feared that economics might become more boring, as computers take over where insight was once necessary. However, where accuracy is concerned, there is no serious room for doubt that real computation is vastly superior to intuition. It might be objected that—at least for the moment, while we wait upon the fuller flowering of artificial intelligence—computers in economics can only do what we already understand clearly enough to program them to do. Here, however, supply of a problem and resulting demand for solutions have worked their usual effects; the explosion of computational capacity has been closely followed by a boom in invention of new econometric techniques. As Kincaid's and Du Plessis's papers in this volume remind us, at the moving frontier, critics are naturally alert to the imperfections of these techniques, and restlessness for improvement in them will be a permanent state of affairs. However, only the ignorant could deny that our ability to model increasingly rich causal hypotheses grows steadily, along with our capacity to subject these models to ever wider ranges of tests, with complementary strengths and weaknesses, against ever larger sets of data. This is reflected in the proportion of coursework a typical doctoral student in economics now must devote to learning analysis packages compared to her predecessor of a decade ago. This quiet revolution in the curriculum is tangible evidence that economists are becoming more like dentists in the way Keynes (1930) hoped they would. Perhaps this is sad for economists but a breakthrough for economics—and for its consumers.

(p. 13)

The importance of the computational boom at the philosophical level is straightforward. It simultaneously reduces reliance on high abstractions, and permits greater weight to be given to empirical testing of hypotheses—especially causal hypotheses—by means of regression analyses on independent variables in models that aim at completeness. Prestige in the discipline, we believe, is beginning to accrue to highly fruitful achievements in model specification in just the way it has for decades been associated with elegant uniqueness proofs. Continuing worries of Popperians such as Blaug (2002) that economics is unmoored from confrontation with recalcitrant nature are increasingly anachronistic.

(2) Game Theory

The rise of game theory as the single most important branch of mathematics in microeconomics unfolded glacially, beginning in 1944 but not emerging into the central precincts of the discipline until the 1980s. Notwithstanding the fact that game‐theoretic reasoning is now second nature to every economist under 50, its significance is often underappreciated. Economists still frequently express the opinion, for example, that game theory has obviously been useful in industrial organization and auction theory, but it has otherwise not fulfilled its promise. It is clear enough what people who say such things are getting at. Game theory has not been very productive of sweeping theoretical generalizations. Game‐theoretic models of phenomena tend to have shallow reach and to need rewriting from scratch, whenever data force even minor adjustments to specifications of players' utility functions or strategy sets. In consequence of both this very real limitation and of the multiplicity of Nash equilibria in most games of interest, game‐theoretic models usually fare poorly at quantitative prediction except when applied to designed mechanisms. The equilibrium refinement program (Kreps 1990), which was launched in the hope of providing relief from the embarrassing richness of equilibria, quickly degenerated into a formalized philosophical argument about what we ought to mean by the honorific “rational.”

The problem with this familiar way of trying to minimize the importance of game theory is that it interprets the sources of value and potential influence of formal technology too narrowly. In general, anyone who becomes excited about a new kind of mathematics because they think they'll at last be able to deductively nail a range of empirical magnitudes is on the road to disappointment. (Philosophers are invited to reflect on their experience with formal logic at the beginning of the previous century.) The misleading example of Newton notwithstanding, that is not mainly how mathematics transforms science. It does so, instead, by providing scientists with fruitful new ways of organizing and representing phenomena. By this criterion, game theory has had a more transformative impact on economics than any other set of tools since differential calculus in the 1870s. We pointed out in the first section that much science is not about abstract theories but about providing detailed causal mechanisms according to context of application. That is what current game theory seeks to do.

(p. 14)

Game theory has radically expanded economists' sense of how much of the social and commercial world it is possible for them to rigorously model. The example of industrial organization (IO) theory is illuminating in this regard. Before game theory, economists could formally handle monopolistic and perfectly competitive markets. This made their work interesting to government planners concerned with efficiency limits for the purpose of identifying targets, but it left economic modeling and advice far less relevant to business people, the majority of whom live in oligopolistic environments, often characterized by increasing returns to scale. Game theory took over IO because it is tailor‐made for oligopoly and wholly unreliant on the decreasing returns assumption. Its resulting relevance to the real domain of commerce persists despite the limitations of the game theorist's ability to make reliable quantitative predictions (except about designed mechanisms); used with due care and understanding, game theoretic models of oligopolistic industries are devices of unsurpassed power for identifying the parameters on which strategists must focus, and for further specifying those to which major regime shifts are most sensitive (Ghemawat 1998, Sutton 1998). In turn, the basic property of game theory that permits the rigorous modeling of oligopolistic scenarios is its ability to capture asymmetries among agents. These include asymmetries in utility functions and strategy sets, but, most importantly by far, they include asymmetric information.
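To illustrate why oligopoly is tailor‐made territory for game theory, consider a minimal Cournot duopoly sketch (all parameter values are invented for illustration and are not drawn from the IO literature discussed here): each firm's profit‐maximizing quantity depends on its rival's, and iterating best replies settles at the Nash equilibrium.

```python
# Hypothetical Cournot duopoly: inverse demand P = a - b*(q1 + q2),
# constant marginal costs c1 and c2.
a, b, c1, c2 = 100.0, 1.0, 10.0, 10.0

def best_response(q_other, c):
    # argmax over q of (a - b*(q + q_other) - c) * q  =>  q = (a - c - b*q_other) / (2b)
    return max((a - c - b * q_other) / (2 * b), 0.0)

q1 = q2 = 0.0
for _ in range(100):                      # iterate best replies until they settle
    q1, q2 = best_response(q2, c1), best_response(q1, c2)

print(q1, q2)   # approaches (a - c) / (3b) = 30 each: the Cournot-Nash equilibrium
```

Notice that the qualitative information, such as which parameters the equilibrium output is sensitive to, survives even when the numerical values are not taken seriously; this is the sense of usefulness defended in the text.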

In a quite deep sense, game theory is based on information. The fundamental solution concept of game theory, Nash equilibrium (NE), can be defined in terms of it: a vector of strategies is a NE if (1) each player has information about the strategic choices of the other players, and (2) the strategy of each player is the best for herself, given her information about the strategies of the other players. What is important about information here, in philosophical terms, is that there is no such thing as false information (Dretske 1981); yet NE analysis carries no commitment to players having full information. In NE, each player does the best they can relative to what they're in a position to know. This is distinct from bounded rationality, in which agents fail to compute consequences of what they are in a position to know.
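The definition can be rendered computationally in a few lines. The following sketch (using the textbook Prisoner's Dilemma payoffs, which are our illustrative choice) checks every pure‐strategy profile for mutual best replies:

```python
import numpy as np

# Prisoner's Dilemma payoffs: strategy 0 = cooperate, 1 = defect.
A = np.array([[3, 0],    # row player's payoffs
              [5, 1]])
B = np.array([[3, 5],    # column player's payoffs
              [0, 1]])

def pure_nash(A, B):
    """Return all pure-strategy profiles (i, j) that are mutual best replies."""
    return [(i, j)
            for i in range(A.shape[0])
            for j in range(A.shape[1])
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

print(pure_nash(A, B))   # [(1, 1)]: mutual defection is the unique Nash equilibrium
```

Nothing in the check refers to whether players' information is complete, only to whether each does best given the others' actual strategies, which is the point made in the text about Nash equilibrium and information.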

It is difficult to exaggerate the importance to contemporary economics of asymmetric information models, based on the game‐theoretic analysis of signaling and screening, for which Akerlof, Spence, and Stiglitz shared the Nobel Prize. It is the basis for contemporary understanding of incomplete contracting, which is in turn criterial for the economic understanding of law, regulations, and the importance of institutions. It underscores the many specific, path‐dependent explanations of failures of convergent development in political economy. (See Kincaid's and Fields's chapters in this volume.) It accounts for instabilities in markets even when traders take due care to protect themselves from their own tendencies to asymmetrically evaluate risks of gains and losses or underweight small probabilities. Most importantly, attention to informational asymmetries encourages economists to attend to distinguishing parameters of relatively specific situations instead (p. 15) of turning to the most general model that represents a phenomenon at a high level of abstraction. Again, this doesn't imply a flight from generalization, but rather a means to achievement of generalizations that deliver more fruitful, often qualitative, predictions.

This perspective suggests a better conceptualization of the venerable distinction between ‘institutional’ and ‘neoclassical’ economics than is typical of older treatments in the philosophy of economics and methodology literatures. The overwhelming majority of economic models are intended as accounts of short‐run phenomena. Following Binmore (1998), something relatively precise can be meant by ‘short run’: this denotes a range of time scales in which games are played among agents whose utility functions are stable and whose preferences can thus be treated as exogenous. This is the familiar domain of neoclassical economics. Now suppose we think of institutions as the sources of determination of game structures, including not only strategy sets and distributions or flows of information, but also preferences. Then we can think of the medium run as denoting the range of timescales in which, due to the influence of an institutional framework treated as constant, agents seek new sources of information and their preferences adapt in consequence. Game theory, now applied according to the principles of ‘behavioral game theory’ (Camerer 2003), remains the necessary modeling technology on this timescale. Since behavioral game theory models the influence of institutions (including social norms) on the dynamics of preferences, it implements the research program traditionally associated with institutionalism (as standardly contrasted with the neoclassical research program; see, e.g., Bowles et al. 2005, Chapter 2). Because preferences are allowed to move in the medium run, if we follow tradition by identifying agents with utility functions, then the agents playing medium‐run games cannot be numerically the same agents as those that feature in the short‐run games the medium‐run games determine, even if the same biological entities instantiate the agents at both timescales (Ross 2005, 2006). The idea here is that nothing forces us to identify the agent in some market transaction with the whole career of a biological person; the agent need last only as long as the episode being modeled, especially if ‘agency’ is understood as a modeling device rather than the name of a natural kind. Finally, following principles pioneered by Young (1998) but now in wide use, we can deploy evolutionary game theory (Weibull 1995) to model the long run, in which even the variables used to define institutions are controlled endogenously. At this timescale the players of games are strategies themselves, which transcend the biological embodiments that come and go with each generation.

In this framework the principles of game‐theoretic reasoning and representation provide new methods for modeling economic processes at every level of abstraction and aggregation. Where once economics could model only the input‐output properties of entire markets, treating both production and consumption functions as black boxes, thanks to the flexibility of game theoretic representation we can, at least in principle, model almost any social process in terms of responses (p. 16) to changes in relative scarcities and/or availability of information. The old tension between neoclassical and institutional economics is dissolved in recognition of the fact that modeling is scale‐relative. Without the resources provided by game theory that allow us to implement this insight in real working models, this could at best have been a vague philosophical suggestion.

Game theory provides a powerful new basis for enriching the interactions between economists and workers in other disciplines. To this we now turn.

(3) Interdisciplinarity

Economists have a long history of defending their separateness from other branches of science. This gave rise to a main preoccupation for past philosophy of economics, encapsulated in our citation of Hausman's (1992) The Inexact and Separate Science of Economics as an exemplary consolidation of the previous generation's perspective. The tradition Hausman represents is a venerable one among philosophers of science who engaged with economics; his title alludes to John Stuart Mill.

When Hausman, and also Rosenberg (1992), focus on the separateness of economics, the main neighboring discipline that interests them is psychology. Several narratives of the campaign by economists to distance their activity from psychology have been written, from diverse and often incompatible perspectives. All tend to agree in emphasizing a tradition of “de‐psychologizing” the concept of utility, which began with Pareto (1909 / 1971), was given rhetorical clarity and prominence by Robbins (1935), and was completed by the banishment from economics of mentalistic concepts (belief, desire, agency itself) in Samuelson's (1947) methodology based on revealed preference theory. A currently popular interpretation of this tradition (Bruni & Sugden 2007, Angner and Loewenstein forthcoming) depicts it as basically a long mistake, in which preference for a certain form of theory driven by physics envy (Mirowski 1989) later joined forces with primitive and ill‐digested behaviorism to cause a divorce that did far more damage to economics than to psychology. Fortunately, according to this widespread view, the new behavioral economics (e.g., Camerer & Loewenstein 2004) represents a return to courtship, with remarriage clearly around the corner.

Alternative narratives of the historical relationship between economics and psychology are possible. Ross (2005, forthcoming) argues that economists after the 1930s seemed committed to an account of individual decision making that competes with—and ultimately loses the competition with—accounts emanating from psychology only because of the rhetorical commitment of many famous economists to methodological individualism. However, Ross argues, methodological individualism has done little if any work in driving important economic hypotheses; its motivation has been primarily ideological, and secondarily philosophical, rather than scientific. The strongest apparent counterexample to this suggestion is the microfoundations movement in macroeconomics that arose in the 1970s. However, the representative agent approach to modeling that forms the core of (p. 17) microfoundations makes no pretense to providing a plausible account of actual individuals; it is simply a device for bringing the technical rigor of microeconomics to bear in macroeconomics. If macroeconomics had clearer mathematical foundations than microeconomics, instead of the other way around, economists would doubtless have been busy seeking “macrofoundations,” with just as little philosophical significance. Ross's revisionary account reads the history of economics as the history of models of opportunity‐cost minimization by production‐consumption systems, almost all of which, in the actual applications of economic theory that have been empirically important, have been identified with social aggregates. If economics is the science of opportunity‐cost minimization, then behavioral economics might most accurately be conceived as a branch of psychology, specifically, the psychology of valuation and reward learning.

Neither side in this argument can or should want to deny that the level of interaction between economists and psychologists has been steadily increasing over the past two decades. It is readily accepted in contemporary economic literature, as it once was not, that findings from psychology may motivate the modeling of agents with utility functions that are exotic by comparison with simple maximization of expected financial returns or, alternatively, hedonic utility. Daniel Kahneman was awarded the Nobel Prize in Economics in 2002 for his lifetime of work that used ingenious psychological experiments to establish the theoretical and empirical value of Prospect Theory (which then spawned a family of variants) as an alternative to Expected Utility Theory (EUT) in providing a basis for representing choice and valuation functions. Neither EUT nor any of its rivals has proven generally superior as an empirical account of the human computation of behavioral choice (Camerer & Harless 1994; Hey & Orme 1994; Ross 2005, 174–176; Binmore 2007, 21–22). Rather, psychology and economics share with other sciences, as Cartwright (1989 and elsewhere) has stressed, availability of a handy multiplicity of reduced form modeling equations for opportunistic exploitation of data in a range of recurrently constructed experimental paradigms. Guala (2005) shows how experimenters have used these as tools for gathering a “library of phenomena,” rather than the basis (at least, so far) for an overarching theory of the behavioral response to scarcity and risk. We should expect this library to be of direct relevance to both the psychology of valuation and a more abstract theory of opportunity‐cost minimization (where the latter incorporates constraints on the availability of, and asymmetries in, information).
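To give a sense of what is at stake formally, one widely used parameterization (Tversky and Kahneman's 1992 cumulative version, cited here as our illustration rather than as a model endorsed in this chapter) replaces the expected utility \(\sum_k p_k\,u(x_k)\) of EUT with values computed over gains and losses relative to a reference point, weighted by a nonlinear function of probability:

\[
v(x) \;=\;
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda\,(-x)^{\beta}, & x < 0,
\end{cases}
\qquad
w(p) \;=\; \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}},
\]

with estimated parameters in the neighborhood of \(\alpha \approx \beta \approx 0.88\), \(\lambda \approx 2.25\) (loss aversion), and \(\gamma\) between roughly 0.6 and 0.7 in the original study. The point relevant to the text is that such functional forms belong to the "handy multiplicity" of reduced forms rather than constituting a uniquely privileged theory of choice.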

That one can distinguish economics and psychology by reference to different primary objectives does not entail that, with respect to any given exercise in experimental design, prediction, or explanation, we ought to be able to draw an unambiguous line of disciplinary demarcation. We pointed out earlier that it is often a mistake to identify scientific areas with their abstract theories, and it equally can be a mistake to see differences in abstract theories as defining the different sciences. The boundaries of three disciplines—economics, psychology, and computational neuroscience—blur in the thriving new enterprise of neuroeconomics (Montague (p. 18) & Berns 2002; Glimcher 2003; Montague et al. 2006; Politser 2008). The essential motivation for this mélange is the development of several technologies that permit controlled observation of correlates of live functional processing in the brains of humans (and other animals), including brain areas associated with valuation and reward. Interpretations of the role and place of economics in this enterprise, and of the possible significance of the resulting discoveries to microeconomic theory and methodology, are multifarious and in too early a state of ferment to be profitably grouped into classes in a general review like the present one. For preliminary articulations of perspectives in the gathering debate, see Camerer et al. 2005; Caplin & Dean 2007; Ross et al. 2008, Chapter 8; Gul & Pesendorfer 2008; and Ross's chapter in this volume.

Our earlier emphasis on economics as a science frequently preoccupied with aggregate phenomena should remind us that psychology is only one neighboring science whose borderland with economics has come under extensive review. Indeed, the frontier over which sorties have been fought for longest is that between economics and sociology. In classical political economy the border was scarcely demarcated at all. It later hardened under the idea that the two disciplines rested on rival accounts of fundamental human motivation, with economists emphasizing individualistic avariciousness and sociologists emphasizing drives for social status. The emerging reconciliation between institutional and neoclassical approaches to modeling, as discussed earlier, now allows us to see this implausibly totalistic pair of commitments in terms far more charitable to both sides; we can say that sociologists concentrated mainly on phenomena best captured by medium‐run models, with endogenous institutional and group‐normative variables, whereas economists modeled short‐run phenomena in which preferences remained fixed. This potential compatibility was obscured for decades by the fact that economists embraced formal and quantitative models long before sociologists did, combined with the fact that, before the rise of game theory, economics lacked technology for linking these models to accounts of the medium or long runs. However, we can see in the work of Goffman (1959), and other sociologists influenced by him, anticipation of game‐theoretic insights in everyday descriptive prose. Since then, sociology (at least in North America) has experienced a massive infusion of what economists regard as rigor. On the other side of the fuzzy border, we find theorists of social norms modeling them as objective functions optimized by agents who are generated, at the medium‐run scale, by processes of socialization (Bicchieri et al. 1996; Mantzavinos 2004; Bicchieri 2005; Woodward's chapter in this volume). If it becomes widely recognized in economics, as we think it should, that methodological individualism is an ideological or philosophical prejudice rather than a scientifically justified principle, then an eventual fusion of economics and sociology seems to us more likely than the much‐prophesied merger of economics and psychology.

Game theory plays an equally important, though not exclusive, role in another thriving interdisciplinary bridge into and out of economics, that with (p. 19) biology. The game‐theoretic route was opened by Maynard Smith's (1982) depiction of Darwinian selection as the play of multiple long‐run games among strategies embodied in genetic dispositions of individual organisms. The maximand in these games is fitness rather than utility, with each generation of genetically similar organisms representing a tournament among some of the possible moves. A rich formal literature quickly developed that establishes bidirectional mappings between Maynard Smith's dynamical equilibrium concept, the Evolutionarily Stable Strategy, and Nash equilibrium (Weibull 1995). This work has contributed to economics in two important ways. First, it provides a canonical and rigorous framework for modeling evolutionary competition among firms and other strategies for the organization of production, which had been introduced into economics by Hayek and brought into the mainstream by the pioneering work of Nelson and Winter (1982). This is precisely the technical bridge between medium‐run modeling of institutional change and short‐run modeling of prices and quantities to which we have continually referred in the present section. Second, establishment of formal relationships between dynamic and static solution concepts in game theory has been the basis for a formal theory of the learning of equilibrium play that allows intuitively dubious Nash equilibria to be selected against by forces less arbitrary than philosophically motivated and computationally demanding refinements of rationality (Samuelson 1998).
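The flavor of the formal mappings just mentioned can be conveyed by a minimal replicator‐dynamics simulation of the Hawk‐Dove game (the payoff values, step size, and starting shares are our illustrative choices):

```python
import numpy as np

# Hawk-Dove game: resource value V, cost of escalated fight C (with C > V).
V, C = 2.0, 4.0
payoff = np.array([[(V - C) / 2, V],      # Hawk against (Hawk, Dove)
                   [0.0,         V / 2]]) # Dove against (Hawk, Dove)

x = np.array([0.1, 0.9])                  # population shares of (Hawk, Dove)
dt = 0.01
for _ in range(20000):
    f = payoff @ x                        # expected fitness of each strategy
    x = x + dt * x * (f - x @ f)          # replicator equation: growth ~ excess fitness
    x = x / x.sum()

print(x)   # settles at Hawk share V / C = 0.5, the evolutionarily stable mixture
```

The rest point of this dynamic is also a (mixed) Nash equilibrium of the underlying game, which is the bidirectional relationship that the formal literature makes precise.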

Economic and game theoretic reasoning inform parts of biology more basic than abstract evolutionary modeling. The field of behavioral ecology, which models animals and plants as interactively adapting to (and sometimes partly constructing) their environments so as to maximize expected fitness, is directly informed by game theory (Krebs & Davies 1997; Dugatkin & Reeve 2000). Economists are apt to doubt that this activity has much to do with them. However, an intriguing new literature has begun to emerge that models some behavior of nonhuman animals in terms of participation in competitive markets (Noë et al. 2001). The key to this is partner choice, which many game‐theoretic models in behavioral ecology assume away without biological motivation. For example, small, reef‐dwelling fish, visited by larger fish that come to have parasites eaten out of their mouths, extract payment by devouring some of their client fish's scales. The cleaners have been found to practice systematic, and optimal, price discrimination against clients who have fewer service options because their ecology requires them to stay close to a single reef (Bshary & Noë 2003). These markets appear to stabilize at competitive (Bertrand) equilibrium. Instances of this kind should interest all economists, because they vividly illustrate selection of efficient outcomes under circumstances where there is no temptation to think that the agents consciously deliberate.
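The Bertrand benchmark invoked here can be shown in a stylized way (the prices, costs, and the simple undercutting reply rule are invented for illustration; this is not a model of the cleaner‐fish data): sellers of an identical service undercut one another until price is driven down to marginal cost.

```python
# Stylized Bertrand competition: two sellers with identical marginal cost.
cost, tick, cap = 10, 1, 50
p1, p2 = cap, cap
for _ in range(100):
    p1 = max(min(p2 - tick, cap), cost)   # undercut the rival by one tick, never below cost
    p2 = max(min(p1 - tick, cap), cost)

print(p1, p2)   # both prices settle at marginal cost (10): the competitive Bertrand outcome
```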

This need not be mysterious. One possible mechanism suggested by a great deal of evidence in the animal‐learning literature is that implementation of simple matching algorithms allows animals to maximize reward rates (Gallistel 1990). This will tend to produce behavior corresponding to utility maximization in environments where there is no significant opportunity for investment. Where animals (p. 20) are faced with investment choices, such as in parenting, the theorist may need to consult the long run and search for genetic dispositions learned over the evolutionary lineage of the animal in question. Then people, who regularly encounter ecologically non‐natural investment choices, emerge as the especially interesting case. Perhaps people adapt to environments with non‐natural investment opportunities only by relying on institutions, such as financial markets, that quickly punish them (by raising capital charges) whenever they use biologically natural melioration under circumstances in which utility‐maximization would have delivered higher expected returns.

When their attention is turned in this way to human cultural adaptation, as mediated through institutional change, economists naturally become involved in explanatory projects shared with anthropologists. Recent work that has attracted considerable attention in the literature exploits anthropologists' understanding of the ways in which alternative organizations of production in different societies are stabilized by specific social norms that adapt people's incentives efficiently. The best‐known application of this body of knowledge is Henrich et al.'s (2004) interdisciplinary project to determine how members of different small‐scale societies expect one another to play in ultimatum games. Other notable recent applications of economic logic to long‐run cultural evolution are provided by Seabright (2004) and Clark (2007), who each study, in very different ways, the cultural adaptations that make capitalism possible. In the case of Clark's analysis, this controversially involves an aspect of genetic adaptation, thus simultaneously integrating economic, anthropological, and biological analysis.

We conclude this section by considering a related group of disciplines with which economics has always been in close contact: the broadly philosophical cluster composed of political philosophy, public choice political science, and normative decision theory. What is recently novel in this area is not the existence of interdisciplinary relations per se but their nature.

Formal welfare analysis and its close cousin in philosophy, applied utilitarianism, both petered out in the decades after the war due to doubts about quantitative interpersonal comparisons. While economists spent several years first showing that perfectly competitive markets achieve welfare optima at general equilibrium, then expended further effort discovering that this fact has far less policy significance than imagined (Lipsey & Lancaster 1956), philosophers interested in the normative domain busied themselves with conceptual analysis of moral language. (In our view, the economists' voyage into a cul‐de‐sac was illuminating and important, whereas the philosophers' trip down their blind alley was merely a disciplinary embarrassment.) Substantive normative theory was reborn in philosophy thanks to Rawls (1971). At first, Rawls's approach might have been thought to provide limited scope for interanimation with economics, due to his explicit rejection of utilitarianism in favor of broadly Kantian foundations. (As a caveat, it must be noted that Amartya Sen and a few other economists sympathized with Rawls in this respect, and have been strongly influenced by him.) However, a (p. 21) surprisingly strong bridge to Rawlsian justice has recently been built up by Binmore (1994, 1998, 2005), who in turn relies crucially on Harsanyi's (1955, 1977) resuscitation of formal utilitarianism. The key here, yet again, is game theory. It furnished Harsanyi with the tool for breaking out of the dead end in welfare economics that had been arrived at by the combination of skepticism about interpersonal comparisons, the Lipsey‐Lancaster theorem, and Arrow's impossibility theorem. Binmore then uses Nash bargaining theory to show how people with plausible Darwinian psychologies, which emphatically do not include Rawls's transcendent Kantian moral commitments, can nevertheless arrive at Rawlsian justice rather than the maximization of net collective utility. On Binmore's analysis, norms of justice are institutions in the strict sense discussed earlier, that is, endogenous long‐run variables in evolutionary games that constrain bargaining equilibria in shorter‐run models.

We have not included first‐order discussions of justice and distribution in this Handbook. We could at best have covered these topics inadequately, a poor compromise in light of the many excellent sources to which readers can be referred instead. In any case, these issues do not mainly fall within the purview of the philosophy of science, or, therefore, that of philosophy of economic science. They are topics for political philosophy informed by economic analysis. However, we have included some second‐order work on the nature of the economic analysis that political philosophers receive as consumers. Methodological questions about this activity are questions for economic scientists and philosophers of science.

Just at the point in time when Rawls was reviving substantive political philosophy and Harsanyi was breathing life back into welfare economics, a rude shock was experienced by the main community of applied welfare economists, the development theorists. Their orthodoxy, derived from Solow's (1957) exogenous growth model, led them to expect that the economies of rich and poor countries should converge, because relative capital scarcity in the poorer economies should generate higher average returns and hence faster growth there. However, beginning in the mid-1970s two of the world's three less developed continents, Africa and South America, went into comparative decline. At first, this was partly masked by the co-occurrence of a long recession in the rich world. However, by the end of the 1980s it had become clear that if the Solow model's prediction were correct, it could hold only over a significantly longer run than previously thought. South America returned to decent growth in the 1990s. However, up until the resurgence of commodity prices in 2005, most of sub-Saharan Africa endured three full decades of economic shrinkage. To insist on faith in long-run convergence in that context would simply be a way of seeking to excuse evident irrelevance to real policy choice.
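For orientation, here is a minimal sketch of the convergence logic, using a Cobb-Douglas technology that we choose purely for illustration. With $k$ capital per effective worker, $s$ the saving rate, $n$ population growth, $g$ exogenous technical progress, and $\delta$ depreciation,

\[ \dot{k} = s k^{\alpha} - (n + g + \delta) k, \qquad 0 < \alpha < 1, \]

so the growth rate $\dot{k}/k = s k^{\alpha - 1} - (n + g + \delta)$ is decreasing in $k$. Capital-scarce economies should therefore grow faster, and all economies with the same parameters should approach the same steady state $k^{*} = \big(s / (n + g + \delta)\big)^{1/(1-\alpha)}$. It was this prediction that the experience of Africa and South America called into question.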

Most economists of course did not do this. As reviewed in Kincaid's chapter in this volume, the Solow growth model has been largely supplanted by so-called endogenous growth models, according to which poverty traps are a recurrent possibility due to the mutual dependence of financial, physical, and human capital accumulation. Most theorists believe that Africa has been mired in such a trap.

(p. 22)

This Handbook is of course not the venue for exploration of the contest between exogenous and endogenous growth models. However, the rise of the new growth theory has had important consequences for methodological issues in policy‐led economic science.

Where belief in the Solow model encouraged relative passivity (with some infusion of financial capital from outside poor economies called for if one wanted to speed up growth processes that would occur naturally anyway), the endogenous growth perspective implies a need for activism to pull countries out of poverty traps. However, the search for effective bases for activism has been deeply frustrating. It cannot be said that economists, in their role as positive scientists, have identified any clear recipes for bringing about sustainable upward inflection points in national growth curves of poor countries (Easterly 2001).

It is a standard dogma among philosophers of science, of all theoretical persuasions, that scientists do not discard prevailing models until replacements are available. This is for the pragmatic reason that, in the absence of a model, no one knows what to do that would qualify as knowledge development. We agree that the history of science largely bears out this belief. However, matters are importantly different where policy choices, especially policy choices relevant to the health and even survival of millions of people, are concerned. Persisting with application of a model in which theorists have lost confidence, merely because a consensus alternative is not yet on the scene, is politically and perhaps also morally unacceptable. At the very same time, the perceived need for better policy‐relevant science grows more urgent. Development economics is in a state of high crisis, and, therefore, there is more development economics being conducted than ever before.

It would be a peculiar philosopher of economics who did not find this situation extremely interesting. It is too soon for us, or anyone, to be able to say how the passage through the current crisis will ultimately transform economics. However, we will be so bold as to speculate a bit. We think the urgent quest for effective growth policies will greatly accelerate current trends, repeatedly indicated earlier in this chapter, toward a preference for models of local scope, emphasizing historically and institutionally contingent asymmetries of information, over putatively general, ahistorical, nonsituational models such as the Solow growth model or—to cite another example for the sake of generality—the Heckscher‐Ohlin theory of international trade. Heckscher‐Ohlin models are increasingly giving way to so‐called gravity models in which the underlying theoretical assumption is extremely banal—that contingent historical and geographical relationship variables between countries determine their comparative levels of trade—and all of the economist's acquired skill, knowledge, and ingenuity go into constructing specific models of this relationship that are sufficiently complete, and sufficiently carefully constructed with respect to partitioning of independent variables, to allow for convincing econometric estimation and testing.
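To make the contrast concrete, a typical gravity specification, written here in a generic log-linear form purely for illustration, is

\[ \ln T_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln D_{ij} + \gamma^{\prime} Z_{ij} + \varepsilon_{ij}, \]

where $T_{ij}$ is trade between countries $i$ and $j$, $Y_i$ and $Y_j$ are their national incomes, $D_{ij}$ is the distance between them, and $Z_{ij}$ collects contingent historical and geographical variables such as a shared border, a common language, colonial ties, or membership in a trade agreement. The theory supplies little more than the banal expectations that $\beta_1, \beta_2 > 0$ and $\beta_3 < 0$; the scientific effort goes into specifying $Z_{ij}$ and the error structure carefully enough for convincing estimation.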

Very often, the small-scale models coming to dominate economics are game theoretic. Indeed, game theory may be the key basis for understanding the record (p. 23) of failure in development policy in Africa. Public choice models, the major contributions of political scientists to a particularly close interdisciplinary partnership with economics, often predict rent-seeking activities of individuals and institutions that reflect deep learning and will thus tend to be highly resilient and persistent in the face of attempts at altruistically or disinterestedly motivated reform. Public choice models have come in for a good deal of harsh criticism (see, e.g., Shapiro 2005). This has mainly focused on the features they allegedly share with neoclassical microeconomic models: overgenerality, insensitivity to historical and institutional contingencies, commitment to methodological individualism, and reliance on a version of Homo economicus whose motives are unreasonably consistent and self-centred. In our opinion, the public choice literature has indeed (with important exceptions) been more deserving of these charges than the mainstream tradition in microeconomics. However, it is possible to enrich the first generation of public choice models by drawing on the resources of multiscale game theory as outlined in earlier sections.

A movement in this direction is evident in the development literature (Platteau 2000). We predict that the key scientific contribution to more successful antipoverty policy over the coming years will lie in the area of mechanism design. This requires, first, that one achieve focus on an appropriate scale, at which modeling simplifies phenomena sufficiently for important variables to stand out as salient, but not so drastically that causal pathways connecting control variables—potential policy levers—with dependent variables targeted for reform disappear inside black boxes. Then, one searches for a game‐theoretic model that identifies a set of institutional structures, which some authority could plausibly put in place by manipulation of the levers identified in step one, that would incentivize individuals and groups to act as if they valued the public interest in economic growth rather than to impede it through rent‐seeking.
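Schematically, and purely as our own illustration of the second step rather than a report of any particular model in this literature, the designer looks for an institution under which the desired behavior is incentive-compatible. Let each agent $i$ have a private type $\theta_i$ and choose an action $a_i$; an institution is a pair $(g, t)$, where $g$ maps action profiles to outcomes and $t_i$ is a transfer or sanction for agent $i$. The designer seeks a pair $(g, t)$, implementable with the policy levers identified at the first stage, such that the growth-supporting action profile $a^{*}$ satisfies, for every agent $i$ and every feasible deviation $a_i$,

\[ u_i\big(g(a^{*}), \theta_i\big) + t_i(a^{*}) \;\ge\; u_i\big(g(a_i, a^{*}_{-i}), \theta_i\big) + t_i(a_i, a^{*}_{-i}), \]

so that rent-seeking deviations do not pay.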

Though activity of this kind is engaged in for the sake of policy, and policy is ultimately driven by normative commitments, the commitments in question tend to be relatively banal and noncontroversial by comparison with the epochal “markets or governments?” ideological battles that political philosophers have often identified as undercurrents in economics. (Dasgupta's chapter in this Handbook provides rich and eloquent detail on this theme.) The hard questions in the new policy economics, as it operates closer to the ground and relies on game theory and the econometric applications made possible by vast number crunching resources, are methodological and scientific far more than they are normative. Readers of Fields's chapter will gain a good sense of the extent to which leading methodological debates in economics now tend to take pragmatism for granted, and to focus on how we can derive the maximum quantity of useful information from the lavish streams of data now available to us. It is for this reason that, in a volume that confines its focus to economic science rather than to every kind of debate in which economists engage, we have included chapters on the pragmatic foundations of welfare and development economics, while leaving arguments between Kantian (p. 24) and utilitarian normative‐foundational frameworks for continued treatment in other sources.

Its richer set of tools having opened new veins of activity, economics as a discipline has been growing steadily in confidence. This has made economists impatient with methodological arguments—there are so many more pressing things to get on with. At the same time, confidence encourages them to engage opportunistically with contributions from other disciplines, rather than devote energy to patrolling their own boundaries. We do not think that economics is blending into psychology or any other discipline (though it might swallow sociology). However, it is certainly being importantly changed in its scientific character, and for the better, by interacting with the neighbors.

(4) Experimentation

Some commentators, including some economists, give the impression that controlled experimental testing of economic hypotheses is a recent and novel phenomenon. It is not. As early as 1931, L.L. Thurstone attempted to experimentally derive subjects' utility functions from their choice behavior in his laboratory. The experimental work for which Vernon Smith shared the Nobel Prize with Kahneman goes back to the 1960s. What certainly is true is that economists are doing far more experimental work than at any past time and, more importantly, that this research has recently begun to be taught in leading economics graduate programs. Increasing numbers of graduate students are even encouraged to learn laboratory methods. Experimental economics is shedding its status as a fringe practice.

Because the very essence of an experiment as a contribution to knowledge is replicability, experimentation gains positive value from methodological dogma to a greater extent than other kinds of scientific activity do. Experimental economics has accordingly stabilized around several distinctive norms. Guala (2005, 232–233) identifies four methodological “precepts.” These are rules which, if broken, will preclude publication of an experiment in an economics journal unless the economist in question has a specific, explicit justification that is scientifically grounded and scientifically persuasive. Guala's precepts are:

  1. Nonsatiation: choose a medium of reward such that of two otherwise equivalent alternatives, subjects will always choose the one yielding more of the reward medium.

  2. Saliency: the reward must be increasing in the good and decreasing in the bad outcomes of the experiment.

  3. Dominance: the rewards dominate any subjective costs associated with participation in the experiment.

  4. Privacy: each subject in an experiment receives information only about her own incentives.

(p. 25)

We have some doubts about whether privacy merits being regarded as a genuine precept. Though it is indeed more often observed than not, we suggest that this is mainly because it is often necessary to try to maintain dominance. Dominance is often very difficult to achieve, or to be confident one has achieved, as Ken Binmore in particular has repeatedly pointed out. This is just one aspect of a general, partly philosophical, question to which experimental economists should, in our view, devote substantial attention: What circumstances are necessary and sufficient for belief that an experiment is externally valid, that is, that it successfully models in the laboratory the structural parameters and (parametric and nonparametric) incentives relevant to whatever class of ecological circumstances the experimenter is ultimately interested in predicting and explaining?

As Cartwright (1989) has emphasized, this general issue arises for all laboratory experimentation in science, but different sciences have varying means available for dealing with it. Experimental economists are particularly limited by ethical proscriptions against confronting subjects with coercive incentives or allowing them to exit experiments poorer in monetary wealth than they entered. Guala fails to include one clear precept of experimental economics, namely, nondeception, which further challenges designers' ingenuity in contriving to achieve external (and internal) validity. Economists are conditioned by their training to perceive experimenter credibility with subjects as a commons, that is, a shared resource that individual teams can deplete. Any single team of experimenters could gain a degree of design freedom by being willing to deceive subjects about real payoffs. Deception is not ethically proscribed by institutional review boards and research ethics committees, since psychologists engage in it as a matter of course. However, if many economic experimenters regularly deceived subjects, word might be expected to get around in subject pools that economists' subject briefings cannot be trusted. In an economic experiment, confidence that subjects operate in accordance with the incentives ascribed to them, as reinforced by the experimental design, is crucial to the point of the enterprise. Thus the community of experimental economists faces a many-person Prisoner's Dilemma unless outside authorities act to change the game. The outside authorities in question are journal editors, who coordinate on a rule according to which they will not publish reports of experiments in which subjects are deceived. Economists determined to deceive subjects can seek to publish their results in psychology journals, and then take their chances with their tenure and promotion committees. However, one cannot ask one's graduate students to take chances with the acceptability of their thesis chapters. The nondeception precept has been extended to neuroeconomics, following a 2004 vote by the founding membership of the Society for Neuroeconomics.
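The incentive structure can be illustrated with a two-team reduction of the many-person game, using payoff numbers that are entirely hypothetical and of our own devising:

                              Others refrain    Others deceive
  Refrain from deception          (2, 2)            (0, 3)
  Deceive subjects                (3, 0)            (1, 1)

Whatever the other teams do, deceiving yields the higher payoff for a single team, yet mutual deception, at (1, 1), leaves everyone worse off than mutual restraint at (2, 2). The editors' coordinated refusal to publish deceptive designs alters these payoffs so that restraint becomes the dominant choice.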

To avoid problems of external validity, economists are increasingly conducting field experiments in which subjects' incentives are known to be those operative in the wild because the subjects are not taken out of the wild in the first place. Duflo (2006) provides a review and extended discussion of two representative examples. As she indicates, field experiments have lately become especially emphasized in (p. 26) development economics. This reflects the fact, as discussed in the previous section, that development models are becoming increasingly local and situational. Assumptions about information, costs, and incentives built into such models, not being typically defensible merely by appeal to generic ideas about rationality, require explicit empirical testing. A common design involves finding two similar local populations—for example, two villages—one of which is designated a treatment group and receives a policy initiative intended to change incentives or information, while the other control group does not receive the initiative. Underlying the experiment must be a causal model in which the administered variables appear on the right-hand side of the model equation as independent variables and the policy objective serves as the dependent variable on the left-hand side (e.g., an increase in per capita household consumption expenditure, or a reduction in the proportion of women's days occupied by fetching fuel and water). Following a prestipulated observation period, the model is tested econometrically to determine how well it accounts for variance in outcomes between the villages. We think it is accurate to say that this methodology is quickly coming to be accepted as the gold standard in development economics. Note that the only restrictions on independent variables arise from testing constraints, not from an a priori model of economic agents or of markets.
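As a purely illustrative sketch of the econometric step, here is how the treatment effect might be estimated with simulated data; the variable names and numbers are invented by us and are not drawn from Duflo's examples.

# Illustrative sketch only: simulated household data, hypothetical variable names.
# The policy objective (per capita consumption) is the dependent variable;
# the administered treatment indicator is an independent variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

n = 400                                    # households across the two villages
treated = np.repeat([1, 0], n // 2)        # 1 = treatment village, 0 = control village
baseline = rng.normal(100.0, 15.0, n)      # pre-initiative consumption level
true_effect = 8.0                          # assumed effect of the initiative (simulation only)
outcome = baseline + true_effect * treated + rng.normal(0.0, 10.0, n)

# Regress the outcome on the treatment indicator, controlling for the baseline level.
X = sm.add_constant(np.column_stack([treated, baseline]))
fit = sm.OLS(outcome, X).fit(cov_type="HC1")   # heteroskedasticity-robust standard errors

print(fit.params[1])   # estimated treatment effect; close to 8 in this simulation
print(fit.bse[1])      # its standard error

In practice one would also cluster standard errors at the level of randomization and check that the treatment and control populations were comparable at baseline, but the simple regression above captures the logic the text describes.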

We have placed experimentation at the bottom of our list of main sources of recent change in economics. This is in part because we dispute the stereotype common among philosophers that economists have been, at least until recently, uninterested in testing their models against reality. According to our reading of the history, they were always willing enough, but labored under severe limitations. The most important of these was shortage of computational power needed to test models that were not drastically simplified. The second, interacting directly with the first, was the fact that introducing interactivity into models through use of game theory usually blows up complexity. The best strategy when a scientist cannot rigorously test finely fitted models is to retreat to models of wide scope that rely on maximally general theoretical principles. A justification that economists for years could have given for extreme idealization, regardless of whether they perceived the need for such justification, was that they had no other effective choice in many domains of interest.

Consequently, the mainstream core of economics for several decades looked just like what a science, according to the received view in mid-twentieth-century philosophy of science, was supposed to look like. We believe that the received view is and always was largely mistaken. Philosophers in its grip therefore mistook a virtue economists had made of necessity for a deep methodological commitment resting on philosophical presuppositions. Later, when received opinion in philosophy of science came increasingly into question, economics as philosophers had misunderstood it fell under suspicion as well.

We suggest that a more accurate narrative is as follows. Economists have generally been sceptical about the importance of grand methodological and philosophical principles. The rhetoric of the discipline resonates, throughout its whole (p. 27) history, with an ideal of close empirical testing of hypothesized relationships. This is reflected in, among other things, economists' perennial resistance to the kinds of highly abstract constructs that dominate psychology. (A reader who doubts what we're saying here is advised to consult the first few chapters of Samuelson's Foundations. If any document has a claim to being the high church expression of mainstream postwar philosophy of economics among economists, this is it.) Modern classics in the philosophy of economics, Hausman (1992) and Rosenberg (1992), missed this underlying spirit in the discipline because they were written at the end of a long period of frustration with the available tools. A decade-and-a-half later, an extraordinary new set of instruments delivered from the workshops of computer science and mathematics has profoundly altered capabilities. Sensibilities, which always adjust with a lag, are duly following. It is no surprise, then, that it is time to recast the philosophy of economics.

Organization of the Handbook

A standard but tedious practice in introductions to collections is to briefly describe the contents of each chapter. We suppose this is intended to serve two purposes. First, it might demonstrate that the collection in question has a raison d'être expressed in its ordering and development of topics. Second, it can guide a consumer who is trying to decide which chapters to actually read.

We will eschew this approach. We believe we have now adequately explained why we have edited this new Handbook, and included in it the material that we have. We also hope that readers will be able to gather which chapters interest them by looking at the titles and the names of the authors. The one traditional introductory function we do think still needs to be performed is to briefly state the book's principles of organization. This is for the benefit of the particularly devoted student of the philosophy of economics who plans to read the Handbook right through, and would like a bit of orientation first.

We referred at several points in the Introduction to classic statements of the conception of philosophy of economics that, according to us, are now being displaced, thanks to changes in disciplinary practice. The clearest and most influential expressions of this conception, in our view, are due to Daniel Hausman and Alex Rosenberg. A third equally influential contributor of their generation, at least in Europe, has been Uskali Mäki. We therefore deemed it appropriate to open the Handbook by asking Hausman, Rosenberg, and Mäki to restate their general orientations and motivations. We explicitly requested them to write partly from autobiographical standpoints, since the point of this exercise is increased clarity about recent intellectual history. The first part of the Handbook is composed mainly of these three chapters. It is rounded off and balanced with a chapter from Philip (p. 28) Mirowski, who has been the most provocative, but also widely read and highly influential, critical historian of the grand neoclassical self-conception. Taken together, the chapters of Part I showcase the image of economics against which a majority of philosophers of science have increasingly reacted. They thus describe a platform relative to which the rest of the book's contents amount to a complex response.

Part II surveys the impact on microeconomics of the new technologies, methods and interdisciplinary relationships described in this Introduction. The topics covered are the evolution of the concept of personal utility in game theory, social preferences, the nature of the individual as incorporated in economics, the relationship between behavioral economics and neuroeconomics, issues in the design of economic experiments, methodological questions from mechanism design, and evolutionary economics.

Part III turns to issues in the study of aggregate phenomena. The topics taken up are the impact of cheap computational power on modeling, the current status of microfoundations for macroeconomics, methods for trying to isolate reliable policy levers from macroeconomic data, the turn from general to situational explanations and policy studies of growth, and the similar turn in studies of labor markets.

Part IV concerns the relationship between current controversies in economic science and welfare. Topics considered are the nature and measurability of welfare, the justification and methodology of interpersonal utility comparison, recent proposals to revive broadly Benthamite (that is, psychologistic) conceptions of well-being for use in economics, and the relationship between facts and values in development economics.

It will be obvious from a cursory inspection of these topics that this Handbook covers significantly different ground from previous volumes that aimed at presenting the state of the art in philosophy of economics. Our aim in this Introduction has been to explain why this is so, and to identify underlying general changes in both philosophy of science and economics that explain the more specific aspects of novelty.

We conclude by summarizing the general message. Economists are far less preoccupied with abstract, grand, unified theories than they once were. Philosophers of economics should therefore be less preoccupied with such theories, too. Many philosophers of science, led by Nancy Cartwright and others, had arrived at this conclusion independently. But because they did not quickly pick up on the changes taking place in economics graduate curricula and circulating working papers, many of them continued to criticize economics for a grand-theoretic ambition it was already abandoning. The new, situational, and pragmatic economics beautifully exemplifies the themes that philosophers of science had recently gathered from other disciplines.

However, in saying that economists are less devoted to grand, unified theories, we do not at all deny that there are unifying themes in contemporary economics. We agree with Gintis (2007) that use of game theory, with its associated focus on (p. 29) informational asymmetries, not only underlies what is most distinctively novel across most of economics but also extends the discipline's integration into the larger suite of behavioral sciences. Unlike Gintis, we do not interpret game theory as itself a behavioral theory; it is a part of mathematics, making no empirical claims. To this extent, our stance harkens back to aspects of positivism and even Kantianism, so we are not dismissive of the relevance of philosophy. The key is for philosophers to keep their ears as close as possible to the ground—in this case, the ground being the economics seminar rooms around the world in which the graduate students gather. We hope this Handbook aids that purpose.

References

Addleson, M. (1997). Equilibrium Versus Understanding. London: Routledge.

Angner, E. & Loewenstein, G. (forthcoming). "Behavioral Economics." In U. Mäki, Ed., Handbook of the Philosophy of Science Volume 13: Economics. London: Elsevier.

Binmore, K. (1994). Game Theory and the Social Contract, Volume 1: Playing Fair. Cambridge, MA: MIT Press.

Binmore, K. (1998). Game Theory and the Social Contract, Volume 2: Just Playing. Cambridge, MA: MIT Press.

Binmore, K. (2005). Natural Justice. Oxford: Oxford University Press.

Binmore, K. (2007). Does Game Theory Work? The Bargaining Challenge. Cambridge, MA: MIT Press.

Blaug, M. (1980). The Methodology of Economics. Cambridge, England: Cambridge University Press.

Blaug, M. (2002). "Ugly Currents in Modern Economics." In U. Mäki, Ed., Fact and Fiction in Economics, 35–36. Cambridge, England: Cambridge University Press.

Bicchieri, C., Jeffrey, R. & Skyrms, B., Eds. (1996). The Dynamics of Norms. Cambridge, England: Cambridge University Press.

Bicchieri, C. (2005). The Grammar of Society. Cambridge, England: Cambridge University Press.

Bowles, S., Edwards, R. & Roosevelt, F. (2005). Understanding Capitalism. 3rd ed. Oxford: Oxford University Press.

Bruni, L. & Sugden, R. (2007). "The Road Not Taken: How Psychology Was Removed from Economics and How It Might Be Brought Back." The Economic Journal 117: 146–173.

Bshary, R. & Noë, R. (2003). "Biological Markets: The Ubiquitous Influence of Partner Choice on the Dynamics of Cleaner Fish—Client Reef Fish Interactions." In P. Hammerstein, Ed., Genetic and Cultural Evolution of Cooperation, 167–184. Cambridge, MA: MIT Press.

Caldwell, B. (1982). Beyond Positivism. London: George Allen and Unwin.

Camerer, C. & Harless, D. (1994). "The Predictive Utility of Generalized Expected Utility Theories." Econometrica 62: 1251–1290.

Camerer, C. (2003). Behavioral Game Theory. Princeton: Russell Sage / Princeton University Press.

(p. 30) Camerer, C. & Loewenstein, G. (2004). "Behavioral Economics: Past, Present, and Future." In C. Camerer, G. Loewenstein & M. Rabin, Eds., Advances in Behavioral Economics, 3–51. Princeton: Princeton University Press.

Camerer, C., Loewenstein, G. & Prelec, D. (2005). "Neuroeconomics: How Neuroscience Can Inform Economics." Journal of Economic Literature 43: 9–64.

Caplin, A. & Dean, M. (2007). "The Neuroeconomic Theory of Learning." American Economic Review 97: 148–152.

Cartwright, N. (1980). "The Reality of Causes in a World of Instrumental Laws." In P. Asquith & R. Giere, Eds., PSA 1980, Volume 2, 38–48. East Lansing, MI: Philosophy of Science Association.

Cartwright, N. (1989). Nature's Capacities and their Measurement. Oxford: Oxford University Press.

Cartwright, N. (2007). Hunting Causes and Using Them. Cambridge, England: Cambridge University Press.

Clark, G. (2007). A Farewell to Alms. Princeton: Princeton University Press.

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.

Duflo, E. (2006). "Field Experiments in Development Economics." In R. Blundell, W. Newey & T. Persson, Eds., Advances in Economics and Econometrics Volume II, 322–348. Cambridge, England: Cambridge University Press.

Dugatkin, L. & Reeve, H. (2000). Game Theory and Animal Behavior. Oxford: Oxford University Press.

Easterly, W. (2001). The Elusive Quest for Growth. Cambridge, MA: MIT Press.

Gallistel, C.R. (1990). The Organization of Learning. Cambridge, MA: MIT Press.

Ghemawat, P. (1998). Games Businesses Play. Cambridge, MA: MIT Press.

Gintis, H. (2007). "A Framework for the Unification of the Behavioral Sciences." Behavioral and Brain Sciences 30: 1–16.

Glimcher, P. (2003). Decisions, Uncertainty and the Brain. Cambridge, MA: MIT Press.

Glymour, C. (1980). Theory and Evidence. Princeton: Princeton University Press.

Goffman, E. (1959). The Presentation of Self in Everyday Life. New York: Anchor.

Guala, F. (2005). The Methodology of Experimental Economics. Cambridge: Cambridge University Press.

Gul, F. & Pesendorfer, W. (2008). "The Case for Mindless Economics." In A. Caplin & A. Schotter, Eds., The Handbook of Economic Methodologies, Volume 1. Oxford: Oxford University Press.

Hands, W. (1993). Testing, Rationality, and Progress. Lanham, MD: Rowman and Littlefield.

Harford, T. (2005). The Undercover Economist. Oxford: Oxford University Press.

Harsanyi, J. (1955). "Cardinal Welfare, Individualistic Ethics, and the Interpersonal Comparison of Utility." Journal of Political Economy 63: 309–321.

Harsanyi, J. (1977). Rational Behavior and Bargaining Equilibrium in Games and Social Situations. Cambridge: Cambridge University Press.

Hausman, D. (1981). Capital, Profits, and Prices. New York: Columbia University Press.

Hausman, D. (1992). The Inexact and Separate Science of Economics. Cambridge, England: Cambridge University Press.

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E. & Gintis, H. (2004). Foundations of Human Sociality. Oxford: Oxford University Press.

Hey, J. & Orme, C. (1994). "Investigating Generalizations of Expected Utility Theory Using Experimental Data." Econometrica 62: 1291–1326.

(p. 31) Humphreys, P. (2004). Extending Ourselves. Oxford: Oxford University Press.

Kay, J. (2005). Culture and Prosperity. London: Collins.

Kitcher, P. (2003). The Advancement of Science. Oxford: Oxford University Press.

Keynes, J.M. (1930 [1963]). "Economic Possibilities for Our Grandchildren." Reprinted in Keynes, J.M., Essays in Persuasion, 358–373. New York: Norton.

Krebs, J. & Davies, N. (1997). Behavioral Ecology: An Evolutionary Approach. 2nd ed. London: Wiley-Blackwell.

Kreps, D. (1990). Game Theory and Economic Modelling. Oxford: Oxford University Press.

Ladyman, J. & Ross, D. (2007). Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.

Lipsey, R. & Lancaster, K. (1956). "The General Theory of Second Best." Review of Economic Studies 24: 11–32.

Mantzavinos, C. (2004). Individuals, Institutions and Markets. Cambridge: Cambridge University Press.

Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge: Cambridge University Press.

McCloskey, D. (1985). The Rhetoric of Economics. Madison: University of Wisconsin Press.

Mirowski, P. (1989). More Heat Than Light. New York: Cambridge University Press.

Montague, P.R. & Berns, G. (2002). "Neural Economics and the Biological Substrates of Valuation." Neuron 36: 265–284.

Montague, P.R., King-Cassas, B. & Cohen, J. (2006). "Imaging Valuation Models in Human Choice." Annual Review of Neuroscience 29: 417–448.

Moss, L. (2004). What Genes Can't Do. Cambridge, MA: MIT Press.

Nelson, R. & Winter, S. (1982). An Evolutionary Theory of Economic Change. Cambridge, MA: Harvard University Press.

Noë, R., van Hoof, J. & Hammerstein, P., Eds. (2001). Economics in Nature. Cambridge, England: Cambridge University Press.

Pareto, V. (1909 / 1971). Manual of Political Economy. New York: Augustus Kelley.

Platteau, J. (2000). Institutions, Social Norms and Economic Development. London: Routledge.

Politser, P. (2008). Neuroeconomics: A Guide to the New Science of Making Choices. Oxford: Oxford University Press.

Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.

Robbins, L. (1935). An Essay on the Nature and Significance of Economic Science. 2nd ed. London: Macmillan.

Rosenberg, A. (1976). Microeconomic Laws. Pittsburgh: University of Pittsburgh Press.

Rosenberg, A. (1992). Economics: Mathematical Politics or Science of Diminishing Returns? Chicago: University of Chicago Press.

Ross, D. (2005). Economic Theory and Cognitive Science: Microexplanation. Cambridge, MA: MIT Press.

Ross, D. (2006). "The Economic and Evolutionary Basis of Selves." Journal of Cognitive Systems Research 7: 246–258.

Ross, D., Sharp, C., Vuchinich, R. & Spurrett, D. Midbrain Mutiny: The Picoeconomics and Neuroeconomics of Disordered Gambling. Cambridge, MA: MIT Press.

Ross, D. (forthcoming). "The Economic Agent: Not Human, but Important." In U. Mäki, Ed., Handbook of the Philosophy of Science Volume 13: Economics. London: Elsevier.

Samuelson, P. (1947). Foundations of Economic Analysis. Cambridge, MA: Harvard University Press.

(p. 32) Samuelson, L. (1998). Evolutionary Games and Equilibrium Selection. Cambridge, MA: MIT Press.

Seabright, P. (2004). The Company of Strangers. Princeton: Princeton University Press.

Shapiro, I. (2005). The Flight from Reality in the Human Sciences. Princeton: Princeton University Press.

Solow, R. (1957). "Technical Change and the Aggregate Production Function." The Review of Economics and Statistics 39: 312–320.

Sutton, J. (1998). Technology and Market Structure. Cambridge, MA: MIT Press.

Thurstone, L. (1931). "The Indifference Function." Journal of Social Psychology 2: 139–167.

van Fraassen, B. (1981). The Scientific Image. Oxford: Oxford University Press.

Weibull, J. (1995). Evolutionary Game Theory. Cambridge, MA: MIT Press.

Young, H.P. (1998). Individual Strategy and Social Structure. Princeton: Princeton University Press.