James M. Joyce
This article is concerned with Bayesian epistemology. Bayesianism claims to provide a unified theory of epistemic and practical rationality based on the principle of mathematical expectation. In its epistemic guise, it requires believers to obey the laws of probability. In its practical guise, it asks agents to maximize their subjective expected utility. This article explains the five pillars of Bayesian epistemology and evaluates some of the justifications that have been offered for each of them. It also addresses some common objections to Bayesianism, in particular the “problem of old evidence” and the complaint that the view degenerates into an untenable subjectivism. It closes by painting a picture of Bayesianism as an “internalist” theory of reasons for action and belief that can be fruitfully augmented with “externalist” principles of practical and epistemic rationality.
Computational economics is a relatively new research technique in economics, but it is inexorably taking its place alongside the more traditional methods of general theory, abstract modeling, data analysis, and the more recent experimental economics. Perhaps because of its relative newness, the term computational economics currently has no determinate meaning. In contemporary use, it refers to a heterogeneous cluster of techniques implemented on concrete digital computers, ranging from the numerical solution of the Black-Scholes partial differential equation for pricing options through automated trading strategies to agent-based computer simulations of the evolution of cooperation. Because of this heterogeneity, it is not possible to provide comprehensive coverage of the topic in this article, and the scope is restricted accordingly. A further reason for this restricted scope is that many of the methods used in computational economics have considerable technical interest but no particular philosophical relevance.
L. A. Paul
Counterfactual analyses have received a good deal of attention in recent years, resulting in a host of counterexamples and objections to the simple analysis and its descendants. The counterexamples are often complex and can seem baroque to the outsider (indeed, even to the insider), and it may be tempting to dismiss them as irrelevant or uninteresting. But while we may be able to ignore some counterexamples because the intuitions they evoke are unclear or misguided, the importance of investigating the causal relation via investigating counterexamples should not be underestimated.
This article finds it characteristic of orthodox Bayesians to hold that for each person and each hypothesis that person comprehends, there is a precise degree of confidence the person has in the truth of that hypothesis, and that no person can be counted as rational unless the assignment of degrees of confidence he or she thus harbors satisfies the axioms of the probability calculus. In focusing exclusively on degrees of confidence, the Bayesian approach says nothing about the epistemic status of the doxastic states epistemologists have traditionally been concerned with—categorical beliefs. The purpose of this article is twofold. First, it aims to show that, as powerful as many criticisms of orthodox Bayesianism are, there is a credible kind of Bayesianism. Second, it aims to show how this Bayesianism finds a foundation in considerations concerning rational preference.
To an unappreciated degree, the history of Western philosophy is the history of attempts to understand why mathematics is applicable to Nature, despite apparently good reasons to believe that it should not be. A cursory look at the great books of philosophy bears this out. Plato's Republic invokes the theory of “participation” to explain why, for instance, geometry is applicable to ballistics and the practice of war, despite the Theory of Forms, which places mathematical entities in a different (higher) realm of being than that of empirical Nature. This argument is part of Plato's general claim that theoretical learning, in the end, is more useful than “practical” pursuits. John Stuart Mill's account of the applicability of mathematics to nature is unique: it is the only one of the major Western philosophies which denies the major premise upon which all other accounts are based. Mill simply asserts that mathematics itself is empirical, so there is no problem to begin with.
This article focuses on naturalism. It makes one terminological distinction: between methodological naturalism and ontological naturalism. The methodological naturalist assumes there is a fairly definite set of rules, maxims, or prescriptions at work in the “natural” sciences, such as physics, chemistry, and molecular biology, this set constituting “scientific method.” There is no algorithm which tells one in all cases how to apply this method; nonetheless, there is a body of workers—the scientific community—who generally agree on whether the method has been applied correctly. Whatever the method is, exactly—such virtues as simplicity, elegance, familiarity, scope, and fecundity appear in many accounts—it centrally involves an appeal to observation and experiment. Correct applications of the method have enormously increased our knowledge, understanding, and control of the world around us, to an extent which would scarcely be imaginable to generations living prior to the age of modern science.
This article examines the controversy between Isaac Newton and Gottfried Wilhelm Leibniz concerning the priority in the invention of the calculus. The dispute began in 1708, when John Keill accused Leibniz of having plagiarized Newton’s method of fluxions. It will be shown that the mathematicians participating in the controversy in the period between 1708 and 1730—most notably Newton, Leibniz, Keill, and Johann Bernoulli—held different conceptions of mathematical method. The dispute began in a political climate agitated by the Hanoverian succession and was intertwined with tensions dividing the Royal Court. It developed into a discussion of technical issues concerning the relation between mathematics and natural philosophy and the methods of the integral calculus.
Jean Paul Van Bendegem
This chapter examines the possibility of discrete time. At first sight, the answer seems trivial, but actually it raises a number of interesting questions, both philosophical and scientific. The chapter first explains which interpretations of discrete time are not under consideration. It then addresses two key philosophical problems: if there are such things as chronons, smallest “bits” of time, do they have extension, and can a distance function, that is, a duration, be defined on them? Second, the chapter discusses the relation between discrete time and discrete space, showing that the former implies the latter; thus, with applications in mind, both time and space are to be seen as discrete. This leads, third, to the hardest problem of all: whether discrete time is applicable in physical theories.
This article presents the intellectual context for computational modeling, namely the manner in which it fits into the collective enterprise of advancing modern social science theory, and also assesses the claims made by critics and proponents of computational modeling in the social sciences, with a special focus on complexity models. For the most influential mathematical models in the social sciences, prediction takes a back seat to explanation, and computational models, too, can be tools for explanation. Inductive-statistical (IS), deductive-nomological (DN), causal-mechanical (CM), and causally relevant (CR) accounts all treat both insight and prediction as elements of scientific explanation, though they vary in their emphases. The article then turns to computational models, and specifically to models in the complexity tradition. There is little doubt that computational models permit, better than other modeling methods (especially game-theoretic models), the analysis of how the behaviors of diverse, adaptive agents aggregate when those agents may alter their decision rules in response to aggregate patterns.
Michael D. Resnik
This article focuses on Quine's positive views and their bearing on the philosophy of mathematics. It begins with his views concerning the relationship between scientific theories and experiential evidence (his holism), and relates these to his views on the evidence for the existence of objects (his criterion of ontological commitment, his naturalism, and his indispensability arguments). This sets the stage for discussing his theories concerning the genesis of our beliefs about objects (his postulationalism) and the nature of reference to objects (his ontological relativity). Quine's writings usually concerned theories and their objects generally, but they contain a powerful and systematic philosophy of mathematics, and the article aims to bring this into focus.
This article on rationality and game theory deals with the modeling of interaction between decision makers. Game theory aims to understand situations in which decision makers interact. Chess is an example of such interaction, as are firms competing for business, politicians competing for votes, jury members deciding on a verdict, animals fighting over prey, bidders competing in auctions, threats and punishments in long-term relationships, and so on. What all these situations have in common is that the outcome of the interaction depends on what the parties jointly do. Rationality assumptions and equilibrium play are the basic ingredients of game theory. The main focus of this article is on the relationship between rationality assumptions and equilibrium play.
This article sketches the outlines of the Quinean point of departure, then describes how Burgess and the author of this article differ from it, and from each other, especially on logic and mathematics. Though this discussion touches on the work of only these three among the many recent “naturalists,” the moral of the story must be that “naturalism,” even restricted to its Quinean and post-Quinean incarnations, is a more complex position, with more subtle variants, than is sometimes supposed.
John W. Lango
Born in England, and having held positions in mathematics at Cambridge University and the University of London, Alfred North Whitehead (1861–1947) became an American philosopher at the age of sixty-three, by crossing the Atlantic to hold a position in philosophy at Harvard. A main theme of this article on the American Whitehead is that his philosophical writings at Harvard are not merely of antiquarian interest but instead have considerable relevance for a variety of current philosophical topics. The focus is on the metaphysical system in his magnum opus, Process and Reality: An Essay in Cosmology (1929).