In Physics, Aristotle starts his positive account of the infinite by raising a problem: “[I]f one supposes it not to exist, many impossible things result, and equally if one supposes it to exist.” His views on time, extended magnitudes, and number imply that there must be some sense in which the infinite exists, for he holds that time has no beginning or end, magnitudes are infinitely divisible, and there is no highest number. In Aristotle's view, a plurality cannot escape having bounds if all of its members exist at once. Two interesting, and contrasting, interpretations of Aristotle's account can be found in the work of Jaakko Hintikka and of Jonathan Lear. Hintikka tries to explain the sense in which the infinite is actually, and the sense in which its being is like the being of a day or a contest. Lear focuses on the sense in which the infinite is only potential, and emphasizes that an infinite, unlike a day or a contest, is always incomplete.
Aristotle created logic and developed it to a level of great sophistication. There was nothing there before; and it took more than two millennia for something better to come around. The astonishment experienced by readers of the Prior Analytics, the most important of Aristotle's works that present the discipline, is comparable to that of an explorer discovering a cathedral in a desert. This article explains and evaluates some of Aristotle's views about propositions and syllogisms. The most important omission is the difficult subject of syllogisms involving modalities. Aristotle distinguishes two relations of opposition that can obtain between propositions with the same subject- and predicate-expressions: contrariety and contradiction. In every canonical syllogism, one term is common to the two premises: it is called “middle term,” or simply “middle.” The remaining two terms of the premises are the only ones occurring in the conclusion: they are called “extreme terms,” or simply “extremes.”
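The structural vocabulary described above can be illustrated with a minimal sketch (not Aristotle's own formalism; the function name and term labels are invented for illustration): in a canonical syllogism the middle term is the one common to both premises, and the two extremes are the terms that occur in the conclusion.

```python
# A toy check of canonical syllogistic structure: the middle term appears
# in both premises; the remaining two terms (the extremes) make up the
# conclusion. Terms are plain strings; each proposition is a
# (subject, predicate) pair.

def middle_and_extremes(premise1, premise2, conclusion):
    """Return (middle, extremes) if the figure is well-formed, else None."""
    p1, p2, c = set(premise1), set(premise2), set(conclusion)
    middle = p1 & p2          # the term common to the two premises
    extremes = p1 ^ p2        # the terms occurring in only one premise
    if len(middle) == 1 and extremes == c:
        return middle.pop(), extremes
    return None

# "Barbara": all M are P; all S are M; therefore all S are P.
result = middle_and_extremes(("M", "P"), ("S", "M"), ("S", "P"))
```

Here `result` is `("M", {"S", "P"})`: "M" is the middle, and the extremes "S" and "P" are exactly the terms of the conclusion.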
Much of Aristotle's thought developed in reaction to Plato's views, and this is certainly true of his philosophy of mathematics. To judge from his dialogue, the Meno, the first thing that struck Plato as an interesting and important feature of mathematics was its epistemology: in this subject we can apparently just “draw knowledge out of ourselves.” Aristotle certainly thinks that Plato was wrong to “separate” the objects of mathematics from the familiar objects that we experience in this world. His main arguments on this point are in Chapter 2 of Book XIII of the Metaphysics. There are three distinct lines of argument: the first concerns the objects of geometry (that is, points, lines, planes, and solids); the second deals with the Platonist principles which are applied to arithmetic and geometry; the third is about substances as living things, especially animals, and perhaps man in particular. In addition to the above, this article also examines Aristotle's treatment of infinity.
James M. Joyce
This article is concerned with Bayesian epistemology. Bayesianism claims to provide a unified theory of epistemic and practical rationality based on the principle of mathematical expectation. In its epistemic guise, it requires believers to obey the laws of probability. In its practical guise, it asks agents to maximize their subjective expected utility. This article explains the five pillars of Bayesian epistemology and evaluates some of the justifications that have been offered for each of them. It also addresses some common objections to Bayesianism, in particular the “problem of old evidence” and the complaint that the view degenerates into an untenable subjectivism. It closes by painting a picture of Bayesianism as an “internalist” theory of reasons for action and belief that can be fruitfully augmented with “externalist” principles of practical and epistemic rationality.
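The practical norm mentioned above, maximizing subjective expected utility, can be sketched in a few lines. The states, credences, and payoffs below are invented for illustration and are not drawn from the article; the only constraint the Bayesian imposes is that the credences obey the probability axioms.

```python
# Subjective expected utility: weight each outcome's utility by the
# agent's credence in the state that produces it, then pick the act
# with the highest weighted sum.

def expected_utility(credences, utilities):
    """credences: state -> probability; utilities: state -> payoff."""
    return sum(credences[s] * utilities[s] for s in credences)

credences = {"rain": 0.3, "shine": 0.7}   # must sum to 1 (probability axioms)
acts = {
    "take umbrella":  {"rain": 5, "shine": 3},
    "leave umbrella": {"rain": 0, "shine": 4},
}
best = max(acts, key=lambda a: expected_utility(credences, acts[a]))
# EU(take) = 0.3*5 + 0.7*3 = 3.6; EU(leave) = 0.3*0 + 0.7*4 = 2.8
```

With these numbers the norm recommends taking the umbrella, since 3.6 > 2.8.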
Matthew L. Jones
This chapter sketches the challenges Leibniz faced in building a calculating machine for arithmetic, especially his struggle to coordinate with skilled artisans. The chapter surveys his philosophical remarks about such machines and the practical knowledge needed to make them and recounts the eighteenth-century legacy of his failure to produce an adequately functional machine. In appreciating artisanal skill, Leibniz praised the power and necessity of skilled modes of perceiving and acting, even as he underscored their limits and called for their replacement through techniques meant to perfect human reasoning. He cast his machine as a palpable intervention in the major early-modern debate about what sorts of causes were philosophically licit in explaining the creation and emergence of the ordinary phenomena of nature. Leibniz used his calculating machine as evidence that he could be a new kind of state counselor, one capable of seeing the openings necessary to improve technology and polity at once.
This chapter discusses the history of Leibniz's work on infinitesimal calculus, of which a considerable part is still unknown. His new method, emerging from studies in the summing of infinite number series and the quadrature of curves, combines two procedures with opposite orientation: differentiation and integration. These two procedures are united in a common formalism introducing, in 1675, the symbols d and ∫ for differentiation and integration. Subsequently, Leibniz and his followers developed new rules and solution methods and applied the calculus to physics. During Leibniz's lifetime, the public success of his calculus was overshadowed by discussions of the foundations of his methods and the priority dispute. While infinitesimals were eliminated from the calculus during the nineteenth century, nonstandard analysis reinstated them. The status of infinitesimals in Leibniz's own philosophy of mathematics is still disputed.
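The inverse relation between the two procedures the calculus unites can be shown with a modern numerical sketch (not Leibniz's own notation or method; the function names and the test function x² are chosen for illustration): differentiating the running integral of a function recovers the function itself.

```python
# Numerical stand-ins for d and ∫: a left Riemann sum for integration
# and a central difference for differentiation. Differentiating the
# running integral F of f recovers f, mirroring d∫f = f.

def integral(f, a, b, n=100000):
    """Left Riemann sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

def derivative(g, x, h=1e-5):
    """Central-difference approximation of dg/dx at x."""
    return (g(x + h) - g(x - h)) / (2 * h)

f = lambda x: x ** 2
F = lambda x: integral(f, 0.0, x)   # running integral of f
approx = derivative(F, 1.0)         # numerically close to f(1.0) = 1.0
```

The approximation error here comes only from the finite step sizes; shrinking them drives `approx` toward f(1.0) = 1.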
Computational economics is a relatively new research technique in economics, but it is inexorably taking its place alongside the more traditional methods of general theory, abstract modeling, data analysis, and the more recent experimental economics. Perhaps because of its relative newness, the term computational economics currently has no determinate meaning. In contemporary use, it refers to a heterogeneous cluster of techniques implemented on concrete digital computers ranging from the numerical solution of the Black-Scholes partial differential equation for pricing options through automated trading strategies to agent-based computer simulations of the evolution of cooperation. Because of this heterogeneity, it is not possible to provide comprehensive coverage of the topic in this article. Another reason for this restricted scope is that many of the methods used in computational economics have considerable technical interest but no particular philosophical relevance.
This article discusses the introduction of the concept of computation into cognitive science. Computationalism is usually introduced as an empirical hypothesis that can be disconfirmed. Processing information is surely an important aspect of cognition, so if computation is information processing, then cognition involves computation. Computationalism becomes more significant when it has explanatory power. The most relevant and explanatory notion of computation is the one associated with digital computers. Turing analyzed computation in terms of what are now called Turing machines: simple processors operating on an unbounded tape. Turing stated that any function that can be computed by an algorithm can be computed by a Turing machine. McCulloch and Pitts's account of cognition contains three important aspects: an analogy between neural processes and digital computations, the use of mathematically defined neural networks as models, and an appeal to neurophysiological evidence to support those neural network models. Computationalism involves three main accounts of computation: causal, semantic, and mechanistic. Under the causal account, there are mappings between any physical system and at least some computational descriptions. The semantic account may be formulated as a restricted causal account.
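The kind of device the article describes, a simple processor operating on an unbounded tape, can be sketched as a toy interpreter (the state names, rule format, and example machine are invented for illustration, not drawn from Turing's paper). This toy machine computes the successor function on unary numerals: it scans right past the 1s and appends one more.

```python
# A minimal Turing machine interpreter. The tape is a dict from position
# to symbol (unbounded in both directions); rules map (state, symbol) to
# (new_state, symbol_to_write, head_move). Runs until the halt state.

def run_turing_machine(tape, rules, state="scan", blank="_"):
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, blank)          # unwritten cells are blank
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

rules = {
    ("scan", "1"): ("scan", "1", "R"),   # skip over the existing 1s
    ("scan", "_"): ("halt", "1", "N"),   # write one more 1, then halt
}
result = run_turing_machine({i: "1" for i in range(3)}, rules)  # "111" -> "1111"
```

Each step reads one cell, writes one cell, and moves the head at most one square, which is all the machinery Turing's analysis of computation requires.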
L. A. Paul
Counterfactual analyses have received a good deal of attention in recent years, resulting in a host of counterexamples and objections to the simple analysis and its descendants. The counterexamples are often complex and can seem baroque to the outsider (indeed, even to the insider), and it may be tempting to dismiss them as irrelevant or uninteresting. But while we may be able to ignore some counterexamples because the intuitions they evoke are unclear or misguided, the importance of investigating the causal relation via investigating counterexamples should not be underestimated.
This article finds it characteristic of orthodox Bayesians to hold that, for each person and each proposition that person comprehends, there is a precise degree of confidence the person has in the truth of that proposition, and that no person can be counted as rational unless the degree-of-confidence assignment he or she thus harbors satisfies the axioms of the probability calculus. In focusing exclusively on degrees of confidence, the Bayesian approach says nothing about the epistemic status of the doxastic states epistemologists have traditionally been concerned with: categorical beliefs. The purpose of this article is twofold. First, it aims to show that, powerful as many such criticisms of orthodox Bayesianism are, there is a credible kind of Bayesianism. Second, it aims to show how this Bayesianism finds a foundation in considerations concerning rational preference.