In Physics, Aristotle starts his positive account of the infinite by raising a problem: “[I]f one supposes it not to exist, many impossible things result, and equally if one supposes it to exist.” His views on time, extended magnitudes, and number imply that there must be some sense in which the infinite exists, for he holds that time has no beginning or end, magnitudes are infinitely divisible, and there is no highest number. Yet in Aristotle's view, a plurality cannot escape having bounds if all of its members exist at once. Two interesting, and contrasting, interpretations of Aristotle's account can be found in the work of Jaakko Hintikka and of Jonathan Lear. Hintikka tries to explain the sense in which the infinite is actual, and the sense in which its being is like the being of a day or a contest. Lear focuses on the sense in which the infinite is only potential, and emphasizes that an infinite, unlike a day or a contest, is always incomplete.
Aristotle created logic and developed it to a level of great sophistication. There was nothing there before; and it took more than two millennia for something better to come around. The astonishment experienced by readers of the Prior Analytics, the most important of Aristotle's works that present the discipline, is comparable to that of an explorer discovering a cathedral in a desert. This article explains and evaluates some of Aristotle's views about propositions and syllogisms. The most important omission is the difficult subject of syllogisms involving modalities. Aristotle distinguishes two relations of opposition that can obtain between propositions with the same subject- and predicate-expressions: contrariety and contradiction. In every canonical syllogism, one term is common to the two premises: it is called “middle term,” or simply “middle.” The remaining two terms of the premises are the only ones occurring in the conclusion: they are called “extreme terms,” or simply “extremes.”
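As an illustration (a stock example, not quoted from the article): in the first-figure syllogism traditionally called Barbara, M is the middle and S and P are the extremes:

\[ \frac{\text{All } M \text{ are } P \qquad \text{All } S \text{ are } M}{\text{All } S \text{ are } P} \]

The middle term M occurs in both premises and drops out of the conclusion, exactly as the definitions above require.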
Much of Aristotle's thought developed in reaction to Plato's views, and this is certainly true of his philosophy of mathematics. To judge from his dialogue, the Meno, the first thing that struck Plato as an interesting and important feature of mathematics was its epistemology: in this subject we can apparently just “draw knowledge out of ourselves.” Aristotle certainly thinks that Plato was wrong to “separate” the objects of mathematics from the familiar objects that we experience in this world. His main arguments on this point are in Chapter 2 of Book XIII of the Metaphysics. There are three distinct lines of argument: the first concerns the objects of geometry (that is, points, lines, planes, and solids); the second deals with the Platonist principles which are applied to arithmetic and geometry; the third concerns substances such as living things, especially animals, and perhaps man in particular. In addition to the above, this article also examines Aristotle's treatment of infinity.
Kentaro Fujimoto and Volker Halbach
This chapter sketches the motivations for treating truth as a primitive notion and developing axiomatic theories of truth. Then the main axiomatic systems of typed and type-free truth are surveyed.
James M. Joyce
This article is concerned with Bayesian epistemology. Bayesianism claims to provide a unified theory of epistemic and practical rationality based on the principle of mathematical expectation. In its epistemic guise, it requires believers to obey the laws of probability. In its practical guise, it asks agents to maximize their subjective expected utility. This article explains the five pillars of Bayesian epistemology and evaluates some of the justifications that have been offered for each of them. It also addresses some common objections to Bayesianism, in particular the “problem of old evidence” and the complaint that the view degenerates into an untenable subjectivism. It closes by painting a picture of Bayesianism as an “internalist” theory of reasons for action and belief that can be fruitfully augmented with “externalist” principles of practical and epistemic rationality.
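In the standard formulation (a sketch in modern notation, not quoted from the article), the practical requirement is to choose an act a that maximizes subjective expected utility, and the epistemic requirements include updating by conditionalization:

\[ EU(a) = \sum_{s} P(s)\,U(a, s), \qquad P_{\text{new}}(H) = P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \]

where U(a, s) is the utility of performing a in state s and E is the newly learned evidence.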
The principle that every statement is bivalent (i.e. either true or false) has been a bone of philosophical contention for centuries, for an apparently powerful argument for it (due to Aristotle) sits alongside apparently convincing counterexamples to it. This chapter analyzes Aristotle’s argument, then, in the light of this analysis, examines three sorts of problem case for bivalence. Future contingents, it is contended, are bivalent. Certain statements of higher set theory, by contrast, are not. Pace the intuitionists, though, this is not because excluded middle does not apply to such statements, but because they are not determinate. Vague statements too are not bivalent, in this case because the law of proof by cases does not apply. The chapter goes on to show how this opens the way to a solution to the ancient paradox of the heap (or Sorites) that draws on quantum logic.
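The distinction driving the chapter can be put schematically (in standard notation, not the chapter's own): bivalence says of each statement φ that

\[ T(\varphi) \lor F(\varphi), \]

while excluded middle is the logical law φ ∨ ¬φ. On the view sketched above, certain set-theoretic statements obey the logical law yet fail bivalence, because they are not determinate.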
Matthew L. Jones
This chapter sketches the challenges Leibniz faced in building a calculating machine for arithmetic, especially his struggle to coordinate with skilled artisans, surveys his philosophical remarks about such machines and the practical knowledge needed to make them, and recounts the eighteenth-century legacy of his failure to produce a machine understood to be adequately functional.
This chapter discusses the history of Leibniz's work on the infinitesimal calculus, a considerable part of which is still unknown. His new method, which emerged from studies in the summing of infinite number series and the quadrature of curves, combines two mutually inverse procedures, differentiation and integration. These two procedures are united in a common formalism, for which Leibniz introduced the symbols d and ∫ in 1675. Subsequently, Leibniz and his followers developed new rules and solution methods and applied the calculus to physics. During Leibniz's lifetime, the public success of his calculus was overshadowed by disputes over the foundations of his methods and by the priority dispute with Newton. While infinitesimals were eliminated from the calculus during the 19th century, non-standard analysis has since reinstated them. The status of infinitesimals in Leibniz's own philosophy of mathematics is still disputed.
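For illustration, the basic rules of the new formalism can be stated in Leibniz's d-notation (a standard summary, not a quotation from the chapter):

\[ d(x + y) = dx + dy, \qquad d(xy) = x\,dy + y\,dx, \qquad \int dy = y, \]

the last equation expressing the inverse relation between differentiation and integration.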
The coherence theory holds that truth consists in coherence amongst our beliefs. It can thus rule out radical scepticism and avoid the problems of the correspondence theory. Considerations about meaning and verification have also pointed philosophers in the same direction. But if it holds all truth to consist in coherence it is untenable: there must be some truths that do not consist in coherence, namely truths about what people believe. This causes problems for traditional coherence theories, and also for verificationists and anti-realists. The admission of a grounding class of truths that do not consist in coherence also raises the question of why there should be such systematic agreement among these grounding truths. This cannot properly be explained by anything said within the theory whose truth is constituted by coherence with the grounding class. Kant saw this problem, and postulated “things as they are in themselves.” Others dismiss it, but that is not satisfactory.
Computational economics is a relatively new research technique in economics, but it is inexorably taking its place alongside the more traditional methods of general theory, abstract modeling, data analysis, and the more recent experimental economics. Perhaps because of its relative newness, the term computational economics currently has no determinate meaning. In contemporary use, it refers to a heterogeneous cluster of techniques implemented on concrete digital computers, ranging from the numerical solution of the Black-Scholes partial differential equation for pricing options through automated trading strategies to agent-based computer simulations of the evolution of cooperation. Because of this heterogeneity, it is not possible to provide comprehensive coverage of the topic in this article. Another reason for restricting the scope is that many of the methods used in computational economics have considerable technical interest but no particular philosophical relevance.
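For concreteness, the Black-Scholes equation mentioned above is the partial differential equation

\[ \frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0, \]

where V is the option value, S the price of the underlying asset, σ its volatility, and r the risk-free rate; its numerical solution is one of the concrete techniques the article groups under computational economics.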
This article discusses the introduction of the concept of computation into cognitive science. Computationalism is usually introduced as an empirical hypothesis that can be disconfirmed. Processing information is surely an important aspect of cognition, so if computation is information processing, then cognition involves computation. Computationalism becomes more significant when it has explanatory power. The most relevant and explanatory notion of computation is the one associated with digital computers. Turing analyzed computation in terms of what are now called Turing machines: simple processors operating on an unbounded tape. He stated that any function that can be computed by an algorithm can be computed by a Turing machine. McCulloch and Pitts's account of cognition has three important aspects: an analogy between neural processes and digital computations, the use of mathematically defined neural networks as models, and an appeal to neurophysiological evidence to support their neural network models. Computationalism involves three accounts of computation: the causal, the semantic, and the mechanistic. Under the causal account, there are mappings between any physical system and at least some computational descriptions. The semantic account may be formulated as a restricted causal account.
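To make the notion concrete, here is a minimal sketch of a Turing machine simulator (an illustrative example, not drawn from the article; the transition table and names are invented for this sketch). The machine shown increments a binary numeral.

```python
from collections import defaultdict

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine. `transitions` maps (state, symbol) to
    (new_state, symbol_to_write, head_move), with head_move in {-1, +1}."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # the unbounded tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = transitions[(state, cells[head])]
        head += move
    used = [i for i, s in cells.items() if s != blank]
    return "".join(cells[i] for i in range(min(used), max(used) + 1))

# Binary increment: scan right to the end of the numeral, then carry leftward.
increment = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", -1),
    ("carry", "_"): ("halt", "1", -1),
}

print(run_turing_machine(increment, "1011"))  # prints "1100"
```

The finite transition table plus the unbounded tape is all there is to the model, which is what gives Turing's analysis its generality.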
This chapter introduces constitutivism about practical reason, which is the view that we can justify certain normative claims by showing that agents become committed to these claims simply in virtue of acting. According to this view, action has a certain structural feature—a constitutive aim, principle, or standard—that both constitutes events as actions and generates a standard of assessment for action. We can use this standard of assessment to derive normative claims. In short, the authority of certain normative claims arises from the bare fact that we are agents. This chapter explains the constitutivist strategy, surveys the extant attempts to generate constitutivist theories, and considers the problems and prospects for the theory.
Given constructivism’s enduring popularity and appeal, it is perhaps something of a surprise that there remains considerable uncertainty among many philosophers about what constructivism is even supposed to be. My aim in this chapter is to make some progress on the question of how constructivism should be understood. I begin by saying something about what kind of theory constructivism is supposed to be. Next, I consider and reject both the standard proceduralist characterization of constructivism and Sharon Street’s ingenious standpoint characterization. I then suggest an alternative characterization according to which what is central is the role played by certain standards of correct reasoning. I conclude by considering the implications of this account for evaluating the success of constructivism. I suggest that certain challenges raised against constructivist theories are based on dubious understandings of constructivism, whereas other challenges only properly come into focus once a proper understanding is achieved.
This chapter reviews the major contextual theories of truth and paradox. These theories are all motivated by a certain kind of liar discourse, sometimes called the strengthened liar or revenge liar. A contextual framework for the analysis of this kind of discourse is presented, drawing on Stalnaker’s and Lewis’s—and others’—work on context-change. The various contextual theories of truth differ in their specific treatments of revenge discourses. According to Burge’s hierarchical theory and Simmons’s non-hierarchical singularity theory, the predicate “true” is a context-sensitive predicate. According to the hierarchical approaches of Parsons and Glanzberg, the context-dependence of truth is derived from the context-dependence of quantifier domains, while for Barwise and Etchemendy, it is situations that may expand with the context. Any approach to the liar faces the threat of new paradoxes tailored to that approach, and these contextual theories are no exception. Challenges to these contextual theories are examined.
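The discourse in question can be illustrated with the usual self-referential sentence (a standard example, not specific to any one of these theories):

\[ (\lambda)\qquad \lambda \text{ is not true.} \]

Reasoning about λ seems to end with the conclusion that λ is not true; but that conclusion is λ itself, so we seem forced to call it true after all, a reversal the contextual theories diagnose as a shift in context.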
A classical formulation of the correspondence theory of truth tells us that truth is a general relational property, involving a characteristic relation to some portion of reality. The relation is said to be correspondence; the portion of reality is said to be a fact. The theory has a lengthy history, however, and many versions have relied on objects rather than facts. This chapter reviews the various options for formulating a correspondence theory of truth, along with the relata they presuppose, and the nature of the correspondence relation they rely upon. It concentrates on fact-based theories, and the nature of the truth-bearers and facts they presuppose.
L. A. Paul
Counterfactual analyses of causation have received a good deal of attention in recent years, resulting in a host of counterexamples and objections to the simple analysis and its descendants. The counterexamples are often complex and can seem baroque to the outsider (indeed, even to the insider), and it may be tempting to dismiss them as irrelevant or uninteresting. But while we may be able to ignore some counterexamples because the intuitions they evoke are unclear or misguided, the importance of investigating the causal relation by investigating counterexamples should not be underestimated.
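The “simple analysis” in question is standardly the counterfactual dependence account associated with Lewis (stated here in the usual notation, not quoted from the chapter): where □→ is the counterfactual conditional and O(c) says that event c occurs, c is a cause of e when

\[ O(c) \mathrel{\Box\!\!\rightarrow} O(e) \quad \text{and} \quad \neg O(c) \mathrel{\Box\!\!\rightarrow} \neg O(e). \]

For distinct events that actually occur, the first conjunct holds trivially, so the analysis reduces to the claim that e would not have occurred without c.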
This article finds it characteristic of orthodox Bayesians to hold that, for each person and each hypothesis that person comprehends, there is a precise degree of confidence the person has in the truth of that hypothesis, and that no person can be counted as rational unless the assignment of degrees of confidence he or she thus harbors satisfies the axioms of the probability calculus. In focusing exclusively on degrees of confidence, the Bayesian approach tells us nothing about the epistemic status of the doxastic states epistemologists have traditionally been concerned with—categorical beliefs. The purpose of this article is twofold. First, it aims to show that, as powerful as many criticisms of this sort are against orthodox Bayesianism, there is a credible kind of Bayesianism. Second, it aims to show how this Bayesianism finds a foundation in considerations concerning rational preference.
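The axioms in question, stated for degrees of confidence cr in the usual Kolmogorov-style form (a standard formulation, not quoted from the article), are:

\[ cr(\varphi) \ge 0, \qquad cr(\top) = 1, \qquad cr(\varphi \lor \psi) = cr(\varphi) + cr(\psi) \text{ for logically incompatible } \varphi, \psi. \]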
A taxonomy of theories of truth is provided. Two versions of deflationist theories of “true” are distinguished, T-schema deflationism and semantic-descent deflationism. These are distinguished from a deflationist theory of truth-ascription, and distinguished in turn from a deflationist theory of truths—a view that the various truths share no significant property. Opposed to these deflationist positions are various substantivalist truth theories. It is suggested that the semantic-descent deflationist theory of “true” and the deflationist theory of truths are correct, although the considerations that support or attack these different deflationist theories are largely independent of one another. A deflationist theory of truth-ascription is denied, however. Sometimes statements do attribute a truth-property to a set of statements. The chapter ends with an evaluation of the cases where supplementing a formal language with a truth predicate is not conservative. It is argued that these cases do not bear on debates about truth deflationism.
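In standard notation (an illustration, not the chapter's own formulation), T-schema deflationism takes the content of “true” to be exhausted by the biconditionals

\[ T(\ulcorner \varphi \urcorner) \leftrightarrow \varphi, \]

one for each sentence φ, while semantic-descent deflationism emphasizes the two-way inference between T(⌜φ⌝) and φ itself. The conservativeness question at the end is whether adding such a truth predicate to a theory lets it prove new truth-free theorems.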
This chapter examines Leibniz’s determinant theory and analyzes the contributions of the ars characteristica, ars combinatoria, and ars inveniendi to this theory. It explains that the art of inventing suitable characters led to numerical double indices, while the combinatorial art helped to represent a determinant as a sum. Moreover, the chapter discusses inhomogeneous systems of linear equations and the elimination of a common variable in the determinant theory. It also explores Leibniz’s work on symmetric functions and on the dyadic and duodecimal number systems.
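In modern notation (not Leibniz's own, though his numerical double indices anticipate it), the representation of a determinant as a signed sum reads

\[ \det(a_{ij}) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}, \]

where a_{ij} is the entry in row i and column j, precisely the double-index bookkeeping the chapter describes.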
Erich H. Reck
From the start of the analytic tradition in philosophy, in the works of Frege, Russell, and the early Wittgenstein, the use of logic to address philosophical problems has been a central theme. In this essay, the contributions of three logicians who played formative roles in analytic philosophy’s second phase, from the 1920s to the 1950s, are considered: Rudolf Carnap, Kurt Gödel, and Alfred Tarski. Besides surveying their philosophically most significant results, the essay traces their mutual influence, from their initial meetings in Central Europe to their later activities in the US, where each of them emigrated and their paths continued to cross. It also contrasts the strikingly different convictions of these three thinkers on basic philosophical issues, which did not prevent them from interacting fruitfully. The discussion revolves around the following topics: the transformation of modern logic, especially the rise of meta-logic; logicism and its relation to formal axiomatics; the notions of truth, logical truth, and logical consequence; formal semantics, metaphysics, and epistemology; and philosophical methodology.