The question of whether natural languages have compositional semantics continues to attract considerable interest, as do questions about the reasons for wanting compositionality, the consequences of compositionality, and the very formulation of the principle of compositionality. This article begins by developing a precise definition of compositionality, and then explores some technical consequences of that definition. It next examines two compositionally problematic semantic phenomena and proposes compositional treatments of them. The last section closes by asking why one might want a compositional meaning theory, and attempts to explain the philosophical significance of compositionality.
The first virtue of the rough characterization of concepts in terms of ways of thinking of objects and properties, and their role in that-clauses, is that it highlights the relation between concepts and reference. The second virtue of the initial characterization is that it establishes the prima facie relevance of what has come to be called Frege's Principle in the individuation of concepts. A third virtue of the initial characterization is that it brings out a phenomenon whose significance is insufficiently appreciated. There is a phenomenon of productivity for thought about mental states that is just as striking as the original phenomenon of the productivity of conceptual thought about the non-mental world.
Mark Greenberg and Gilbert Harman
Conceptual role semantics (CRS) is the view that the meanings of expressions of a language (or other symbol system) or the contents of mental states are determined or explained by the role of the expressions or mental states in thinking. The theory can be taken to be applicable to language in the ordinary sense, to mental representations, conceived of either as symbols in a ‘language of thought’ or as mental states such as beliefs, or to certain other sorts of symbol systems. CRS rejects the competing idea that thoughts have intrinsic content that is prior to the use of concepts in thought. According to CRS, meaning and content derive from use, not the other way round.
This article discusses the thesis that a subject can have a concept, and think thoughts containing it, that she incompletely understands. The central question concerns how to construe the distinction between having a concept and understanding it. Two important versions of the thesis are distinguished: a metasemantic version and an epistemic version. According to the first, the subject may have concept C without being a fully competent user, in virtue of deference to other speakers or to the world. According to the second, the subject may have a concept without being able to provide a proper explication of it. It is argued that whereas the epistemic version is plausible, the metasemantic version faces some challenges. First, it needs to be explained precisely how deference enables a speaker to have C. Second, metasemantic incomplete understanding is in tension with the idea that concepts serve to capture the subject’s cognitive perspective.
Informational semantics takes the primary — at least the original — home of meaning to be the mind: meaning as the content of thought, desire, and intention. The meaning of beliefs, desires, and intentions is what it is we believe, desire, and intend. The sounds and marks of natural language derive their meaning from the communicative intentions of the agents who deploy them. As a result, the information of chief importance to informational semantics is that occurring in the transactions between animals and their environments. So for informational semantics the very existence of thought and, thus, the possibility of language depends on the capacity of (some) living systems to transform information (normally supplied by perception) into meaningful (contentful) inner states like thought, intention, and purpose.
There is a sense in which it is trivial to say that one accepts intention- (or convention-)based semantics. For if what is meant by this claim is simply that there is an important respect in which words and sentences have meaning (either at all or the particular meanings that they have in any given natural language) due to the fact that they are used, in the way they are, by intentional agents (i.e. speakers), then it seems no one should disagree. For imagine a possible world where there are physical things which share the shape and form of words of English or Japanese, or the acoustic properties of sentences of Finnish or Arapaho, yet where there are no intentional agents (or where any remaining intentional agents don't use language). In such a world, it seems clear that these physical objects, which are only superficially language-like, will lack all meaning.
This article begins with a sketchy historical introduction to the topic, which will help bring into focus some of the pressing issues for philosophy in the twenty-first century. ‘Intentionality’, as it is typically used in analytic philosophy (meaning, roughly, representation or ‘aboutness’), derives from the work of Franz Brentano. For Brentano mental states are essentially related to certain kinds of objects or contents that have ‘intentional inexistence’ within the states. These came to be called ‘intentional objects’. Brentano was particularly concerned with the problem of how we can represent things that don't exist outside of the mind, such as unicorns.
The purpose and plan of the Handbook is described herein. Key concepts in the contemporary literature on reasons and normativity are introduced, and the forty-four chapters that make up the main body of the Handbook are each summarized. In the process, important connections between the chapters are highlighted. A distinctive feature of the Handbook is said to be the way in which it surveys work on normative reasons in both ethics and epistemology, focusing, when appropriate, on issues concerning unity or lack of it in different domains. It is noted that discussions of reasons and normativity in philosophy of language, philosophy of mind, and aesthetics are also surveyed in the Handbook.
This article aims to explore a pattern of reasoning about language and thought that seems virtually to guarantee a distortion of what precisely constitutes thinking. The guiding idea is a simple one, officially embraced by many philosophers, but one whose implications are too easily neglected. The article defends the thesis that something functions as a representation only in so far as it is given a use by a representing agent. This thesis — that representation requires a representing agent — applies in equal measure to non-linguistic ‘pictorial’ and to linguistic representations.
This chapter sets out the variety of eighteenth-century approaches to the relations between language and thought, beginning with post-Lockean debates focused on the status of abstract general ideas, and ending with anti-empiricist Scottish philosophy at the end of the century (especially Thomas Reid). The empiricist theory of signs, notably in George Berkeley, is one important dimension of the discussions: ‘Ideas’ are centre stage, although they do not exhaust the empiricist furniture of the mind. There is also a different philosophical trend illustrated by neglected figures (James Harris, Lord Monboddo), which may be termed Platonic, and which affects eighteenth-century philosophical conceptions of language. The project of conjectural histories of language (Adam Smith) and views about the connections between linguistic skills and the social nature of human beings are also covered.
Paul Pietroski and Stephen Crain
The article illustrates that humans have a language faculty, a cognitive system that supports the acquisition and use of certain languages, with several core properties. The faculty is apparently governed by principles that are logically contingent, specific to human language, and innately determined. A naturally acquirable human language (Naturahl) is a finite-yet-unbounded language with two further properties: its signals are overt sounds or signs, and it can be acquired by a biologically normal human child, given an ordinary course of human experience. Any biologically normal human child can acquire any Naturahl, given an ordinary course of experience with users of that language. An E-language is a set of signal-interpretation pairs, while an I-language is a procedure that pairs signals with interpretations. The I-languages that children acquire are biologically implementable, since they are actually implemented in human biology. A function has a unique value for each argument, but Naturahls admit the possibility of ambiguity. A domain-general learning procedure might help children learn the environments in which negative polarity items (NPIs) can appear, but acquiring the constraint on where such expressions cannot appear is another matter. The language faculty makes it possible to acquire an I-language that permits questions with a medial-wh, even if one does not encounter such questions.
The article discusses the ways in which natural language might be implicated in human cognition. The Soviet psychologist Lev Vygotsky developed his ideas on the interrelations between language and thought, both in the course of child development and in mature human cognition. One of Vygotsky's ideas concerned the ways in which the language deployed by adults can scaffold children's development, yielding what he called a ‘zone of proximal development’. He argued that what children can achieve alone and unaided is not a true reflection of their understanding. Vygotsky focused on the overt speech of children, arguing that it plays an important role in problem solving, partly by serving to focus their attention, and partly through repetition and rehearsal of adult guidance. Clark draws attention to the many ways in which language is used to support human cognition, ranging from shopping lists and post-it notes, to the mental rehearsal of remembered instructions and mnemonics, to the performance of complex arithmetic calculations on pieces of paper. Researchers have claimed that animals and pre-verbal infants possess a capacity for exact small-number judgment and comparison, for numbers up to three or four. There is also some evidence that natural language number-words might be constitutive of adult possession and deployment of exact number concepts, in addition to being developmentally necessary for their acquisition.
This chapter argues that literature—or at least certain kinds of literature—facilitates mentalization. Book reading shares some of the same features as mind reading. Psychoanalysis, via the work of Hanna Segal and others, has been interested in the differences between escapist literature and literature that encourages genuine psychological engagement. This chapter engages that issue via the lens of mentalization. It focuses specifically on literary form rather than on content, and examines the ways in which some kinds of literary form facilitate mentalizing capacities. More narrowly, it shows how different kinds of literary techniques—the free indirect discourse employed by Jane Austen, the tight structure of the sonnet form—enable different mentalizing abilities and develop our capacity for self-reflection.
The objective of the article is to discuss the evolution of, and some of the more prominent arguments for, the massive modularity (MM) hypothesis. MM is the hypothesis that the human mind is largely or entirely composed of a great many modules. Modules are functionally characterizable cognitive mechanisms that tend to possess several features, including domain-specificity, informational encapsulation, innateness, inaccessibility, shallow outputs, and mandatory operation. The final thesis comprising MM is that modules are found not merely at the periphery of the mind but also in the central regions responsible for such higher cognitive capacities as reasoning and decision-making. Central cognition depends on a great many functional modules that are not themselves composable into larger, more inclusive systems. One family of arguments for MM focuses on a range of problems familiar from the history of cognitive science, such as problems concerning the computational tractability of cognitive processes. These arguments vary considerably in detail, but they share a common format. First, they proceed from the assumption that cognitive processes are classical computational processes. Second, given that assumption, intractability arguments seek to undermine non-modular accounts of cognition by establishing the intractability thesis.
Something that a theorist of language may hope to understand — or to understand better as the result of his theorizing — is the notion of meaning, at least as it applies to sentences, words, and other linguistic expressions. As we try to attain such an improved understanding, Strawson's warning is salutary. As he says, we cannot hope to understand the notion of an expression's meaning unless we enjoy at least a basic understanding of the nature of human speech (and writing). For our sentences and words have meaning only in so far as they could be used in speech or writing. He is also right to say that we cannot hope to understand speech unless we take account of the aim of communication. Some speech, of course, has no communicative purpose.
The notion of narrow content arises from Hilary Putnam's well-known article ‘The Meaning of “Meaning”’. Putnam raised the question of whether the meaning of a word in a given subject's mouth is fixed by the subject's psychological states in (what he termed) ‘the narrow sense’. According to Putnam, a psychological state is narrow if a subject's being in that state does not presuppose the existence of anything outside the subject. The idea is that a narrow psychological state is intrinsic to the subject: it does not in any essential way require the subject to stand in any particular relations to anything in her environment.
This article develops Frege's conception of answerability, and his correlative views on psychologism of the first sort. Compared to prior philosophers, such as British empiricists, Frege is a minimalist in the demands he sets on answerability. If he is ever less than minimalist, that is something that flows out of his particular conception of logic. The article then turns to Wittgenstein's (last) conception of answerability, by which Frege is not quite minimalist enough. That allows us to see how the pursuit of answerability might lead to psychologism of the second kind.
The article gives an overview of several distinct theses that go under the name of representationalism in cognitive science. Strong representationalism is the view that representational mental states have a specific form: in particular, that they are functionally characterizable relations to internal representations. Proponents of strong representationalism typically suggest that the system of internal representations constitutes a language with a combinatorial syntax and semantics. Braddon-Mitchell and Jackson argued that mental representations might be more analogous to maps than to sentences. Waskan argued that mental representations are akin to scale models. Fodor, and Fodor and Pylyshyn, argued that certain pervasive features of thought can only be explained by the hypothesis that thought takes place in a linguistic medium. The physical symbol system (PSS) hypothesis is a version of strong representationalism: the idea that representational mental states are functionally characterizable relations to internal representations. Representational content plays a significant role in computational models of cognitive capacities. The internal states and structures posited in computational theories of cognition are distally interpreted in such theories. The distal objects and properties that determine the representational content of the posited internal states and structures serve to type-individuate a computationally characterized mechanism. Strong representationalism, as exemplified by the PSS hypothesis, construes mental processes as operations on internal representations.
The distinctive claim made by semantic externalism is that a subject's thought contents are partly individuated by her environment, and do not supervene on her ‘inner states’, such as her brain states. One of the main objections to this position is the claim that it is incompatible with self-knowledge. A subject's knowledge of her own thoughts seems quite different from her knowledge of what others think. A subject uses behavioural evidence to know what others think. However, typically, a subject can know what she herself thinks without inferring this from her own behaviour, and even prior to manifesting any behaviour which could constitute grounds for such an inference.
Wide or externalist content is individuated in part by reference to features of a subject's external surroundings, by her physical environment or the community with which she shares a language. Externalists hold that physically identical subjects might have thoughts with different, wide, contents. Internalists either deny that content is wide, claiming that all content supervenes on intrinsic physical states of the subject, and is hence narrow, or insist, at the very least, that the content that individuates a subject's thoughts in predictions and explanations of her behaviour must supervene on her intrinsic properties. This article begins by discussing the arguments for and against wide content. It then argues that genuinely explanatory content is wide.