Introduction

Abstract and Keywords

The notion of compositionality was first introduced as a constraint on the relation between the syntax and the semantics of languages. It was later postulated as an adequacy condition for other representational systems as well, such as structures of mental concepts, computer programs, and even neural architectures. Syntax is compositional in that it builds more complex well-formed expressions recursively, on the basis of smaller ones, while semantics is compositional in that it constructs the meanings of larger expressions on the basis of the meanings of smaller ones. Unless empirical linguistic or psychological evidence were to require it, nothing else should be needed to understand language. Traditionally, the term compositionality is used in opposition to various forms of holism, with which compositionality is often said to be inconsistent.

Keywords: compositionality, syntax, semantics of languages, computer programs, neural architectures

The notion of compositionality was first introduced as a constraint on the relation between the syntax and the semantics of languages (Frege, 1914; Montague, 1970b). In this context compositionality demands that the meaning of every syntactically complex expression of a language (save, maybe, for idioms) be determined by the meanings of its syntactic parts and the way they are put together. Compositionality was later postulated as an adequacy condition for other representational systems as well, such as structures of mental concepts (see, e.g., Fodor’s (1975, 1998a, 2008) Language of Thought), computer programs (Janssen, 1986), and even neural architectures (Fodor and Pylyshyn, 1988; Smolensky, 1990/1995b). Various mathematical frameworks for compositionality have been provided (Partee et al., 1990; Hodges, 2001), and stronger and weaker versions have been formulated (Szabó, 2000a, this volume). Although the force and justification of compositionality as a requirement for representational systems are still disputed, it is fair to say that compositionality today is widely recognized as a key issue across the cognitive sciences, and it remains a challenge for various models of cognition that are in apparent conflict with it.

This Handbook brings together chapters on the issue of compositionality from formal-logical, semantic, psychological, linguistic, philosophical, and neuroscientific points of view. Our emphasis has been on the breadth of approaches to the notion, illustrating its key status in the study of language and mind today, and the need for strong interdisciplinarity in investigating it. Often regarded as a virtually defining feature of the discipline of formal semantics, compositionality has led to myriad controversies regarding its scope, formulation, and psychological reality. This Handbook aims to provide an inroad into these controversies by covering almost every major aspect of them (with the exception of a few questions, particularly in the domain of comparative cognition, where the degree of systematicity and compositionality of the non-human animal mind is an important ongoing problem; see e.g. McGonigle and Chalmers, 2006).

All our contributors have been asked to do more than just give an overview of the issues raised by compositionality in their domain of expertise: They were invited to take a particular, perhaps even controversial, stance towards it. It is our hope that this Handbook will find an audience in the broader cognitive science community, including philosophy, linguistics, psychology, neuroscience, computer science, modelling, and logic. In the rest of the introduction, we give brief sketches of the individual contributions and their interconnections.

1 History and Overview

Part I of this Handbook collects four contributions that introduce the debate on compositionality from historical, formal-semantical, philosophical, and linguistic points of view, respectively. The history of the notion in the nineteenth and twentieth centuries is reviewed by Janssen, while Kracht extends this history from Montague’s work in the 1970s to subsequent developments in semantic theory today. As Janssen shows, Frege’s discussion of compositionality and its apparent opposite, namely contextuality, was richly embedded in nineteenth-century German discussions among psychologists, linguists, and logicians such as Trendelenburg, Wundt, and Lotze. The Principle of Contextuality maintained that, even though judgements are composed of concepts, these have meaning only in the context of the judgements. Though nowadays widely replaced by the principle of compositionality, the Principle of Contextuality was never quite abandoned by Frege himself. Compositionality in its contemporary form is rather a creation of Frege’s student, Carnap (1947), and, later, of Montague (1970a, 1970b, 1973). Montague’s seminal work, in particular, here reviewed by Kracht, was based on Carnap’s extension–intension distinction, which replaced Frege’s sense–reference distinction. Janssen shows that, although compositionality is a widely accepted principle in linguistic and logical practice, reasons for endorsing it are not so much principled as practical or pragmatic. Kracht, too, illustrates that compositionality as a method in formal-logical analysis was not a principled decision for Montague, and that it is not always clear whether his analyses were fully compositional.

Szabó, adopting a more philosophical point of view, discusses the formulation of the principle of compositionality, a difficult issue in itself. A standard formulation is (C):

  (C) The meaning of a complex expression is a function of the meanings of its constituents and the way they are combined.

As Szabó points out, the moment we look at this formulation, questions arise. Does ‘is a function of’ mean ‘is determined by’? Is it meanings that are being combined, or is it syntactic constituents? Are we talking about the meanings that constituents have individually, or that they have when taken together? One sensible disambiguation of (C) yields a principle that, Szabó argues, makes for a useful empirical hypothesis in the study of natural language meaning: It says that, once we have fixed the individual meanings of syntactic constituents in a given expression, and have fixed its syntactic structure, there is nothing else that determines meaning. However, relevant notions involved in its formulation will themselves have to be clarified when the hypothesis is investigated. Moreover, as this hypothesis has entered various research agendas in philosophy, linguistics, and psychology, it has always been strengthened, and its strengthenings are not entirely equivalent to one another.
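On this disambiguation, (C) is commonly given the following formal rendering (a standard textbook formulation, not Szabó’s own notation): for every n-ary syntactic operation α there is an operation r_α on meanings such that, whenever α(e_1, …, e_n) is defined,

```latex
\mu\bigl(\alpha(e_1,\ldots,e_n)\bigr) \;=\; r_\alpha\bigl(\mu(e_1),\ldots,\mu(e_n)\bigr)
```

where μ is the function assigning meanings to expressions. The strengthenings just mentioned typically differ in what further constraints they place on μ and the operations r_α.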

Zimmermann illustrates the role that compositionality has played as a constraint on semantic analysis in linguistic practice via a number of case studies, such as quantified noun phrases in object position and intensional arguments. Such problems are encountered when, given some syntactic whole that has a semantic value, the question arises as to what the semantic values of its immediate parts must be, if the meaning of the whole is to come out from their combination. Zimmermann gives an account of a number of generic strategies that have been used to overcome these problems so as to maintain compositionality. For example, the syntactic tree structure is re-analysed in such a way that the expression as a whole has different parts from those it initially seemed to have.
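A classic instance is the first of these case studies, sketched here in schematic form (our rendering, glossing over derivational details): a quantified noun phrase in object position can be given a generalized-quantifier value that takes a property as its argument, so that its combination with the rest of the sentence remains function application:

```latex
[\![\text{every book}]\!] \;=\; \lambda P\,\forall x\,\bigl(\mathrm{book}(x) \rightarrow P(x)\bigr), \qquad
[\![\text{every book}]\!]\bigl(\lambda x.\,\mathrm{read}(\mathrm{kim}, x)\bigr) \;=\; \forall x\,\bigl(\mathrm{book}(x) \rightarrow \mathrm{read}(\mathrm{kim}, x)\bigr)
```

In this way ‘Kim read every book’ receives the right truth conditions even though the surface object is not of the type the transitive verb directly expects.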

2 Compositionality in Language

The primary concern of the ‘linguists’ principle of compositionality’, as Szabó describes it in his contribution, is the relationship between meaning and structure. Although compositionality is very widely assumed as a constraint for this relationship, linguists and philosophers have fundamentally disagreed on whether the principle is empirically valid, and in what form it can be maintained. Direct Compositionality (DC) in the sense of Jacobson, in particular, wants to set a ‘gold standard’ for what this relationship is like: this standard is that the mapping of syntactic form to semantic content exploits no ‘hidden’ level of syntactic representation from which semantic interpretation is read off (such as the level of logical form or ‘LF’ explored by generative grammarians in the 1980s: see e.g. Huang, 1995). Instead, semantics reads off interpretations from the surface form of expressions directly: these (and potentially the context of use) provide all the information that is required. Cases of linguistic expressions whose meaning has been thought to require a hidden (or more abstract) level of representation are solved by assigning a more complex semantic type to the word or constituent that causes the problem. In Jacobson’s view, a compositional semantics and a compositional syntax are minimally required to understand languages. Syntax is compositional in that it builds more complex well-formed expressions recursively, on the basis of smaller ones, while semantics is compositional in that it constructs the meanings of larger expressions on the basis of the meanings of smaller ones (ultimately words, or rather morphemes). Unless empirical linguistic or psychological evidence were to require it (which she argues it doesn’t), nothing else should be needed to understand language. She proposes that the strategy of assigning a more complex semantic type to the troublesome constituent shows that one can make do with only a compositional semantics and syntax.

A quite different dimension of the compositionality problem has to do with the relations between language and thought. It is widely assumed that language expresses thought and that thoughts are made up of mental concepts under a certain mode of composition—an assumption revisited by Hinzen and Pietroski later in the volume. However, how do the composition of concepts in thought and the composition of linguistic meanings relate? Pietroski, adopting a Chomskyan internalist linguistic perspective, suggests that the ‘adicities’ (number of argument places) of concepts might differ from those of the lexical meanings of the words that label these concepts. He speculates that the lexical meanings are uniformly monadic, requiring saturation by a single argument, and that syntax, by combining the words with these meanings, simply conjoins them. In this way, language ‘re-formats’ concepts in accordance with its own structural resources, and so as to fulfil the demands of semantic composition. For example, there are no polyadic concepts such as POKED(E,X,Y) in language, where E is a variable over events. Language rather ‘decomposes’ this concept into three conjoined predicates: poked(E) & Theme(E,Y) & Agent(E,X), where Theme and Agent are special thematic predicates.
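Schematically, a sentence such as ‘Brutus poked Caesar’ then receives a conjunctive, existentially closed logical form along the following lines (a standard Neo-Davidsonian rendering, in the chapter’s notation):

```latex
\exists E\,\bigl[\,\mathrm{poked}(E) \;\wedge\; \mathrm{Agent}(E, \mathrm{Brutus}) \;\wedge\; \mathrm{Theme}(E, \mathrm{Caesar})\,\bigr]
```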

Traditionally, the term compositionality is used in opposition to various forms of holism, with which compositionality is often said to be inconsistent. Thus, if in a system the identity and properties of an object are partially determined through its relations to other objects and properties, as the holist claims, then, in an intuitive sense, the system won’t be compositional—it won’t be ‘made up’ of its parts in such a way that the parts do not in turn depend on the whole of which they form a part. Pelletier surveys this important debate and distinguishes an older, ontological and metaphysical debate, on whether there are wholes that cannot be thought of as individuals or combinations of them, from the more recent debate on how some properties of wholes such as sentences are a function of properties of their parts (such as their meanings). Potentially, the holist’s challenge is severe only when posed within the latter of the two debates. Other problems remain for the holistic view, from a psychological and language-acquisition perspective.

Holism is just one potential threat to the compositional analysis of language or thought. Contextualism is another, and Jerry Fodor has long argued that contextualism is inconsistent with compositionality. Récanati sorts out for which notions of contextualism and compositionality this is so. For example, is the same meaning of ‘cut’ involved in ‘cutting the grass’ and ‘cutting the cake’, which is then contributed compositionally to the content? As Récanati notes, the activity seems different in the two cases, but then again, in a somewhat unusual context, it might actually be the same. So does context need to enter as an argument into the composition function, and will that preserve a sensible notion of compositionality? According to Récanati, this does indeed seem to be the case, yet not in the sense in which indexicals such as ‘here’ depend on context: While indexicals depend for their semantic evaluation on a process of ‘saturation’, the meaning of a word, though pragmatically modulated, does not depend on such a process. Yet if we factor the pragmatically modulated meanings of the parts of expressions into our notion of compositionality, a sensible version of the latter can be preserved.

Kaplan-style semantics, a notion introduced by Westerståhl to subsume a certain class of semantic theories, takes seriously the idea that certain kinds of semantic values may depend on extra-linguistic contexts. These semantics have in common that they permit a distinction between contexts of utterance and circumstances of evaluation. In agreement with Kaplan (1989) and Lewis (1980), their basic truth relation is the following: A sentence s is true at context c in circumstance d. For the three main kinds of semantic values, viz. characters, contents, and extensions, together with their interrelations, two versions of Kaplan-style semantics are formally developed: The truth-functional version treats the contents of sentences as functions from circumstances to truth-values. The structural version, by contrast, regards the contents of sentences as structured entities, in which properties, individuals, and the like literally occur. In both versions the primary semantic value of an expression is its character: a function from contexts to contents. The evaluation of an expression at a circumstance is thus relative to a context, where the context of utterance determines the circumstance of evaluation. With these notions at hand, Westerståhl shows how even problematic phenomena like indexicals, unarticulated constituents, modulation, or pragmatic intrusion (see Récanati) may be dealt with in a strictly compositional, contextually compositional, or weakly contextually compositional way. The strength of the compositionality property employed depends on the kind of semantic values chosen. The only remaining challenge to the compositionality of content from the point of view of context dependence is posed by so-called monsters.
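The two-stage evaluation at the heart of such semantics can be sketched in a few lines of Python (a toy rendering of the truth-functional version only; the function name and the dict-based modelling of contexts and circumstances are ours, not Westerståhl’s apparatus):

```python
# Toy Kaplan-style semantics: character -> content -> truth value.
# A context fixes utterance parameters (speaker, location); a circumstance
# of evaluation is modelled, crudely, as a dict of facts.

def character_i_am_here(context):
    """Character of 'I am here': a function from contexts to contents."""
    speaker, place = context["speaker"], context["location"]
    # The content is itself a function from circumstances to truth values.
    return lambda circumstance: circumstance["location_of"][speaker] == place

context = {"speaker": "Kaplan", "location": "Los Angeles"}
content = character_i_am_here(context)        # content is fixed by the context

print(content({"location_of": {"Kaplan": "Los Angeles"}}))  # True here
print(content({"location_of": {"Kaplan": "New York"}}))     # False there
```

The structural version would instead have the character return a structured entity in which the speaker and the place themselves occur, rather than a mere function from circumstances to truth values.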

The classical model of composition in model-theoretic semantics (Montague 1970a, 1973) assumes that a lexicon provides a set of basic expressions that are each assigned a lexical meaning corresponding to one of a restricted set of semantic types, to which the rules of semantic composition then apply. It could, however, be that if we have two autonomous systems of syntactic and semantic composition, the types that are respectively combined in each system do not match. This can happen in at least two ways. First, some lexical items don’t correspond to non-decomposed meanings: They have a non-overt underlying structure from which their meaning is composed. Second, some syntactic construction types do not correspond to a single way of combining the semantic types assigned to their constituents. The first possibility is called lexical decomposition and is the topic of the next sub-part. As for the second possibility, Löbner calls it sub-compositionality. He argues that it is frequently instantiated in natural languages, and that assuming compositionality as a methodological principle prevents us from seeing this empirical fact. Using gradation of German verbs with sehr (‘very’, ‘a lot’) as a case study, he shows that semantic patterns of gradation divide syntactic types of verbs into several semantic sub-types with their own sub-rules of composition, which depend on the type of lexical meaning of the verb. Having distinct syntactic sub-types corresponding to the semantic sub-types is not plausible in these instances. In line with the older evidence that Montague’s (1973) definition of NPs as generalized quantifiers ignores important syntactic and semantic differences among these constructions, Löbner’s chapter provides new evidence that a homomorphy of composition operations in the syntactic and semantic domains, which would rule out sub-compositionality as a priori incompatible with the design of grammar, cannot be assumed.

3 Compositionality in Formal Semantics

Compositionality is frequently debated not only in philosophical and linguistic semantics, but also in theoretical psychology as well as classical and connectionist computer science, partly because it can be formalized. A number of mathematical results that link the principle of compositionality to other formal semantic principles can thus be proven. It is fair to say that compositionality is one of the most important principles of what is now called ‘formal semantics’. Part III illustrates the role that the principle of compositionality plays in formal semantics and discusses some of the formal results in this field.

As a synopsis of many of the articles in this Handbook illustrates, there is no universally accepted formal definition of compositionality. However, compositionality in formal semantics is typically understood as a property of a relation, function, or morphism between the syntax and the semantics of a language. In this light, the expression ‘syntactic compositionality’—sometimes used in the literature to refer to the property of languages to allow for syntactic combinations—is an oxymoron. The most widely used formal definitions of compositionality treat the syntax of a language as a single-sorted or multi-sorted algebra. In the single-sorted approach, only one carrier set—the set of terms—is assumed, and syntactic rules correspond to partial functions from Cartesian products of the set of terms into the set of terms. By contrast, the multi-sorted approach uses a multitude of carrier sets for the different syntactic categories. In this case, a syntactic rule corresponds to a total function from a product of two or more syntactic categories, for example the set of adjectives and the set of nouns, into some syntactic category, for example the set of noun phrases. In both the single-sorted and the multi-sorted approach, compositionality is then defined in terms of a homomorphism between the syntax algebra and a semantic algebra (see Janssen, 1986; Partee et al., 1990; Hodges, 2001; Werning, 2005b). The latter is again either single-sorted (one set of meanings as carrier set) or multi-sorted (a carrier set for each semantic category). It is crucial to see here that semantic compositionality is not a property of the structure of meanings, the semantic algebra, taken in isolation. Rather, it is a property that characterizes the interaction between syntax and semantics.
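As a minimal illustration of the homomorphism requirement (a toy example of our own, not drawn from the works cited), consider a one-rule syntax that combines an adjective with a noun, interpreted by set intersection; compositionality is then the condition that interpreting a syntactic combination coincides with applying the corresponding semantic operation to the interpretations of its parts:

```python
# Toy multi-sorted syntax algebra: a single rule AN combining an Adj and a N.
def AN(adj: str, noun: str) -> str:
    return f"{adj} {noun}"            # syntactic operation on expressions

# Semantic algebra: meanings are sets of individuals; the operation paired
# with AN is intersection (adequate only for intersective adjectives).
LEXICON = {
    "red":    {"rose1", "car1"},
    "flower": {"rose1", "tulip1"},
}

def r_AN(m_adj: set, m_noun: set) -> set:
    return m_adj & m_noun             # semantic counterpart of AN

def mu(expr: str) -> set:
    """Meaning function, defined as a homomorphism over the syntax."""
    if expr in LEXICON:
        return LEXICON[expr]
    adj, noun = expr.split(" ", 1)
    return r_AN(mu(adj), mu(noun))

# The homomorphism condition: mu(AN(a, n)) == r_AN(mu(a), mu(n)).
assert mu(AN("red", "flower")) == r_AN(mu("red"), mu("flower")) == {"rose1"}
```

Intersection is of course adequate only for intersective adjectives; non-intersective cases such as ‘big’ (taken up in Sandu’s chapter below) are exactly where the choice of semantic operations becomes contentious.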

Saying more about this interaction is Hodges’s goal in Chapter 11. The chapter has two parts—a historical one and a formal one. In the historical part, the roots of the idea of compositionality are traced back to the Aristotelian theory of meaning—as we have received it via medieval philosophers writing in Arabic, like Ibn Sina and Al-Farabi. It is indeed astonishing to see how close certain passages from Al-Farabi and from the French medieval philosopher Abelard are to well-known formulations of the principle of compositionality by Frege. Returning to modern times, Hodges discusses various notions of a constituent or constituent structure—for example Bloomfield’s (1933)—to come up with his own account in terms of frames (roughly speaking, complex expressions). His notion of a constituent is then used to define what he calls PTW compositionality (‘PTW’ is a reference to Partee, ter Meulen, and Wall, 1990). It is the property that an equivalence relation on frames possesses if and only if it is invariant with regard to the substitution of equivalent constituent expressions. Aside from ambiguity and metaphor, Hodges discusses whether tahrifs are a threat to PTW compositionality: This Arabic word refers to the phenomenon that the occurrence of a word in a sentence has its meaning changed by its context in the sentence. How does this phenomenon relate to Frege’s Principle of Contextuality, according to which the meaning of a word is what it contributes to the meaning of a sentence? This question leads Hodges to the so-called extension problem: Given that one has already assigned meanings to some subset of the set of expressions in a language by some meaning function, what are the conditions under which one can extend this original meaning function to the rest of the language? Hodges’s (2001) famous extension theorem states that there exists an up-to-equivalence unique extension if the following conditions are fulfilled: (i) The subset is cofinal in the total set of expressions of the language: Every expression of the total set is a constituent of some expression of the subset. (ii) The original and the extended meaning function are compositional. (iii) The original and the extended meaning function are Husserlian: No two synonyms belong to different semantic categories. (iv) The extended meaning function is fully abstract with regard to the original meaning function: For every pair of non-synonymous expressions of the total set, there exists a witness for their meaning difference in the subset. Such a witness is a complex expression that contains one of the non-synonymous expressions as a constituent and whose meaning would change, or be lost, if that constituent were substituted by the other (Werning, 2004).
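In substitutional form, the PTW property of a meaning function μ amounts, roughly, to invariance under substitution of synonyms: for every frame C[·] and constituents s, t such that C[s] and C[t] are both meaningful,

```latex
\mu(s) = \mu(t) \;\Longrightarrow\; \mu\bigl(C[s]\bigr) = \mu\bigl(C[t]\bigr)
```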

Together with other formal questions on compositionality, the extension problem is also discussed by Sandu in Chapter 12. Sandu begins with a number of equivalent formulations of the principle of compositionality and critically reviews a number of formal triviality results associated with it. The chapter then focuses on the relation of the principle of compositionality to the Principle of Contextuality before presenting a number of compatibility and incompatibility results. The former, for example, show how one can devise a compositional interpretation that agrees with assumed facts about the meanings of complex noun phrases. He illustrates this strategy with problem cases such as the extensions of combinations with non-intersective adjectives like ‘big’ (see also Récanati’s contribution to this Handbook) or the prototypes of certain noun–noun compositions like ‘pet fish’ (see also Prinz’s and Wisniewski’s contributions). The incompatibility results turn on the question of whether the principle of compositionality is consistent with other semantic principles. The slingshot argument (Gödel, 1944; Davidson, 1984; Neale, 1995), for example, purports to show that compositionality conflicts with the assumption that two sentences, though both true, may well denote distinct facts. In the case of Independence-Friendly Logic (Hintikka and Sandu, 1997), it can be shown that compositionality and the semantic assumption that every formula is associated with a set of assignments cannot be reconciled. Finally, Sandu considers the question of equivalent compositional interpretations in the context of the debate between Lewis (1980) and Stalnaker (1999) on the problem of indexicals.

In Chapter 13, Fernando extends the notion of compositionality beyond the sentence level and applies it to discourses. At the level of discourses, compositionality seems in tension with the idea that the meaning of a sentence is intimately related, if not identical, to its truth-conditions. If one substitutes truth-conditionally equivalent sentences in a discourse, the referents of anaphors may change or get lost. (Partee’s marble example is the classic illustration: ‘Exactly one of the ten marbles is not in the bag’ and ‘Exactly nine of the ten marbles are in the bag’ are truth-conditionally equivalent, yet only the former supports the continuation ‘It is probably under the sofa’.) This would result in a change of the meaning of the discourse even though compositionality grants that synonymous sentences should be substitutable salva significatione. Anaphor resolution is not the only problem in the attempt to give a compositional semantics for discourses. Discourses also create a new kind of ambiguity that is neither lexical nor syntactic: A sequence of two or more sentences may, for example, be interpreted as a temporal succession or a causal explanation. Fernando gives an overview of a number of formal-semantic theories for discourses and proposes to shift from a model-theoretic treatment of Discourse Representation Theory (DRT, Kamp and Reyle, 1993) to a proof-theoretic account.

4 Lexical Decomposition

Although compositionality as understood in most formal-semantic approaches concerns the contribution of words and syntactic constituents to the determination of the meaning of a complex expression, there is also a question about the determination of the meanings of the lexical constituents themselves. Since Generative Semantics in the early 1970s (Lakoff, 1970; McCawley, 1971; Ross, 1972), many theories have proposed that words such as ‘kill’ are internally complex, and that their relations to other words, such as the relation of ‘kill’ to ‘dead’, depend on the internal structure of these words. According to these theories, most lexical items, and particularly verbs, are decomposed both syntactically and semantically. The lexical atomism of Fodor (1970) and Fodor and Lepore (2002) famously contradicts this idea, yet this part of the Handbook shows that variants of the original decompositional position are very much alive.

The major approaches that have tried to improve on the original ideas of the generative semanticists are surveyed by Wunderlich. These approaches make various syntactic and architectural assumptions, and differ on what the true atoms of semantic composition are taken to be. Among these theories, one finds Montague’s (1960, 1970b) and Dowty’s (1979) meaning-postulate approach; Jackendoff’s (1990) conceptual semantics, which proposes atoms such as event, state, action, place, path, property, and amount; Generative Lexicon Theory (Pustejovsky, 1995; this volume), which aims to account for the multiplicity of readings of polysemous words; Lexical Decomposition Grammar (Gamerschlag, 2005; Wunderlich, 1997b), which distinguishes between semantic form (SF) and conceptual structure in Jackendoff’s sense and decomposes lexical concepts into complex hierarchical structures or templates; Lexical Conceptual Structure as elaborated in the work of Levin and Rappaport Hovav (1991, 1995); the Natural Semantic Metalanguage (NSM) account of Wierzbicka (see e.g. Goddard and Wierzbicka, 2002), which analyses concepts/words by reductive paraphrases using a small collection of universal semantic primes; the Neo-Davidsonian account (Krifka, 1989; Davidson, 1967a; Pietroski, 2002, this volume); and the more strongly syntactocentric approach of Hale and Keyser (2002). All of these theories stand in contrast to the strictly atomist position of Fodor and Lepore (1998, 1999), which objects to every form of lexical decomposition and takes the meaning of every morpheme as primitive.

Wunderlich offers a number of empirical arguments in favour of the Lexical Decomposition Grammar (LDG) approach. For example, it elegantly accounts for lexical alternations, as when intransitive verbs like ‘break’ or ‘gallop’ have transitive and causative variants: In the causative variant, an additional, non-overt, CAUSE (or ACT) predicate is arguably present in the underlying meaning of the verb (see the schema below). Another advantage is that the arguments of a verb cross-linguistically appear to be hierarchically ordered: A decomposition like the Neo-Davidsonian one, which is ‘flat’ in the sense that it merely adds a number of conjoined predicates to a given event variable, cannot account for such hierarchies. A further argument is provided by the behaviour of denominal verbs such as ‘saddle’, ‘box’, or ‘shelve’. In these cases, the incorporated noun provides a lexical root meaning that enters into one of a number of templates restricting the verbs that can be formed from such nouns as well as their possible meanings. The complementarity of manner and result in possible verb meanings is also predicted, as is the near-equivalence of Possession and Location in many languages: For instance, a house can be said to have three bathrooms, while the bathrooms can also be said to be in the house.
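Schematically, the causative alternation receives a decomposition along the following lines (our rendering of the general idea, not Wunderlich’s exact semantic-form templates):

```latex
[\![\text{break}_{\mathrm{intr}}]\!](y) \;=\; \mathrm{BECOME}\bigl(\mathrm{broken}(y)\bigr), \qquad
[\![\text{break}_{\mathrm{caus}}]\!](x, y) \;=\; \mathrm{CAUSE}\Bigl(x,\ \mathrm{BECOME}\bigl(\mathrm{broken}(y)\bigr)\Bigr)
```

The embedding of BECOME under CAUSE induces exactly the kind of hierarchical ordering of arguments (x above y) that, on Wunderlich’s argument, a flat Neo-Davidsonian conjunction cannot supply.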

Like Wunderlich, Harley is convinced that there is extensive cross-linguistic empirical evidence against the atomist approach, and hence for some form of lexical decomposition. Taking a more syntax-centred approach rooted in recent minimalist syntax, Harley lays out a framework meant to make Generative Semantics finally work. As she notes, if words of categories N, V, and A are not semantic primitives, a first problem affecting lexical atomism is solved: that it is not clear how structureless atoms can ever be learned. However, this problem returns (and remains unsolved) with the irreducible meanings of the lexical roots: the decompositional buck has to stop somewhere. Once entering a syntactic derivation, the syntactic category of such roots is only gradually determined as the derivation proceeds, and syntactic dependents are introduced by additional, phonologically empty syntactic heads: For instance, the transitive verb ‘kill’ acquires its transitive status through a silent syntactic head translatable as ‘cause’ or ‘make’ (as in make John become dead). Arguably, this syntactic decomposition, which is not visible in the surface form of the word ‘kill’ in English, is overtly manifest in the morphosyntax of many languages. With more fine-grained distinctions between syntactic heads and their projections at our disposal, an ambiguity in sentences like ‘John opened the door for 5 minutes’ can now be accounted for in purely structural terms. Harley also presents case studies of the decompositions of ‘have’, ‘give’, and ‘get’. Surveying all of Fodor’s classical arguments against such an approach, Harley concludes that these objections can be successfully met.

Hinzen, taking a slightly different stance, argues that the debate for or against lexical decomposition nonetheless continues. He picks up a topic addressed by Pietroski as well, namely the relation between lexical decomposition in language and the atomicity of concepts in human thought. If language were a guide to the structure of concepts, the theory of syntax might be a theory of what structures our concepts have. If so, concepts that the computational system of language treats as atomic or structureless should be structureless at the level of thought as well. Hinzen argues that evidence for the syntactic decomposition of lexical verbs is very considerable; however, classical evidence leading to the conclusion that lexicalized concepts in general are not internally structured remains powerful as well. This presents us with a dilemma: Syntactic decomposition appears to be real, and we expect the relevant syntactic decompositions to be compositionally interpreted; but this conflicts with the classical evidence Fodor provides for lexical atomism in semantics. Hinzen suggests that only rethinking the architecture of grammar will address this problem.

The part closes with a survey by Pustejovsky of the approach to ‘co-composition’, a term that refers to the way in which compositional processes determined by the phrasal structure of an expression are supplemented by additional interpretive mechanisms at the interface between lexicon and syntactic structure. These concern productive ways in which meaning can be determined systematically even when the semantic types of particular ambiguous predicates are fixed. For example, Pustejovsky notes that the verb ‘throw’ has different senses in different grammatical contexts, in which it can have the senses of ‘propel’ (as in ‘Mary threw the ball to John’), or of ‘organize’ (as in ‘John threw a party’), or of ‘create’ (as in ‘Mary threw breakfast together quickly’). But even when such senses each correspond to unique semantic types that can as such enter compositional processes of function application, there is an abundance of cases in language where predicates are coerced to change their types, as in ‘Mary began the book’, where ‘begin’ selects an event rather than an object. Generative Lexicon Theory accounts for meaning determination in such cases by positing a structure of ‘Qualia’ inside lexical items, which act as operators shifting the types of predicates in systematic and compositional ways. The co-compositional approach also generalizes to a number of other otherwise problematic constructions, such as subjects interpreted as agentive when they normally are not, or the various senses that a verb such as ‘to open’ can have, depending on what kind of object it takes as its complement.

5 The Compositionality of Mind

Part V of the Handbook focuses on the psychology of concept combination. In psychology, concepts are those bodies of knowledge used in the processes underlying higher cognitive competencies, such as categorization, induction, or analogy-making (see Machery, 2009 for discussion). For about thirty years, psychologists have examined whether, in addition to the concepts that are permanently stored in long-term memory (e.g. dog, water, going to the dentist), people are able to produce new concepts on the fly by combining the concepts they already possess. Such a process is typically called ‘concept combination’. For instance, psychologists wonder whether people are able to combine their bodies of knowledge about Harvard graduates and about carpenters to create a body of knowledge about Harvard graduates who are carpenters (Kunda et al., 1990) that could be used to make inductive inferences about Harvard graduates who are carpenters or to explain their behaviour.

Part V reviews the findings about concept combination as well as the models that have been developed to explain them. The first two chapters, respectively by James Hampton and Martin Jönsson and by Edward Wisniewski and Jing Wu, review the most important findings about concept combination and propose two distinct models to account for them. The two following chapters focus on a well-known criticism of the field of concept combination in the psychology of concepts. Lila Gleitman, Andrew Connolly, and Sharon Armstrong argue that prototype theories of concepts are inadequate to explain how concepts combine, while Jesse Prinz defends the psychological research on concept combination against a range of criticisms. Finally, Edouard Machery and Lisa Lederer argue that psychologists have overlooked a range of ways in which concepts might combine in a fast and frugal manner.

In ‘Typicality and compositionality: the logic of combining vague concepts’, Hampton and Jönsson endorse the prototype theory of concepts, and they review a range of important findings suggesting that prototypes compose. It appears that people’s judgements about whether an object belongs to the intersection of two classes A and B do not depend on whether it is judged to belong to each of these two classes; rather, they are a function of its similarity to a prototype resulting from the combination of the prototypes of A and B. Furthermore, Hampton and Jönsson describe in detail the model of prototype combination developed by Hampton in an important series of articles starting in the 1980s. Finally, they discuss Connolly et al.’s (2007) recent work on concept combination, which has challenged the claim that prototypes combine (this work is also discussed in Gleitman and colleagues’, Prinz’s, and Machery and Lederer’s chapters).

Wisniewski and Wu’s chapter, ‘Emergency!!!! Challenges to a compositional understanding of noun–noun combinations’, examines how speakers interpret novel noun–noun combinations (e.g. ‘zebra football’ or ‘roller coaster dinner’) with a special focus on speakers of English and Chinese. Wisniewski and Wu show that speakers typically attribute to the members of the extension of novel noun–noun combinations (e.g. ‘zebra football’) ‘emergent properties’ (see also Hampton and Jönsson’s chapter)—viz. properties that are not attributed to the members of the extensions of the nouns (e.g. ‘zebra’ and ‘football’). They then review the model of concept combination developed by Wisniewski in a series of groundbreaking articles before showing that this model can account for the phenomenon of emergent features.

In ‘Can prototype representations support composition and decomposition?’ Gleitman, Connolly, and Armstrong challenge the prototype theory of concepts, arguing especially that it fails to predict how complex expressions such as ‘quacking ducks’ are interpreted. They review and elaborate on the findings by Connolly et al. (2007) that have been influential in recent debates about concept combination.

Prinz’s ‘Regaining composure: a defence of prototype compositionality’ defends the claim that prototypes combine against the philosophical objections developed by Jerry Fodor and the more empirical criticisms developed by Connolly, Gleitman, and Armstrong. While Fodor gives numerous examples of complex concepts (e.g. pink tennis balls silkscreened with portraits of Hungarian clowns) that do not seem to involve complex prototypes produced out of other prototypes, Prinz argues that Fodor’s argument can be blocked by distinguishing the claim that prototypes can be combined from the claim that prototypes are always combined. Appealing to the model of prototype combination developed in Furnishing the Mind (Prinz, 2002), which draws in part on Hampton’s research, he then shows how prototypes can be combined.

In ‘Simple heuristics for concept combination’, Machery and Lederer critically review three influential models of concept combination (Smith et al.’s (1988b), Hampton’s, and Costello and Keane’s). They note that these models have not paid much attention to the reasons why complex concepts are produced (to categorize, to draw an induction, etc.), and they propose that complex concepts might be produced differently depending on the context in which they are produced. They also note that many of the hypothesized processes of concept combination are complex and resource-intensive. By contrast, taking their cue from the Fast-and-Frugal-Heuristics research programme (Gigerenzer et al., 1999), Machery and Lederer argue that concept combination is likely to be underwritten by several distinct processes, each of which produces complex concepts in a fast and frugal way, and they describe several such processes.

6 Evolutionary and Communicative Success of Compositional Structures

Part VI is concerned with the evolution of compositional linguistic systems and the utility of compositional systems. On the basis of the communicative systems found in living primates, it is natural to speculate that the systems of signs used by our ancestor species during the evolution of hominins were either non-compositional (see, e.g., Cheney and Seyfarth, 1990 on the signs used by baboons) or had only a primitive form of compositionality (recent research by Arnold and Zuberbühler, 2006, suggests that putty-nosed monkeys can combine some sounds in a meaningful way; see also Ouattara et al., 2009, on Campbell’s monkeys; and see McGonigle and Chalmers, 2006 for general discussion). The compositionality of human languages raises the question of how compositionality evolved and of what benefits it brings about.

This part reviews the main controversies about the evolution of compositionality and introduces some new hypotheses about the utility of compositional systems. The first two chapters are mostly focused on the evolution of compositionality. Michael Arbib describes the holophrasis–compositionality debate for protolanguage, while Kenny Smith and Simon Kirby describe two possible mechanisms that explain the evolution of compositional languages, the first of which appeals to biological evolution and the second to cultural evolution. The following two chapters are concerned with the utility of compositional systems. Peter Pagin examines the benefits brought about by compositional linguistic systems, while Gerhard Schurz explains why people possess prototypes and combine them by appealing to the natural evolution of species and the cultural evolution of artefacts.

In ‘Compositionality and holophrasis: from action and perception through protolanguage to language’, Arbib argues that natural languages are not properly said to be compositional; rather, they have compositionality. While the meanings of component expressions contribute to the meaning of the expressions that contain them (e.g. sentences), they do not determine it entirely. He then moves on to discuss the status of compositionality in the hypothesized protolanguage. Protolanguage is the system of signs used by hominids before the evolution of natural languages. According to what Arbib calls ‘the compositional view’, protolanguages were made of words, but lacked syntax. Protolanguages were thus quite similar to pidgins, where words are merely juxtaposed. The evolution of compositional languages merely consisted in adding syntax to already existing words. By contrast, the ‘holophrastic view’, which is defended by Arbib, holds that in protolanguages communicative acts were unitary wholes that could not be decomposed into concatenated words.

Smith and Kirby’s chapter, ‘Compositionality and linguistic evolution’, compares two approaches to the evolution of compositionality—one that appeals to biological evolution, one that focuses on cultural evolution. They first critically discuss Pinker and Bloom’s (1990) appeal to biological evolution. They hypothesize instead that compositionality is socially learned, and they then review numerous models explaining how cultural evolution could have selected for compositional languages.

Pagin’s ‘Communication and the complexity of semantics’ begins by challenging the traditional idea that compositionality is required for a language to be learnable. This challenge immediately raises the following question: If the function of compositionality is not to make languages learnable, why are languages compositional? Pagin proposes that the main benefit of compositionality is to enable speakers to interpret complex expressions quickly and efficiently, while quick interpretation is needed for successful communication (see Machery and Lederer’s chapter in Part V for a related discussion of simplicity and speed). To support this hypothesis, he argues that compositional semantics tend to minimize computational complexity.

In ‘Prototypes and their composition from an evolutionary point of view’, Schurz explains why, given the (biological or cultural) evolution of species and artefacts, prototypes are an efficient way of representing classes, and he extends this argument to the combination of prototypes. Schurz’s argument for the existence and utility of prototypes should be read in conjunction with the discussion of prototypes in Part V (see, particularly, Hampton and Jönsson’s, Gleitman and colleagues’, and Prinz’s chapters).

7 Neural Models of Compositional Representation

In his programmatic article ‘On the proper treatment of connectionism’, Smolensky (1988a) made the provocative claim that connectionism does not merely provide us with a good biological model of information flow among single neurons or larger groups of neurons, but may well give explanations on the cognitive level of information processing. However, it took Fodor and Pylyshyn’s (1988) emphatic reply to define the agenda for a new debate—the debate between classicism and connectionism. It continues to this day. The crucial questions are: (i) What are the defining features of the ‘cognitive level’ of information processing? (ii) How close do connectionist architectures in principle come to fulfilling these features? (iii) Are the connectionist architectures in question really biologically plausible? And (iv) what is the surplus value that connectionist models contribute to the explanation of cognitive processes? A mere implementation of classical architectures would be too little. Fodor and Pylyshyn argue that information processes on the cognitive level have to be accounted for by reference to a representational structure that is compositional, productive, and systematic. Defenders of the connectionist perspective on cognition have since either denied or weakened one or more of those structural requirements, or they have tried to show how connectionist architectures might fulfil them.

In Chapter 27, Horgan introduces a non-classical notion of compositionality and argues that a dynamical-cognition framework satisfying non-classical compositionality might provide a foundation for cognitive science. The kind of compositionality Horgan has in mind is non-classical in at least two respects: First, compositional structure in the system of representations need not be tractably computable. Second, compositional structure does not require separately tokenable constituent-representations. Horgan’s notion of non-classical compositionality thus does not primarily contrast with the formal notion of compositionality used, for example, in formal semantics, where compositionality just implies the existence of a homomorphous mapping between syntax and semantics. It rather addresses something that one might call ‘Fodorian compositionality’, a cluster of ideas that involves not only the notion that the semantic value of a complex representation is a structure-dependent function of the semantic values of its constituents, but also the postulate that representational constituents be separately tokenable and that processes over representations be tractably computable. The dynamical-cognition framework treats cognition in terms of total occurrent cognitive states that are mathematically realized as points in a high-dimensional dynamical system. These mathematical points, in turn, are physically realized by total-activation states of a neural network with specific connection weights. Horgan’s dynamical-cognition framework opposes the classicist assumption that cognitive-state transitions conform to a tractably computable transition function over cognitive states. Horgan has in mind systematically content-sensitive cognitive-state transitions that accommodate lots of relevant information without explicitly representing it during cognitive processing.

A psycholinguistic test case for the controversy between classicism and connectionism, analysed by Penke in Chapter 28, is provided by the different processing models of linguistic inflection, such as past tense formation in English (Pinker, 1999). According to the classical symbolic view, morphologically complex word forms are structurally composed out of component parts by the application of a mental rule that combines a word stem (‘laugh’) with an affix (‘-ed’). Connectionist approaches deny that regular inflection is based on a compositional mental operation, assuming instead that regularly inflected forms are stored in an associative network structure. In the classical picture a dual mechanism is postulated for regular and irregular verb forms: composition by rule for the former and associative memory for the latter. The challenge for connectionism is to provide a unified model for both regular and irregular verb forms and so demonstrate its superiority over classicist models.
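The contrast between the two postulated routes can be caricatured in a few lines (a deliberately minimal sketch of our own; connectionist single-route models would instead handle both kinds of form in one associative network):

```python
# Toy dual-mechanism model of English past-tense formation: stored
# irregulars are retrieved associatively; regulars are composed by rule.
IRREGULARS = {"go": "went", "sing": "sang", "bring": "brought"}

def past_tense(stem: str) -> str:
    if stem in IRREGULARS:
        return IRREGULARS[stem]       # memory route: associative lookup
    return stem + "ed"                # rule route: compose stem + '-ed'

assert past_tense("laugh") == "laughed"   # rule-generated
assert past_tense("go") == "went"         # retrieved from memory
```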

In Chapter 29, Stewart and Eliasmith discuss the biological plausibility of recent proposals for the implementation of compositionality in local and distributed connectionist networks. In particular, Hummel and Holyoak’s (2003) LISA architecture and the neural blackboard architectures of van der Velde and de Kamps (2006) are reviewed. Stewart and Eliasmith then turn to their own model, which combines a vector symbolic architecture (VSA) with a neural engineering framework. The core idea of vector symbolic architectures, which goes back to Smolensky’s (1995b) Integrated Connectionist Symbolic Architecture, is to map a semantic structure into a vector algebra by a homomorphism. The vector algebra contains role and filler vectors. Filler vectors correspond to lexical concepts, and role vectors to thematic roles. An operation of binding is achieved by some kind of multiplication of a role vector and a filler vector and encodes the fact that a certain lexical concept fills a certain thematic role (e.g. the agent or theme role of an event). An operation of merging, typically vector addition, allows the generation of a complex role-filler structure in which various lexical concepts play different thematic roles. These operations of binding and merging are recursive and thus warrant productivity. The homomorphous mapping guarantees compositionality. The different VSAs mainly differ in the choice of the binding operation. While Smolensky uses tensor multiplication, Stewart and Eliasmith argue for the better biological plausibility of cyclic convolution. The model remains symbolic since a semantic constituent relation can be defined for the vector algebra using an operation of unbinding.
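The bind/merge/unbind cycle can be sketched with holographic reduced representations, the VSA variant in which binding is cyclic (circular) convolution and merging is vector addition (a toy illustration of our own; Stewart and Eliasmith’s model additionally realizes such vectors in populations of spiking neurons):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024                                   # vector dimensionality

def symbol():                              # random unit vector for a symbol
    v = rng.normal(0.0, 1.0, D)
    return v / np.linalg.norm(v)

def bind(a, b):                            # binding: circular convolution
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inv(a):                                # approximate inverse, for unbinding
    return np.concatenate(([a[0]], a[1:][::-1]))

AGENT, THEME, mary, ball = symbol(), symbol(), symbol(), symbol()

# Merge two role-filler bindings by addition: 'Mary (agent) ... ball (theme)'.
S = bind(AGENT, mary) + bind(THEME, ball)

# Unbinding yields a noisy copy of the filler; clean up by similarity.
probe = bind(S, inv(AGENT))
for name, v in [("mary", mary), ("ball", ball)]:
    print(name, round(float(probe @ v), 2))  # 'mary' scores high, 'ball' near 0
```

It is the recoverability of constituents via unbinding that licenses calling the scheme symbolic, even though every structure is encoded as a single distributed vector.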

The idea of neuro-emulative semantics is a radically different, non-symbolic proposal of how to achieve compositionality in connectionist networks. It is based on the mechanism of neural synchronization. Building on empirical evidence on object-related neural synchrony in the cortex and topologically structured cortical feature maps, presented by Engel and Maye in Chapter 30, Werning develops the idea of neuro-emulative semantics in Chapter 31 (see also Werning, 2003a, 2005a). The approach incorporates the Gestalt principles of psychology and uses oscillatory recurrent neural networks as a mathematical model. The semantics to be developed is structurally analogous to model-theoretical semantics. However, unlike model-theoretical semantics, it regards meanings as set-theoretical constructions not of denotations, but of their neural counterparts, their emulations. Objects are emulated by oscillatory network activity induced by bottom-up or top-down mechanisms. These oscillations are able to track and represent objects in the environment. The fact that objects have certain properties is emulated by the fact that those oscillations pertain to various cortical feature maps. On these maps, neurons encode properties of various attribute dimensions like colour, orientation, direction of movement, etc. Synchronous oscillatory activity provides a mechanism of binding and allows for the emulation of one and the same property bundle, whereas anti-synchronous oscillatory activity corresponds to the emulation of distinct property bundles and hence distinct objects. Exploiting the isomorphism between model-theoretical semantics and neuro-emulative semantics, compositionality theorems can be proven.

The Handbook concludes with a neurolinguistic outlook presented in Chapter 32 by Baggio, van Lambalgen, and Hagoort. Can we reformulate compositionality as a processing principle such that we will be in a position to test it empirically? Does the comprehension of sentences by humans really proceed in a compositional way? Baggio, van Lambalgen, and Hagoort take the main motivation for compositionality to be an easing of the burden of storage. They review a body of empirical data that includes neurolinguistic EEG experiments and involves, among other phenomena, semantic illusions, the difference between world knowledge and semantic knowledge, fictional discourse, semantic attraction, and coercion. To account for those data, Baggio, van Lambalgen, and Hagoort argue, it is often necessary to postulate an increase of storage if one wants to stick to compositionality as a processing principle. This, however, goes against the main motivation of compositionality. They concede that compositionality remains effective as an explanation of cases in which processing complexity increases due to syntactic factors only. However, it apparently falls short of accounting for situations in which complexity arises from interactions with the sentence or discourse context, perceptual cues, and stored knowledge.