Rebekah Baglini and Christopher Kennedy
This chapter investigates the relationship between adjectives and event structure by looking at properties of deverbal adjectives and deadjectival verbs. Although simple adjectives are not eventive, they nevertheless play an important role in matters of event structure, both in the way that they influence the eventive properties of verbs that they are derivationally related to, and in the way that an understanding of the scalar properties of adjectival meaning informs theorizing about eventive meanings. We show that adjectival gradability and verbal aspect, although often considered in isolation, are intimately related scalar phenomena. The structural properties of an adjectival scale determine the aspectual class of a derived event predicate. Similarly, the aspectual structure of a verb phrase constrains the scale structure of an adjectival participle. Our discussion focuses primarily on degree-based approaches to these phenomena, but we also consider alternative approaches based in a more articulated ontology for states.
This article describes the different types of temporal adverbial, interactions of adverbials and tense, and tense and adverbials in subordinate clauses. From a morphosyntactic view, there are six different types of temporal adverbial: an adverbial composed of a noun phrase (NP) and an adposition; an NP functioning as an adverbial; a sentential adverbial; a temporal adverbial clause; and adverbials based on adverbs and on adjectives, respectively. This morphosyntactic classification is not coextensive with a semantic classification. The article uses a fourfold semantic classification—positional adverbials, quantificational adverbials, adverbials of duration, and Extended-Now adverbials—that more or less follows the traditional ones (except for Extended-Now adverbials, a novel category).
Helen de Hoop and Joost Zwarts
Case has not received a lot of attention from formal semanticists, probably because formal semantics has mostly focused on languages with relatively sparse case systems. This article examines how formal tools are being used in the study of the meaning of case. Many semantic aspects of case lend themselves to such a treatment, among them argument structure, quantification, aspect, and space. This article looks at the application of formal semantics to case in the domains of argument structure and space. First, it considers work in which the central notions of grammatical function and noun phrase interpretation play an important role in relation to case marking. It then explores Keenan's (1989) semantic case theory, and de Hoop's (1992) and van Geenhoven's (1996) type-shifting approaches to case and voice alternations. Following Krifka (1992) and Kiparsky (1998), the article shows how the mereological approach can account for the semantics of partitive case in Finnish. The article concludes with a discussion of case and spatial structure.
This article presents an overview of the basic issues concerning the relationship between case, grammatical relations, and semantic roles such as agent and patient. In most approaches, semantic roles are directly linked to abstract grammatical relations for the core arguments of the clause. Cases are considered to be a surface expression of grammatical relations. All approaches that are concerned with the relationship between semantic roles and grammatical relations are able to capture the argument realisation of transitive verbs selecting highly potent agents and strongly affected patients such as ‘break’, ‘open’, or ‘hit’ in accusative languages. Approaches using role lists instead of semantic decompositions lack the means to cope with the large number of individual roles that are selected by the full range of verbs and with the reverse case pattern in ergative constructions. Accordingly, this article deals primarily with the relationship (or linking) between grammatical relations and semantic roles in different types of approaches. It also discusses role lists and role hierarchies, along with proto-roles and lexical semantic structures.
Categorial grammar predates Syntactic Structures by two decades. Characterized by the classification of expressions by recursively defined types, it is highly lexicalist or, as in the formulation pursued here, purely lexicalist. The chapter addresses continuity (concatenation) and discontinuity (interpolation), categorial syntactic structures as proof nets, Curry-Howard type-logical semantics, and complexity and acceptability.
Ronald W. Langacker
Research leading to the formulation of cognitive grammar began in the spring of 1976. On the American theoretical scene, it was the era of the “linguistics wars” between generative semantics and interpretive semantics. With generative semantics, cognitive grammar shares only the general vision of treating semantics, lexicon, and grammar in a unified way. Cognitive grammar is part of the wider movement that has come to be known as cognitive linguistics, which, in turn, belongs to the broad and diverse functionalist tradition. It is strongly functional, granted that the two basic functions of language are symbolic (allowing conceptualizations to be symbolized by sounds and gestures) and communicative/interactive. The symbolic function is directly manifested in the very architecture of cognitive grammar, which posits only symbolic structures for the description of lexicon, morphology, and syntax. In principle, cognitive grammar embraces phonology to the same extent as any other facet of linguistic structure. To date, however, there have been few attempts to articulate the framework's phonological pole or apply it descriptively.
Ronald W. Langacker
Cognitive Grammar (CG) is a particular version of cognitive linguistic theory within the broader movement of functional linguistics. It is a usage-based approach grounded in both cognition and social interaction. An independently justified conceptual semantics makes possible an account of grammar as consisting solely in assemblies of symbolic structures (form–meaning pairings).
Mentation does not proceed via the pursuit of random paths of thought, but instead by way of connections among ideas that are guided by certain types of associative principles. Since a primary function of language is to evoke thoughts in the minds of interlocutors, it is unsurprising that we find evidence of these associative principles at play in the manner in which language is structured and interpreted. This chapter provides a brief survey of some of the respects in which this is the case at the discourse and lexical levels. The data argue for a rich notion of event structure that makes crucial reference to different types of association, and illustrate how the importance of event structure in linguistic theory goes beyond merely accounting for the representation of events and the semantics of the words used to express them.
Henk J. Verkuyl
Compositionality concerns the computation of complex meanings at higher levels of structure on the basis of atomic meanings. This article is based on the conviction that the complex meaning of a phrase structure should be approached on the basis of the principle of compositionality, and considers two temporal domains—tense and aspect—in which linguists have a choice between a compositional and a non-compositional approach. It shows that H. Reichenbach's (1947) tense system suffers from not being compositional, and argues that a strictly compositional approach to Slavic aspectuality produces better results than competing non-compositional approaches advocated by most scholars of Slavic languages, mostly on the basis of informal semantics. The article also discusses L. A. te Winkel's binary system, compositionality in Russian, terminative imperfectivity, and durative perfectivity.
Toshiyuki Ogihara and Yael Sharvit
The English present tense does not exhibit a uniform behavior in all embedded environments. Its ability to receive a simultaneous reading in complement clauses of attitude verbs depends on the matrix tense. Likewise, in relative clauses, the present tense is capable of receiving a simultaneous reading if the matrix tense is future, but not if it is past. However, there are languages (for example, Japanese and Hebrew) where the present tense receives (or can receive) a simultaneous reading in complement clauses of attitude verbs, even when the matrix tense is past. There are also languages (such as Japanese, but not Hebrew) where the present tense can receive a simultaneous reading in relative clauses, even when the matrix tense is past. This article explores the nature of these language-internal and cross-linguistic variations, and the success (or lack thereof) of two particular theories in accounting for them: the theory we refer to as the ULC-based theory (ULC stands for Upper Limit Constraint) and the copy-based theory. It then proposes a theory that borrows insights from both of these theories to account for embedded tenses.
This chapter examines the syntactic decompositional view of event structure. On this view, the event is composed of distinct syntactic heads that correspond to its meaning ingredients. The chapter critically reviews the various arguments presented in the literature for a decompositional analysis of pairs of verbs that differ roughly in that one of them has one more argument than the other. It focuses on the inchoative alternation, comparing it to the Japanese and Hungarian causative alternations. The chapter shows that these alternations differ from one another in important respects, and that only the Japanese causative alternation deserves a syntactic decompositional treatment. The chapter thus contributes a critical evaluation of the scope and limitations of syntactic representations of lexical decomposition.
This chapter explores the relationship between constrained semantic representations of events and the structured syntactic representations that express them. I show that these representations track each other systematically, and that argument structure generalizations emerge in lockstep with these structures. I therefore propose a system in which those generalizations follow from the following general principles of structural interpretation: (i) embedding corresponds to the cause/leads-to relation; (ii) each subevental structure is potentially related to a participant NP; (iii) event recursion is limited to structures with at most one dynamic predication per event phase. The maximal subevental structure consists of a stative predication embedding a dynamic one, and the dynamic one in turn embedding a stative one. This structure and its proper subsets exhaust the event types built by the grammar. These principles ensure the relative prominence of the different argument positions as well as specific entailments for the different positions.
Henk J. Verkuyl
What is the real nature of the aspectual division between perfective and imperfective as revealed by the well-known in/for-test? The answer is founded on the idea that this division between completion and incompletion mirrors our cognitive capacity to shift between discreteness and continuity, as expressed in the number systems ℕ and ℝ. To get at the real contribution of a verb to aspectual information, the first step is to determine the basic atemporal building block that makes a tenseless verb stative or non-stative. For this, verbhood is to be understood aspectually in a very strict way, abstracting from the contribution of arguments. It follows that one has to get ‘below’ event structure in order to see why the in/for-test works the way it does (or, in some cases, does not).
This chapter reviews recent proposals about how the meanings of evidentials should be captured within formal semantic theories, which attempt to model compositional meaning in a way that gives insight into possible semantic variation. The literature surveyed addresses three questions. First, how should the core meaning of evidential morphemes be characterized, and what sorts of information can be inferred from their use in particular contexts, and hence need not be specified as part of the core meaning? Second, can the way that evidentials compose with the rest of the sentence be captured using existing formal tools, or do evidentials have semantic properties that motivate additions to our semantic toolkit? Third, is there a limit to the range of possible evidential meanings? If so, how can a formal semantic theory constrain the possible meanings?
Linguists and philosophers since Aristotle have attempted to reduce natural language semantics in general, and the semantics of eventualities in particular, to a ‘language of mind’, expressed in terms of various collections of underlying language-independent primitive concepts. While such systems have proved insightful enough to suggest that such a universal conceptual representation is in some sense psychologically real, the primitive relations proposed, based on oppositions like agent-patient, event-state, etc., have remained incompletely convincing. This chapter proposes that the primitive concepts of the language of mind are ‘hidden’, or latent, and must be discovered automatically: by applying automatic syntactic parsers and machine learning to the vast amounts of text made available by the internet, consistent patterns of entailment can be detected and mined for a form- and language-independent semantic representation language for natural language semantics. The representations involved combine a distributional representation of ambiguity with a language of logical form.
Linguists started to handle the semantics of linguistic constructions with the proper generality only in the twentieth century. Leonard Bloomfield approaches the notion of a construction via the notion of a constituent. A “constituent” of a linguistic form e is a linguistic form that occurs in e and also in some other linguistic form. It is an “immediate constituent” of e if it appears at the first level in the analysis of the form into ultimate constituents. A “construction” combines two or more linguistic forms as immediate constituents of a more complex form. Bloomfield's notion of a “pronounceable” string of sounds is purely phonetic. So the entire work of distinguishing grammatical from ungrammatical expressions of a language L rests on the question of whether they are “meaningful.”
Lisa deMena Travis
This chapter explores the nature of Inner Aspect by investigating languages with non-culminating accomplishments. In these languages, the unmarked way of saying ‘x killed y’ does not entail that y is dead. Intriguingly, in many of these languages, the morphology needed to encode entailment brings with it other meanings such as extra effort, non-intentionality, or suddenness. First, a syntactic account of the role of this morpheme posits a morphologically realized endpoint in Inner Aspect within the predicate. This account is then compared to an alternate view that concentrates on the semantics of the construction and the modal meaning of the morpheme. Finally, it is argued that phonological evidence must also be taken into consideration in any further study. The goal of the chapter is not to provide a solution, but to raise awareness of the relevant issues.
This article explores the relation between language and thought. The term ‘thought’ is an abstraction. It has its uses: for many philosophical purposes one may simply want to abstract away from the linguistic forms that structure propositions, and concentrate on their content alone. But that should not mislead us into believing in an ontology of such entities as ‘thoughts’ – quite apart from the fact that, if we posit such entities, our account of them will not be generative and will be empirically unconstrained. Where the content of forms of thought that have a systematic semantics corresponds to a so-called grammatical meaning – meaning derived from the apparatus of Merge, phasing, and categorization – minimalist inquiry is a way of investigating thought, with syntax–semantics alignment as a relevant heuristic idea. Having the computational system of language in this sense equates with having a ‘language of thought’, with externalization being a derivative affair, as independent arguments suggest. Thus, a somewhat radical ‘Whorfian’ perspective on the relation of language and thought is developed, but it is a Whorfianism without the linguistic-relativity bit.
This chapter discusses a number of developmental disorders that impact language acquisition, and their possible relevance to understanding how language is typically acquired. The chapter begins with a discussion of whether language can be selectively impaired relative to general cognitive abilities, and whether it can be selectively spared. The second half of the chapter discusses how exactly language does and does not “go wrong.” The topics include the relevance of “deviance” and whether there is any evidence for it, and a discussion of the critical importance of both cross-disorder comparisons of the same linguistic phenomena, and of cross-linguistic comparisons of children with the same disorder.
This chapter explores the interpretation of compounds in terms of the framework of lexical semantic analysis developed in Lieber (2004). That work offered an analysis of typical English root compounds such as dog bed, synthetic compounds such as truck driver, and coordinative compounds such as producer-director, all of which are arguably endocentric. The chapter is organized as follows. Section 5.1 gives a brief overview of the system of lexical semantic representation developed in Lieber (2004). Section 5.2 extends the treatment of the semantic body of lexical items beyond that given in Lieber (2004), considering the relationship between the skeleton and the body on a cross-linguistic basis. It shows that the nature of the semantic body is critical to the range of interpretation available to any given compound. Section 5.3 adopts the system of classification for compounds developed in Bisetto and Scalise (2005, and this volume), and shows how it can be applied in terms of the current system. Section 5.4 offers specific analyses of several different types of compounds, from which will emerge the conclusion in Section 5.5 that exocentricity cannot be treated as a single unified phenomenon. Section 5.6 focuses on a kind of compounding that is nearly unattested in English, but is quite productive in Chinese, Japanese, and a number of other languages, namely verb–verb compounds.