
PRINTED FROM OXFORD HANDBOOKS ONLINE ( © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).



Abstract and Keywords

This handbook aims at offering an authoritative and state-of-the-art survey of current approaches to the analysis of human languages, serving as a source of reference for scholars and graduate students. The main objective of the handbook is to provide the reader with a convenient means of comparing and evaluating the main approaches that exist in contemporary linguistics. Each of the chapters is devoted to one particular approach, theory, model, program, or framework of linguistics.

Keywords: explanation, morphology, phonology, pragmatics, semantics, syntax, typology, universal

Like the other volumes of the Oxford Handbook in Linguistics series, the present volume aims at offering “an authoritative and state-of-the-art survey of current thinking and knowledge in a particular field” to serve as a source of reference for scholars and graduate students. Its format, however, differs from that of most other volumes of the series. The volume does not really have the internal structure that one might expect a handbook to have: Rather than grouping the chapters according to a catalog of more general themes, the table of contents has the format of a “shallow” taxonomy, simply listing the chapter titles and contributors. The editors have given quite some thought to the question of how the various chapters should be arranged and presented in a volume of this kind. In the end they decided to arrange the chapters simply in alphabetical order of the first key word figuring in the chapter title, for the following reason: Current linguistic analysis has turned into an extremely complex field, and imposing a rigid classification of theoretical concepts and orientations has become increasingly difficult and controversial. The editors therefore came to the conclusion that it would be best to leave it to the reader to find his or her own way in comparing and relating the chapters to one another. We are aware that this procedure is not really the one expected from a handbook-type treatment, but we believe that it suggests itself on the basis of the nature of the volume.

A major objective that the editors had in mind when embarking on the handbook project was to give those scholars who were responsible for, or are prominently involved in, the development of a given approach, program, or theory a chance to describe and promulgate their work. Another objective was that, rather than offering a limited selection of mainstream lines of linguistics, we wanted to expose the reader to a broad range of theoretical discussions. To this end, the reader will find strongly contrasting perspectives on analyzing syntax, as they surface, for example, in the chapters by Boeckx at one end and O’Grady at the other, or on accounting for typological data, as they can be found, for example, in the chapters by Baker and Van Valin at one end and those by Bickel and Haspelmath at the other.

(p. 2) In accordance with the general theme of this volume, authors tend to emphasize what is common to human languages across genetic and geographical boundaries and how the commonalities are best to be accounted for in linguistic analysis. The editors consider it important, however, to also draw the reader’s attention to areas where languages differ from one another, and they decided to devote one chapter to linguistic relativity and the effects it might have on purportedly non-linguistic cognition (Pederson, this volume).

In concluding, we wish to make it clear that a student looking for guidance on how to analyze a given language may be disappointed when consulting this book, since its concern is not primarily with offering means of analysis but rather with surveying ‘models’ that may be of help in finding or developing the right framework for analyzing a language or set of linguistic data.

1.1 Goals

The main goal of this volume is thus to provide the student of language with alternatives that have been proposed in contemporary linguistics for analyzing and understanding the structure of human languages. To this end, the authors were confronted with the following questions, which were meant to provide guidelines in the preparation of the chapters:

  (a) How can the main goals of your model be summarized?

  (b) What are the central questions that linguistic science should pursue in the study of language?

  (c) What kinds of categories are distinguished?

  (d) What is the relation between lexicon, morphology, syntax, semantics, pragmatics, and phonology?

  (e) How is the interaction between cognition and grammar defined?

  (f) What counts as evidence in your model?

  (g) How does your model account for typological diversity and universal features of human languages?

  (h) How is the distinction synchrony vs. diachrony dealt with?

  (i) Does your model take sociolinguistic phenomena into account?

  (j) How does your model relate to studies of acquisition and to learning theory?

  (k) How does your model generally relate to variation?

  (l) How does your model deal with usage data?

  (m) What kind of explanations does your model offer?

  (n) How does your model relate to alternative models?

For good reasons, the authors of this volume highlight the potential that their work offers to the student of language or linguistics, and are therefore less concerned with (p. 3) areas where their work offers less satisfactory or no solutions. Accordingly, the way and the extent to which the questions are addressed in the following chapters differ greatly from one chapter to another. To be sure, not all of the questions are relevant to what a particular framework of linguistic analysis is about; hence, such questions are ignored by the authors concerned. There are also authors who relate their framework explicitly to the catalogue of questions and simply admit that their work has scope only over a limited set of linguistic phenomena, or that it does not provide meaningful answers to specific questions. For example, Hudson (this volume) admits that his theory of Word Grammar has research gaps in areas such as phonology, language change, metaphor, and typology; Van Valin (this volume) observes that there is no theory of phonology related to Role and Reference Grammar and that work on morphology is in its initial stages; and Baker (this volume) notes that extending the kind of questions that he is concerned with in Formal Generative Typology to the domains of phonology and the lexicon would simply be outside his expertise.

Some of the questions received relatively little attention. This applies in particular to the question of what counts as evidence in a given model. In approaches relying largely or entirely on quantitative data, though, such as corpus linguistics or probabilistic linguistics (Biber and Bod, this volume), there is a clear answer to this question. Thus, Biber says: “Considered within the larger context of quantitative social science research, the major strengths of the corpus-based approach are its high reliability and external validity.”

The main objective of this handbook is to have current influential approaches to linguistic analysis represented and to provide the reader with a convenient means of comparing and evaluating the various approaches. To this end, the editors aimed at reserving one chapter for each of the approaches. In a few cases, however, it turned out to be desirable to have more than one chapter devoted to one and the same approach in order to take account of contrasting orientations characterizing the relevant approach. Accordingly, Optimality Theory is represented with chapters on phonology (Gouskova) on the one hand and on grammatical categories (de Swart and Zwarts) on the other, and the Chomskian tradition of linguistics is represented not only with a general chapter on language-internal analysis (Boeckx) but also with chapters highlighting its potential for dealing with typological diversity (Baker) and for analyzing the cartography of syntactic structures (Cinque and Rizzi), respectively.

1.2 Approach, Framework, Model, Program, or Theory?

How to refer to one’s work: Does it qualify as an “approach,” a “framework,” a “model,” a “program,” a “theory,” or something else? The decisions made differ greatly from one author to another, depending on the goals underlying their work. Functional (p. 4) Discourse Grammar, Lexical-Functional Grammar, and others are theories (Asudeh and Toivonen, and Hengeveld and Mackenzie, this volume); Minimalism is a program (Boeckx); Natural Semantic Metalanguage is an approach (Goddard). But perhaps more importantly, one and the same author may refer to his or her work as a “model” in some contexts, as an “approach” in other contexts, and as a “framework” in still other contexts. More generally, authors with a generativist orientation tend to phrase their work in terms of a theory, and for equally good reasons, other linguists avoid this term; among quite a number of linguists with a functionalist orientation, there is some reluctance to recognize “theory” of any kind as being of use in doing linguistics.

The problem with the terminology is that there is not much agreement across the various schools on how to define these terms. Dryer (2006a: 28–9) says that “[t]he notion of theory widely assumed in formal linguistics is essentially equivalent to that of a metalanguage for describing languages. Providing an analysis of a particular set of data within a formal theory involves providing a description of that data within the metalanguage that constitutes that theory.” But Haspelmath (this volume) uses a similar definition for “framework,” characterized by him as a sophisticated and complex metalanguage for linguistic description that is intended to work for any language.

In addition to the diversity just sketched there is also some range of diversity in what should be the main goals of linguistic analysis. The main goal of Functional Discourse Grammar is to give an account of morphosyntactically and phonologically codified phenomena in languages (Hengeveld and Mackenzie, this volume), and Langacker (this volume) states that for Cognitive Grammar the goal is to describe the structure of particular languages and develop a general framework allowing the optimal description of any language. For others again, the declared goal is to explain the structure of language (e.g. Hudson, this volume), or to understand the cognitive organization of language (Bybee and Beckner, this volume).

1.3 Orientations

There is no shortage of classifications in the relevant literature proposing groupings and cleavages among the various approaches to linguistic analysis; the reader is referred to relevant works such as Newmeyer (1998), Darnell et al. (1999), Butler (2008), etc., for information. More generally, linguistic approaches tend to be divided into generativist (or formalist) and functionalist ones, and when we submitted a proposal for the present book to the publisher, an anonymous reviewer suggested that “[s]omething could be said about the mutual antipathy that seems to exist between the two camps. Some formalists are dismissive of the functionalist approach, and some functionalists have a variety of negative feelings about formalism including in some cases an almost pathological fear of even drawing a tree diagram.” An obvious way of structuring this volume might therefore have been to present the chapters in accordance with such a divide.

(p. 5) Our reasons for not adopting such a procedure are the following. First, we believe that the overall goals of linguists are essentially the same, namely understanding what languages are about, and how they can best be described and explained. Second, our main concern is not with differences in linguistic methodology but rather with finding answers to the questions listed above. And third, it would seem that this “divide” is gradually losing much of the significance it once had.

There is agreement neither on how the two main kinds of approaches should be referred to nor on what their distinguishing properties are. For reasons given in Newmeyer (1998: 7–11) we will refer to what are frequently called formal or formalist approaches as generativist ones and retain the widely accepted term functionalist for the second kind of approaches or orientation, even if we do not know whether the term “generativist” would really be accepted as a synonym for “formal,” for example by those doing work in representational frameworks outside of the Chomskian Principles and Parameters or Minimalist traditions. The term “functionalist” is seemingly less controversial for the second kind of approaches, but the problems with this term are of a different nature: What is commonly subsumed under this label includes such a wide range of directions and schools that some feel tempted to subsume anything that excludes a generativist perspective under this label.

This divide between two basic orientations is associated with a range of contrasting perspectives; it surfaces in a number of antagonisms that have been pointed out in the relevant literature. One distinguishing feature is that for many generativists—that is, by no means for all—a central task for linguists is to characterize the formal relationships among grammatical elements largely independent of some characterization of the semantic and pragmatic properties of those elements. On the functionalist view, by contrast, language is explained with reference to the functions it is argued to serve. For most linguists with a functionalist orientation, language is foremost an instrument for communication, and the following claim made by Simon Dik (1986: 21) more than two decades ago is still endorsed by many students of language: “The primary aim of natural languages is the establishment of inter-human communication; other aims are either secondary or derived.” This view contrasts with that prominent among linguists with a generativist orientation; for Chomsky (1980: 239), human language “is a system for free expression of thought, essentially independent of stimulus control, need-satisfaction or instrumental purpose.”

In defense of the former view one might argue, as has in fact been done (e.g. Nuyts 1993), that many forms of presumed non-communicative behavior, such as self-talk, involve the same kind of mechanisms as linguistic communication and can be accounted for with reference to the latter. But one can argue in much the same way that language is a tool of thought, that any linguistic communication presupposes thought, or cognition, and, hence, that communication is derivative of the latter.

(p. 6) Another area where contrasting opinions can be found is the following: On the one hand there are approaches relying on the notion of Universal Grammar (UG) and an autonomous system of generative rules (e.g. Chomsky 1995); on the other hand there are approaches that do without any form of universal grammar or the assumption that there is something like grammar and rules, arguing, as is done for example in the emergentist approach of O’Grady (this volume), that language evolves as a product of efficient processing. This raises the question of what the ontological status of a “rule” is or should be, whether rules are really required in linguistic analysis or whether “rule-like behavior” may be no more than, for example, a side-effect of maximizing probability, as is argued for by students of probabilistic linguistics (see Bod, this volume).

Another distinction concerns the question of whether linguistic analysis (and linguistic explanation) should focus on language knowledge or on language use. The central concern of generative grammar is with what constitutes knowledge of language, how this knowledge is acquired, and how it is put to use (Chomsky 1986a: 3). But there is also the contrasting view according to which the main concern of the linguist should be with understanding the structure of languages; as Bybee and Beckner (this volume) argue, “the units and structure of language emerge out of specific communicative events.”

Finally, there is also the question of what should be the most central domain of linguistic analysis. In an attempt to summarize the main contrasting positions on this issue, Butler (2003a: 27) concludes that “a functional approach to language would place semantics/pragmatics at the very heart of the model, thus differing radically from formal approaches, which consider syntax as central.” This is echoed, for example, in the conception of language in relevance theory (Yus, this volume), or in that of Systemic Functional Grammar, where language is viewed as meaning potential and all strata of the linguistic system contribute to the making of meaning (Caffarel-Cayron, this volume). As the following chapters suggest, however, the answer to this question is not always unambiguous.

This is but a small catalog of distinguishing properties that have been mentioned. There are many other contrasting positions in addition; suffice it to mention that linguists working in the tradition of Michael Halliday emphasize that “[t]he image of language as rule is manifested in formal linguistics; the image of language as resource is manifested in functional linguistics” (Caffarel-Cayron, this volume), and one might also note that there is a remarkable pragmatic difference in the role played by the central exponents of the two ‘camps’, characterized by Newmeyer thus:

For better or worse [… ], Chomsky is looked upon as the pied piper by the majority of generative linguists. No functionalist has managed to play the pipes nearly as enticingly to the graduate students of Hamlin.

(Newmeyer 1998: 13)

That there is a fundamental divide in current linguistics on what language is, or is about, is undeniable, but this divide is far from watertight; rather, it is leaky and—as far as recent developments in general linguistics suggest—leakiness is increasing. (p. 7) First, neither students of approaches such as Relational Grammar, Lexical-Functional Grammar, Head-Driven Phrase Structure Grammar, etc. nor many other students working on formal syntax would necessarily look upon Chomsky as the pied piper. Second, each of the two “camps” is associated with a wide range of different approaches and, as will become obvious in the following chapters, there are considerable areas of overlap between the two. As Van Valin (2000: 335–6) maintains, many of the ideas and methodologies subsumed under the heading “functional linguistics” are more distant from each other than they are from many formalist ideas. If one were to use his classification into purely formalist, structural-functionalist, and purely functionalist as a basis, then Role and Reference Grammar is located in the intermediate category of structural-functionalist approaches (Van Valin 1993a: 2). And according to Hengeveld and Mackenzie (this volume), Functional Discourse Grammar is located halfway between radical formal and radical functionalist approaches.

That the degree of formalization is not a significant distinguishing feature between generativist and functionalist approaches is also suggested by recent developments in what is commonly referred to as construction grammar: Whereas some directions within this general research paradigm, such as probabilistic linguistics (Bod, this volume) and Embodied Construction Grammar (Feldman et al., this volume), are highly formalized, others, such as Radical Construction Grammar (Croft 2001) or Cognitive Grammar (Langacker, this volume), are distinctly less so.

Third, there are also commonalities among differing approaches. Many of the approaches, if not all, are concerned—in some way or other—with searching for the most appropriate, economic, or most elegant way of analyzing language structure. For example, when Langacker (this volume) concludes that Cognitive Grammar “shares with generative grammar the goal of explicitly describing language structure,” this also applies to many other approaches across the “divide.” And all approaches have some typological basis, that is, they rest on generalizations about languages across genetic phyla and continents, even if the role played by typology varies considerably from one approach to another (see Baker, this volume, for discussion). And finally, in spite of all the specialized terminologies that characterize individual approaches, there is a common core of technical vocabulary figuring in many different theoretical frameworks. Terms such as sentence, verb, noun, determiner, agreement, passive, tense, aspect, negation, complement, voice, subordination, relative clause, etc. belong to the technical vocabulary of most approaches.

While linguists across different theoretical orientations thus in fact share a large range of technical vocabulary, one has to be aware that there are also many contrasting definitions and uses of one and the same term, reflecting alternative theoretical orientations. When students of the Natural Semantic Metalanguage approach discuss issues of “universal grammar” and “language universals” (Goddard, this volume), the theoretical assumptions underlying this usage are fairly different from those that students working in the Chomskian tradition make (see e.g. Baker, this volume). And much the same applies to a number of other terms; what is defined in Lexical-Functional Grammar theory as a “functional constraint” (cf. Asudeh and Toivonen, (p. 8) this volume) has little in common with the use of the same term in many functionalist approaches. Conversely, there are also quite a number of cases where one and the same general linguistic phenomenon is referred to by different terms in the various approaches—what is called a “head” in many schools of linguistics corresponds to the “regent” in dependency theory, and the notion “subcategorization” corresponds in a number of ways to what in other traditions would be referred to as “valency” (Ágel and Fischer, this volume). Such differences are far from arbitrary; rather, they are indicative of the diversity of theoretical concepts that are the subject matter of the following chapters.

1.4 Locating Linguistic Analysis

There is consensus among most authors of this volume that linguistics is an autonomous discipline which requires its own theoretical foundation, methodological apparatus, and discipline-specific set of analytical techniques. Nevertheless, there are also authors arguing that linguistics is related to some other discipline in a principled way. Among the disciplines that are held to be particularly closely related to linguistics, especially in some more recent works, biology occupies a prominent position. Boeckx (this volume), for example, argues that “the generative enterprise is firmly grounded in biology” and, from a different perspective, Givón (this volume) emphasizes the analogies that exist between linguistics and biology; language diachrony, he argues, recapitulates many general features of biological evolution: Both abide by four principles of developmental control, namely graduality of change, adaptive motivation, terminal addition (of new structures to older ones), and local causation (with global consequences). Note also that Feldman et al. (this volume) claim that the correct linguistic analysis ultimately depends on evidence from biology, psychology, and other disciplines.

Up until the early 20th century, if not later, a common practice in linguistics was to use Latin grammar as a model for describing other languages, including non-European languages, and one of the major achievements of structuralism was that it freed linguistics from this straitjacket, making it possible to analyze each language in its own right. It is now argued by some modern linguists that after the 1950s the Latinist model was replaced in some schools of linguistics by an English model, in that the kinds of categorization used to describe grammatical structures found in the languages across the world were biased in favor of the categories of English; cf. Chomsky’s (1981: 6) assertion that “[a] great deal can be learned about UG from the study of a single language”. This is an issue that also surfaces in some of the chapters of this volume; Van Valin illustrates the problem with the following example:

[…] theories starting from English and other familiar Indo-European languages often take the notion of subject for granted, whereas for one that starts from (p. 9) syntactically ergative and Philippine languages, this is not the case, and the notion of subject as a theoretical construct is called seriously into question.

(Van Valin, this volume)

But then the question arises of what should be one’s template or templates in deciding on how language structures should be analyzed. A somewhat extreme perspective, one that is biased neither in favor of any theoretical presuppositions nor in favor of any specific language or group of languages, is suggested by Haspelmath:

The idea that a single uniform framework could be designed that naturally accommodates all languages is totally utopian at the moment. So instead of fitting a language into the Procrustean bed of an existing framework, we should liberate ourselves from the frameworks and describe languages in their own terms.

(Haspelmath, this volume)

1.5 Analogies Used for Understanding Linguistic Phenomena

Metaphors and other analogical figures provide convenient means for demonstrating, illustrating, or understanding salient features of one’s own road to linguistic analysis as against alternative roads. A paradigm example is provided by Newmeyer (1998: 161–2), who offers a couple of relevant analogies to describe the status of internal explanations for autonomous syntax. One relates to bodily organs, such as the liver; the other concerns the game of chess. Like the principles of generative syntax, he observes, those of chess form an autonomous system: Through a mechanical application of these principles, every “grammatical” game of chess can be generated. But he also observes that the autonomy of this game does not exclude the possibility that aspects of the system were motivated functionally. One kind of functional motivation can be seen in the aims of the original developers and the influence that players may have exerted on the rules of the game; another concerns the players who, subject to the rules of the game, have free choice of which pieces to choose and where to move them. Nevertheless, such factors are irrelevant to the autonomy of chess, and he concludes:

By the same reasoning, the autonomy of syntax is not challenged by the fact that external factors may have affected the grammar of some language or by the fact that a speaker of a language can choose what to say at a particular time. The only issue, as far as the autonomy of syntax is concerned, is whether one’s syntactic competence incorporates such external motivating factors. As we have seen, it does not do so. In short, the autonomy of syntax maintains that as a synchronic system, grammatical principles have an internal algebra. This fact, however, does not exclude the possibility that pressure from outside the system might lead to a changed internal algebra.

(Newmeyer 1998: 161)

(p. 10) One may add that this does not conclude the list of analogical features shared by the two kinds of phenomena compared; for example, like chess, language is a social institution, created by humans for humans. Newmeyer’s primary concern is with the system and the ‘internal algebra’ of the principles underlying the system, including the competence of the persons concerned. But there are a number of alternative perspectives that one may adopt in analyzing such institutions, and each of these perspectives is associated with a different set of questions. Two possible alternatives are hinted at by Newmeyer. One of them would invite questions such as the following: Who designed the institution, and why was it designed in the first place? How, or to what extent, does the present design of the institution reflect the motivations of those who designed it? The other perspective would concern questions such as the following: What do people do with the institution? What are the aims and purposes for using it? And under what circumstances do they use it or not use it?

Such questions suggest that there are at least two contrasting ways of analyzing such institutions: One may either highlight their internal structure, the principles on which they are based, and the knowledge that people have about these institutions, or one may focus on those who developed and/or use these institutions. The latter perspective is found especially but not only among those following a functionalist orientation. The analogies favored by such scholars are of a different nature: Rather than games, body parts, or products of “architecture” (cf. Jackendoff, Culicover, this volume), they use analogies highlighting the role of the language user or processor, who may be likened to an architect or builder, as reflected in Hagège’s (1993) metaphor of the “language builder,” or a craftsman, as in O’Grady’s emergentist framework (2005, and this volume), where the native speaker is portrayed as a “language carpenter” designing sentences by combining lexical items.

Perhaps the most common metaphorical vehicle drawn on in linguistics is that of a biological phenomenon, namely that of trees: Both in diachronic and synchronic linguistics, the tree has provided a convenient template for describing and understanding taxonomic relationships. Throughout the history of linguistics, tree diagrams have been recruited to represent patterns of genetic relationship among languages, syntactic structures, and other phenomena. One issue that has received some attention in more recent discussions, reflected in the present volume, is whether or not tree branchings should necessarily be binary. But there are also authors doing without tree models; Hudson (this volume), for example, prefers to represent syntactic sentence structure as a network rather than in terms of any kind of tree structure.

1.6 Domains of Language Structure

Roughly speaking, it would be possible to classify approaches on the basis of which domain or domains of language structure they are most centrally concerned with. (p. 11) But one question here is which domains are to be distinguished in the first place. The ones most commonly appearing in the following chapters are phonology, morphology, syntax, semantics, the lexicon, and pragmatics, even if not all of them are recognized by all scholars as being significant domains of grammar.

Neither phonetics nor phonology had received appropriate coverage in the first edition of this handbook (Heine and Narrog 2010), but both are now better represented in the volume. Experimental phonetics is discussed in Beddor’s chapter, which focuses on the relation between cognitive representations and the physical instantiation of speech. Incorporating autosegmental representations, the chapter by Paster then presents an approach to phonology that is based on rules and rule ordering rather than on constraints; the main goal of the chapter is to provide an overview of a derivational model of phonology.

One of the domains that has attracted the interest of linguists in the course of the last fifty years perhaps more than others is syntax. But should syntax be given a privileged status in analyzing language structure, as it is, for example, in the Minimalist Program and other approaches framed in the Chomskian tradition, or should it be seen as functioning “in the grammar not as the fundamental generative mechanism, but rather as an intermediate stage in the mapping between meaning and sound” (Jackendoff, this volume), or as being derivative of cognition, as is argued for example in some models of cognitive grammar (see e.g. Langacker, this volume), or as being a product of discourse pragmatic forces, as is suggested by some functionalist linguists (cf. Givón 1979)? And should all syntactically sensitive phenomena of language structure—for example constituency on the one hand and grammatical functions on the other—be treated in one and the same domain, as is done in some of the syntactic approaches discussed in this volume, or should there be, for example, two separate structures (c-structure vs. f-structure), as students of Lexical-Functional Grammar propose (Asudeh and Toivonen, this volume)?

A number of students of grammar do recognize syntax as a distinct domain but do not attribute any central role to it. Rather than syntactic machinery, Givón (this volume) sees a well-coded lexicon together with some rudimentary combinatorial rules, as can be observed in pre-grammatical pidgin and other forms of communication, as more essential for understanding and analyzing language structure. In other directions of linguistics again, syntax is not treated as a distinct domain of grammar at all. In particular, some linguists with a functionalist orientation argue that syntactic and morphological phenomena form an inextricable unit, referred to as “morphosyntax.”

But more than syntax, morphology has been the subject of contrasting perspectives. In some schools of linguistics no distinct component or level of morphology is distinguished. Morphology has at times been dubbed “the Poland of linguistics”: some linguists do not consider it a relevant subdiscipline at all, treating it rather as a component of some other domain, whether that be syntax, phonology, or the lexicon. There is the view, for example, that morphology is included in the same comprehensive representational system as syntax (e.g. Baker, this volume), even if this view is not shared by many others. Spencer and Zwicky (1988a: 1), in contrast, view morphology as being at the conceptual center of linguistics. That morphology is a distinct domain and subdiscipline of linguistics is also maintained in other approaches discussed in this volume, such as Word Grammar (Hudson, this volume), and for others again, morphology “is the grammar of a natural language at the word level”, as Booij (this volume; see also Booij 2007) puts it.

And there is also a wide range of different opinions on the place of the lexicon in grammar. Some scholars would consider the lexicon to be only of marginal concern for analyzing grammar, or treat the lexicon and grammar as mutually exclusive phenomena (cf. Croft 2007b: 339). Others again attribute core syntactic properties to the lexicon. According to adherents of dependency grammar and valency theory, an essential part of grammar is located in the lexicon, “in the potential of lexemes for connexion, junction, transfer, and valency” (Ágel and Fischer, this volume; see also Hudson, this volume), and Categorial Grammar is, as Morrill (this volume) puts it, “highly lexicalist; in the ideal case, purely lexicalist.” Passivization is widely held to be an operation to be located in syntax; but in Lexical-Functional Grammar (LFG) or Head-Driven Phrase Structure Grammar (HPSG) it is treated as a rule that converts active verbs into passive verbs in the lexicon, altering their argument structure (see Jackendoff, this volume, for discussion). That the lexicon is of central importance for understanding and analyzing grammar is also pointed out in some other chapters of this volume, such as those of Givón and O’Grady.

Semantics is recognized as a distinct domain in most approaches, and in some approaches it is viewed as being the domain that is most central to linguistic analysis (see e.g. Goddard; Caffarel-Cayron, this volume), even if it is looked at from a number of contrasting perspectives. That the meaning of both lexical items and constructions cannot be understood satisfactorily without adopting a frame-based analysis is argued for by Fillmore and Baker (this volume). Another theme concerns the place of semantics vis-à-vis other domains of linguistic analysis. There are different views on where semantics ends and other domains such as pragmatics or cognition begin. Huang (this volume) draws attention to Grice (1989), who had emphasized “the conceptual relation between natural meaning in the external world and non-natural, linguistic meaning of utterances”; we will return to this issue later in this section. In some approaches there is an assumption to the effect that semantics is primary while pragmatics is secondary, the latter being concerned largely with phenomena that cannot be fitted into a semantic analysis. Jackendoff (this volume), by contrast, concludes that “one cannot do the ‘semantics’ first and paste in ‘pragmatics’ afterward,” and in Role and Reference Grammar, discourse pragmatics plays an important role in the linking between syntax and semantics (Van Valin, this volume).

The boundary between semantics and pragmatics is in fact an issue that comes up in a number of chapters. Langacker (this volume) observes that the standard doctrine assumes a definite boundary between semantics and pragmatics (or between linguistic and extra-linguistic meaning) while he maintains that there is no specific line of demarcation between the two. A similar conclusion also surfaces in some lines of research in neo-Gricean pragmatics. Levinson (2000) argues that, contrary to Grice (1989), conversational implicatures can intrude upon truth-conditional content, and that one should reject the “received” view of the pragmatics–semantics interface, according to which the output of semantics provides input to pragmatics, which then maps literal meaning to speaker-meaning (see Huang, this volume).

In a similar fashion, there is the question of what the place of pragmatics should be vis-à-vis syntax. Baker aptly portrays two main contrasting stances on this issue thus:

On one view, pragmatics is the more basic study, and syntax is the crystallization (grammaticization) of pragmatic functions into more or less iconic grammatical forms. On the other view, syntactic principles determine what sentences can be formed, and then pragmatics takes the range of syntactic structures that are possible and assigns to each of them some natural pragmatic use(s) that take advantage of the grammatical forms that are available. The first view is characteristic of functionalist approaches to linguistics; the second is the traditional Chomskian position.

(Baker, this volume)

For a syntactic approach to dealing with information structure, see Cinque and Rizzi (this volume); the role of pragmatics in the tradition of Grice (1978) is most pronounced in the chapters on Default Semantics (Jaszczolt, this volume), relevance theory (Yus, this volume), and neo-Gricean pragmatic theory (Huang, this volume), and these chapters also show that much headway has been made in research on this domain. Using a Kantian apophthegm of the form “pragmatics without syntax is empty; syntax without pragmatics is blind”, Huang argues that pragmatics plays a crucial role in explaining many of the phenomena that are thought to be at the very heart of syntax.

Pragmatics in a wider sense is also the subject of conversation analysis. Among the approaches that adopt a perspective on language structure and linguistic discourse beyond orthodox linguistic methodology, conversation analysis has occupied an important place since the end of the 1960s. In his chapter on this field, Sidnell observes that students working in this research tradition “typically understand language to be fundamentally social, rather than biological or mental, in nature. Linguistic rules from this perspective are first and foremost social rules which are maintained in and through talk-in-interaction.” Interaction, it is argued, can be approached in terms of a system with its own specific properties that cannot be reduced to convenient linguistic, psychological, cultural, or other categories.

Finally, there is the domain of cognition, which appears to be rapidly gaining importance in linguistic theorizing, and this is also reflected in some of the discussions of this volume. Boeckx (this volume) interprets the Minimalist Program of Chomsky (1995) as an “attempt to situate linguistic theory in the broader cognitive sciences,” opening up fresh perspectives for an overall theory of cognition, and for Feldman et al. (this volume), linguistic analysis is part of a Unified Cognitive Science. These authors argue that the nature of human language and thought is heavily influenced by the neural circuitry that implements it, and integration of linguistic research with knowledge of neural reality is an important goal of their framework.

1.7 Relations Among Domains

If there are contrasting positions on which grammatical domains should be distinguished in linguistic analysis, then this applies even more to the question of how the relationship among these domains should be defined: Are they all independent of one another, and if they are not, how are they interrelated? It is on this question that a particularly wide range of answers is volunteered by the various authors. Jackendoff (this volume) argues that “the internal structure of some components of language, as well as the relation of language to other faculties, is consonant with a parallel architecture for language as a whole.” In Cognitive Grammar, linguistic units are limited to semantic, phonological, and symbolic structures that are either part of occurring expressions or arise from them through abstraction and categorization, but Langacker (this volume) adds that a “major source of conceptual unification is the characterization of lexicon, morphology, and syntax as a continuum consisting solely in assemblies of symbolic structures.”

Most authors state explicitly which domains they distinguish in their approach and what kinds of connections they postulate among domains. One way of establishing such connections is via hierarchical organization; Functional Discourse Grammar, for example, assumes a top-down organization of grammar, where pragmatics governs semantics, pragmatics and semantics govern morphosyntax, and pragmatics, semantics, and morphosyntax govern phonology (Hengeveld and Mackenzie, this volume). A tenet of some schools of linguistics is in fact that there is one domain that has a privileged status vis-à-vis other domains. Such a status can be due to the magnitude of connections that that domain is held to share with other domains. But such a status can also be due to relative degrees of descriptive and/or explanatory power attributed to one specific domain. In other approaches, specific theoretical devices are proposed to connect different domains. For example, students of Lexical-Functional Grammar use Glue Semantics as a theory to take care of the interface between syntax and semantics, and an “m-structure” is proposed to deal with the interface between syntax and morphology (Asudeh and Toivonen, this volume).
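The top-down organization that Functional Discourse Grammar assumes can be caricatured as a pipeline in which each level constrains the next. The following is a deliberately simplified sketch of our own, not Hengeveld and Mackenzie's actual formalism; all function names and the toy representations are invented for illustration.

```python
# Caricature of a top-down grammar architecture: information flows from
# pragmatics through semantics and morphosyntax down to phonology, and
# later levels never feed earlier ones.

def pragmatics(intent):
    # Level 1: the communicative intention is cast as a discourse act.
    return {"act": "declarative", "focus": intent}

def semantics(pragmatic_rep):
    # Level 2: semantic content is added within the pragmatic frame.
    return {**pragmatic_rep, "predicate": "arrive", "args": ["speaker"]}

def morphosyntax(semantic_rep):
    # Level 3: the semantic representation is encoded as a clause.
    return {**semantic_rep, "clause": "I arrive"}

def phonology(morphosyntactic_rep):
    # Level 4: the clause is rendered as a phonological string.
    return morphosyntactic_rep["clause"].lower()

# The grammar runs strictly top-down:
utterance = phonology(morphosyntax(semantics(pragmatics("arrival"))))
print(utterance)  # i arrive
```

The point of the sketch is only the direction of governance: each function consumes the output of the level above it, never the reverse.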

In other approaches again there is some specific domain that relates different domains to one another. Thus, in Systemic Functional Grammar, a tristratal linguistic system of semantics, lexicogrammar, and phonology is proposed, and semantics is the interface between grammar and context (Caffarel-Cayron, this volume). In Role and Reference Grammar, discourse pragmatics plays an important role in the linking between syntax and semantics; Van Valin proposes a linking algorithm that directly connects the semantic with the syntactic representation, and there is a direct mapping between the two representations. For Spencer and Zwicky (1988a: 1), by contrast, morphology is at the conceptual center of linguistics since it is the study of word structure, and words are at the interface of phonology, syntax, and semantics (see Section 1.6).

Finally, there is also the position represented in the volume according to which there is no need to distinguish domains in the first place. In the natural semantic metalanguage approach, for example, it is argued that meaning is the bridge between language and cognition, and between language and culture, and Goddard (this volume) concludes that compartmentalizing language (or linguistic analysis) into syntax, morphology, semantics, and pragmatics therefore makes little sense.

1.8 The Nature of Structures

One issue discussed in a number of the chapters to be presented concerns the question of whether to set up a distinction between deep, underlying, or underived structures on the one hand and surface or derived structures on the other, as is done in particular in approaches designed in the Chomskian tradition or in Optimality Theory (see Gouskova, this volume), or whether such a distinction can or should be dispensed with, as is argued in other approaches, most of all (but not only) functionalist ones (cf. Hudson; Culicover, this volume).

A related question is how to deal with “zero,” or empty categories, or null elements in syntactic representations, for example, with null pronouns (pro, PRO), noun phrase traces, “null subjects” in infinitival complements, or constructional null instantiation (Fillmore and Baker, this volume)—elements that are posited on the basis of structural considerations but are not phonologically expressed. Such categories have an important status for scholars working in some schools of syntactic analysis but are not recognized by others; Van Valin (this volume) describes the latter position thus: “If there’s nothing there, there’s nothing there” (see also Culicover, this volume).

Language structure shows both “regular” and “irregular” features; as Michaelis (this volume) puts it, many, if not most, of the grammatical facts that people appear to know cannot be resolved into general principles but must instead be stipulated. Linguistic approaches tend to highlight the “regular” structures, proposing generalizations that have a high degree of applicability. But what to do with the other part of grammar that is elusive to the generalizations, such as prefabricated word combinations, or idiomatic and ritualized structures? This is a question that is addressed in some way or other in a number of chapters, and it is one where students of construction grammar and probabilistic linguistics propose answers challenging earlier models of grammar (Bod, this volume; see also Jackendoff, this volume, and others).

Another issue relates to the nature of grammatical categories. Most of the authors rely on entities of linguistic categorization that are discrete/algebraic, based on necessary and sufficient conditions, widely known as “classical categories.” Others again, mainly but not only authors with a functionalist orientation, believe in the non-discreteness of linguistic categories, drawing on models framed in terms of Roschian prototypes (e.g. Langacker and Goddard, this volume) or of continuum models (see Taylor 1989 for a discussion of the distinguishing properties of these types of categories). Bod (this volume) observes that “[t]here is a growing realization that linguistic phenomena at all levels of representation, from phonological and morphological alternations to syntactic well-formedness judgments, display properties of continua and show markedly gradient behavior,” and that all the evidence available points to a probabilistic language faculty.
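The gradient view can be made concrete with a toy example of our own (not Bod's actual model): a simple smoothed bigram model assigns every word string a graded score rather than a binary grammatical/ungrammatical verdict, so acceptability falls on a continuum. The corpus and the smoothing constant below are invented for illustration.

```python
import math

# Train a tiny bigram model on a three-sentence toy corpus.
corpus = ["the dog sleeps", "the cat sleeps", "a dog runs"]

bigrams, unigrams = {}, {}
for sentence in corpus:
    tokens = ["<s>"] + sentence.split()
    for a, b in zip(tokens, tokens[1:]):
        bigrams[(a, b)] = bigrams.get((a, b), 0) + 1
        unigrams[a] = unigrams.get(a, 0) + 1

def score(sentence, alpha=0.5):
    """Smoothed log-probability of a sentence under the bigram model.

    Smoothing ensures unseen word pairs get a small nonzero probability,
    so every string receives a graded score rather than a zero.
    """
    tokens = ["<s>"] + sentence.split()
    vocab = len(unigrams) + 1
    logp = 0.0
    for a, b in zip(tokens, tokens[1:]):
        logp += math.log((bigrams.get((a, b), 0) + alpha)
                         / (unigrams.get(a, 0) + alpha * vocab))
    return logp

# An attested word order scores higher than an unattested one, gradiently:
print(score("the dog sleeps") > score("dog the sleeps"))  # True
```

Nothing hinges on the bigram choice; any probabilistic grammar yields the same qualitative picture of well-formedness as a matter of degree.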

All these positions are represented in the following chapters, but there are also authors who allow for both kinds of categories. For example, in the Conceptual Semantics of Jackendoff and Culicover (this volume), conditions other than necessary and sufficient ones are admitted, and this type of semantics is compatible with Wittgensteinian family resemblance categories (“cluster concepts”). Bybee and Beckner (this volume) argue that “the boundaries of many categories of grammar are difficult to distinguish, usually because change occurs over time in a gradual way, moving an element along a continuum from one category to another.” Accordingly, students of grammaticalization have proposed category structures that take the form of clines or chains but are not necessarily restricted to non-discrete categories (Heine and Narrog, this volume; see also Hopper and Traugott 2003).

Some discussions in the volume also concern two contrasting principles of analyzing syntactic relations, commonly described in terms of the distinction constituency vs. dependency: Should one use a phrase structure model, as quite a number of authors do, or a dependency model, as others prefer, or should one use a combination of both? There is fairly wide agreement that phrasal categories in some form or other constitute an indispensable tool for analyzing grammatical structures. But are they really indispensable in linguistic analysis, or are there alternative kinds of categories in addition, perhaps categories that allow doing away with phrasal structures? Some students of language would answer this question in the affirmative. In dependency grammar and valency theory (Ágel and Fischer, this volume), but also in the emergentist approach of O’Grady (this volume), for example, it is argument dependencies rather than phrasal categories that are central, and the latter do not have any independent status in computational analysis.
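The contrast between the two principles of analysis can be made tangible with a toy example of our own devising (not drawn from any of the chapters): the same sentence represented once as a phrase-structure tree with phrasal nodes, and once as a dependency structure in which only words and head-dependent relations exist.

```python
# Constituency: nested phrases, with phrasal nodes such as NP and VP.
constituency = (
    "S",
    ("NP", ("Det", "the"), ("N", "dog")),
    ("VP", ("V", "chased"),
           ("NP", ("Det", "the"), ("N", "cat"))),
)

# Dependency: no phrasal nodes at all. Every word depends on another
# word (its head); the verb is the root of the clause.
dependency = {
    "chased": None,       # root
    "dog": "chased",      # subject depends on the verb
    "cat": "chased",      # object depends on the verb
    "the(1)": "dog",      # determiners depend on their nouns
    "the(2)": "cat",
}

def words(tree):
    """Collect the terminal words of a constituency tree, left to right."""
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]
    return [w for child in children for w in words(child)]

print(words(constituency))  # ['the', 'dog', 'chased', 'the', 'cat']
```

Both structures cover the same words; they differ in whether phrasal categories like NP have any independent status, which is exactly the point at issue between the two families of approaches.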

Rather than phrasal categories, corpus-driven research finds other kinds of grammatical units and relations to be central. As Biber (this volume) observes, the strictest form of corpus-driven analysis assumes only the existence of word forms. Lexical bundles are such units; they are defined as the multi-word sequences that recur most frequently and are distributed widely across different texts; in English conversations they include word sequences like I don’t know if or I just wanted to. Note that—unlike formulaic expressions—most lexical bundles cut across phrasal or clausal boundaries, being “structurally incomplete”; they are not idiomatic in meaning, and their occurrence is much more frequent than that of formulaic expressions (cf. also Haspelmath, this volume, who rejects notions such as noun phrase (NP) and verb phrase (VP), positing language-specific categories instead).
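A minimal sketch of how lexical bundles can be identified corpus-drivenly (our own illustration, not Biber's actual procedure): recurring n-grams that are both frequent overall and dispersed across several different texts. The tiny "corpus" and the thresholds below are invented for the example.

```python
from collections import Counter, defaultdict

texts = [
    "i don't know if that works i just wanted to ask",
    "i don't know if we should go",
    "she said i just wanted to see it",
]

def lexical_bundles(texts, n=4, min_freq=2, min_texts=2):
    """Return n-grams that recur and are spread over several texts."""
    freq = Counter()
    dispersion = defaultdict(set)
    for t_id, text in enumerate(texts):
        tokens = text.split()
        for i in range(len(tokens) - n + 1):
            gram = tuple(tokens[i:i + n])
            freq[gram] += 1
            dispersion[gram].add(t_id)
    # Frequency alone is not enough: a bundle must also be distributed
    # widely, i.e. attested in at least min_texts different texts.
    return {g for g in freq
            if freq[g] >= min_freq and len(dispersion[g]) >= min_texts}

print(lexical_bundles(texts))
# {('i', "don't", 'know', 'if'), ('i', 'just', 'wanted', 'to')}
```

Note that the two bundles recovered here, "i don't know if" and "i just wanted to", cut across clausal boundaries and are structurally incomplete, just as the chapter describes.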

1.9 Lexical vs. Functional Categories

Another issue concerns the lexicon–grammar interface. Most approaches to linguistic analysis assume that in addition to lexical categories there is a second kind of form, referred to as functional categories (or grammatical categories in some frameworks), operators, etc., that is, grammatical taxa serving the expression of functions such as case, number, tense, aspect, negation, etc. The following are a few distinguishing properties that are widely recognized: (a) Lexical categories are open-class items while functional ones are closed-class items (having a severely restricted number of members belonging to the same class), (b) the former have a rich (lexical) meaning while that of functional categories is schematic, (c) lexical categories are independent words or roots while functional categories tend to be dependent elements, typically—though not necessarily—described as clitics or affixes, and (d) functional categories tend to be shorter (frequently monosyllabic).

On many views this distinction is a robust one, but the way it is treated differs from one approach to another. There is in particular the question of whether the boundary between the two kinds of categories is discrete, as is maintained in the majority of approaches presented in this book, or gradual, as argued for explicitly in some of the approaches, or else, whether there is no boundary in the first place. Langacker (this volume), for example, maintains that instead of being dichotomous, lexicon and grammar form a continuum of meaningful structures, the primary difference between “lexical” and “grammatical” units being that the latter are more schematic in their content, their main import residing in construal. A related position is maintained in grammaticalization theory, where it is argued that—at least in a number of cases—it is not possible to trace a discrete boundary between the two (see Heine and Narrog, this volume).

1.10 Recurring Topics

As we observed above, there is a range of structural concepts and technical terms that are shared by most linguists. But on closer inspection it turns out that there are also some dramatic differences on whether or how these concepts and terms apply within a given approach. For example, a paradigm concept of linguistic analysis across linguistic schools can be seen in the notion “sentence” (or clause). But there are also alternative notions that are proposed in some of the approaches. In approaches based on dependency theory (e.g. Ágel and Fischer, Hudson, this volume), the word is the basis of grammatical analysis, while Functional Discourse Grammar takes the discourse act rather than the sentence as its basic unit of analysis (Hengeveld and Mackenzie, this volume), and for a number of linguists, “construction” is taken to be a crucial component of language structure (e.g. Michaelis, this volume). Some authors suggest that, rather than being epiphenomenal, constructions should be at the center of linguistic analysis. To this end, a model is proposed in usage-based theory where the grammar of a language is interpreted as a collection of constructions, organized into networks by the same criteria that words are (Bybee and Beckner, this volume). So, what should be the basis of linguistic analysis—words, sentences, constructions, discourse acts, or any combination of these? As we mentioned above, grammatical classes and syntactic structures other than word forms have no a priori status in corpus-driven research (Biber, this volume).

Another example concerns the case functions subject and object. That “subject” and “object” are useful or even indispensable entities for describing relations of arguments within the clause is widely acknowledged in many frameworks of linguistic analysis, even if various refinements have been proposed, such as that between the subject of a transitive clause (A) and that of an intransitive clause (S). However, the crosslinguistic validity of these entities has not gone unchallenged; suffice it to mention Van Valin’s position (this volume) according to which grammatical relations like subject and direct object are not universal and cannot be taken as the basis for adequate grammatical theories (see Section 1.3).

One of the grammatical phenomena that is seen by some to be a testing ground for the viability of a given approach is passivization. Questions that are raised in this volume on the analysis of passives include the following: Does passivization involve some movement of arguments, as is argued for in mainstream generative grammar, or is there no need to assume that there is movement, as maintained in a number of approaches, including Cognitive Grammar, Simpler Syntax, Role and Reference Grammar, etc. (see e.g. Langacker, Culicover, Van Valin, this volume)? And, is passivization really a syntactic phenomenon, as proposed in many approaches, or should it be treated as belonging to a distinct domain of information structure? Other phenomena that are discussed controversially in the following chapters are not hard to come by; one may wish to mention for example anaphora, which is analyzed by many as a syntactic phenomenon (cf. Chomsky 1995) but as a pragmatic one by others (see Huang, this volume).

Finally, the arrangement of linear elements in the clause has received remarkable attention across many schools of linguistics ever since Greenberg (1963a) proposed his classification of basic word order types. And once again, we find contrasting positions on what the ontological status of word order should be in a theory of language. There are, on the one hand, those who maintain that linear order must be a primitive of syntactic theory rather than a derived property (Culicover, this volume; Barss and Lasnik 1986; Jackendoff 1990b). On the other hand, there are also those for whom linear order is derivative of other syntactic phenomena (Larson 1988), and for a number of functionalists, word order is not a primitive of any kind but rather an epiphenomenal product of discourse-pragmatic manipulation.

1.11 Experimental Analysis

Another area where the present volume differs from the first edition of this handbook (Heine and Narrog 2010) concerns experimental work in linguistic analysis. This is a rapidly growing field of linguistics, and the volume contains three chapters devoted to it, relating to phonetics, semantics, and neurolinguistics: The chapter by Beddor is concerned with experimental phonetics, more precisely with the physical instantiation of speech and its relation to cognitive representations. A central topic in Beddor’s discussion is provided by the classic notion of coarticulation, that is, the overlapping articulatory movements that are necessary for rapid, fluent, comprehensible speech. Surveying a wide range of methods and techniques that have become available to the phonetician, she shows how this notion can be approached from different perspectives.

Experimental semantics is the topic of Matlock and Winter’s chapter. Linguists now have at their disposal a wealth of methods and empirical techniques for the study of phonetics, phonology, morphology, and syntax. As the history of the field suggests, however, semantic analysis turns out to be much harder to approach. A considerable part of the study of linguistic meaning was, and still is, based on the linguist’s introspection and intuitive reasoning. These authors argue that there are good reasons to conduct comprehensive experiments on semantic content and how meaning is processed, the focus of Matlock and Winter being on issues of cognitive linguistic theory.

More than in other domains of linguistic analysis, experiment-based research is essential to neurolinguistic research. That the relationship between language processing and the brain is complex is well known. Defining notions such as “syntax,” “phonology,” and “semantics” as significant domains of language knowledge and/or language use is a common practice in linguistic analysis across many different linguistic schools. But if one were to expect that these notions each correspond to a distinct brain region, then one would be disappointed. As is argued in the chapter by Arbib, there is “a skein of interconnected regions, each containing an intricate network of neurons, such that a specific ‘syntactic task’ may affect the activity of one region more than another without in any way implying that the region is dedicated solely to syntactic processing.” The chapter summarizes results of neurolinguistic research accumulated over the last decades, its main goal being to contribute to the development of a new theory of language whose terms are grounded in a better understanding of the brain. Such a theory, the author suggests, can fruitfully be approached within a cooperative computation framework as outlined in the chapter.

1.12 Typology

Language description is constantly concerned with the question of whether generalizations on linguistic phenomena should be discrete/categorial or stochastic/statistical. The authors of this handbook differ from one another on how they answer this question. While most aim at the former, there are also some chapters devoted to approaches based on statistical, probabilistic generalizations, most of all Bod’s chapter. This question is not only relevant for language-internal analysis but applies in much the same way to comparative linguistics in general and crosslinguistic language typology in particular. Using a comparative approach he calls multivariate typology, Bickel (this volume) shows that such an approach can generalize over typological diversity among the languages of the world. Suggesting that the distribution of linguistic structures is the product of history, he furthermore argues that this approach is also of help in reconstructing diachronic events.
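The multivariate idea can be illustrated with a hedged toy sketch (ours, not Bickel's actual method or data): a language is characterized by the values it takes on many typological variables at once, and generalizations are statistical patterns over those profiles. The three language profiles and variable names below are invented.

```python
# Invented typological profiles: each language is a bundle of variable
# values rather than a member of a single holistic "type".
profiles = {
    "lang_A": {"order": "SOV", "case": "yes", "prodrop": "yes"},
    "lang_B": {"order": "SOV", "case": "yes", "prodrop": "no"},
    "lang_C": {"order": "SVO", "case": "no",  "prodrop": "no"},
}

def overlap(p, q):
    """Proportion of typological variables on which two profiles agree."""
    shared = [v for v in p if p[v] == q[v]]
    return len(shared) / len(p)

# Similarity is gradient: A and B agree on 2 of 3 variables, A and C on none.
print(overlap(profiles["lang_A"], profiles["lang_B"]))  # 0.666...
print(overlap(profiles["lang_A"], profiles["lang_C"]))  # 0.0
```

With realistically many variables, such gradient similarities are what statistical methods then mine for crosslinguistic and diachronic patterns.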

As was observed above, all approaches discussed in this volume rest at least to some extent on generalizations about different languages. But the role played by typology varies considerably from one approach to another. While accounting for typological diversity is a central concern for many authors (see e.g. Baker, de Swart and Zwarts, Hengeveld and Mackenzie, Haspelmath, Van Valin, this volume), this goal is not given high priority in the work of some other authors. One issue that is treated differently is how and where typological distinctions should be accounted for in a given approach. For a number of authors, typology forms the basis for all generalizations on language structure. In the Principles and Parameters model of Chomsky, by contrast, Universal Grammar (UG) is conceived as a set of principles regulating the shape of all languages, and these principles can be thought of as laws by which all languages must abide (see Boeckx, this volume); but in addition there is a set of parameters giving rise to the specific forms of individual languages, thereby accounting for typological diversity. Other, related, issues concern the level of abstraction that one should aim at in typological analysis and comparison, or the question of how many languages should be included in one’s sample in order to come up with meaningful generalizations about the world’s languages at large (see Baker, this volume, for an insightful discussion of this issue).

In a number of chapters, correlations are proposed between language typology and structural properties of the languages concerned. A case in point is provided by Huang (this volume), who observes that there is a correlation in his binding condition A between English-type, syntactic languages, where it is grammatically constructed, and Chinese-type, pragmatic languages, where it is pragmatically specified (for more observations of this kind, see de Swart and Zwarts, this volume).

The question of how grammatical categories relate to typological diversity and universal features of human languages is also an issue that surfaces in a number of the chapters. The answer given by most authors is that there should be a set of categories that is crosslinguistically the same, that is, one that in some way or other reflects universal properties of human language or languages. For example, in the cartographic approach it is assumed that all languages share the same principles of phrase and clause composition and the same functional make-up of the clause and its phrases (Cinque and Rizzi, this volume). But there are also those authors who argue against universally uniform categories.

The latter position is associated on the one hand with a research field that has a long tradition in linguistics, namely the study of linguistic relativity. As the discussion by Pederson (this volume) suggests, some progress has been made more recently in our understanding of the interactions between cognitively universal and linguistically specific phenomena. On the other hand, it is also associated with another tradition of descriptive linguistics of the mid-20th century, namely American structural linguistics. Observing that the vast majority of the world’s languages have not been described, or not described sufficiently, and that the typologists’ comparative concepts do not necessarily match the descriptive categories of individual languages, Haspelmath (this volume) suggests that language documentation is one of the primary tasks of the linguist and that one should describe each language and its grammatical categories in its own terms. With this view he takes issue not only with generativist approaches but also with those falling under the rubric of Basic Linguistic Theory, in particular with those of Dixon (1997) and Dryer (2006b), who use the same concepts for both description and comparison.

There is a stance that is somehow intermediate between these two extreme positions—one that is well represented in this volume—which postulates a crosslinguistically stable set of categories, but neither are all these categories represented in a given language nor are they represented the same way across languages.

A question central to all typological work is what typology can tell us about language universals. Here again there are contrasting positions correlating with the generativist/functionalist divide. For example, reviewing the monumental World Atlas of Language Structures (Haspelmath et al. 2005), Baker (this volume) concludes that “standard typologists have looked hardest for universals in exactly those domains where generativists least expect to find them, and have hardly looked at all in those domains where generativists predict that they exist.”

1.13 Synchrony vs. Diachrony

An issue that goes perhaps somewhat unnoticed in works on linguistic analysis concerns how a given account of language knowledge or language use relates to time: Should a framework used to explain the nature of language structure be restricted to synchronic observations or should it also account for diachrony? That our understanding of linguistic structure may benefit from adding a diachronic perspective is argued for in a number of chapters, most notably in those of Givón and Heine and Narrog, and in some of the approaches a separate component is proposed to deal with the synchrony/diachrony interface; cf. the principle of viability in valency theory (Ágel and Fischer, this volume).

A distinction commonly made across the different schools of linguistics is one between two contrasting subfields, conveniently referred to, respectively, as synchronic and diachronic linguistics. The former deals with linguistic phenomena as they are observed at some specific point in time, which typically includes the present, while the latter is concerned with how linguistic phenomena behave across time, that is, essentially with how languages change. On closer inspection, however, this classification does not seem to be exhaustive. That the borderline between the two is somewhat problematic has been pointed out independently by many linguists. Langacker (this volume), for example, notes that since entrenchment and conventionality are matters of degree, there is never a sharp distinction between synchrony and diachrony.

Languages constantly change; the English of today is no longer exactly what it used to be a few decades ago: There are now new lexical items and use patterns that were uncommon or non-existent twenty years ago. Strictly speaking therefore, a synchronic analysis should relate to one specific point in time that needs to be defined in the analysis concerned. As a matter of fact, however, this hardly ever happens; grammatical descriptions are as a rule silent on this issue, and they may therefore more appropriately be dubbed achronic rather than synchronic. The fact that languages constantly change has induced some students of language to argue that a rigidly synchronic analysis is not possible or feasible³ and that linguistic analysis should be panchronic in orientation (Heine et al. 1991; Hagège 1993), that is, include the factor of time as part of the analytic framework. To our knowledge, however, there is so far no general theory of language that appropriately accounts for panchrony.

1.14 Sociolinguistic Phenomena

Linguistic analysis in general is based predominantly on language-internal phenomena, and this is reflected in the present volume. Accordingly, findings on the interface between linguistic and extra-linguistic phenomena, such as social or cultural ones, play a relatively minor role in most of the approaches discussed in this volume; Hudson (this volume) therefore aptly concludes that “sociolinguistics has otherwise had virtually no impact on theories of language structure.” But there are some noteworthy exceptions. As the chapters by Biber, Caffarel-Cayron, Hudson, and Pederson in particular show, general linguistics can benefit greatly from incorporating a sociolinguistic or socio-cultural dimension. In the natural semantic metalanguage approach, for instance, the notion of a cultural script is proposed as an important analytic tool to account for culture-dependent crosslinguistic distinctions, in particular for how ethno-pragmatically defined categories can exert an influence on language structure, for example in the form of constructions that, to use the wording of Goddard (this volume), “are tailor-made to meet the communicative priorities of the culture.”

1.15 On Explanation

Differences surfacing in the volume also relate to how the structure of language or languages should be explained: Are explanations of the deductive-nomological type possible, meaningful, or both in linguistics? Should explanations be context-dependent or context-free? Should they be based on deduction, induction, abduction, or any combination of these? Is it possible to draw a boundary between theory-internal and theory-external explanations? Can explanations be based on probabilistic generalizations, or is it only exceptionless, law-like generalizations that should be the concern of the linguist? And finally, should explanations be mono-causal or multi-causal, and should they be internal or external?

It is the last question that has attracted the attention of linguists perhaps more than others. In internal explanations, a set of facts falls out as a consequence of the deductive structure of a particular theory of grammar, or else a given phenomenon is explained with reference to other phenomena belonging to the same general domain. Thus, in approaches framed in the Chomskyan tradition, “one feature of a language is explained in terms of its similarity to another, at first different-seeming feature of that language and another language, by saying that both are consequences of the same general principle” (Baker, this volume); for example, Bresnan and Mchombo (1995) argue that Lexical Integrity provides a principled explanation of the complex syntactic, morphological, and prosodic properties of Bantu noun class markers (see Asudeh and Toivonen, this volume).

In external explanations, by contrast, a set of facts is derived as a consequence of facts or principles outside the domain of grammar. There is a partial correlation between these two types of explanation and the two main orientations of contemporary linguistics: Internal explanations are most likely to be found in works of generativist linguists, while functionalist approaches are likely to be associated with external explanations. As will become apparent in the following chapters, however, the situation is much more complex. Another correlation concerns a distinction that we mentioned above, namely that between synchrony and diachrony: Internal explanations are overwhelmingly, though not necessarily, synchronic in orientation, while external ones are likely to have a diachronic component, or to be generally diachronic in nature.⁴

There is no general way of deciding which of the two kinds of explanation is to be preferred. As we saw in Section 1.5 in our example of the game of chess, the nature of an explanation depends crucially on the kind of questions one wishes to answer. Thus, whereas question (a) below is most strongly associated with internal explanations and generative theories, (b) is the question that functionalists tend to be concerned with, drawing on external explanations.

  (a) How can the knowledge that native speakers have about their language best be understood and described?

  (b) Why are languages structured and used the way they are?

Independent of how one decides between these options, there remains the question of whether there are certain domains, or components, of language structure that are particularly rewarding when looking for explanations. Syntax is the domain that has attracted the most attention, but, as we saw in Section 1.3, there is also the opinion, favored in functionalist traditions, that it is semantics or the pragmatics of communication that provides the best access to linguistic explanation.

1.16 Language Acquisition and Language Evolution

Formerly considered fairly marginal topics, evolutionary perspectives on language and its structure are now widely recognized as relevant components of mainstream linguistics. This applies not only to the analysis of language acquisition but also to research on the genesis of human language or languages. What linguistic analysis means to children acquiring their first language is discussed in detail by Eve Clark in her chapter on linguistic units of first language acquisition. As Clark observes in this chapter, children’s skill in the production of linguistic structure overall lags considerably behind their skill in perception and comprehension.

A few decades ago, topics relating to language origin and language evolution were in fact non-issues in mainstream linguistics. The present volume suggests that this situation may have changed: Language evolution is nowadays a hotly contested subject matter in some schools of linguistics, both in functionalist works (Givón, this volume) and in generativist traditions of modern linguistics (Hauser et al. 2002; Jackendoff 2002; Jackendoff and Pinker 2005). Givón in particular argues that relating language structure to other human and non-human communication systems and to language evolution is essential to understanding grammar (Givón 1979, 2005). A field of research that is held to be particularly rewarding for the reconstruction of language evolution is that of signed languages (see Wilcox and Wilcox, this volume). While there are contrasting opinions on how language evolution may have proceeded, there is agreement among these authors that it is essentially possible to reconstruct this evolution.

1.17 Conclusions

For obvious reasons, the forty chapters of this handbook are restricted to a specific goal and, hence, to a specific range of subject matters that are immediately relevant to the analysis of a given language, or of languages in general. Accordingly, many other issues that are also of interest to the student of language have to be ignored. Such issues concern on the one hand comparative linguistics; as desirable as it would have been to have a more extensive treatment of the relationships among languages, this would have been beyond the scope of the present volume. Other issues that will not receive much attention are language contact, that is, how speakers of different languages and dialects interact with and influence one another; language-internal diversity, such as variation among dialects or socially defined linguistic varieties; and languages that are said to enjoy a special sociolinguistic or linguistic status, such as pidgins and creoles.

One major incentive to work on a second edition of the handbook was to include additional chapters on topics that could not be covered in the first edition. To this end we have added seven new chapters, namely on experimental phonetics (Beddor), phonology (Paster), experimental semantics (Matlock and Winter), typology (Bickel), conversation analysis (Sidnell), language acquisition (Clark), and neurolinguistics (Arbib). Still, a number of topics are not discussed in as much detail as would have been desirable. The reasons for this are of two kinds: Either the authors contracted were not able to meet the deadline set, or we were not able to find suitable authors for a specific subject. We ask readers for their kind understanding.


(1) Note that names used to refer to approaches do not necessarily reflect the orientation concerned; for example, Lexical-Functional Grammar (see Asudeh and Toivonen, this volume) is not commonly considered to be a functionalist theory.

(2) Thus, Langacker argues that morphologically contrasting items such as nouns and derivational items like nominalizing suffixes can in a given case be assigned to the same category: “A nominalizer (like the ending on complainer or explosion) is itself a schematic noun and derives a noun from the verb stem it combines with” (Langacker, this volume).

(3) For example, Hagège (1993: 232) maintains: “No strictly synchronic study of a human language is conceivable, given the constant reshaping work done by LBs [language builders; BH&HN] even when a language seems to have reached a state of equilibrium”.

(4) A number of functionalists would follow Givón (1979, 2005) in arguing that, like functional explanation in biology, functional explanation in linguistics is necessarily diachronic (Bybee 1988c; Keller 1994; Haspelmath 1999a, 2008c; Heine and Kuteva 2007; Bybee and Beckner, this volume); in the wording of Dryer (2006a: 56), “a theory of why languages are the way they are is fundamentally a theory of language change.”