Linguistic Minimalism

Abstract and Keywords

Linguistic Minimalism refers to a family of approaches exploring a conjecture, first formulated by Noam Chomsky in the early 1990s, concerning the nature of the human language faculty. In this contribution, I state as clearly as possible what the conjecture amounts to, what sort of research program emerges from it, and how it could be carried out (using examples from the existing literature as concrete illustrations). The program's roots are traced, and its philosophical commitments discussed. Finally, the connection between the emergence of linguistic minimalism and the renewed interest in central themes in the study of the biological foundations of language (biolinguistics) is emphasized.

Keywords: biolinguistics, Darwin’s problem, Galilean style, Minimalism, parameter, Plato’s problem, Principles-and-Parameters, rationalism

Linguistic minimalism refers to a family of approaches exploring a conjecture, first formulated by Noam Chomsky in the early 1990s, concerning the nature of the human language faculty. My aim in this chapter is fourfold. First, I want to state as clearly as I can what the conjecture amounts to, what sort of research program emerges from it, and how it could be carried out (using examples from the existing literature as concrete illustrations). Second, I want to emphasize that the minimalist program for linguistic theory did not arise out of nowhere. It is firmly grounded in the generative enterprise and the rationalist (“Cartesian”) tradition more generally. Third, the pursuit of specific minimalist analyses follows a certain research style, often called the “Galilean style”, whose core properties I want to discuss, since they help one understand why certain moves are made when minimalism is put into practice. Fourth, I want to highlight the fact that minimalism, if rigorously pursued, naturally gives rise to a specific way of approaching interdisciplinary problems such as “Darwin’s Problem” (the logical problem of language evolution). Indeed, I believe that minimalism has significantly contributed to the resurgence of “biolinguistic” themes in recent years, and may mark a return of linguistic studies to the heart of cognitive science after two decades of (unavoidable, indeed, necessary) modularization.

I should stress right from the start that the overview that follows is obviously a very personal one. The reader should bear in mind that, although I rely on the works of a large group of researchers, this is very much “linguistic minimalism as I see it”.

18.1 Beyond Explanatory Adequacy

Science is all about understanding, not describing. Scientists are in the business of explaining, not merely cataloging. Since its inception, the generative approach to language has studied language as a natural object, pretty much the way a physicist or chemist approaches his object of study. Since Chomsky’s early works, language (more accurately, the language faculty [FL]) has been treated as an organ of the mind in order to shed light on one “big fact” about human beings: short of pathology or highly unusual environmental circumstances, they all acquire at least one language by the time they reach puberty (at the latest), in a way that is remarkably uniform and relatively effortless. The acquisition of language is all the more remarkable when we take into account the enormous gap between what human adults (tacitly) know about their language and the evidence that is available to them during the acquisition process. It should be obvious to anyone that the linguistic input a child receives is radically impoverished and extremely fragmentary when compared with the subtlety and complexity of what the child acquires. It is in order to cope with this “poverty of stimulus” that Chomsky claimed that humans are biologically endowed with an ability to grow a language, much as ethologists have posited innate endowments to account for the range of quite specific and elaborate behaviors that animals display. The biological equipment that makes language acquisition possible is called Universal Grammar (UG). Fifty years of intensive research have revealed an astounding array of properties that must be part of UG if we are to describe the sort of processes that manifest themselves in all languages (in often quite subtle ways) and the way such processes become part of the adult state of FL. This much generative grammarians (over a broad spectrum of theoretical persuasions) take as undeniable. Around 1980, the array of UG properties just alluded to crystallized into a framework known as the Principles-and-Parameters (P&P) model, which quickly became the mainstream or standard model for many linguists. The P&P model conceives of UG as a set of principles regulating the shape of all languages and a set of parameters giving rise to the specific forms of individual languages. Principles of UG can be thought of as laws by which all languages must abide. Among other things, these account for why all languages manifest recursive dependencies that go beyond the power of finite-state machines (as in anti-missile missile, anti-anti-missile-missile missile, etc.), and for the existence of displacement (situations where an element is interpreted in a very different location from the one it is pronounced in, as in What did John say that Mary saw___?). They capture the range of arguments a verb can take to express certain events (John ate/John ate the food, but not John saw/John ate Mary the food). They also account for the fact that certain dependencies, even “long-distance” ones, must be established within certain local domains (What did you say she saw___? but not What did you ask who saw___?). And they also require hierarchical structures in syntax (phrases) to be dominated (“headed”) by a designated element inside them (a phenomenon known as “endocentricity”) (e.g., there is no noun phrase without a noun at the center of it: [John’s happiness] but not [John’s arrive]). UG principles also account for why certain elements must be pronounced in some positions in sentences (John was arrested), but not in others (was arrested John), and so on. The degree of detail with which UG principles are formulated requires advanced training in linguistics (see Lasnik and Uriagereka 1988; Haegeman 1994), and never fails to impress or overwhelm the non-specialist. But they have enabled practitioners to account for what look like fundamental properties of FL.
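To make the point about finite-state machines concrete, here is a small illustration of my own (a toy sketch, not part of any linguistic proposal): a function that generates the nested compounds just cited. Recognizing the family for unbounded n requires matching each anti- with a missile nested inside it, which is exactly the kind of counting a finite-state device cannot do.

```python
def missile_compound(n):
    """Return the n-th member of the anti-missile missile family.
    Every "anti-" prefix must be matched by a "missile" nested inside it,
    so recognizing the pattern for unbounded n requires counting,
    which exceeds the power of a finite-state machine."""
    if n == 0:
        return "missile"
    inner = missile_compound(n - 1).replace(" ", "-")
    return "anti-" + inner + " missile"

print(missile_compound(1))  # anti-missile missile
print(missile_compound(2))  # anti-anti-missile-missile missile
```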

Next to these invariant principles, linguists recognize the existence of parameters to account for the fact that superficially there is more than one language. If all principles were invariant down to their last details, there would be only one language heard/signed on the planet. Since this is not the case, some aspects of UG must be left “open” for the local linguistic environment to act upon. We can think of parameters as items on a (UG) menu: a pre-specified set of options from which to choose. Some languages will have a Subject–Verb–Object word order (English), others will have a Subject–Object–Verb order (Japanese). Some languages will require the question word to be at the front of the sentence (English), others not (Chinese). And so on. Thus conceived, the P&P model likens the acquisition task to choosing the right options (setting the right values of the parameters) to conform to the language of the environment, much as one flips the switches of a circuit on or off to achieve the desired configuration. Arguably for the first time in history, the P&P model allowed linguists to resolve the tension between the universal and particular aspects of language. It led to a truly impressive range of detailed results covering a great number of languages (see Baker 2001), and accounted for key findings in the acquisition literature (see Guasti 2002; Wexler 2004; Snyder 2007).
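The switchboard metaphor is easy to render as a toy model. The sketch below is purely illustrative (the parameter names and values are invented for the example; actual parametric proposals are far more articulated):

```python
# Toy Universal Grammar: a fixed menu of parameters, each with a
# pre-specified set of values the environment can choose among.
universal_grammar = {
    "basic_word_order": {"SVO", "SOV"},   # English vs. Japanese
    "wh_fronting": {True, False},         # English vs. Chinese
}

# "Acquiring" a language amounts to fixing a value for every parameter:
english = {"basic_word_order": "SVO", "wh_fronting": True}
japanese = {"basic_word_order": "SOV", "wh_fronting": False}

for grammar in (english, japanese):
    # every setting must come from the menu UG makes available
    assert all(value in universal_grammar[param]
               for param, value in grammar.items())
```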

By the end of the 1980s, after more than ten years of sustained effort revealing the fine structure of P&P, Chomsky got the impression that the overall approach was well established, and that it was time to take the next step on the research agenda of the generative enterprise. The next step amounts to an attempt to go beyond explanatory adequacy. Chomsky 1965 distinguishes between three kinds of adequacy: observational, descriptive, and explanatory, and, not surprisingly, puts a premium on explanatory adequacy. The aim of (generative) linguistics was first and foremost to account for the amazing feat of human language acquisition. Once it was felt that the P&P model met this objective (in some idealized fashion, of course, since no definitive P&P theory exists), it became natural to ask how one can make sense of the properties the P&P model exhibits—how much sense can we make of this architecture of FL? Put differently, why does FL have this sort of architecture?

Quite reasonably, Chomsky formulated this quest beyond explanatory adequacy in the most ambitious form (what is known as the strong minimalist thesis), in the form of a challenge to the linguistic community: Can it be shown that the computational system at the core of FL is optimally or perfectly designed to meet the demands imposed by the systems of the mind/brain it interacts with? By optimal or perfect design Chomsky meant to explore the idea that all properties of the computational system of language can be made to follow from minimal design specifications, a.k.a. “bare output conditions”—the sorts of properties that the system would have to have to be usable at all (e.g., all expressions generated by the computational system should be legible, i.e., formatted in a way that the external systems can handle/work with). Put yet another way, the computational system of language, minimalistically construed, would consist solely of the most efficient algorithm to interface with the other components of the mind, the simplest procedure to compute (generate) its outputs (expressions) and communicate them to the organs of the mind that will interpret them and allow them to enter into thought and action. If the strong minimalist thesis were true, FL would be an ideal linguistic system. But it should be stressed that the point of the minimalist program is not to prove the validity of this extreme thesis but to see how far the thesis can take us, how productive this mode of investigation can be. The strong minimalist thesis amounts to asking whether we can make perfect sense of FL. Asking this question is the best way to find out how much sense we can make out of FL. The points where the minimalist program fails will mark the limits of our understanding. If one cannot make perfect sense of some property P of FL (i.e., if P cannot be given a minimalist rationale in terms of computational efficiency toward interface demands), then P is just something one must live with, some accident of history, a quirk of brain evolution, some aspect of FL that one must recognize in some brute force fashion, one whose secrets must be forever hidden from us, as Hume might have said.

There is no question that the minimalist program is a legitimate strategy for making sense of the properties of FL. Its conceptual/methodological legitimacy can hardly be questioned (except perhaps by appealing to the fact that biological organs in general do not display the sort of optimality that the minimalist program is looking for—the “tinkering” side of biology to which I return in section 18.4 below), but the timing of its formulation may be. A program such as minimalism is neither right nor wrong; it is either fertile or sterile. Its success will depend on the state of maturity reached by the enterprise within which it is formulated. It is quite possible that minimalist questions in linguistics are premature. It very much depends on how one feels about the P&P model. It is important here to stress the term “model”. The feasibility of the minimalist program does not rely on the accuracy of all the principles and parameters posited down to their smallest details (indeed such principles and parameters are constantly revised, enriched, improved upon, etc.), but it does depend on whether we think that the sort of approach defined by the P&P model has a fighting chance of being explanatorily adequate. I side with Chomsky in thinking that it does, but, not surprisingly, P&P skeptics have found the minimalist program outrageous.

18.2 A Guide to Minimalist Analysis

Having made its immediate conceptual foundation clear, let me now turn to how the strong minimalist thesis could be put into practice (for a more extended discussion, see Boeckx 2006, ch. 5; see also Uriagereka 1998; Lasnik et al. 2005; Hornstein et al. 2006; and Boeckx 2008).

The first thing for me to note here is something I already alluded to above: the point of minimalist inquiry is not to pick a definition of optimal design and prove its existence but rather to look for one that allows us to make progress at the explanatory level. (As Putnam 1962 observes in the context of Einstein’s principle that all physical laws be Lorentz-invariant, it is perhaps because of their vagueness, their programmatic nature, that scientists find such guiding principles extremely useful.) This is another way of saying that there are many ways of articulating minimalist desiderata. There are, in fact, two, possibly three, major families of approaches that presently fall under the rubric of linguistic minimalism. All of them grew out of Chomsky’s early minimalist writings (Chomsky 1993, 1995), so I will begin this section by giving the flavor of the early minimalist period before turning to more recent developments.

Among the generalizations arrived at in the elaboration of the P&P model was one that proved particularly instrumental in the development of the minimalist program. Chomsky (1986a: 199) interpreted the unacceptability of sentences like was believed John to be ill and John was believed is ill (compare John was believed to be ill) as indicating that an element had to be displaced at least once, but could not be displaced twice, to a case/agreement position (a preverbal position triggering subject agreement on the verb). Put differently, displacement of a noun phrase (out of the domain where it is obviously interpreted, in this case, the vicinity of be ill) must take place until that noun phrase reaches a case-assigning/agreement-triggering position. But once it has reached that position, the displaced element is frozen there. From this Chomsky concluded that movement to a preverbal subject position was “a last resort operation”. In more general terms, Chomsky claimed that some operation must take place, and, further, that once the operation has taken place, it cannot happen again. Chomsky took this to mean that the application of certain operations is banned if nothing is gained by performing them.

In a similar vein, at the end of the 1980s, Chomsky and others began to reinterpret some generalizations and principles in terms of least effort strategies. Take, for example, the so-called Minimal Distance Principle. Rosenbaum formulated this principle in 1970 to deal with instances of so-called control. Control is a cover term for the mechanism that lies behind the way we interpret sentences like John tried to leave as indicating that John is both the agent of “trying” and of “leaving”. That is, we interpret the sentence John tried to leave as meaning that John did something that would make it possible for him (and not somebody else) to leave. Control is also at work in sentences like John persuaded Mary to leave. The grammar dictates that this sentence be understood as meaning that the leaver is Mary, not John. Rosenbaum’s Minimal Distance Principle expresses the idea that the element understood as the subject of the infinitival clause (to leave in our examples) is the element that is closest to that infinitival clause. This, too, has the flavor of an economy/least effort condition, and was interpreted as such in a minimalist context by Hornstein (1999).
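The least-effort flavor of the principle can be seen in a deliberately crude rendering: pick the closest preceding noun phrase as the understood subject. (The real principle is stated over hierarchical structure, not linear word positions; the positions and function name below are my own simplification.)

```python
def choose_controller(np_positions, infinitive_position):
    """Toy Minimal Distance Principle: the understood subject of the
    infinitive is the nearest noun phrase preceding it."""
    preceding = {np: pos for np, pos in np_positions.items()
                 if pos < infinitive_position}
    return max(preceding, key=preceding.get)

# "John persuaded Mary to leave": John at 0, Mary at 2, "to leave" at 3
print(choose_controller({"John": 0, "Mary": 2}, 3))  # -> Mary
# "John tried to leave": "to leave" at 2
print(choose_controller({"John": 0}, 2))             # -> John
```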

From the early 1990s onward, Least Effort and Last Resort principles became a cornerstone of syntactic theorizing, a key feature of syntactic operations. In addition to claiming that syntax should be organized around principles that legislate against superfluous steps in derivations and superfluous elements in representations, Chomsky also suggested that the architecture of syntax followed from “virtual conceptual necessity”.

For example, the fact that sentences are made up of a potentially infinite number of distinct phrases has been taken to force upon linguistic theory a grouping operation which combines at least two elements a and b, forming a set {a, b}. This is the operation Chomsky calls Merge. Sharpening the use of virtual conceptual necessity, Chomsky reasoned that since at least two elements must be the input of Merge, we should assume that at most two elements are. This means that if we want to combine three elements into a set (phrase), two applications of Merge are required: a first step puts two elements together, and a second step takes the group just formed and joins the third element to it, as in (1) below. This captures Kayne’s (1984) binary branching requirement on syntactic phrases, which had become standard in the P&P model.

(1) [tree diagram not reproduced: a binary-branching structure built by two successive applications of Merge, i.e. {c, {a, b}}]
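Merge is simple enough to state in a few lines of code. The following sketch (an illustration of the set-formation idea, not an official formalization) shows why three elements require two binary steps:

```python
def merge(a, b):
    """External Merge: exactly two syntactic objects in, one set out."""
    return frozenset({a, b})

step1 = merge("a", "b")      # {a, b}
step2 = merge("c", step1)    # {c, {a, b}} -- binary branching throughout
```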

The idea that this piece of P&P syntax may follow from virtual conceptual necessity has recently been related to various suggestions to the effect that binary branching trees (which, at a suitable level of abstraction, are pervasive in nature) may be computationally more efficient/economical than other kinds of representations (see, for example, Medeiros 2008; Boeckx 2014). If this turns out to be the case, the convergence of virtual conceptual necessity and computational economy/efficiency is the type of result that scientists would regard as strongly suggesting that the minimalist program is on the right track.

Chomsky also pointed out that refraining from imposing an upper bound on the number of applications of Merge yields recursive structures, and thus captures the essence of what allows language to make infinite use of finite means. Likewise, allowing Merge to recombine members of the sets it forms—what Chomsky recently called internal merge—yields a version of displacement. Note that no additional condition is needed to allow displacement. In fact, as Chomsky (2004) points out, it would take an extra condition to disallow it. Note also the pleasing symmetry between the operation that yields displacement and the basic operation that combines two elements (Merge). The emergence of economy conditions on derivations and representations, the consequences of virtual conceptual necessity, and the search for unity and symmetry in syntactic operations and representations now define what we take to constitute the true character of linguistic principles. Such guidelines play themselves out in different ways, depending on which particular one is stressed. For example, in the wake of Baker’s (1985) Mirror Principle and Kayne’s (1994) Linear Correspondence Axiom, several researchers have explored the degree of transparency between syntactic representations and morphological or linear orders. In practice this has led to the so-called Cartographic Project, which connects the robust restrictions one observes at the level of the morphological make-up of words and the linear order of elements in a sentence to very rich, fine-grained, highly articulated series of phrases in the clause (see Rizzi 1997; Cinque 1999). The approach is animated by minimalist concerns, as it pays attention to the nature of the mapping between narrow syntax and the interfaces, and also as it reflects on the nature of the kind of computation needed and its cost. As Rizzi (2004: 9) puts it, “one driving factor of the cartographic endeavor is a fundamental intuition of simplicity (…). Complex structures arise from the proliferation of extremely simple structural units: ideally, one structural unit (a head and the phrase it projects) is defined by a single syntactically relevant feature”. Rizzi (2004: 10) goes on to point out that “local simplicity is preserved by natural languages at the price of accepting a higher global complexity, through the proliferation of structural units. … Recursion is cheap; local computation is expensive and to be reduced to the minimum”.
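Returning for a moment to the symmetry between external and internal Merge noted above: extending the earlier sketch makes it visible that displacement requires no new operation, only the re-application of Merge to material the structure already contains (again, an illustration only, not an official formalization):

```python
def merge(a, b):
    return frozenset({a, b})

def contains(obj, x):
    """True if x occurs anywhere inside the set structure obj."""
    return isinstance(obj, frozenset) and (
        x in obj or any(contains(member, x) for member in obj)
    )

def internal_merge(phrase, x):
    """Internal Merge: re-merge an element already inside the phrase at
    its root, yielding the displacement configuration {x, phrase}."""
    assert contains(phrase, x), "internal merge reuses existing material"
    return merge(x, phrase)

vp = merge("saw", "what")              # {saw, what}
cp = merge("C", vp)                    # {C, {saw, what}}
question = internal_merge(cp, "what")  # {what, {C, {saw, what}}}
```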

A different, though ultimately related, kind of approach capitalizes on Chomsky’s 1993 idea that syntactic operations are subject to Last Resort. Such an idea, first formulated in the context of movement, has been extended to all syntactic relations, and has led to the development of highly constrained, so-called crash-proof models of grammar, where lexical features and their mode of licensing play a significant role (see Frampton and Gutmann 2002; Adger 2003; among others). This line of inquiry connects with the Cartographic Project in that it leads to elaborate feature organizations (geometries) that mirror the series of heads posited in Cartographic studies. I anticipate an important degree of convergence between these two families of approaches in the near future.

A third type of minimalist studies, which emerged more recently, has de-emphasized the role of specific features in driving syntactic computations and paid more attention to the consequences of assuming a more derivational architecture, where small chunks of syntactic trees (aka “phases”) are sent cyclically to the interfaces (see Epstein et al. 1998; Uriagereka 1999; Chomsky 2000a). This type of approach (articulated by Chomsky over a series of papers beginning with Chomsky 2000a; see Chomsky 2001b, 2004, 2007, 2008) seeks to turn the economy principles of the early minimalist period into theorems, and generally minimizes the size of the output sent to the interfaces. Because they adhere to an even more minimal inventory of properties of lexical items (“features”) and structures, such studies only ensure the bare minimum (legibility requirement) at the interfaces. As a result, a fair amount of filtering must be performed by the external systems to ultimately characterize “well-formed” expressions. In my view, this marks a partial return to the Filtering architecture that characterized much of the P&P era (see Chomsky and Lasnik 1977; Lasnik and Saito 1984, 1992), and that goes back in many ways to Chomsky and Miller (1963). The studies under consideration relate rather naturally to models of the lexicon and morphology that deflate the pre-syntactic lexicon (Hale and Keyser 1993, 2002; Halle and Marantz 1993; Marantz 2000, 2006; Borer 2005). They also fit particularly well with Neo-Davidsonian semantic representations (Pietroski 2003; Boeckx 2014).
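The derivational, phase-based picture can likewise be caricatured in a few lines. In the sketch below (a crude rendering under my own simplifying assumptions: trees are (head, complement) pairs, and C and v are taken to be the phase heads), each phase head ships its complement off to the interfaces and renders it opaque to further computation:

```python
PHASE_HEADS = {"C", "v"}

def spell_out(node, interfaces):
    """Bottom-up pass: at each phase head, send the complement to the
    interfaces and make it inaccessible to later syntactic operations."""
    if not isinstance(node, tuple):
        return node
    head, complement = node
    complement = spell_out(complement, interfaces)
    if head in PHASE_HEADS:
        interfaces.append(complement)  # transferred cyclically, in chunks
        complement = "<spelled-out>"
    return (head, complement)

shipped = []
clause = ("C", ("T", ("v", ("V", "it"))))
print(spell_out(clause, shipped))  # ('C', '<spelled-out>')
print(shipped)                     # [('V', 'it'), ('T', ('v', '<spelled-out>'))]
```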

It is impossible for me to review in a brief overview like this one the range of results that minimalist theorizing has already achieved, but I would like to highlight three conclusions that seem to be gaining significance and plausibility as minimalist inquiry progresses. The first pertains to the external systems to which the core computational system relates. It was common in the early minimalist period to define the strong minimalist thesis by making reference to both sound/sign and meaning, the two external systems that syntax interfaces with. However, it has become increasingly clear in recent years that syntax appears to be designed primarily to interface with meaning. Chomsky puts it thus:

It may be that there is a basic asymmetry in the contribution to language design of the two interface systems: the primary contribution to the structure of [the] F[aculty of ] L[anguage] may be optimization of the C-I [sense] interface.

(Chomsky 2008: 136)

The privileged status of meaning over externalization has implications beyond the narrow concerns of syntactic analyses, and is likely to play a prominent role in biolinguistic studies, which focus on the place of language in cognition and its evolution. It has also become clear (though it is rarely made explicit, but see Hornstein 2009; Boeckx 2011) that if minimalist research is on the right track, syntax per se is unique in the sense that it is not subject to parametric variation, and furthermore is virtually unaffected by points of variation, which must, by necessity, be relegated to the margin of FL (specifically, the morpho-phonological component of the grammar). Boldly put, minimalist syntax marks the end of parametric syntax (which relies on there being parameters within the statements of the general principles that shape natural language syntax), and leaves no room for an alternative to Borer’s (1984) conjecture that parameters are confined to lexical properties. This much should be clear: If minimalist research is taken seriously, there is simply no way for principles of efficient computation to be parametrized. It strikes me as implausible to entertain the possibility that a principle like “Least Effort” could be active in some languages but not in others. In other words, narrow syntax solves interface design specifications optimally in the same way in all languages (contra Baker 2006 and Fukui 2006). I believe that this conclusion is a natural consequence of the claim at the heart of the generative/biolinguistic enterprise that there is only one language, Human, and that this organ/faculty emerged very recently in the species, too recently for multiple solutions to design specifications to have been explored.

Thirdly, as Hornstein (2001, 2009) has stressed, minimalism marks the end of grammatical modules. All versions of the P&P model prior to the advent of minimalism took FL to be internally modular, consisting of a variety of largely independent domains, whose combined results yielded surface outputs. By adhering to a radically impoverished notion of FL, minimalists are entertaining the possibility that the core computational system does not contain a variety of internal modules. Rather, simple processes combine with interface conditions to yield surface outputs. This too makes a lot of sense from an evolutionary perspective, given the short window of time we are dealing with in the context of language (see Hornstein 2009).

Let me conclude this section by stressing once again that linguistic minimalism is a program, not a theory. A program is a bit like a genome. In isolation it is insufficient to define the shape of an organism, or a theory. It offers certain regulatory principles that constrain the development of an organism/theory. It forces one to pay attention to the role of other factors that come into play during development. In the context of minimalism, the nature of the external systems with which the core computational system interfaces is all-important, and must be investigated in tandem with syntactic analyses. In this sense, minimalism marks the end of linguistic isolationism, and opens up fresh perspectives for an overall theory of cognition, as I make more explicit in section 18.5.

18.3 A New (Radical) Chapter in Rationalist Psychology

Although the plausibility of a minimalist program for linguistic theory was first made explicit in the early 1990s, the suspicion that FL exhibits non-trivial design properties reaches further back. It is, in fact, a fairly natural expectation once we take into account the rationalist roots of the generative enterprise.

I here want to distinguish between two ways of construing the minimalist project. In exploring this minimalist program for linguistic theory, linguists are not just answering the universal Occamist urge to explain with the smallest number of assumptions (what one might call “pragmatic minimalism”, or the weak minimalist thesis); they insist that minimalist analyses really go to the heart of FL as a natural object (“semantic/ontological minimalism”, or the strong minimalist thesis). Principles of well-designed analysis (to be distinguished from principles of good design in nature) have always been part of generative studies, but until Chomsky (1993) no one had thought (explicitly) about elevating principles of good analysis to the ontological or metaphysical level; to move from the nominalist to the realist position. There is a good reason for this. Generative grammarians were concerned primarily with a more immediate problem, the logical problem of language acquisition (what Chomsky 1986a called “Plato’s Problem”), and the issue of explanatory adequacy. Until some advance could be made on that front, a minimalist program for linguistic theory would have been premature. It was only once the P&P model had stabilized, and it had been shown how general principles could be extracted and segregated from points of variation (parameters), that it became methodologically sound to try to make sense of these principles in more explanatory terms.

Nevertheless, one can find hints of a minimalist impulse in several early works by Chomsky (for a more extended discussion of these hints than I can afford here, see Freidin and Vergnaud 2001 and Boeckx 2006; see already Otero 1990). As early as 1951, Chomsky wrote (p. 1) that considerations of “simplicity, economy, compactness, etc. are in general not trivial or ‘merely esthetic.’ It has been recognized of philosophical systems, and it is, I think, no less true of grammatical systems, that the motives behind the demand of economy are in many ways the same as those behind the demand that there be a system at all [Chomsky here refers to Nelson Goodman’s work—CB]”. This could be paraphrased in a more modern context as “the motives behind the strong minimalist thesis and the search for optimal design are in many ways the same as those behind the demand that there be a (rational(ist)) explanation for properties of FL at all”. Short of finding a way of making perfect sense of FL, explanations are bound to be to some degree arbitrary, never going beyond the level of explanatory adequacy. It is perhaps because adaptationist explanations in biology tend to involve a fair amount of contingency, almost by necessity (see Monod 1971; Gould 1989, 2002), that Chomsky has for many years stressed the possibility that “some aspects of a complex human achievement [like language] [may be the result of] principles of neural organization that may be even more deeply grounded in physical law” (Chomsky 1965: 59). For it is in physics that minimalist conjectures have been pursued with much success since Galileo. It is in this domain that one most readily expects good design principles at work. Chomsky’s guiding intuition, and the basic contrast between biology and physics, are made clear in this passage from 1982 (Chomsky 1982: 23; but see already Chomsky 1968: 85):

It does seem very hard to believe that the specific character of organisms can be accounted for purely in terms of random mutation and selectional controls. I would imagine that the biology of 100 years from now is going to deal with evolution of organisms the way it deals with evolution of amino acids, assuming that there is just a fairly small space of physically possible systems that can realize complicated structures.

Citing the work of D’Arcy Thompson, Chomsky points out that “many properties of organisms, like symmetry, for example, do not really have anything to do with a specific selection but just with the ways in which things can exist in the physical world”. It seems quite clear to me that Chomsky saw in the success of P&P the very first opportunity to pursue his long-held intuition that the design of FL (and, beyond it, of complex biological systems) has a law-like character, determined in large part by very general properties of physical law and mathematical principles.

Already in the 1970s, when he was attempting to unify the locality conditions that Ross (1967) had called “islands”, Chomsky sought to explain these “in terms of general and quite reasonable ‘computational’ properties” (see Chomsky 1977: 89; see also Chomsky 1973: sec. 10), but the degree of specificity of these principles was still quite considerable. And, as Chomsky (1981: 15) notes, considerations of elegance had been subordinated to “the search for more restrictive theories of UG, which is dictated by the very nature of the problem faced in the study of UG” (the logical problem of language acquisition/the search for explanatory adequacy). (p. 439)

Much like one can see hints of linguistic minimalism at work in some of Chomsky’s early works and much like one can read Chomsky’s most recent writings on the optimal character of language as the systematic exploration of the scattered references to natural laws present already in the 1960s, I think one can see the formulation of the minimalist program as another chapter (albeit a fairly radical one) in Cartesian linguistics, another application of rationalist psychology. When Chomsky wrote Cartesian Linguistics in 1966, he was concerned with showing that his own arguments against behaviorism (Chomsky 1959) emphasized certain basic properties of language, such as the creative aspect of language use, or the innate basis of knowledge, that Descartes, Leibniz, and other rationalists of the 17th and 18th century had already identified. Chomsky wanted to stress how much would be lost if these insights were obscured by pseudo-scientific approaches to language such as behaviorism, and how much would be gained by trying to shed further light on the issues that, say, Neo-Platonists brought up in the context of meaning and the nature of concepts (a task that Chomsky has pursued to the present; see Chomsky 2000b; McGilvray 2009). Furthermore, he wanted to demonstrate that certain intuitions in the technical works of Port-Royal grammarians matched pretty closely (or could easily be reinterpreted in terms of) what was being done at the time in generative grammar. At the same time, Chomsky was stressing how recent advances in modern mathematics, once applied to language as he had done in Chomsky 1955, 1957, could sharpen some intuitions about the nature of language such as Humboldt’s aphorism about language making infinite use of finite means.

Today I think that another aspect of rationalist thought could be said to animate modern (bio-)linguistics, under the impetus of the minimalist program. This aspect pertains to the rationalist conception of life, the sort of conception that was advocated by Geoffroy Saint-Hilaire, Goethe, Owen, and, more recently, D’Arcy Thompson and Turing—those that Kauffman (1993) refers to as the rationalist morphologists.

As Amundson (2005) recounts in his masterful revisionist history of biological thought, the rationalist tradition in biology was obscured not so much by Darwin himself as by all the architects of the modern evolutionary synthesis. Rationalist morphologists had as their main ambition to develop a science of form. They saw development (in both its ontogenetic and phylogenetic senses) as governed by laws, revealing a certain unity (of type). They focused on form, and treated function as secondary. They de-emphasized the role of what we would now call adaptation and the power of the environment to shape the organism, and favored internalist explanations according to which development (again, in both its ontogenetic and phylogenetic senses) was channeled by physical constraints. Quite correctly, they saw this as the only research strategy to attain a truly explanatory theory of form, a true science of biology. Not surprisingly, they saw it as necessary to resort to idealization and abstraction to reveal underlying commonalities (such as Owen’s archetype).

In contrast to all of this, neo-Darwinians, led by Ernst Mayr, focused on function, adaptation, change, and the exuberant variety of life. They were empiricists, as is the majority of working (evolutionary) biologists today. But as we will see in the next section, the tide is changing, and laws of form are making an emphatic come-back. There is no doubt that the rationalist morphologists would have been pleased to see the sort of naturalistic, reason-based account of FL advanced in the minimalist program. The attempt to go beyond explanatory adequacy is one of the most sustained attempts in linguistics to develop a science of linguistic form, one that abides by the principle of sufficient, or sufficiently good, reason, and views arbitrary aspects of language with the same skepticism with which rationalist morphologists treated contingency.

It may not be too far-fetched to say that just as generative grammar sharpened Humboldt’s intuition, minimalist work sharpens the intuition that rationalist morphologists might have had about language. Of course, no one would have been bold enough to regard language as an optimally designed system in the 17th or 18th century, for they lacked the prerequisite achievement of explanatory adequacy. Though the term Universal Grammar was in use in the 17th and 18th centuries, the Port-Royal grammarians never went as far as proposing anything remotely like a P&P model, so any attempt to go beyond explanatory adequacy would have been as premature as the minimalist program seemed to be until a decade or so ago. But the minimalist program is imbued with (radical) rationalism, and promises to shed significant light on a key aspect of what makes us human, thereby contributing to the elaboration of Hume’s Project of a Science of Man, itself an extension of Descartes’ mathesis universalis.

Let me conclude this section by pointing out that although I have been at pains to trace back the origin of minimalist thought, I cannot fail to mention that linguistics, even more so than biology, is a very young science. And while it is too easy to forget some of its roots, it is also too easy to forget how truly remarkable it is that already now we can formulate a few minimalist concerns with some precision.

18.4 On the Plausibility of Approaching FL with Galilean Lenses

The approach of the rationalist morphologists touched on above was very “Galilean” in character. Steven Weinberg, who introduced the term into physics, characterized it thus:

… we have all been making abstract mathematical models of the universe to which at least the physicists [read: scientists—CB] give a higher degree of reality than they accord the ordinary world of sensation.

(Weinberg 1976)

The Galilean style also characterizes Descartes’ work, and is the central aspect of the methodology of generative grammar, as explicitly recognized in Chomsky (1980; see also Hornstein 2005). It is, in other words, another aspect of Cartesian linguistics, or science in general. Galileo notes that

[in studying acceleration] … we have been guided … by our insight into the character and properties of nature’s other works, in which nature generally employs only the least elaborate, the simplest and easiest of means. For I do not believe that anybody could imagine that swimming or flying could be accomplished in a simpler or easier way than that which fish and bird actually use by natural instinct.

(Galileo 1638 [1974]: 153)

Elsewhere, Galileo states that nature “always complies with the easiest and simplest rules”, and that “nature … does not that by many things, which may be done by few” (1632 [1962]: 99).

The Galilean program is thus guided by the ontological principle that “nature is perfect and simple, and creates nothing in vain” (see, for example, Galileo 1632 [1962]: 397). This outlook is exactly the one taken by minimalist linguists. Indeed, it can be said that the minimalist program is the most thoroughgoing application of the Galilean style in linguistic science, the most radical form of rationalist/Cartesian linguistics yet. The guiding assumption in minimalism is Gödel’s basic axiom that Die Welt ist vernünftig (the world is full of rationality). The road to Galilean science is to study the simplest system possible, for this is where one is most likely to find intelligibility (rationality). As Stephen Jay Gould never tired of emphasizing (see Gould’s 1983 delightful essay on explanation in biology, “How the zebra gets its stripes”), aesthetic styles (Holton would call them themata; see Holton 1973) profoundly influence the practice of science. Cognitive science is no exception, as Piattelli-Palmarini (1980) already noted in the context of the famous Chomsky–Piaget debate.

The problem for minimalists is that their emphasis on the Galilean style of explanation has made them look like heretics in Darwin’s court. As I mentioned already in the previous section, the modern synthesis in evolutionary biology has not been kind to rationalist thinking and its Galilean style. Whereas it is the style of choice in physics, it has clearly been marginalized in biology. I discuss this clash of scientific styles or cultures at length in Boeckx (2006, ch. 4), so I won’t belabor the point here. As Fox-Keller (2002) clearly states, biologists are not sympathetic to idealization, seeing it as a “weakness”, a lack of “satisfying explanation” (p. 74), always requiring “more measurement and less theory” (p. 87). Francis Crick (1998: 138) makes essentially the same point when he states that “while Occam’s razor is a useful tool in physics, it can be a very dangerous implement in biology”. Chomsky himself, already in the early P&P days, was aware of the conflicting outlooks, as he wrote (in a way that highlights once again how latent minimalism was in his earlier writings):

This approach [which Chomsky does not name, but he clearly has the Galilean style in mind—CB], … is based on a guiding intuition about the structure of grammar that might well be questioned: namely, that the theory of core grammar, at least, is based on fundamental principles that are natural and simple, and that our task is to discover them, clearing away the debris that faces us when we explore the varied phenomena of language and reducing the apparent complexity to a system that goes well beyond empirical generalization and that satisfies intellectual or even esthetic standards. … but it might be that this guiding intuition is mistaken. Biological systems—and the faculty of language is surely one—often exhibit redundancy and other forms of complexity for quite intelligible reasons, relating both to functional utility and evolutionary accident.

(Chomsky 1981: 14)

It is for this reason that, although the generative enterprise is firmly grounded in biology, the perspective advocated by minimalists has been deemed “biologically implausible” by many linguists and cognitive scientists alike (see Pinker and Jackendoff 2005; Parker 2006; Marcus 2008; among many others). Jackendoff (1997: 20) nicely sums it up when he says: “it is characteristic of evolution to invent or discover ‘gadgets’. (…) The result is not ‘perfection’.” Jackendoff goes on to say that he would “expect the design of language to involve a lot of Good Tricks (…) that make language more or less good enough. (…) But non-redundant perfection? I doubt it”.

I can certainly see how familiarity with popular accounts of evolutionary biology can lead to the claim that linguistic minimalism is biologically implausible, but I have no doubt that the burden of proof will soon shift. Several established figures in biology have started advocating for an enrichment of the standard model in biology (the modern synthesis that emerged some 50 years ago). Gould (2002) made a giant plea for (theoretical) pluralism. Far from advocating a wholesale rejection of the Darwinian perspective, Gould stressed the need to recognize non-adaptationist modes of analysis when tackling the problem of form, including the sort of methodology that D’Arcy Thompson’s Growth and Form represents at its best (see especially Gould 2002: ch. 11).

Although this feeling has not yet reached the popular press, more and more biologists feel that the ultra-adaptationist perspective at the heart of the modern synthesis cannot produce results that qualify as intelligible (read: rational(ist), satisfactory) explanations, and confines biology to a lesser scientific status (making biology “unique”; see Mayr 2004). A growing number of biologists side with Lynch’s (2007) opinion that “many (and probably most) aspects of genomic biology that superficially appear to have adaptive roots … are almost certainly also products of non-adaptive processes”. How could it be otherwise, with so few genes (as genomics continues to reveal) for so much complexity? Pigliucci (2007a) is right to contrast this with the evolutionary psychologists’ contention that natural selection should be treated as the default explanation for complex phenotypes; see Dennett 1995 and Pinker 1997, who take Dawkins 1976, 1986 as gospel. I wish they remembered Darwin’s claim at the beginning of The Origin of Species that “natural selection is … not [the] exclusive means of modification” (Darwin 1859 [1964]: 6).

As Carroll (2005) points out, the modern synthesis has not given us a theory of form, the sort of theory that pre-Darwinians were after. But, as the pre-Darwinians recognized, form is central. As Goodwin and Trainor (1983) write (in a passage that could be lifted from Chomsky’s writings), “… the historical sequence of forms emerging during evolution is logically secondary to an understanding of the generative principles defining the potential set of forms and their transformations”.

Echoing Gould (2002), Pigliucci (2007b) is right to say that biology is in need of a new research program, one that stresses the fact that natural selection may not be the only organizing principle available to explain the complexity of biological systems. It is not just all tinkering; there is design too. Pigliucci reviews numerous works that provide empirical evidence for non-trivial expansions of the modern synthesis, with such concepts as modularity, evolvability, robustness, epigenetic inheritance, and phenotypic plasticity as key components.

With minimalist themes in the immediate background, Piattelli-Palmarini (2006) notes that the sort of (adaptive) perfection or optimization that neo-Darwinians routinely endorse is just not plausible. There simply hasn’t been enough time to optimize organisms gradually. What is needed is “instantaneous” optimization, optimization without search or exploration of alternatives. Empirical results in this domain are coming in, beginning with Cherniak et al.’s (2004) characterization of the neural connectivity of the cortex as the best solution among all conceivable variants. Optima in structures, motions, behaviors, and life-styles are now frequently recognized in the pages of Science or Nature, and none of them seem to be the outcome of long-sought, hard-won, gradualistic adaptations. (The latest example of this trend to reach me is a study of the bird’s optimal wing stroke [Dial et al. 2008], which vindicates Galileo’s claim quoted above that flying could not be achieved in a simpler way than that which the bird uses.) At the same time, the study of biological networks (“systems biology”) reveals “special features that give hope that [such] networks are structures that human beings can understand” (Alon 2003: 1867). Biology is finally yielding to intelligibility. Elsewhere (Alon 2007), Alon writes that biological networks of interactions are simpler than they might have been (or might have been expected to be). Alon clearly states that cells often seem to economize, relying as they do on only a few types of patterns called network motifs that capture the essential dynamics of the system. Alon stresses that approaches that seek to reveal such motifs must rely on abstract representations, focus on the essential aspects, and suppress details—in good Galilean style. Alon is right to conclude his article on “simplicity in biology” by saying that “simplicity in biology must be emphasized” so as to “encourage the point of view that general principles can be discovered”. For “without such principles, it is difficult to imagine how we might ever make sense of biology on the level of an entire cell, tissue, or organism” (Alon 2007: 497).

The very same conclusion applies in the domain of linguistic minimalism. Without adhering to the Galilean style, without the strongest possible emphasis on simplicity in language (the strongest minimalist thesis), it is hard to imagine how we might ever make sense of the properties of FL. Chomsky (2004: 124) correctly remarks that “insofar as [minimalism makes progress in capturing properties of FL], the conclusions will be of significance, not only for the study of language itself ”. If simplicity and efficiency of design are found at the level of the cell and at the level of FL, it is not implausible to expect the same sort of simplicity everywhere in between these two relatively extreme realms of biological structures. Perhaps, then, minimalist pursuits will provide biologists with another model organism in their quest for a science of form.

18.5 The Prospects of Approaching UG from Below: “Applied Minimalism”

I have emphasized the metaphysical commitments of linguistic minimalism because it is clear that with the advent of minimalism, linguistics got philosophical, in Whitehead’s (1925) sense: “If science is not to degenerate into a medley of ad hoc hypotheses, it must become philosophical and must enter upon a thorough criticism of its own foundations.” To me, a major part of the excitement of doing linguistics comes from what it reveals about the mind and our species. But I confess that it is now possible to relegate the metaphysical implications of minimalism to the background, and characterize the core methodological principle of minimalist research in a more neutral fashion, by saying (as Chomsky has done in 2007) that minimalism essentially amounts to “approaching UG from below”.

This characterization of the minimalist program becomes especially apt when we realize how closely related minimalist research is to the re-emergence of biolinguistic concerns. Chomsky has remarked that

Throughout the modern history of generative grammar, the problem of determining the character of FL has been approached “from top down”: How much must be attributed to UG to account for language acquisition? The M[inimalist] P[rogram] seeks to approach the problem “from bottom up”: How little can be attributed to UG while still accounting for the variety of I-languages attained?

(Chomsky 2007: 4)

Concretely, approaching UG from below means that the inventory of basic operations at the core of FL must be reduced to a minimum, and much of the richness previously attributed to UG must be re-evaluated: it must be shown to be the result of simple interactions, or else must be attributed to the external mental systems that the core computational system of language interacts with.

Minimalism has forced us to rethink syntax from the ground up (as well as phonology, see Samuels 2009, and semantics, see Pietroski forthcoming, Uriagereka 2008, and Hinzen 2007), and find out what is most fundamentally true of, or constitutive of, what Hauser et al. (2002) have dubbed the faculty of language in the narrow sense. At the same time, the strongest minimalist thesis requires us to make informed hypotheses about the nature of the external systems that FL serves, which form the faculty of language in the broad sense. As soon as one says that the core computational system of language meets interface demands in an optimal manner, one is forced to adopt an interdisciplinary approach to the study of language. Unsurprisingly, the minimalist program is characterized on the dust jacket of Chomsky (1995) as an “attempt to situate linguistic theory in the broader cognitive sciences”.

If indeed much of the specificity of language turns out to be the result of its place in the topography of the mind, it won’t do to restrict one’s attention to linguistic data to understand language (FL). The systems with which language interacts are bound to be illuminated by minimalist inquiry. Unsurprisingly, questions of meaning, and the relationship between syntactic form and conceptual structures, have made an emphatic come-back (see Hale and Keyser 2002; Borer 2005; Pietroski 2006; Reinhart 2006; Hinzen 2007; Uriagereka 2008; Boeckx 2014), as meaning is, in the eyes of many, “the holy grail of the sciences of the mind”.

Several authors (see Boeckx 2006; Reuland 2006; Hornstein 2009) have noted that the search for basic principles of organization renders FL cognitively and biologically more plausible. Reuland aptly characterizes this state of affairs by saying that the original P&P principles were too good to be false, but much too specific and parochial to be true.

The high degree of specificity of linguistic principles (which I hasten to stress is not specific to the P&P approach pursued by Chomsky and colleagues but is shared by virtually all linguistic frameworks I am aware of: HPSG, LFG, “Simpler Syntax”, Relational Grammar, etc.) was necessary to begin to understand the logical problem of language acquisition, but led to a certain feeling that interchanges between linguists and cognitive scientists are “sterile” (see Poeppel and Embick 2005). Poeppel, in particular, stresses that the major obstacle in this respect is the granularity mismatch problem: there is a chasm between the degree of abstraction and specificity of linguistic representations and what neuroscientists can today understand. As I suggested in Boeckx (2006, ch. 4; see also Hornstein 2009), it is not too implausible to think that the focus on basic, elementary operations and representations in minimalism may help bridge this gap. As Poeppel himself notes, at least some of the operations that minimalists are entertaining (concatenate, merge, copy, linearize, etc.) could conceivably be implemented in neural networks. I don’t expect the task to be quick and easy, but I would bet that minimalism has a role to play in turning the mystery of brain implementation into a problem. (Marantz 2005 appears to be equally confident, and deserves credit for resuscitating, with the help of his students, the much-maligned, but so attractive, derivational theory of complexity, which looks plausible again in light of linguistic minimalism.)

Similarly, there is no denying that minimalist constructs allow one to entertain more plausible scenarios concerning the evolution of language than their theoretical predecessors did. Piattelli-Palmarini’s (1989) claim that no simple-minded adaptationist account, of the sort put forth by Pinker and Bloom (1990), is likely to be correct still strikes me as exactly right. Piattelli-Palmarini was right to emphasize that many of the properties of language are not amenable to an account that sees communication as the essence of language. But it is fair to say that at the time Piattelli-Palmarini wrote his important essay, the non-adaptationist alternative invoking laws of form didn’t look too promising either. How could very general laws of form yield the degree of specificity that the P&P model was made of? Here, too, one faced a granularity mismatch problem. And just as in the case of neuroscience, I think that linguistic minimalism may help us bridge this gap, and make the laws of form conjecture plausible. Here, too, much work remains to be done (and some, like Lewontin 1990, even think that the task is hopelessly beyond our reach), but an article like Hauser et al. (2002; see also Fitch et al. 2005), and the exchanges and studies that it helped foster, demonstrate, I think, that progress can be made in this difficult domain.

It is true, as Fitch et al. (2005) remark, that a hypothesis like theirs is strictly speaking logically independent of the minimalist program, but it is hard to see how in practice such a minimal view of the narrow faculty of language as the one they entertain can be seriously put forth without the conceptual and empirical backing of some minimalist conjecture.

As Wexler (2004) points out in a slightly different context, progress in linguistics—in particular, the advent of a minimalist program—may well turn out to be indispensable in making headway in the two key areas, the logical problems of brain implementation and evolution, that Lenneberg (1967) put at the heart of his project of uncovering the biological foundations of language (biolinguistics). Perhaps minimalists will contribute significantly to making Lenneberg’s dream come true.

Let me close this section by pointing out, as Chomsky has done in a very clear fashion in Chomsky (2005), that approaching UG from below relies on the existence of three factors in language design (and indeed, in the design of all organisms, as Lewontin 2000 has stressed): (i) the genetic endowment, (ii) the contribution of the environment, and (iii) principles of growth and laws of form that transcend the limits of “genomic” nativism. It is in its emphasis on these third factor principles that the “bottom-up” approach to UG meets the Galilean style (indeed, the bottom-up approach depends on the plausibility of the latter). It thus turns out that a solution to all the problems of biolinguistics, from Plato’s Problem to Darwin’s Problem, depends, to a varying degree of radicalness, on rationalist tenets. Indeed, they are unthinkable without what I’d call the Cartesian Program. Without this philosophical foundation, the problems of language acquisition and evolution remain mysteries, as the rationalist philosophers were well aware. It is no accident that Herder, Humboldt, and others built on the foundations of Cartesian linguistics to tackle the problem of the emergence of language in the species; they realized that these foundations offered the only ray of hope in addressing this problem (see Viertel 1966; Schreyer 1985), exactly as minimalists do today.

18.6 Closing Remarks

Linguistic minimalism, with its reliance on the Galilean style, and its bottom-up approach that moves away from the standard generative assumption that UG is very rich, may be hard to swallow for many. It is the most radical statement yet that linguistics conducted along rationalist guidelines is not philology by other means. Its focus of attention is not our common-sense notion of language, nor is it even grammar or I-language, but only those aspects that fall within the faculty of language in the narrow sense. By stressing the continued reliance on rationalist assumptions, I have tried to indicate that linguistic minimalism is the latest in a series of thought experiments concerning the nature of language that are carried out in an extremely precarious environment (the generative enterprise), fraught with controversy. Although I believe, with Chomsky, that much of the controversy is misplaced, it won’t disappear easily (as the resurgence of “new empiricism” and the “rethinking of innateness” makes clear). As Lila Gleitman once said, empiricism is innate. The success of minimalism depends in large part on the success of the generative/rationalist program as a whole. In many ways, linguistic minimalism emerges as the logical conclusion of 50 years of research in generative grammar. By giving its most radical expression to the generative enterprise, it makes it possible for the very first time to do what Chomsky envisioned in the opening paragraphs of Syntactic Structures:

The ultimate outcome of these investigations should be a theory of linguistic structure in which the descriptive devices utilized in particular grammars are presented and studied abstractly, with no specific reference to particular languages.

(Chomsky 1957: 11)

The minimalist perspective does not, of course, invalidate other approaches to linguistic phenomena; it in fact gives them a certain intellectual coherence and foundation, as all such approaches implicitly or explicitly rely on constructs that must ultimately meet the challenges of acquisition, evolution, and brain implementation. I think that linguistic minimalism, as an addition to the panorama of linguistic analyses, is much needed, and has unique insights to offer into language, its nature, origin, and use.

Further Reading

As in all previous stages of the generative enterprise, Chomsky’s writings are required reading for understanding the nature of linguistic minimalism. Chomsky’s most recent essays have yet to be put together in book form (as were the early essays collected in Chomsky 1995), and as such, they remain to be streamlined, content-wise, but each one of them is indispensable. If I had to single one out, I’d recommend Chomsky 2004 (“Beyond explanatory adequacy”), to be read in conjunction with the less technical Chomsky 2005 (“Three factors in language design”). In addition to Chomsky’s works, readers can find overviews of linguistic minimalism ranging from the more philosophical (Uriagereka 1998; Boeckx 2006) to the more technical (Lasnik et al. 2005; Hornstein et al. 2006), as well as a very useful anthology of minimalist studies in Bošković and Lasnik (2007).

Notes:

As I stress at the outset of this chapter, this overview is very much minimalism as I see it. But I am indebted to a number of people who opened my eyes to both big and small points over the years, and changed my vista several times along the way: Noam Chomsky, Juan Uriagereka, Norbert Hornstein, Massimo Piattelli-Palmarini, Paul Pietroski, Marc Hauser, Sam Epstein, Bob Berwick, Howard Lasnik, Željko Bošković, Dennis Ott, Bridget Samuels, Hiroki Narita, and Ángel Gallego.