PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). (c) Oxford University Press, 2015. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy).


Cognitive Processing in Bilinguals: From Static to Dynamic Models

Abstract and Keywords

Cognitive processing in bilinguals is the focus of this article. It proposes a move from the current largely static models of multilingual processing to more dynamic models. As in the first edition of this book, the focus is on language production because the models that have been developed for this are detailed, well supported, and currently accepted as the standard. The first section is largely similar to that previously presented. Then, for the transition to a more dynamic model, Hartsuiker and Pickering's comparative study to evaluate different variants is discussed and contrasted with a view based on a dynamic perspective on representation and processing, in which change over time is seen as the most important aspect of processing. Traditional psycholinguistic models and their variants are discussed at length. An analysis of the characteristics of models of bilingual processing based on dynamic systems theory (DST) concludes the article.

Keywords: cognitive processing, bilinguals, static models, psycholinguistic, language production

This chapter proposes a move from the current largely static models of multilingual processing to more dynamic models. As in the first edition of the Oxford Handbook of Applied Linguistics, the focus will be on language production because the models that have been developed for this are detailed, well supported, and currently accepted as the standard. The first section is largely similar to that previously presented (de Bot, 2002). Then, for the transition to a more dynamic model, Hartsuiker and Pickering's (2007) comparative study to evaluate different variants will be discussed and contrasted with a view based on a dynamic perspective on representation and processing, in which change over time is seen as the most important aspect of processing.

1.1 Traditional Psycholinguistic Models and Their Multilingual Variants

For the discussion of language production and code-switching, Levelt's “speaking” model (1989) is taken as a starting point. This is arguably the most established psycholinguistic model available, and various researchers have shown its relevance for bilingual processing (Green, 1993; Myers-Scotton, 1995; Poulisse, 1997). The Levelt model will be discussed briefly here in order to give the reader an idea of the main line of argumentation. More elaborate versions of the model are described in Levelt (1989, 1993) and Levelt, Roelofs, and Meyer (1999); for bilingual processing, see Kormos (2006).

(p. 336) In the speaking model, different modules are distinguished:

  • The conceptualizer

  • The formulator

  • The articulator

Lexical items are stored in the lexicon in separate files for lemmas and lexemes. The different parts can be described briefly as follows:

The conceptualizer translates communicative intentions into messages that can function as input to the speech production system. Levelt distinguishes macroplanning—which involves the planning of a speech act, the selection of information to be expressed and the linearization of that information—from microplanning—which involves the propositionalization of the event to be expressed, the perspective taken, and certain language-specific decisions that have an effect on the form of the message to be conveyed. The output of the conceptualizer is a preverbal message, consisting of all the information needed by the next component, the formulator, to convert the communicative intention into speech. Crucial aspects of the model are the following:

  • There is no external unit controlling the various components.

  • There is no feedback from the formulator to the conceptualizer.

  • There is no feedforward from the conceptualizer to the other components of the model.

This means that all the information that is relevant to the “lower” components has to be included in the preverbal message.

The formulator converts the preverbal message into a speech plan (phonetic plan) by selecting lexical items and applying grammatical and phonological rules. Lexical items consist of two parts, the lemma and the morphophonological form, or lexeme. The lemma represents the meaning and syntax of the lexical entry, whereas the lexeme represents the morphological and phonological properties. In production, lexical items are activated by matching the meaning part of the lemma with the semantic information in the preverbal message. Accordingly, the information from the lexicon is made available in two phases: Semantic activation precedes form activation (Schriefers, Meyer, and Levelt, 1990). The lemma information of a lexical item concerns both conceptual specifications of its use—such as pragmatic and stylistic conditions—and (morpho)syntactic information, including the lemma's syntactic category and its grammatical functions, as well as information that is needed for its syntactic encoding (in particular, number, tense, aspect, mood, case, and pitch accent). Activation of the lemma immediately provides the relevant syntactic information, which in turn activates syntactic procedures. The selection of the lemmas and the relevant syntactic information leads to the formation of a surface structure. While the surface structure is being formed, the morphophonological information in the lexeme is activated and encoded. The phonological encoding provides the input for the articulator in the form of a phonetic plan. This phonetic (p. 337) plan can be scanned internally by the speaker via the speech-comprehension system, which provides the first possibility for feedback.
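The two-phase access just described can be sketched in a few lines of code. This is purely a didactic caricature: the class names, feature sets, and matching rule below are invented for this illustration and make no claim about how the model is actually implemented.

```python
from dataclasses import dataclass, field

# Illustrative only: all names and feature sets are invented for this sketch.

@dataclass
class Lemma:
    meaning: frozenset   # semantic features matched against the preverbal message
    category: str        # syntactic category, e.g., "V"
    syntax: dict = field(default_factory=dict)  # number, tense, aspect, mood, ...

@dataclass
class Lexeme:
    morphology: str      # morphological form
    phonology: list      # phonological segments

@dataclass
class LexicalItem:
    lemma: Lemma
    lexeme: Lexeme

def retrieve(preverbal, lexicon):
    """Two-phase access: semantic (lemma) activation precedes form (lexeme) activation."""
    # Phase 1: match the meaning part of each lemma with the preverbal message.
    best = max(lexicon, key=lambda item: len(item.lemma.meaning & preverbal))
    # Phase 2: only now does the selected item's form information become available.
    return best.lexeme

hit = LexicalItem(Lemma(frozenset({"contact", "forceful"}), "V"),
                  Lexeme("hit", ["h", "I", "t"]))
tap = LexicalItem(Lemma(frozenset({"contact", "light"}), "V"),
                  Lexeme("tap", ["t", "ae", "p"]))
print(retrieve(frozenset({"contact", "forceful"}), [hit, tap]).morphology)  # hit
```

The only point of the sketch is the ordering: form information plays no role during selection, which is what the Schriefers, Meyer, and Levelt (1990) finding supports.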

The articulator converts the speech plan into actual speech. The output from the formulator is processed and temporarily stored in such a way that the phonetic plan can be fed back to the speech-comprehension system and the speech can be produced at normal speed.

A speech-comprehension system connected with an auditory system plays a role in the two ways in which feedback takes place within the model: Both the phonetic plan and the overt speech are passed on to the speech-comprehension system, where mistakes that may have crept in can be traced. Speech understanding is modeled as the mirror image of language production, and the lexicon is assumed to be shared by the two systems.

1.2 Speech Production in Bilingual Speakers

The Levelt model has been developed as a monolingual model and, if one wants to apply it to code-switching and other bilingual phenomena, one needs to clarify to what extent the present model is capable of handling bilingual speech.

In her discussion of learners of a foreign language as bilingual speakers, Poulisse (1997) mentions the following factors that have to be taken into account in a bilingual model:

  1. L2 knowledge is typically incomplete. L2 speakers generally have fewer words and rules at their disposal than L1 speakers. This deficiency may keep them from expressing messages they had originally intended to convey, may lead them to use compensatory strategies, or may lead them to avoid words or structures about which they feel uncertain.

  2. L2 speech is more hesitant, and contains more errors and slips, depending on the level of proficiency of the learners. Cognitive skill theories such as Schneider and Shiffrin's (1977) or J. Anderson's (1982) ACT* stress the importance of the development of automatic processes that are difficult to acquire and hard to unlearn. Less automaticity means that more attention has to be paid to the execution of specific lower level tasks, which leads to a slowing down of the production process and to a greater number of slips because limited attention resources have to be expended on lower level processing.

  3. L2 speech often carries traces of the L1. L2 speakers have a fully developed L1 system at their disposal, and may switch to their L1 either deliberately (motivated switches) or unintentionally (performance switches). Switches to the L1 may, for example, be motivated by a desire to express group membership in conversations in which other bilinguals with the same L1 background participate, or they may occur unintentionally, for example, when an L1 word is accidentally accessed instead of an intended L2 word. Poulisse and Bongaerts (1994) argue that such accidental switches to the L1 are very similar to substitutions and slips in monolingual speech.

(p. 338) Poulisse (1997) argues that the incomplete L2 knowledge base and the lack of automaticity of L2 speakers can be adequately handled by existing monolingual production models, but that the occurrence of L1 traces in L2 speech poses problems for such models. Paradis (1998), on the other hand, claims that neither switches to the L1 nor cross-linguistic influence (CLI) phenomena call for adaptations in existing models. In terms of processing, Paradis argues, CLI phenomena cannot be distinguished clearly from code-switching phenomena; both result from the working of the production system in an individual speaker, and the fact that CLI may sometimes be undesirable in terms of an external model of the target language is not relevant here.

1.3 Language Separation and Language Choice

In dealing with bilingual speakers, there are two aspects that have to be accounted for:

  1. How do those speakers keep their languages apart?

  2. How do they implement language choice?

Psycholinguistically, code-switching and keeping languages apart are different aspects of the same phenomenon. In the literature, a number of proposals have been made on how bilingual speakers keep their languages apart. Earlier proposals involving input and output switches for languages have been abandoned for models based on activation spreading.

On the basis of research on bilingual aphasia, Paradis (2004) has proposed the subset hypothesis, which he claims can account for most of the data found. According to Paradis, words (but also syntactic rules or phonemes) from a given language form a subset of the total inventory. Each subset can be activated independently. Some subsets (e.g., from typologically related languages) may show considerable overlap in the form of cognate words. The subsets are formed and maintained by the use of words in specific settings; words from a given language will be used together in most settings, but in settings in which code-switching is the norm, speakers may develop a subset in which words from more than one language can be used together. The idea of a subset in the lexicon is highly compatible with current ideas on connectionist relations in the mental lexicon (cf. Roelofs, 1992).

A major advantage of the subset hypothesis is that the set of lexical and syntactic rules or phonological elements from which a selection has to be made is reduced dramatically once a particular language/subset has been chosen. The claim in this chapter is that the subset hypothesis can explain how languages in bilinguals may be kept apart, but not how the choice for a given language is made. The activation of a language-specific subset will enhance the likelihood of elements of that subset being selected, but it does not guarantee that elements will be selected from that language only.
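The last point, that subset activation raises but does not guarantee selection from one language, can be made concrete in a toy simulation. Everything here (the words, the resting levels, the boost value, the probabilistic selection rule) is invented for illustration and is not part of Paradis's proposal.

```python
import random

# Toy lexicon: resting activation levels and the subset boost are made-up numbers.
lexicon = {
    "horse": {"language": "EN", "resting": 0.2},
    "paard": {"language": "NL", "resting": 0.2},
    "house": {"language": "EN", "resting": 0.1},
}

def select(target_language, boost=0.5):
    # Activating a language subset adds activation to every item in that subset.
    activation = {word: props["resting"] + (boost if props["language"] == target_language else 0.0)
                  for word, props in lexicon.items()}
    # Selection is probabilistic: the non-target subset keeps a nonzero chance.
    words, weights = zip(*activation.items())
    return random.choices(words, weights=weights)[0]

random.seed(1)
picks = [select("EN") for _ in range(1000)]
print(picks.count("paard") / 1000)  # small but nonzero: activation is no guarantee
```

The boost makes English items far more likely to win, yet the Dutch item is still occasionally selected, which is the pattern of accidental switches discussed by Poulisse and Bongaerts.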

According to the subset hypothesis, bilingual speakers have files for lemmas, lexemes, syntactic rules, morphophonological rules and elements, and articulatory (p. 339) elements that are not fundamentally different from those of monolingual speakers. Within each of these files there will be subsets for different languages, but also for different varieties, styles, and registers. There are probably relations between subsets in different files; in other words, lemmas forming a subset in a given language will be related to both lexemes and syntactic rules from that same language, and phonological rules from that language will be connected with articulatory elements from that language. The way these types of vertical connections are made is, in principle, similar to the way in which connections between elements on the lemma level develop.

Activating a subset in the lexicon on the basis of the conversational setting can result in the activation of a particular language, but it can also result in the activation of a dialect, a register, or a style. These subsets can be activated both top down (when a speaker selects a language for an utterance) and bottom up (when language used in the environment triggers and activates a specific subset; de Bot, 2004). Triggers on different levels—in other words, sounds, words, constructions, but probably also gestures—can activate a subset. An interesting question remains: To what extent in normal conversation is it a conscious decision to use a specific subset? Research on speech accommodation (Street and Giles, 1982) has shown that conversational partners adjust their style of speaking to each other, but largely unconsciously, and the same may happen in bilingual settings in which many factors may define what is the most appropriate style of speaking in that setting.

2. A Comparison of Three Production Models

Hartsuiker and Pickering (2007) compare three models of language production in bilinguals:

  1. The bilingual version of Levelt's speaking model (de Bot, 1992)

  2. Ullman's (2001) declarative/procedural model

  3. Their own model (Hartsuiker, Pickering, and Veltkamp, 2004)

Their main goal is to assess which model is best able to predict cross-linguistic influence (CLI) at the lexical and syntactic levels. The main characteristics of the Levelt model have been presented in the previous sections. The Ullman (2001) model is based on a fairly strict distinction between declarative and procedural knowledge. Ullman (2001) provides evidence from neuroimaging research to show that the two types of knowledge make use of different parts of the brain. Lexical processing is typically based on declarative knowledge, whereas syntactic processing, in particular in highly proficient speakers of a language, is procedural. Another assumption of this model is that age of exposure has more impact on procedural knowledge than on declarative knowledge. Late learners cannot develop procedural (p. 340) knowledge of the second language and therefore have to rely entirely on the declarative knowledge system. Consequently, the model predicts that late learners process syntactic aspects differently from early learners and native speakers. The Hartsuiker, Pickering, and Veltkamp (2004) model is concerned primarily with the interface between the mental lexicon and syntactic processing. Figure 23.1 represents their model in condensed form. It represents the lexical entries in the lexicon of a Spanish-English bilingual.

The link between the lexical concept and the L2 lemma nodes is relatively weak, as indicated by the dotted lines. There is no direct link between the L2 lemma HIT and the L1 lemma GOLPEAR, but there are links with category nodes (VERB) and combinatorial nodes like ACTIVE. How the lemmas are connected to the lexemes and the phonetic realization of the word is not represented in this reduced representation of the model. A significant aspect of the model is that lemmas are labeled for language. So the activation of the conceptual node HIT (X, Y) and the language node L2 leads to the activation of the lemma HIT. There is no separate lexicon for L1 and L2; language selection takes place through the language nodes. Activation of a lemma leads to the activation of syntactic procedures. In this model there are no language-specific sets of syntactic patterns: “Importantly, such combinatorial nodes are connected to all words with the relevant properties, irrespective of language” (Hartsuiker and Pickering, 2007: 481). This implies that grammatical rules may be shared by different languages.

Figure 23.1 Hartsuiker and Pickering's integrated bilingual model (2007: 481). Reprinted from Acta Psychologica, vol. 128, issue 3, Robert J. Hartsuiker and Martin J. Pickering, “Language integration in bilingual sentence production,” July 2008, with permission from Elsevier.
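The selection mechanism described above can be sketched as a conjunction of conceptual and language-node activation, with the combinatorial nodes kept outside the lemmas. The dictionaries and activation rule below are my own simplification of the figure, not code from the authors.

```python
# Simplified sketch of the architecture in the figure: lemmas are tagged for
# language, while combinatorial (syntactic) nodes are shared across languages.
lemmas = {
    "HIT":     {"concept": "HIT(X,Y)", "language": "L2", "category": "VERB"},
    "GOLPEAR": {"concept": "HIT(X,Y)", "language": "L1", "category": "VERB"},
}

# Shared combinatorial nodes: connected to all verbs, irrespective of language.
combinatorial_nodes = {"ACTIVE", "PASSIVE"}

def activate(concept, language_node):
    """A lemma is selected when both its conceptual node and its language node are active."""
    return [name for name, tags in lemmas.items()
            if tags["concept"] == concept and tags["language"] == language_node]

print(activate("HIT(X,Y)", "L2"))  # ['HIT']
print(activate("HIT(X,Y)", "L1"))  # ['GOLPEAR']
```

Because the combinatorial nodes carry no language tag in this sketch, syntactic procedures are equally available to both languages, which is what licenses the model's prediction of equal within- and between-language priming discussed below.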

In their comparison, Hartsuiker and Pickering (2007) look at what the three models predict for four specific aspects:

  (p. 341) 1. Cross-linguistic influence, here restricted to the impact of L1 on L2

  2. Syntactic influences within and between languages as measured by various forms of syntactic priming

  3. The impact of linguistic distance on cross-linguistic influence

  4. The impact of proficiency on cross-linguistic influence

Table 23.1 presents the predictions for the three models. For the first prediction, the authors show that all three models are able to handle cross-linguistic influence, though it would require a weak version of the Levelt/de Bot model in which there is interaction between the proposed separate formulators.

For the second prediction, an extensive analysis of the literature on syntactic transfer in production and syntactic priming is presented. Syntactic priming refers to the finding that speakers tend to have a preference for syntactic patterns that have been used shortly before. In a typical experiment (e.g., Bock, 1986), speakers are presented with a sentence with a particular pattern (e.g., a passive) and are then asked to choose between two versions of a target sentence, one with a similar pattern and one with a different one. Speakers will typically opt for the similar one. This syntactic priming effect has been shown not only in controlled experiments but also in natural conversation (Schoonbaert, Hartsuiker, and Pickering, 2007). All three models can deal with these findings, but the crucial aspect is whether there is a difference in priming between and within languages. In the Levelt/de Bot model, separate but connected formulators have been postulated, which would argue for more within- than between-language syntactic priming. In a later version of the model (de Bot, 2004), this position is slightly modified by assuming that there are language-specific subsets in the larger set of syntactic patterns, but this view has no consequences for this discussion. What the Ullman (2001) model predicts for within/between language priming is not so simple. Hartsuiker and Pickering explain:

Less proficient speakers rely more on declarative knowledge than proficient and native speakers, who rely more on procedural knowledge. Thus, this model is (p. 342) compatible with grammatical influences from one language on the other, but these cross-linguistic influences should be weaker than within language influences. (2007: 482)

Table 23.1. Predictions for three models in Hartsuiker and Pickering (2007)

Prediction                   Levelt/de Bot              Ullman                      Hartsuiker et al. (2004)
Cross-linguistic influence   Yes                        Yes                         Yes
Within CLI/between CLI       Within > between           Within > between            Within = between
Linguistic distance          More distance/more CLI     Unclear                     No effect
Proficiency                  More proficient/less CLI   More proficient/more CLI    No effect

However, it is not clear why this should be the case. Ullman (2001) makes a rather strict distinction between declarative and procedural knowledge and sees few connections between these two types of knowledge, so it is not clear how the L1 procedural knowledge can influence L2 declarative knowledge. If the two types of knowledge remain within their own domains, equal within-language priming would be expected within L1 and L2, whereas no predictions can be made with respect to between-language priming.

For the Hartsuiker et al. (2004) model, the predictions are straightforward: because grammatical rules are shared by different languages, similar priming effects should be found for within- and between-language conditions. The findings on these effects in a series of experiments are mixed. Schoonbaert et al. (2007) compared all possible combinations (L1-L1/L1-L2/L2-L1/L2-L2) of priming with the same data set. They found similar within- and between-language priming effects when the prime and target verbs differed, and they found no differences when the same verb or translation equivalents were used. Hartsuiker and Pickering (2007) refer to several unpublished studies that show no differences between within- and between-language conditions. Therefore, the findings on priming seem to support the Hartsuiker and Pickering model more than the other two models.

The discussion on the impact of linguistic distance on CLI is somewhat muddy and oversimplified. “Hartsuiker et al. (2004) predict no difference between cross-linguistic priming in closely related languages (e.g., Dutch and English) or very distant languages (e.g., Korean and English), as long as the languages have a similar syntactic rule” (Hartsuiker and Pickering, 2007: 485, italics added). It is difficult to falsify such a statement; linguistic distance is defined by the degree of overlap between language systems, and more similar languages will have more overlap than less similar ones. But in all models, it is to be expected that rules that are highly similar between languages are likely to be transferred. The large literature on transfer (see Odlin, 1989, for an overview) convincingly shows that there is more CLI between similar languages than between dissimilar ones, but that does not mean that for equivalent patterns there will be no CLI. Accordingly, finding between-language syntactic priming for datives in English and Korean (Shin and Christianson, 2007, reported in Hartsuiker and Pickering, 2007) does not seem to constitute evidence against the general assumption that more linguistic similarity will lead to more CLI.

Finally, Hartsuiker and Pickering (2007) looked at the relation between level of proficiency and CLI. As they indicate, there are hardly any syntactic priming data that speak to this matter. The three models make different predictions here: in the Levelt/de Bot model, the assumption is that more proficiency will lead to a stronger network with more within- than between-language links, and accordingly less CLI at higher proficiency. Hartsuiker and Pickering claim that in Ullman's (2001) model higher proficiency should lead to more CLI, but it is not clear why they make such a claim. In their own model they predict no effect of proficiency on CLI.

(p. 343) Not surprisingly, Hartsuiker and Pickering conclude that their model best describes the process of CLI in bilingual speakers. The strong point of their contribution is that they have translated rather general theoretical notions into testable hypotheses, but, as the discussion presented may have elucidated, the translation is not without its pitfalls.

3. Toward Dynamic Models of Bilingual Processing

It can be argued that the Hartsuiker and Pickering article represents the current state of the art, in the sense that this type of model is the most prominent one: the literature on bilingual processing largely centers on models of this type. It will be argued in the remainder of this chapter, however, that there may be reasons to move beyond such models because they have a number of rather serious problems.

The main problem is that they are based on underlying assumptions that may no longer be tenable:

  • Language processing is modular; it is carried out by a number of cognitive modules having their own specific input and output and functioning more or less autonomously.

  • Language processing is incremental, and there is no internal feedback or feedforward.

  • Language processing involves operations on invariant and abstract representations.

Because of these underlying assumptions, isolated elements (phonemes, words, sentences) are studied without taking into account the larger linguistic and social context of which they are a part. Also, the models are static and steady state models in which change over time has no role to play. Moreover, studies are based on individual monologue rather than on interaction as the default-speaking situation.

Within the tradition of which such models are a part, these characteristics may be unproblematic, but in recent years new perspectives on cognition have developed that lead to a different view. The most important development is the emergence of a dynamic perspective on cognition in general and on language processing in particular. The most important tenet is that any open, complex system (such as the bilingual mind) interacts continuously with its environment and will change continuously over time. Although a full treatment of dynamic systems theory (DST) as it has been applied to cognition and language is beyond the scope of the present chapter, a brief summary of some aspects is provided. Relevant publications on various aspects of DST and language include those by Port and van Gelder (1995), van Geert (1994), van Gelder (1998), and Spivey (2007). Specific to bilingualism and second language development are works by de Bot, Verspoor, and (p. 344) Lowie (2007) and Larsen-Freeman and Cameron (2008b). The main characteristics of DST are:

  • DST is the science of the development of complex systems over time—complex systems are sets of interacting variables.

  • In many complex systems, the outcome of development over time cannot be predicted, not because the right tools to measure it are not available, but because the interacting variables keep changing over time.

  • Dynamic systems are always part of another system, ranging from submolecular particles to the universe.

  • Systems develop through iterations of simple procedures that are applied over and over again with the output of the preceding iteration serving as the input of the next.

  • Complexity emerges out of the iterative application of simple procedures; therefore, it is not necessary to postulate innate knowledge.

  • The development of a dynamic system appears to be highly dependent on its beginning state—minor differences at the beginning can have dramatic consequences in the long run.

  • In dynamic systems, changes in one variable have an impact on all other variables that are part of the system—systems are fully interconnected.

  • Development is dependent on resources—all natural systems will tend to entropy when no additional energy is added to the system.

  • Systems develop through interaction with their environment and through internal self-reorganization.

  • Because systems are constantly in flux, they will show variation, making them sensitive to specific input at a given point in time and some other input at another point in time.

  • The cognitive system as a dynamic system is typically

    1. situated, in other words, closely connected to a specific here-and-now situation;

    2. embodied, in other words, cognition is not just the computations that take place in the brain but also the interactions with the rest of the human body; and

    3. distributed, in other words, “knowledge is socially constructed through collaborative efforts to achieve shared objectives in cultural surroundings” (Salomon, 1993: 1).
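Two of these tenets, development through iteration of a simple procedure and strong dependence on the beginning state, can be illustrated with the logistic map, a standard textbook example of a dynamic system. The example has, of course, no linguistic content, and the parameter values are arbitrary.

```python
# Logistic map: development through iteration of one simple procedure, with the
# output of each iteration serving as the input of the next.
def iterate(x0, r=3.9, steps=50):
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

a = iterate(0.100000)
b = iterate(0.100001)  # a minimally different beginning state
print(abs(a[5] - b[5]))    # still tiny after a few iterations
print(abs(a[50] - b[50]))  # typically large: the trajectories have long since diverged
```

A difference of one millionth in the beginning state is invisible at first but grows with each iteration until the two trajectories bear no resemblance to each other, the "dramatic consequences in the long run" referred to above.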

Van Gelder (1998) describes how a DST perspective on cognition differs from a more traditional one:

The cognitive system is not a discrete sequential manipulator of static representational structures; rather, it is a structure of mutually and simultaneously influencing change. Its processes do not take place in the arbitrary, discrete time of computer steps; rather, they unfold in the real time of ongoing change in the environment, the body, and the nervous system. The cognitive system does not interact with other aspects of the world by passing messages and commands; rather, it continuously coevolves with them. (1998: 3)

(p. 345) With these notions in mind, let us look at the main characteristics of the models discussed that are part of the information processing tradition.

Language processing is modular: It is carried out by a number of cognitive modules that have their own specific input and output and that function more or less autonomously.

The most outspoken opponent of a modular approach to cognitive processing at the moment is probably Michael Spivey in his book The Continuity of Mind (2007). His main argument is that there is substantial evidence against the existence of separate modules for specific cognitive activities such as face recognition and object recognition. For linguistic theories, this is crucial because in universal grammar (UG)-based theories, a separate and innate language module plays a central role. Distributed processing of language undermines the idea that language is uniquely human and innate because the cooperating parts of the brain are not unique for language, have no specific linguistic knowledge, and work in feedback and feedforward types of structures.

Isolated elements (phonemes, words, sentences) are studied without taking into account the larger linguistic and social context of which they are a part.

If cognition is situated, embodied, and distributed, studying isolated elements is fairly pointless: one needs to investigate them as they relate to other aspects of the larger context, both linguistic and extralinguistic. For example, work by Eisner and McQueen (2006) has shown that the perception of ambiguous phonemes is strongly influenced by the semantics of the context in which that phoneme is used.

Language processing is based on individual monologue rather than on interaction as the default speaking situation.

As Pickering and Garrod (2004) have argued, it is necessary to move away from monologue as the default type of language production and look instead at interaction. The task for a speaker is fundamentally different in interaction as compared with monologue. The literature on syntactic priming mentioned earlier supports this way of looking at production; how language is used depends only partly on the intentions and activities of individual speakers and is to a large extent defined by the characteristics of the interaction.

Language processing is seen primarily as operations on invariant and abstract representations.

In the models presented earlier, and in the information processing approach in general, the assumption is that language processing is the manipulation of invariant entities (words, phonemes, syntactic patterns). In a dynamic approach, this (p. 346) invariance is highly problematic because every use of a word, expression, or construction will have an impact on the way it is represented in the brain. As Spivey indicates:

I contend that cognitive psychology's traditional information processing approach … places too much emphasis on easily labeled static representations that are claimed to be computed at intermittently stable periods over time (2007: 4).

He admits that static representations are the cornerstone of the information processing approach and that it will be difficult to replace them with a concept that is more dynamic, because what is presently available is too vague and underspecified.

Language processing is incremental, and there is no internal feedback or feedforward.

One of the problems of this assumption is that many second-language speakers regularly experience a “feeling of knowing”: they want to say something in the foreign language but are aware that they do not know, or do not have quick access to, a word they are going to need to finish a sentence (de Bot, 2004). This suggests at least some form of (L1) feedforward in speaking. Additional evidence against a strict incremental view is provided in an interesting experiment by Hald, Bastiaanse, and Hagoort (2006), in which speaker characteristics (social dialect) and speech characteristics (high/low cultural content) were varied orthogonally. Listeners heard speakers whose dialect clearly showed their high or low socioeconomic status talk about Chopin's piano music or about tattoos. In a neuroimaging experiment, combinations of high cultural content and low social status led to N400 effects showing that these utterances were experienced as deviant. A comparison with similar sentences containing grammatical deviations showed that the semantic errors were detected earlier than the syntactic ones, a problem for a purely incremental process from semantics to syntax and phonology. The semantics and pragmatics seem to override the syntax in this experiment.

Isolated elements (phonemes, words, sentences) are studied without taking into account the larger linguistic and social context of which they are a part.

If cognition is situated, embodied, and distributed, studying isolated elements is fairly pointless: one needs to investigate them as they relate to other aspects of the larger context, both linguistic and extralinguistic. For example, work by Eisner and McQueen (2006) has shown that the perception of ambiguous phonemes is strongly influenced by the semantics of the context in which the phoneme is used.

Language processing is based on individual monologue rather than on interaction as the default speaking situation.

As Pickering and Garrod (2004) have argued, it is necessary to move away from monologue as the default type of language production and look instead at interaction. (p. 347) The task for a speaker in interaction is fundamentally different from that in monologue. The literature on syntactic priming mentioned earlier supports this way of looking at production: how language is used depends only partly on the intentions and activities of individual speakers and is to a large extent defined by the characteristics of the interaction.

Language processing is seen primarily as operations on invariant and abstract representations.

In the models presented earlier, and in the information processing approach in general, the assumption is that language processing is the manipulation of invariant entities (words, phonemes, syntactic patterns). In a dynamic approach, this invariance is highly problematic because every use of a word, expression, or construction will have an impact on the way it is represented in the brain. As Spivey indicates, “I contend that cognitive psychology's traditional information processing approach … places too much emphasis on easily labeled static representations that are claimed to be computed at intermittently stable periods over time” (2007: 4). He admits that static representations are the cornerstone of the information processing approach and that it will be difficult to replace them with a concept that is more dynamic because what is presently available is too vague and underspecified.

So far, there has been hardly any research on the stability of representations. De Bot and Lowie (2009) report on an experiment that used a simple naming task with high-frequency words. The outcome shows that correlations between different sessions with the same subject, and between subjects, were very low: a word that elicited a fast reaction in one session could elicit a slow reaction in another session or from another individual. This outcome points to variation that is inherent in the lexicon and that results from the constant interaction and reorganization of elements in networks. Elman puts it this way: "We might choose to think of the internal state that the network is in when it processes a word as representing that word (in context), but it is more accurate to think of that state as the result of processing the word rather than as a representation of the word itself" (1995: 207; emphasis added). Additional evidence for the changeability of words and their meanings comes from an ERP study by Nieuwland and van Berkum (2006), who compared ERP data for sentences like "The peanut was in love" versus "The peanut was salted." This type of anomaly typically leads to an N400 effect. The authors then presented the subjects with a story about a peanut that falls in love. After listening to these stories, the N400 effect disappeared, which shows that discourse information can change the basic semantic aspects of words.

To summarize, research on the mental lexicon has so far implicitly relied on the metaphor of a library in which books are opened and closed to explain how access and storage work, reflecting the thinking in terms of static representations. From a dynamic perspective, the library metaphor no longer holds, because the book changes every time someone reads it!

(p. 348) 2.2. Characteristics of DST-Based Models of Bilingual Processing

As may be clear from the argumentation so far, it is necessary to review some of the basic assumptions of the information processing approach on which current models of multilingual processing are based. In the previous section, the main characteristics and the problems related to them were listed. It follows that models are needed that take into account the dynamic perspective, in which time and change are the core issues. As Spivey argues,

The fundamental weakness of some of the major experimental techniques in cognitive psychology and neuroscience is that they ignore much of the time course of processing and the gradual accumulation of partial information, focusing instead on the outcome of a cognitive process rather than the dynamic properties of that process. (2007: 53)

By way of conclusion, some of the characteristics of dynamically based models are listed:

  • Models should take into consideration that languages do not exist as separate entities in the brain and should focus on situation-associated networks instead.

  • Models should include time as a core characteristic—language use takes place on different but interacting time scales.

  • Models should allow for representations that are not invariant but variant and episodic.

  • Models should allow for feedback and feedforward of information rather than assuming a strictly incremental process.

  • Models should recognize that language use is distributed, situated, and embodied; therefore, linguistic elements should not be studied in isolation but in interaction with the larger units of which they are a part.

  • Models should recognize that interaction, rather than monologue, is the focus of research.

Accepting that time and change are the core issues in human cognition implies that new models are needed, but, as Spivey 2007 readily admits, it is difficult to leave established notions and assumptions behind while there is as yet no real alternative. It is our conviction that we will move on to more dynamic models in the years to come, but how that will happen is unclear. Model development in itself is a dynamic process.

Acknowledgment

The author is indebted to Robert B. Kaplan, Ludmila Isurin, and Marjolijn Verspoor for their comments on an earlier version of this contribution.