
PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).


Translation Universals

Abstract and Keywords

Gideon Toury raised the possibility that there might exist universals of translational behaviour. Baker suggests that patterns found across many sets of translated versus non-translated corpora would be good candidates for universal features of translation. Baker's formulations imply that translation universals are cognitive phenomena, whereas Toury speaks of universals of behaviour. Segmentation is essential in translation and interpreting, and it is a kind of segmentation that has no counterpart in unilingual activity: it involves the simultaneous suppression and activation of the right features of the linguistic systems at the right time, in the right proportions to each other, before the translator or interpreter can get started on the conscious parts of the translation process. This can be termed ‘translation unit segmentation’. If the predictions arising from these hypothesized universals were confirmed, new insights would be gained in translation studies.

Keywords: translation universals, hypothesis, translational behaviour, segmentation, translation unit segmentation

6.1 Introduction

The two decades around the turn of the twenty-first century saw an upsurge of interest in the possibility, raised by Gideon Toury in the 1970s (1977; 1980a: 60, italics original), that there might exist ‘universals of translational behavior’. According to Toury, these might include ‘an almost general tendency—irrespective of the translator's identity, language, genre, period, and the like—to explicate in the translation information that is only implicit in the original text’. This idea was subsequently explored by Blum-Kulka (1986/2004), according to whose explicitation hypothesis (1986: 19), a translation will be more explicitly cohesive than its source text ‘regardless of the increase traceable to differences between the two linguistic and textual systems involved’. ‘It follows,’ she continues, ‘that explicitation is viewed here as inherent in the process of translation.’ Blum-Kulka reports on a number of studies of learner English, of learner translations and professional translations, ‘for lack of large scale empirical studies’ of the latter types of text alone. She concludes (1986: 21) that ‘it might be the case that explicitation is a universal strategy inherent in the process of language mediation’ in general and not just of translation, a conclusion more recently arrived at also by Gaspari and Bernardini (2010).

By the early 1990s, thanks to enhanced methods and machinery for the electronic collection and analysis of large corpora of texts, it became possible to undertake the kinds of large-scale empirical studies of professional translations that Blum-Kulka missed, and Mona Baker (1993), then working at the University of Birmingham where colleagues were engaged in building and investigating the COBUILD corpus, set about sketching out a machine-aided research programme aimed at testing Blum-Kulka's hypotheses and others of a similar kind. She presented this programme of research, appropriately enough, in a collection of articles published in honour of John Sinclair, under whose direction the COBUILD programme was progressing, and she did it on the basis of a conviction that:

translated texts record genuine communicative events and as such are neither inferior nor superior to other communicative events in any language. They are however different, and the nature of this difference needs to be explored and recorded. (Baker 1993: 234)

In order for the differences between translations and non-translations to be adequately explored and recorded, it would be necessary to assemble corpora of translated texts much larger than those that had been explored previously (e.g. Vanderauwera's 1985 corpus of around fifty novels translated from Dutch into English). Such corpora could be searched for evidence for or against hypotheses such as the following (Baker 1993: 244–5):
  1. Blum-Kulka's (1986) explicitation hypothesis.

  2. ‘A tendency towards disambiguation and simplification’. Vanderauwera (1985) had found that where a source text contained ambiguous pronouns, its translation would tend to use pronouns that were unambiguous. She also found that where her corpus of Dutch novels contained complex syntax, the translations tended to use structures that would be simpler to process.

  3. ‘A strong preference for conventional grammaticality’. Shlesinger (1991) finds that interpreters tend to ignore errors and to produce complete sentences where those of their source speakers are incomplete; and Vanderauwera (1985) finds that the translations in her corpus were generally more conventional in language use than their source texts.

  4. ‘A tendency to avoid repetitions which occur in source texts, either by omitting them or rewording them’ (Shlesinger 1991, Toury 1991b).

  5. ‘A general tendency to exaggerate features of the target language’ (Toury 1980a, Vanderauwera 1985).

  6. A tendency to mirror SL features in the translated text, though not exactly as they are used in the SL, resulting in a ‘third code’ (Frawley 1984: 168) ‘which is a result of the confrontation of the source and target codes and which distinguishes a translation from both source texts and original target texts [sic] at the same time’ (Baker 1993: 245)—a kind of translational interlanguage (cf. Selinker 1972).

Baker proposes the following methodology for testing these hypotheses. Take a corpus of translations into L from a large number of languages. Take a matched, or comparable, corpus of texts originally written in L. Examine the translated corpus for patterns which occur in it but not (or not with the same frequency) in the non-translated corpus. Do this for large numbers of languages. Patterns that are found across all such sets of translated versus non-translated corpora would be ‘good candidates for universal features of translation’ (Baker 1993: 245).
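Baker's procedure lends itself to a simple computational sketch. The Python fragment below is purely illustrative: the two toy corpora and the candidate feature (the optional complementizer ‘that’, sometimes discussed as an explicitation marker) are invented stand-ins, not data from any actual study. It performs the core comparison which, repeated over many languages and features, would yield candidate universals:

```python
from collections import Counter

def rel_freq(tokens, feature):
    """Relative frequency of a set of feature words in a token list."""
    counts = Counter(tokens)
    return sum(counts[w] for w in feature) / len(tokens)

# Toy stand-ins for a translated corpus and a comparable non-translated
# corpus in the same language (hypothetical data).
translated = "he said that he knew that the man was the one that left".split()
non_translated = "he said he knew the man was the one who left early on".split()

# Hypothetical candidate feature: the optional complementizer 'that'.
feature = {"that"}

f_trans = rel_freq(translated, feature)
f_orig = rel_freq(non_translated, feature)

# Repeated across many source languages, a consistent surplus in the
# translated corpora would make the feature a candidate universal.
print(f"translated: {f_trans:.3f}  non-translated: {f_orig:.3f}")
```

The same skeleton scales to any computationally recognizable feature: only `feature` and the corpora change, while the cross-corpus frequency comparison stays constant.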

6.2 Investigations and Findings

Many scholars have taken up Baker's challenge, including Baker's PhD student Sara Laviosa (then Laviosa-Braithwaite), who was able to benefit from the Translational English Corpus (TEC) designed by Baker and Laviosa at UMIST, where Baker relocated in the mid-1990s. The corpus, now housed along with the Centre for Translation and Intercultural Studies and its staff at the University of Manchester, is freely available to scholars, and contains four types of text translated into English: fiction, biography, news, and in-flight magazines.

Laviosa-Braithwaite (1996) worked with the news and fiction parts of the original corpus (the corpus is being updated constantly) and with a comparable corpus of non-translated texts in English to investigate ‘simplification as a universal of translation’. Of course, it is not possible to program a computer to search for simplification as such; instead, features of text which a computer is able to recognize, and which might contribute to the relative simplicity or complexity of texts, have to be identified as the focus for the electronic search. Laviosa-Braithwaite accordingly searched for average sentence length, for the proportion in the two corpora of lexical words versus grammatical words and of high-frequency versus low-frequency words, and for ‘relatively greater repetition of the most frequent words and less variety in the words most frequently used’, on the assumption that texts containing a relatively large proportion of short sentences, relatively few different lexical words, and a high proportion of frequent words would be simpler than texts containing the opposite. She found that the translated texts did indeed seem simpler in these terms than the non-translated texts, except that translated fiction had longer, not shorter, average sentences than non-translated fiction.
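Each of these indicators can be computed mechanically from a tokenized corpus. The sketch below is illustrative only: the word lists and toy sentences are invented stand-ins for the large closed lists of grammatical and high-frequency words that a study like Laviosa-Braithwaite's would rely on:

```python
# Toy stand-ins for closed word lists; a real study would use large
# reference lists, and these sets are invented for illustration.
GRAMMATICAL = {"the", "a", "of", "to", "and", "in", "was", "he", "it"}
HIGH_FREQ = GRAMMATICAL | {"said", "man", "time"}

def mean_sentence_length(sentences):
    """Average number of tokens per sentence."""
    return sum(len(s) for s in sentences) / len(sentences)

def lexical_density(tokens):
    """Proportion of lexical (content) words among all tokens."""
    return sum(1 for t in tokens if t not in GRAMMATICAL) / len(tokens)

def high_freq_proportion(tokens):
    """Proportion of tokens drawn from the high-frequency list."""
    return sum(1 for t in tokens if t in HIGH_FREQ) / len(tokens)

sentences = [
    "the man said it was time to go".split(),
    "he went in".split(),
]
tokens = [t for s in sentences for t in s]

# Lower values on the first two measures, and a higher value on the
# third, count as evidence of relative simplicity in these terms.
print(mean_sentence_length(sentences))   # 5.5
print(round(lexical_density(tokens), 3))
print(round(high_freq_proportion(tokens), 3))
```

Comparing these three figures for a translated and a comparable non-translated corpus reproduces, in miniature, the simplification test described above.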

A second UMIST PhD student, Kenny (1999b), employed a different electronically aided method. She compared translations with their source texts in a 2 million-word parallel corpus of German literary texts and their translations into English in order to identify normalization, which she understood as a norm (cf. Toury 1980a) or ‘tendency in translation to exaggerate features of the target language and to conform to its typical patterns’. She searched her corpus for frequent items (as defined in word frequency lists), hapax legomena, and unusual collocations, and she found the norm to be adhered to.
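Two of the corpus features Kenny searched for, frequency lists and hapax legomena (words occurring exactly once in the corpus), fall straight out of a word count. A minimal illustrative sketch, with an invented toy corpus in place of her 2 million-word parallel corpus:

```python
from collections import Counter

def frequency_list(tokens):
    """Word frequency list, most frequent words first."""
    return Counter(tokens).most_common()

def hapax_legomena(tokens):
    """Words that occur exactly once in the corpus."""
    return sorted(w for w, n in Counter(tokens).items() if n == 1)

# Invented toy corpus standing in for a large literary corpus.
tokens = ("the old house stood by the river and the river ran past "
          "the house").split()

print(frequency_list(tokens)[:3])
print(hapax_legomena(tokens))  # ['and', 'by', 'old', 'past', 'ran', 'stood']
```

In a parallel corpus, comparing such lists for source texts and their translations shows whether rare or unusual source items are rendered by more frequent, more conventional target items, which is the normalization pattern Kenny found.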

Both comparable (same language) and parallel (TTs and their STs) corpora have subsequently been employed to explore other candidates for universalhood, including Blum-Kulka's explicitation, which Øverås (1998—who remains open-minded about whether it is a norm or a universal; see p. 587) tests in a corpus consisting of the first fifty sentences of forty fragments of novels, twenty each in Norwegian and English and of their translations into the other language. As well as explicitation, Øverås (p. 575) also finds a relatively small number of cases of implicitation: ‘instances where explicit ST items are rendered by ambiguous TT items, but where recoverability in the immediate TT environment makes the item implicit rather than ambiguous’. In her corpus (p. 586):

explicitation shifts were found in all texts and […] 33 out of 40 texts (or 82.22%) contained more explicitation than implicitation […] Out of the remaining 7 texts, 4 contained an equal number of both types of shift, and in the 3 cases of dominating implicitation the differences were fairly small.

She concludes that explicitation is indeed a characteristic feature of the translation process.

Chesterman (2004) produces the following helpful list of phenomena that some scholars have understood as evidence for translation universals; a number of later studies are available in Mauranen and Kujamäki (2004a):

  • Lengthening: translations tend to be longer than their source texts (Berman 1985, Vinay and Darbelnet 1958).

  • Interference: the source text necessarily interferes with the target text (Toury 1995).

  • Standardization: a translation tends to use more standard language than a ST that exhibits deviance from the standard (Toury 1995; with respect to dialect, see Englund Dimitrova 1997).

  • A translation tends to exhibit less complexity of narrative voices than a source text which exhibits this characteristic (Taivalkoski 2002).

  • Explicitation (Blum-Kulka 1986/2004, Øverås 1998).

  • Sanitization: translations tend to display more usual collocations than their STs (Kenny 1999b).

  • Later translations tend to be closer to the ST than earlier translations (see the papers published in Palimpsestes 4, 1990).

  • There tends to be less repetition in a translation than in its source text (Shlesinger 1991, Toury 1991b, Baker 1993).

  • Translated texts are less varied lexically than non-translated texts, less lexically dense, and use more high-frequency terms (Laviosa-Braithwaite 1996).

  • Translated texts are more conventional in their language than non-translated texts (Baker 1993).

  • Translated texts exhibit a larger quantity of patterns that are untypical of the language than non-translated texts (Mauranen 2000).

  • Translated texts underrepresent features that are unique to the language (Tirkkonen-Condit 2000).

6.3 But Are They Universals?

The theme of how universal a universal must be emerged in the era of pre-electronic corpus translation studies. As we saw in section 6.1 above, for Blum-Kulka, universality means just that; she maintains that (1986: 17–18, italics mine): ‘the process of translation necessarily entails shifts’, and she sees explicitation (p. 19, italics mine) ‘as inherent in the process of translation’. Toury (1980a: 60) is less certain, referring both to universals (though of behaviour) and to ‘an almost general tendency—irrespective of the translator's identity, language, genre, period, and the like—to explicate in the translation information that is only implicit in the original text’. Clearly an ‘almost general tendency’ is not universal, if by ‘universal’ we mean ‘always present’. Toury also contrasts such a tendency with phenomena which are subject to variation with translators, languages, genres, and periods; but lack of variation alone does not guarantee ubiquity; it only guarantees that a phenomenon is invariant where it is found. Baker (1993), too, mentions both universality and typicality. For her, a feature is a translation universal if it is ‘linked to the nature of the translation process itself rather than to the confrontation of specific linguistic systems’ (p. 243). It should ‘typically occur in translated text rather than original utterances and […] not [be] the result of interference from specific linguistic systems’. It must be possible to see it as ‘a product of constraints which are inherent in the translation process itself’, and it must ‘not vary across cultures’ (p. 246). She contrasts such features with features that result from the operation of norms. Norms (Toury 1978) are the underlying causes for the prevalence of features that (Baker 1993: 246) ‘have been observed to occur consistently in certain types of translation within a particular socio-cultural and historical context’.

There is a strong suggestion in Baker's manner of expression, reminiscent of Blum-Kulka, that translation universals are cognitive phenomena, since the processes of translation that they inhere in are certainly cognitive processes. Toury, in contrast, speaks of universals of translational ‘behaviour’. But beyond these remarks, there was little discussion of the nature of translation universals early in the life of the new, electronically driven research paradigm—which was, after all, conceived of as a paradigm within descriptive translation studies rather than in translation process research; the latter at the time tended to concentrate on think-aloud protocol studies (TAPS), which were looked on as revelatory of strategies that translators were at some level aware of employing, and which made no overt claims to universality.

Even when, in 2001, two major conferences were held at which the universality issue received some attention, scholars did not really address the central issue of the nature of translation universals. The two conferences were the third EST Congress, ‘Claims, Changes and Challenges’, held in Copenhagen in August–September, and a conference held in Savonlinna in October on the topic, ‘Translation Universals: Do They Exist?’ Papers presented at the latter were subsequently published as Mauranen and Kujamäki (2004a), which contains three articles in a section headed ‘Conceptualising Universals’. Baker attended the former congress, presenting a paper (Baker 2001) in which, according to Mauranen and Kujamäki (2004b: 2), she wondered ‘if the term [universal] was felicitous after all’. In his contribution to Mauranen and Kujamäki (2004a), Toury similarly disowns the u-word (Toury 2004: 17, italics original):

I did use the word ‘universals’ […] in my 1976 dissertation, but dropped it right away and refrained from using it ever since […] As of the early 1980s, the notion I favored was that of ‘laws’ […] because unlike ‘universals’, this notion has the possibility of exception built into it.

The last of the three papers in the conceptualization section of Mauranen and Kujamäki (2004a), Bernardini and Zanettin (2004: 52), follows suit, and focuses on ‘evaluating the adequacy of a corpus in the quest for norms and laws of translational behaviour’. Chesterman (2004: 39) takes the different tack of classifying the various kinds of universal that scholars have claimed to have identified or have gone in search of into two types. The first type, source or S-universals, cause differences ‘between translations and their source texts’, and are ‘characteristics of the way in which translators process the source text’. The second type, T-universals, where ‘T’ stands for ‘target’, give rise to differences between translations and comparable non-translated texts, and they arise from the way in which translators use the target language. Chesterman thus finds no overt place for universals arising from the translation process as a whole, and in fact his division can be seen as one between hypotheses that can be tested by each of the two kinds of corpus mentioned in section 6.2 above: corpora of translations and their STs can be searched for evidence of S-universals, and corpora of translated and non-translated texts can be tested for evidence of T-universals. Overall, then, progress on the conceptualization track of research into universals was limited, notwithstanding the wealth of descriptive studies presented in the book of conference papers and elsewhere.

Chesterman (2004: 44) does, however, raise the important issue of causality: ‘To claim that a given linguistic feature is universal is one thing. But we would also like to know its cause or causes. Here, we can currently do little more than speculate as rationally as possible.’ In a series of iterations of a paper first presented as the annual St Jerome Lecture at the Norwegian Business School in Bergen on 1 October 2004, and subsequently published as Malmkjær (2004a, 2008, 2009a), I have tried to speculate in such a way and also to meet the challenge posed by Toury (2004: 22), who says that ‘the question facing us is not really whether translation universals exist […] but rather whether recourse to the notion is in a position to offer us any new insights’; House (2008: 16) raises the same question. It seems to me, though, that before we can tackle these important questions, we must try to come to terms with the more basic question of what a translation universal is; otherwise, how would we know when we had found one? Since studies in the descriptive tradition have not advanced significantly in this direction, it might make sense to cease sweeping ‘at least half a century of linguistic research and theorization […] attached to the term “universal” […] under the carpet’ (Bernardini and Zanettin 2004: 52) and, instead, see how far we can take a parallel between the theoretical linguistic tradition and our own preoccupation with the idea of universals.

It is important to point out, at the outset, that the explanatory power of any given concept is relative to a particular research programme. For example, the explanatory power of Chomsky's notion of competence is, as Hymes (e.g. 1971) famously pointed out, as limited within sociolinguistics as it is extensive in Chomsky's own area of theoretical linguistics. If translation universals are of a similar nature to linguistic universals, then it is very possible that their powers of explanation will reside in the theoretical branch of their parent discipline and not in the descriptive branch. It is similarly the case that even if most of the phenomena that have been taken as evidence of universals are actually evidence of norms, laws, or tendencies, the theoretical power of only a few universals may nonetheless be very great.

Chomskyan universals are closely linked to the notion of linguistic competence, and if translation universals are features of the translation process, then it is very probable that they are related to translation competence, an area of study in which clarification would be of some benefit (see Pym 2003). The link between linguistic universals and linguistic competence is the following: linguistic universals are located in the language-acquisition device. This device begins its life in an initial state called Universal Grammar (UG). In interaction with language input, UG allows for the development of any human language (grammar), but it constrains human grammars through a set of principles that restrict the form of languages, and a set of parameters which define the kinds of (binary) variations that languages display (Chomsky 1981; see further Radford 2004a). Competence is the adult native speaker's implicit knowledge of their language(s) or grammar(s) (Chomsky 1965: 4); it is a mental state that the speaker is not conscious of but which allows the speaker to make judgements about the grammaticality or otherwise of stretches of a native language.

It is difficult to construct a translational parallel to this postulate. Since everyone begins their (pre-) infancy endowed with UG, but not everyone becomes a translator, we need to find a different initial state for translation competence to develop from. This initial state would need to include two or more languages, but it is necessary to allow for the languages not to be native, since many translators have learnt some of their languages formally, and translation universals are supposed to constrain translated texts irrespective of variations among translators. The initial state would also need to be age-independent, since many translators do not begin to translate until adulthood. The input data would need to be translational: seeing translation, doing translation, and receiving feedback on translation, because having more than one language appears not to suffice for a person to be able to translate (see Toury 1984).

To see how universals might fall out from the interaction of such data with the initial state, it will be helpful to consider what the relationships might be between an individual's two or more languages.

According to Paradis (2004: 110), a bilingual has ‘two subsets of neural connections, one for each language, within the same cognitive system, namely, the language system’ but (p. 112):

awareness of language membership is a product of metalinguistic knowledge. In online processing, language awareness is of the same nature and as unconscious as the process that allows a unilingual speaker to understand (or select) the appropriate word in a given context. The process of selecting a Russian word by a Latvian-Russian bilingual person is the same as the process that allows a unilingual Russian speaker to select among the indefinite, almost unlimited, possibilities for encoding a given message.

Both processes involve relating the selected item to a single, language-independent conceptual component (p. 200):

the conceptual component of verbal communication is not language-specific and there is a single non-linguistic cognitive system, even though speakers group together conceptual features differently in accordance with the lexical semantic constraints of each language. The lexical items are part of the language system, but the concepts are not.

Selection of an appropriate stretch of language is explained in terms of Paradis's Activation Threshold Hypothesis (1987; 1993; 2004: 28–31), which proposes that:

an item is activated when a sufficient amount of positive neural impulses have reached its neural substrate. The amount of impulses necessary to activate the item constitutes its activation threshold. […] after each activation, the threshold is lowered—but it gradually rises again. […] The selection of a particular item requires that its activation exceed that of any possible alternatives […] In order to ensure this, its competitors must be inhibited.

The hypothesis is also used to explain the selection of one language over another (Paradis 2004: 115):

When one language is selected for expression, the activation threshold of the other language is raised so as to avoid interference […] However, it is not raised so high that it could not be activated by an incoming verbal stimulus that impinges on the auditory sensory system and sends impulses to the corresponding representation […] the unselected language is not totally inhibited. Its activation threshold is simply raised high enough to prevent self-activation, but not so high as to preclude comprehension.
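The Activation Threshold Hypothesis can be rendered as a toy computational model. The sketch below is in no way Paradis's own formalization, and every numerical value in it is invented; it merely implements the mechanisms just quoted: activation lowers an item's threshold, rest gradually raises it again, and selecting one language raises the other's thresholds high enough to prevent self-activation but not so high as to preclude comprehension:

```python
class Item:
    """A lexical item with an activation threshold (toy model only;
    all numerical values are invented for illustration)."""

    FLOOR, CEILING = 1.0, 5.0

    def __init__(self, form, language):
        self.form = form
        self.language = language
        self.threshold = self.CEILING

    def receives(self, impulses):
        """The item fires when incoming impulses reach its threshold."""
        return impulses >= self.threshold

    def activate(self):
        # After each activation, the threshold is lowered ...
        self.threshold = max(self.FLOOR, self.threshold - 2.0)

    def rest(self):
        # ... but it gradually rises again.
        self.threshold = min(self.CEILING, self.threshold + 0.5)

def select_language(items, chosen, penalty=3.0):
    """Raise the thresholds of the unselected language: high enough to
    prevent self-activation, not so high as to preclude comprehension."""
    for item in items:
        if item.language != chosen:
            item.threshold += penalty

dog = Item("dog", "en")
hund = Item("hund", "da")   # hypothetical Danish counterpart
select_language([dog, hund], "en")

print(dog.threshold, hund.threshold)  # 5.0 8.0: 'hund' now needs a
                                      # stronger incoming stimulus to fire
```

With these invented numbers, ordinary internal impulses (5.0) activate the selected-language item but not its counterpart, while a stronger incoming verbal stimulus (say 9.0) still activates the unselected language, capturing the comprehension-without-interference pattern Paradis describes.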

What kinds of translation universal would this understanding of the relationships between terms and between languages in a bilingual mind admit, and what sort of evidence might we look for that these universals exist?

One set of findings and arguments for universals that might fit the bill would be those published by Tirkkonen-Condit (2000, 2004), whose comparisons of translated and original Finnish show underrepresentation in the translated texts of items that are unique to Finnish. Tirkkonen-Condit proposes two explanations for this textual phenomenon, though she cautiously refers to one of them as no more than ‘a (potentially universal) tendency of the translating process to proceed literally to a certain extent’ (2004: 183). This has the consequence that when terms in the language being translated into do not have linguistic counterparts in the language being translated from, these items ‘do not appear in the bilingual mental dictionary and there is nothing in the source text that would trigger them off as immediate equivalents’ (Tirkkonen-Condit 2004: 183). Let us set aside the idea of a mental dictionary in favour of Paradis's notion of a language system within a more general cognitive store (2004: 199):

An individual's cognitive store contains several higher cognitive systems that represent the sum of that person's intellectual abilities. The conceptual system is one of them, the language system another. The conceptual system stores concepts. ‘Concept’, as used here, refers to the mental representation of a thing (object, quality or event) formed by combining all of its characteristics or particulars. A concept can be acquired through experience, by organizing features of perception into coherent wholes. With the acquisition of language, however, its boundaries (i.e. what it encompasses) may be reshaped, and new concepts may be formed. Features of mental representation are then combined in accordance with (language-specific) lexical semantic constraints to form a (language-induced) concept. The concepts evoked by a word and by its translation equivalent will differ to the extent that their lexical semantic organization differs in the two languages. In fact, some concepts may have a label in only one of the languages and hence are not easily accessible through the other language.

Let us also assume that what Tirkkonen-Condit (2004: 183) refers to as ‘translating literally’ is the translational version of Davidson's idea of literal or first meaning: whatever ‘comes first in the order of interpretation’ (1986; see also Chapter 8 below), that is, the first translation equivalent that occurs to a translating translator. This seems to be a reasonable candidate for universal status: something has to be the first thing that comes to your mind when you are faced with a linguistic item to translate. It is a phenomenon which is not present in unilingual language events, nor in other bi- or multilingual events such as code-switching, which simply involve a switch of language in response, usually, to a feature of the environment or a switch of topic.

Such a ‘first translational response’ universal may on the face of it seem rather a tame translation universal; however, studying the responses arising from it—i.e. unedited, ‘immediate’ translations and possibly also interpretations—might tell us a great deal about the bilingual language store (how items in the two languages are connected) and about translation competence (How much editing is it necessary to perform after the first response? Do some translators' first responses require less editing than those of others?), and about how translational cognitive activity differs from unilingual cognitive activity and from bilingual cognitive activity that is not translational; this is very close to the aim with which Baker set out (see section 6.1 above).

A second translation universal may underlie the phenomenon that Jääskeläinen (1990) refers to as attention units and which Jakobsen (2003) calls segments. No translator is able to work at once with an entire text, so first responses to longer stretches of text will occur in segmented form. Given the very complex relationships between any two languages stored in a bilingual's mind, the task of segmentation is far from simple. For example, the Danish concept that the term ‘døgn’ is effortlessly used to refer to in Danish is likely to be less clearly defined for English speakers, who are confined to using the terms ‘day’ (meaning a day with its night) or ‘twenty-four hours’. In turn, the term døgn and the concept to which it refers are unlikely to be evoked as first responses to the English terms ‘day’ or ‘twenty-four hours’, because both can be rendered ‘literally’ into Danish, using dag and fireogtyve timer. Obviously, where longer stretches of language are concerned, the lexico-conceptual variations are likely to be greater. As Paradis (2004: 199–200) points out:

The mental representations at the (nonlinguistic) cognitive level (i.e. concepts) are organized slightly differently by each language. The greater the typological and/or cultural distance between the two languages, the greater the difference in the organization of the mental representations corresponding to a word or utterance and its translation equivalent. Note that, assuming that a concept comprises all the knowledge that an individual possesses about a thing or event, it is never activated in its entirety at any given time. Only those aspects that are relevant to the particular situation in which it is evoked are activated (Damasio, 1989). Thus, the exact same portion of the relevant neural network is not activated every time a given word is heard or uttered. English and French words may activate exactly the same mental representation when the context focuses on features that overlap but will activate different representations when the context includes in its focus one or more features that are not part of the meaning of both the word and its translation equivalent.

Clearly, segmentation is essential in translation and interpreting and it is a kind of segmentation that has no counterpart in unilingual activity. It involves simultaneous suppression and activation of the right features of the linguistic systems at the right time in the right proportions to each other before the translator or interpreter can get started on the conscious parts of the translation process. We might call what enables this to happen ‘translation unit segmentation’. Looking at the pairings of ST and TT that emerge as first translational responses might tell us something about the interlingual relationships, and the linguistic-conceptual relationships that exist in the translating bilingual's mind.
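The døgn example can be made concrete with a toy ‘first-response lexicon’ (all pairings are invented for illustration): because a target-unique item such as døgn has no source-language counterpart to trigger it, it can never surface as a first translational response, which is one route to the underrepresentation of unique items that Tirkkonen-Condit observed:

```python
# Toy first-response lexicon: each source item triggers exactly one
# immediate target equivalent (all pairings invented for illustration).
FIRST_RESPONSE = {
    "day": "dag",
    "twenty-four": "fireogtyve",
    "hours": "timer",
}

# 'døgn' (a day together with its night) is unique to Danish: nothing
# in an English source text maps onto it, so no entry triggers it.
UNIQUE_TARGET_ITEMS = {"døgn"}

def first_response(source_tokens):
    """Render each source token by its first-response equivalent,
    leaving it untouched when nothing comes to mind."""
    return [FIRST_RESPONSE.get(tok, tok) for tok in source_tokens]

draft = first_response("twenty-four hours".split())
print(draft)                              # ['fireogtyve', 'timer']
print(UNIQUE_TARGET_ITEMS & set(draft))   # empty: 'døgn' never surfaces
```

The toy model also makes plain what editing would have to do: only a deliberate second pass, not the first response itself, could replace fireogtyve timer with døgn.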

Above, I mentioned Toury's (2004: 22) challenge to establish what new insights we might gain by way of the notion of the translation universal. It seems to me that the two translation universals identified above invite investigation of new hypotheses: (i) that translators will never as a first translational response select a target language term that is unique to the target language; (ii) that the longest stretch of translation that a translator can deal with at once is limited by the amount of paired text a translator can hold in short-term memory; (iii) that this will vary with variation in the language pairs involved; (iv) that there will be limits on how different a first-response translation can be from its ST; it would be interesting to see what these limits are, and to speculate about whether this would help us, for example, to distinguish versions from translations. Were these predictions, arising from the hypothesized universals, to be confirmed, the hypotheses would be strengthened, and we would have gained new insights into translation studies.

Further reading and relevant sources

Most works on corpus-based translation studies provide an account of the material covered in this chapter (except for that discussed in the final section, 6.3). Meta 43.4 (1998) provides a selection of early corpus-based translation research including some directed at the idea of the translation universal. Laviosa (2002) provides an account of the state of the art a little later, and Olohan (2004) contains a chapter on what she calls ‘features’ of translation. Mauranen and Kujamäki (2004a) and Anderman and Rogers (2008) both contain studies addressing the question of translation universals. Malmkjær (2004a, 2008, 2009a, 2009b) deal with issues related to those raised in this chapter.