Sherman Wilcox and Phyllis Perrin Wilcox
This chapter presents a brief history and overview of the analysis of signed languages. Signed languages are natural human languages used by deaf people throughout the world. The authors describe the major milestones in the analysis of signed languages, including the study of duality of patterning and phonological structure; cognitive processes such as iconicity, metaphor, and metonymy as they appear in signed languages; grammaticalization; and the diachronic relation between gesture and signed language.
Joseph Hill, Carolyn McCaskill, Robert Bayley, and Ceil Lucas
The socio-historical reality of the segregation era shaped the geographical and racial isolation of residential state schools for the deaf, which led to the development of Black American Sign Language (Black ASL) in southern and border states after the end of the American Civil War. Even though residential state schools for White deaf children, where sign language was used, had existed for a few decades before the end of the Civil War, Black deaf children were limited to their own forms of sign language. The linguistic features of Black ASL are reviewed in the chapter based on data produced by two different generations of Black and White informants in the South. Our analysis identified specific features such as handedness, location of the sign, size of the signing space, the use of repetition, lexical differences, and the incorporation of spoken African American English into Black ASL.
Sherman Wilcox and Barbara Shaffer
This chapter examines evidentiality in signed languages. Data comes primarily from three signed languages—American Sign Language (ASL), Brazilian Sign Language (Libras), and Catalan Sign Language (LSC). The relationship between evidentiality, epistemic modality, and mirativity is examined across the expression of perceptual information as an evidential source, inference, and reported speech. It is suggested that evidentiality relies on simulation and subjectification. Finally, a proposal is offered that evidentiality, epistemic modality, and mirativity are primarily expressed through grammaticalized facial markers in signed languages, rather than by means of manual signs. These facial markers allow grammatical information to be expressed simultaneously with manual signs. In signed languages, therefore, not only are the semantic components of evidentiality, epistemic modality, and mirativity integrated, but so too are the phonological means of their expression.
Roland Pfau and Markus Steinbach
This article considers the factors and processes associated with grammaticalisation in sign languages. It comments on the methodological challenges that diachronic sign language research faces and describes selected grammaticalisation phenomena that we take to be modality-independent. It also discusses modality-specific instances of grammaticalisation and the grammaticalisation of gestures. The analysis reveals that sign languages show only little evidence of type 2 grammaticalisation and that they have the unique possibility of grammaticalising manual and non-manual gestures.
Sherman Wilcox and Corrine Occhino
Signed languages are natural human languages used by deaf people around the world as their primary language. This chapter explores the linguistic study of signed languages, their linguistic properties, and aspects of their genetic and historical relationships. The chapter focuses on historical change that has occurred in signed languages, showing that the same linguistic processes that contribute to historical change in spoken languages, such as lexicalization, grammaticization, and semantic change, contribute to historical change in signed languages. Historical influences unique to signed languages are also discussed, such as the educational practice of borrowing and adapting signs in an effort to create a system for representing the surrounding spoken/written language, and the incorporation of lexicalized fingerspelling.
Vadim Kimmelman and Roland Pfau
This chapter demonstrates that the Information Structure notions Topic and Focus are relevant for sign languages, just as they are for spoken languages. Data from various sign languages reveal that, across sign languages, Information Structure is encoded by syntactic and prosodic strategies, often in combination. As for topics, we address the familiar semantic (e.g. aboutness vs. scene-setting topic) and syntactic (e.g. moved vs. base-generated topic) classifications in turn and we also discuss the possibility of topic stacking. As for focus, we show how information, contrastive, and emphatic focus is linguistically encoded. For both topic and focus constructions, special attention is given to the role of non-manual markers, that is, specific eyebrow and head movements that signal the information structure status of constituents. Finally, aspects that appear to be unique to languages in the visual-gestural modality are highlighted.
Barbara Shaffer and Terry Janzen
This chapter surveys the expression of modality and mood in American Sign Language (ASL), with a focus on modality and, specifically, modal verbs. Beyond sentence types, mood has not been explored extensively for ASL to date, although recent work on irrealis moods has been fruitful. For a signed language such as ASL, articulation with the hands is accompanied by distinctive facial gestures and body/head postures, which become increasingly important as epistemic readings of modals are obtained. Here we give a detailed discussion of modals in ASL that range from agent-oriented to epistemic, looking at both form and function, including some negative modals. We trace the grammaticalization of a number of modal categories and show how at least some of these categories have grammaticalized from earlier gestural sources. Regarding mood, we include some discussion of conditionals, hypotheticals, and counterfactuals.
This chapter highlights the linguistic study of Native American signed language varieties, which are broadly referred to as American Indian Sign Language (AISL). It describes how indigenous sign language serves as an alternative to spoken language, how it is acquired as a first or second language, and how it is used both among deaf and hearing tribal members and internationally as a type of signed lingua franca. It discusses the first fieldwork carried out in over fifty years to focus on the linguistic status of AISL, which is considered an endangered language variety but is still used and learned natively by some members of various Indian nations across Canada and the United States (e.g. Assiniboine, Blackfeet/Blackfoot, Cherokee, Crow, Northern Cheyenne, Nakoda/Lakȟóta, and Mandan-Hidatsa). The chapter also addresses questions of language contact and spread, including code-switching and lexical borrowing, as well as historical linguistic questions.
Michael C. Corballis
This article explores the origins of language in manual gestures. Humans have the capacity to produce complex intentional gestures either vocally or manually, and either system can serve as the medium for language. Vocal learning is especially critical to speech, and few non-human primate species appear capable of more than limited vocal learning. Studies suggest that primate calls show limited modifiability, but the basis of this modifiability remains unclear, and it is apparent in subtle changes within call types rather than in the generation of new call types. The discovery of mirror neurons in area F5 of primate premotor cortex further supports the evolutionary priority of manual gesture. The hands and arms would lend themselves naturally to mimed representation of events in bipedal hominins. Mime is fundamentally imitative, in that there is a mapping between the mimed action and what it represents. Modern signed languages retain a strong mimetic, or iconic, component; in Italian Sign Language, for example, some 50% of hand signs and 67% of the bodily locations of signs stem from iconic representations. The addition of phonation, perhaps through selection for a FOXP2 mutation, would allow non-visible gestures within the mouth, including movements of the larynx, velum, and tongue, to be recovered from the acoustic signal, as proposed by the motor theory of speech perception.
Richard P. Meier
This essay considers the acquisition of sign languages as first languages. Most deaf children are born to hearing parents, but a minority have deaf parents. Deaf children of deaf parents receive early access to a conventional sign language. The time course of acquisition in these children is compared to the developmental milestones in children learning spoken languages. The two language modalities—the oral-aural modality of speech and the visual-gestural modality of sign—place differing constraints on languages and offer differing resources to languages. Possible modality effects on first-language acquisition are considered. Historically, many deaf infants born to hearing parents have had little access to a conventional language; these children, however, sometimes elaborate “home sign” systems. Lastly, the role of early experience in language acquisition is considered. Deaf children of hearing parents are immersed in a first language at varying ages, enabling a test of the critical-period hypothesis.