Sherman Wilcox and Phyllis Perrin Wilcox
This chapter presents a brief history and overview of the analysis of signed languages. Signed languages are natural human languages used by deaf people throughout the world. The authors describe the major milestones in the analysis of signed languages, including the study of duality of patterning and phonological structure; cognitive processes such as iconicity, metaphor, and metonymy as they appear in signed languages; grammaticalization; and the diachronic relation between gesture and signed language.
Joseph Hill, Carolyn McCaskill, Robert Bayley, and Ceil Lucas
The socio-historical reality of the segregation era defined the geographical and racial isolation of residential state schools for the deaf that led to the development of Black American Sign Language (Black ASL) in southern and border states after the end of the American Civil War. Although residential state schools for White deaf children had existed for a few decades before the end of the Civil War, and sign language was used in them, Black deaf children were limited to their own forms of sign language. The linguistic features of Black ASL are reviewed in the chapter based on data produced by two different generations of Black and White informants in the South. Our analysis identified specific features such as handedness, location of the sign, size of the signing space, the use of repetition, lexical differences, and the incorporation of spoken African American English into Black ASL.
This chapter provides an introduction to endangered sign languages specifically designed for linguists who know little about sign languages but who may have an interest in the documentation of endangered sign languages. Focusing on ten Southeast Asian sign languages, nine of which are endangered or dying and six of which are being documented by fluent Culturally Deaf users trained through the Asian-Pacific Sign Linguistics Program at The Centre for Sign Linguistics and Deaf Studies at The Chinese University of Hong Kong, this chapter provides information about: the historical relationships of these sign languages, sign language phonology, “alphabetization” of signs by formational parameters, sign language morphology, sign language syntax, and sign language lexicons and lexicography. Finally, the chapter offers some discussion of the possible future of the documentation, conservation, and revitalization of endangered sign languages.
Sherman Wilcox and Barbara Shaffer
This chapter examines evidentiality in signed languages. Data come primarily from three signed languages—American Sign Language (ASL), Brazilian Sign Language (Libras), and Catalan Sign Language (LSC). The relationship between evidentiality, epistemic modality, and mirativity is examined across the expression of perceptual information as an evidential source, inference, and reported speech. It is suggested that evidentiality relies on simulation and subjectification. Finally, a proposal is offered that evidentiality, epistemic modality, and mirativity are primarily expressed through grammaticalized facial markers in signed languages, rather than by means of manual signs. These facial markers allow grammatical information to be expressed simultaneously with manual signs. In signed languages, therefore, not only are the semantic components of evidentiality, epistemic modality, and mirativity integrated, but so too are the phonological means of expression.
Roland Pfau and Markus Steinbach
This article considers the factors and processes associated with grammaticalisation in sign languages. It provides commentaries on the methodological challenges that diachronic sign language research faces and describes selected grammaticalisation phenomena that we take to be modality-independent. It also discusses modality-specific instances of grammaticalisation and the grammaticalisation of gestures. The analysis reveals that sign languages show only little evidence of type 2 grammaticalisation and that they have the unique possibility of grammaticalising manual and non-manual gestures.
Sherman Wilcox and Corrine Occhino
Signed languages are natural human languages used by deaf people around the world as their primary language. This chapter explores the linguistic study of signed languages, their linguistic properties, and aspects of their genetic and historical relationships. The chapter focuses on historical change that has occurred in signed languages, showing that the same linguistic processes that contribute to historical change in spoken languages, such as lexicalization, grammaticization, and semantic change, contribute to historical change in signed languages. Historical influences unique to signed languages are also discussed, such as the educational practice of borrowing and adapting signs in an effort to create a system for representing the surrounding spoken/written language, and the incorporation of lexicalized fingerspelling.
Vadim Kimmelman and Roland Pfau
This chapter demonstrates that the Information Structure notions Topic and Focus are relevant for sign languages, just as they are for spoken languages. Data from various sign languages reveal that, across sign languages, Information Structure is encoded by syntactic and prosodic strategies, often in combination. As for topics, we address the familiar semantic (e.g. aboutness vs. scene-setting topic) and syntactic (e.g. moved vs. base-generated topic) classifications in turn, and we also discuss the possibility of topic stacking. As for focus, we show how information, contrastive, and emphatic focus are linguistically encoded. For both topic and focus constructions, special attention is given to the role of non-manual markers, that is, specific eyebrow and head movements that signal the information structure status of constituents. Finally, aspects that appear to be unique to languages in the visual-gestural modality are highlighted.
Aaron J. Newman
Hearing loss affects over 1 billion people around the world and is the fifth leading cause of disability. In the United States, approximately 10,000 babies are born each year with significant hearing loss. Although assistive technologies such as cochlear implants (CIs) are available to restore hearing, deaf children who receive CIs on average show significantly poorer language skills and academic outcomes than their normally hearing peers. At the same time, a relatively small percentage of deaf children are born to deaf parents and learn sign language as their first language, and grow up to be excellent, fluent communicators who are bilingual in signed and spoken language. Historically, there has been significant tension between advocates of sign language and “oralists” who discouraged sign language use. This chapter provides a critical review of language development in deaf children, including those with CIs and those exposed to different kinds, and amounts, of signed language. The linguistic and educational outcomes of deaf children are considered in light of current understanding of neurodevelopment, sensitive periods, and neuroplasticity, while highlighting areas of controversy and important directions for future research. The chapter concludes with evidence-based recommendations in favor of sign language exposure for all deaf children.
Ronice Müller de Quadros
This chapter argues for specific actions needed for language planning and language policies involving sign languages and Deaf communities, based on the understanding of what sign languages are, who the signers are, where they sign, and the sign language transmission and maintenance mechanisms of the Deaf community. The first section presents an overview of sign languages and their users, highlighting that sign languages are often used in contexts where most people use spoken languages. The second section addresses the functions, roles, and status of sign languages in relation to spoken languages, as well as the relationship between Deaf communities and hearing society. The medical view of deafness, which has a significant impact on language policies for Deaf people, is critically considered. The third section offers examples of language policies, especially related to the use of sign languages in education, and an agenda for future work on sign language policy and planning.
Barbara Shaffer and Terry Janzen
This chapter surveys the expression of modality and mood in American Sign Language (ASL), with a focus on modality and, specifically, modal verbs. Beyond sentence types, mood has not been explored extensively for ASL to date, although recent work on irrealis moods has been fruitful. For a signed language such as ASL, articulation with the hands is accompanied by distinctive facial gestures and body/head postures, which become increasingly important as epistemic readings of modals are obtained. Here we give a detailed discussion of modals in ASL that range from agent-oriented to epistemic, looking at both form and function, including some negative modals. We trace the grammaticalization of a number of modal categories and show how at least some of these categories have grammaticalized from earlier gestural sources. Regarding mood, we include some discussion of conditionals, hypotheticals, and counterfactuals.
This chapter highlights the linguistic study of Native American signed language varieties, which are broadly referred to as American Indian Sign Language (AISL). It describes how indigenous sign language serves as an alternative to spoken language, how it is acquired as a first or second language, and how it is used both among deaf and hearing tribal members and internationally as a type of signed lingua franca. It discusses the first fieldwork carried out in over fifty years to focus on the linguistic status of AISL, which is considered an endangered language variety but is still used and learned natively by some members of various Indian nations across Canada and the United States (e.g. Assiniboine, Blackfeet/Blackfoot, Cherokee, Crow, Northern Cheyenne, Nakoda/Lakȟóta, and Mandan-Hidatsa). The chapter also addresses questions of language contact and spread, including code-switching and lexical borrowing, as well as historical linguistic questions.
David P. Corina and Laurel A. Lawyer
Current understanding of the brain systems involved in language has been largely derived through the study of spoken languages. However, as naturally occurring manual-visual sign languages used in Deaf communities attest, human languages are not limited to the oral-aural modality. The existence of sign languages used in Deaf communities provides a unique opportunity to test the generality of biological models of human language. The comparison of neural systems supporting spoken and signed language allows researchers to distinguish brain systems that are common across human languages from brain systems that reflect the modality of language expression (e.g., auditory perceptual versus visual perceptual processes). These comparisons make it possible to address long-standing issues regarding the expression of language in the brain. Neuroimaging and aphasia studies of deaf signers reveal great commonalities in the neural systems used for sign and speech and provide evidence for a core neurobiological substrate for human linguistic communication. Also observed are cases of modality-specific patterns of brain activation and modality-specific language impairments that speak to functional specialization based upon sensory and motor systems unique to speech and sign. As increasingly sophisticated neurobiological models of language processing emerge, researchers are poised to ask new questions about the biological substrates of human communication in all its various forms.
Michael C. Corballis
This article explores the origins of language in manual gestures. Humans have the capacity to produce complex intentional gestures either vocally or manually, and either system can serve as the medium for language. Vocal learning is especially critical to speech, and few non-human primate species appear capable of more than limited vocal learning. Studies suggest that primate calls show limited modifiability, but the basis of this modifiability remains unclear, and it is apparent in subtle changes within call types rather than the generation of new call types. The discovery of mirror neurons in area F5 of primate prefrontal cortex further supports the evolutionary priority of manual gesture. The hands and arms would lend themselves naturally to mimed representation of events in bipedal hominins. Mime is fundamentally imitative, in that there is a mapping between the mimed action and what it represents. Modern signed languages retain a strong mimetic, or iconic, component: in Italian Sign Language, for example, some 50% of hand signs and 67% of the bodily locations of signs stem from iconic representations. The addition of phonation, perhaps through selection for a FOXP2 mutation, would allow non-visible gestures within the mouth, including movements of the larynx, velum, and tongue, to be recovered from the acoustic signal, as proposed by the motor theory of speech perception.
Richard P. Meier
This essay considers the acquisition of sign languages as first languages. Most deaf children are born to hearing parents, but a minority have deaf parents. Deaf children of deaf parents receive early access to a conventional sign language. The time course of acquisition in these children is compared to the developmental milestones in children learning spoken languages. The two language modalities—the oral-aural modality of speech and the visual-gestural modality of sign—place differing constraints on languages and offer differing resources to languages. Possible modality effects on first-language acquisition are considered. Historically, many deaf infants born to hearing parents have had little access to a conventional language. However, these children sometimes elaborate “home sign” systems. Lastly, the role of early experience in language acquisition is considered. Deaf children of hearing parents are immersed in a first language at varying ages, enabling a test of the critical-period hypothesis.
This article describes signed language interpreting (SLI) as an emerging discipline. It provides a survey of the history and characteristics of SLI, the settings where signed language interpreters work, a summary of SLI research, and a description of the current state of the field. Historically, SLI has functioned as a separate entity from translation and interpreting (T&I). There has recently been growing recognition that signed languages are simply another of the community languages that T&I practitioners work with. Signed languages are now formally taught in tertiary institutions throughout the world. The redefinition of the interpreter's role has generated detailed explorations of SLI professionalism and ethics. Some unique characteristics of SLI are its directionality, modality, techniques, and its settings. Finally, this article highlights how the SLI field has emerged and in which areas it is still developing, concluding with predictions for future directions.
Deaf people are commonly identified as a group by their disability or handicap. This pathological perspective regards deaf people as having a medical condition, the inability to hear. This perspective also denies the linguistic status of signed languages, regarding them as defective forms of spoken language. A more appropriate way to understand deaf people is as members of a linguistic and cultural minority. Scholars now use the terms “deaf” and “Deaf” to distinguish the audiological condition of deafness from the cultural and linguistic identity, respectively. This article focuses on signed languages and discusses deafness as a cultural identity. It examines linguistic research on signed languages, focusing on their phonology, morphology, syntax, and fingerspelling. It then explores the relations between signed languages and cognitive linguistics, with emphasis on iconicity, cognitive iconicity, metaphor, metonymy, and blended mental spaces. Finally, the article looks at the link between language and gesture, as well as between gesture and grammaticization in signed languages.
This chapter shows that the formal properties that have been identified as defining traits of human languages in the generative tradition hold in sign languages as well. Specifically: (i) clauses in sign languages have a hierarchical and recursive organization, (ii) general constraints on syntactic movement (i.e., the fact that the landing site must c-command the base position) are valid for languages across modalities, and (iii) universal interpretative constraints that govern anaphora in spoken languages (i.e., Principle C of Binding Theory) are valid in the sign languages studied up to now. This notwithstanding, sign languages require refinements of analytical categories initially developed for spoken languages. For example, a macro-typological difference between sign and spoken languages needs to be captured by revising the theory of wh-movement. Universal constraints holding across languages in the acoustic and in the visuospatial modalities do exist, although a fully satisfactory formulation needs to take into account findings emerging from the booming field of sign language linguistics.
This article focuses on two roles of gesture and explores the changes that take place in the manual modality when it is employed to fulfill the functions of language on its own. Gestures reflect a global-synthetic image. Gesture is idiosyncratic and constructed at the moment of speaking; it does not belong to a conventional code. A gesture tracing the shape of a coastline, for example, conveys nuances that are difficult to capture in speech. Gesture allows speakers to convey thoughts that may not easily fit into the categorical system offered by conventional language. The gestures that accompany speech are not composed of parts but instead have parts that derive from wholes that are represented by way of imagery. The imagistic base of gesture allows it to capture and reveal information that speakers may have difficulty expressing in speech. Gesture can also play a role in cognitive growth by providing an imagistic route through which ideas can be made active or brought into the learner's repertoire. The manual modality assumes an imagistic form when it is used in conjunction with a segmented and combinatorial system. Modern-day human communication systems are based on a segmented and combinatorial mode of representation that gives the system its generative capacity. The gestures that speakers produce in the manual modality can express information that they are often not able to express within the codified spoken system. This information is processed by the listener and becomes part of the conversation.