- Oxford Library of Psychology
- About the Editors
- Introduction to The Oxford Handbook of Cognitive Neuroscience: Cognitive Neuroscience—Where Are We Now?
- Representation of Objects
- Representation of Spatial Relations
- Top-Down Effects in Visual Perception
- Neural Underpinning of Object Mental Imagery, Spatial Imagery, and Motor Imagery
- Looking at the Nose Through Human Behavior, and at Human Behavior Through the Nose
- Cognitive Neuroscience of Music
- Neural Correlates of the Development of Speech Perception and Comprehension
- Perceptual Disorders
- Varieties of Auditory Attention
- Spatial Attention
- Attention and Action
- Visual Control of Action
- Development of Attention
- Attentional Disorders
- Semantic Memory
- Cognitive Neuroscience of Episodic Memory
- Working Memory
- Motor Skill Learning
- Memory Consolidation
- Age-Related Decline in Working Memory and Episodic Memory Contributions of the Prefrontal Cortex and Medial Temporal Lobes
- Memory Disorders
- Cognitive Neuroscience of Written Language: Neural Substrates of Reading and Writing
- Neural Systems Underlying Speech Perception
- Multimodal Speech Perception
- Organization of Conceptual Knowledge of Objects in the Human Brain
- A Parallel Architecture Model of Language Processing
- Epilogue to The Oxford Handbook of Cognitive Neuroscience—Cognitive Neuroscience: Where Are We Going?
Abstract and Keywords
Spoken language can be understood through different sensory modalities. Audition, vision, and haptic perception can each transduce speech from a talker as a single channel of information. In natural communication, however, language is typically perceived through multiple modalities at once, and the resulting streams are integrated. This chapter reviews the sensory information that talkers provide and the constraints on multimodal information processing. Because the information generated during speech comes from a common source, the moving vocal tract, it is significantly correlated across modalities. In addition, the modalities provide complementary information for the perceiver; the place of articulation of speech sounds, for example, is conveyed more robustly by vision than by audition. These factors help explain why multisensory speech perception is more robust and accurate than unisensory perception. The neural networks responsible for this perceptual activity are diverse and still not well understood.
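The abstract's claim that bimodal perception outperforms either unisensory channel is often formalized in the cue-integration literature as reliability-weighted averaging of modality-specific estimates. The chapter itself does not commit to such a model, so the Python sketch below is only an illustration under that assumption; the `fuse` helper, the channel noise variances, and the "place-of-articulation cue" framing are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(estimates, variances):
    """Reliability-weighted (maximum-likelihood) fusion of independent cues.

    Each cue is weighted by its precision (inverse variance), so the more
    reliable channel dominates, and the fused variance is smaller than
    that of the best single cue.
    """
    precisions = 1.0 / np.asarray(variances)
    weights = precisions / precisions.sum()
    fused_estimate = np.dot(weights, estimates)
    fused_variance = 1.0 / precisions.sum()
    return fused_estimate, fused_variance

# Hypothetical stimulus feature (e.g., a place-of-articulation cue) sensed
# through a noisy auditory channel and a more reliable visual channel.
# The noise levels below are illustrative assumptions, not measured values.
true_value = 1.0
audio_var, visual_var = 0.5, 0.2
n_trials = 10_000
audio = true_value + rng.normal(0.0, np.sqrt(audio_var), n_trials)
visual = true_value + rng.normal(0.0, np.sqrt(visual_var), n_trials)

fused, _ = fuse(np.stack([audio, visual]), [audio_var, visual_var])

print(f"audio-only error (SD):  {audio.std():.3f}")   # ~0.71
print(f"visual-only error (SD): {visual.std():.3f}")  # ~0.45
print(f"bimodal error (SD):     {fused.std():.3f}")   # ~0.38, best of the three
```

Under these assumed noise levels, the fused estimate's error falls below that of either single channel, mirroring the robustness advantage the abstract describes.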
Agnès Alsius, Department of Psychology, Queen’s University, Ontario, Canada
Ewen MacDonald, Department of Psychology, Queen’s University, Ontario, Canada; Centre for Applied Hearing Research, Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark
Kevin Munhall, Professor and Coordinator of Graduate Studies, Queen’s University, Ontario, Canada