PRINTED FROM OXFORD HANDBOOKS ONLINE ( © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).

date: 07 July 2020

Abstract and Keywords

Spoken language can be understood through different sensory modalities. Audition, vision, and haptic perception can each transduce speech information from a talker as a single channel of information. The more natural context for communication, however, is for language to be perceived through multiple modalities and for multimodal integration to occur. This chapter reviews the sensory information provided by talkers and the constraints on multimodal information processing. The information generated during speech comes from a common source, the moving vocal tract, and thus shows significant correlations across modalities. In addition, the modalities provide complementary information for the perceiver; for example, the place of articulation of speech sounds is conveyed more robustly by vision. These factors explain why multisensory speech perception is more robust and accurate than unisensory perception. The neural networks responsible for this perceptual activity are diverse and still not well understood.

Keywords: sensory modalities, spoken language, multisensory speech perception
