Mark Tiede (Haskins Laboratories)
Human talkers decode spoken language with unconscious ease, even in noisy environments where engineering approaches to speech recognition fall short. A crucial reason for this success is the ability to integrate sensory cues from multiple sources. This talk will provide an overview of how visual and somatosensory channels augment auditory processes in speech perception, with particular emphasis on recent work showing how appropriately timed somatosensory perturbations (facial skin deformation, aero-tactile stimulation) can lead to systematic shifts in perceptual boundaries.
Upcoming events
08 November 2024
SRPP: Nature of contrast and coarticulation in dense coronal systems
School of Languages and Linguistics, Jadavpur University
15 November 2024
SRPP: Michael Neumann
Modality.AI, Inc.
22 November 2024
SRPP: Said-Iraj Hashemi
Laboratoire de Phonétique et Phonologie
29 November 2024
SRPP: Louise McKeever
Laboratoire de Phonétique et Phonologie