Human listeners decode spoken language with unconscious ease, even in noisy environments where engineering approaches to speech recognition fall short. A crucial reason for this success is the ability to integrate sensory cues from multiple sources. This talk will provide an overview of how visual and somatosensory channels augment auditory processes in speech perception, with particular emphasis on recent work showing how appropriately timed somatosensory perturbations (facial skin deformation, aero-tactile stimulation) can lead to systematic shifts in perceptual category boundaries.
Upcoming events
SRPP Beyond reaction time: Articulatory evidence of the perception-production link in speech using the Stimulus-Response Compatibility paradigm.
Takayuki Nagamine (Department of Speech, Hearing and Phonetic Sciences, University College London)
SRPP 13/03/2026 Christophe Corbier (CNRS, IReMUS)
SRPP 20/03/2026 Claire Njoo (Université Paris-Sud)
SRPP 27/03/2026 Rasmus Puggaard-Rode (University of Oxford)


