As in any communication system, the decoding of speech by the human auditory system relies on a code associating a physical input with linguistic representations. Identifying the auditory primitives (acoustic cues) that human listeners rely on to decode speech sounds is an important step toward a better understanding of speech comprehension and acquisition.
In this talk I will describe two projects aiming to uncover perceptually relevant acoustic cues in speech. The first part will focus on identifying the acoustic cues underpinning phoneme comprehension, through the example of a ba/da categorization task, using the newly developed Auditory Classification Image method (Varnet et al., 2013, 2015, 2016). In the second part, we will turn to the encoding of higher-level linguistic properties in the speech signal, comparing different language groups (stress-timed vs. syllable-timed languages and head-complement vs. complement-head languages) on the basis of their temporal modulation content (Varnet et al., 2017).