As in any communication system, the decoding of speech by the human auditory system relies on a code associating a physical input with linguistic representations. Identifying which auditory primitives (acoustic cues) human listeners rely on to decode speech sounds is an important step toward a better understanding of speech comprehension and acquisition.
In this talk I will describe two projects aiming to uncover perceptually relevant acoustic cues in speech. The first part will focus on identifying the acoustic cues underpinning phoneme comprehension, through the example of a ba/da categorization task, using the newly developed Auditory Classification Image method (Varnet et al., 2013, 2015, 2016). In the second part, we will turn to the encoding of higher-level linguistic properties in the speech signal, comparing different language groups (stress-timed vs. syllable-timed languages, and head-complement vs. complement-head languages) on the basis of their temporal modulation content (Varnet et al., 2017).