We have recently developed a non-invasive multi-sensor acquisition set-up, the hyper-helmet, for recording rare songs with a view to safeguarding intangible cultural heritage. In this presentation, we take advantage of this articulatory sensing system to study and test a new nasality index. The helmet's acoustic microphone and nasal piezoelectric accelerometer are used to compute an oral/nasal RMS ratio. An electroglottograph (EGG) is used to estimate the voicing selector parameter. In addition, a non-intrusive tongue-imaging sensor (an ultrasound probe) and a lip-movement camera serve as backups for qualitative articulatory and nasality interpretation. Software has been developed for synchronous acquisition from all sensors, and it has been used to record an English corpus produced by a middle-aged native English-speaking Canadian man. Multiple tests have been carried out to examine several nasality theories. Some results are shown in this presentation.
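For illustration, here is a minimal sketch of how a frame-wise oral/nasal RMS ratio of the kind described above might be computed from the two synchronized channels. The function name, frame length, hop size, and epsilon regularization are assumptions for the sketch, not the authors' actual implementation:

```python
import numpy as np

def rms(frame):
    """Root-mean-square amplitude of one signal frame."""
    return np.sqrt(np.mean(frame ** 2))

def oral_nasal_rms_ratio(oral, nasal, frame_len=512, hop=256, eps=1e-12):
    """Frame-wise ratio of oral-microphone RMS to nasal-accelerometer RMS.

    `oral` and `nasal` are assumed to be synchronized 1-D sample arrays;
    `eps` guards against division by zero in silent nasal frames.
    """
    n = min(len(oral), len(nasal))
    ratios = []
    for start in range(0, n - frame_len + 1, hop):
        o = rms(oral[start:start + frame_len])
        nz = rms(nasal[start:start + frame_len])
        ratios.append(o / (nz + eps))
    return np.array(ratios)
```

A ratio well above 1 in a frame would then indicate predominantly oral energy, and a ratio near or below 1 would suggest strong nasal vibration; actual thresholds would have to be calibrated against the speaker and sensors.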