SRPP: Amazigh Substratal Phonology and Morphology in Moroccan Arabic

The long-standing contact between Amazigh (also known as Berber) and Arabic has resulted in contact-induced phenomena in both languages. The undeniable presence of an Amazigh substratum in the grammar of North African Arabic goes well beyond the straightforward case of lexical borrowing. This presentation focuses on the contact-induced Amazigh phonological and morphological traits in Moroccan Arabic that result from structural borrowing. The phenomena under scrutiny are collated from various treatments in the literature. Although unavoidably programmatic, the treatment makes the claim that while some cases are clear borrowings from Amazigh at both the lexical and structural levels, other affinities between the two languages pose a twofold challenge. First, the Afro-Asiatic affiliation of the two languages entails the presence of cognate structures, potentially blurring the distinction between donor and recipient language. Second, the prolonged coexistence of the two languages opens the possibility of entangled, jointly developed areal traits.

SRPP: Voice quality in interaction: turn-taking and beyond

Voice quality in spontaneous conversational speech has traditionally received little attention, mostly due to the difficulty of isolating the voice source in such variable and noisy signals. While inverse filtering methods exist, they are either extremely time-consuming (when done manually) or error-prone (when automated). By contrast, in this talk I will demonstrate that progress can be made by using acoustic features that are minimally affected by the segmental structure of speech and/or by using alternative data acquisition methods. Specifically, I will present ongoing work on measuring voice quality variation using miniature accelerometers attached to speakers’ necks. These unobtrusive sensors output a signal that is robust to vocal tract influences while still capturing differences between phonation types, and they are thus well suited to investigating interactional functions of voice quality such as turn management.

SRPP: The phonetics of stress: A phonological problem?

Long-standing uncertainty about the phonetic correlates of stress (also known as “phonetic cues to prominence”) is fundamentally due to a failure to appreciate the unusual PHONOLOGICAL nature of stress in many languages. In particular, stress in the West Germanic languages, though superficially a matter of local suprasegmental features realised on individual syllables to varying degrees, is best seen as the manifestation of a hierarchical prosodic structure that (a) defines prominence relations between prosodic constituents and (b) governs the association of intonational features with the segmental string (‘tune-text association’). Differences in tune-text association give rise to cross-linguistic misperceptions of stress.

Metrical stress theory, as originally proposed by Liberman and especially as interpreted in subsequent work, emphasised the ‘rhythmic’ aspects of the hierarchical prosodic structure, but Liberman’s more important insight is that the structure governs tune-text association. Since all languages must have principles of tune-text association (because all spoken language involves both pitch and segmental syllables), a theoretical focus on how (and whether) tune-text association reflects prosodic structure clears up some puzzles (like why English phoneticians think French has stress) and allows for a richer and more insightful prosodic typology than a simple but inadequate dichotomy between tone languages and stress languages.

SRPP: Toward transparent and reproducible speech sciences

Large-scale attempts to replicate published studies across the quantitative sciences have uncovered surprisingly low replication rates. This discovery has led to what is now referred to as the “replication crisis”. Since our understanding of human language is increasingly shaped by quantitative data, there are growing concerns that a similar state of affairs holds for quantitative linguistics, because it shares with other disciplines many research practices that decrease the replicability of published findings. In this talk, I will take a closer look at quantitative linguistics in general and the speech sciences in particular. I will suggest promising ways forward to increase the transparency, reproducibility, and replicability of our work. Moreover, I will offer actionable solutions that can help us create a more robust empirical foundation for quantitative linguistics and save time and resources.

SRPP: Pokémonastics – what we are doing and why we are doing it

Sound symbolism—systematic associations between sounds and meanings—has not traditionally been a topic actively explored in the generative tradition. In my recent research, however, I argue that formal phonology can benefit from perspectives and insights offered by research on sound symbolism, and vice versa. In this talk, I illustrate this thesis with a new research paradigm, dubbed “Pokémonastics” (Kawahara et al. 2018; Shih et al. 2019), in which researchers explore the nature of sound symbolism in human languages using Pokémon names. I will review (1) how Pokémonastics began, (2) why it is a useful research strategy, (3) what we have found so far, and (4) what more needs to be done.

SRPP: Pre-focus expansion and prosody-phonology mismatches

This talk will discuss the central intonational patterns in Bemba (Central Bantu) that correlate with different kinds of declarative sentences and questions. Specifically, the intonational features of final lowering, pitch range expansion and pitch register raising will be discussed to illustrate the most robust intonational patterns of the language, following Kula & Hamann (2017). The talk looks in particular at pre-focus expansion, which points to a less common pattern of focus marking in verb forms and whose prosody contrasts with the patterns of phonological phrasing in Bemba shown in Kula and Bickmore (2015). Pre-focus expansion contrasts with the post-focal compression discussed in other work (see e.g. Kugler 2017, in prep), and the two will be shown not to necessarily co-occur or depend on each other. The talk will discuss possible resolutions to the resulting mismatch between prosody and phonology, which will require a recasting of phonological phrasing.

SRPP: The quest for Agent-Based Modelling (ABM) of sound change

Sound change refers to the slow and systematic change in spoken accent within individuals or communities over time, typically on a time scale of years or decades. Examples of sound change in recent history are the fronting of /u/ (e.g. in GOOSE, FOOT) in Southern British English and the merging of the diphthongs in SQUARE and NEAR in favour of /i@/ in New Zealand English. The forces giving rise to sound change are rooted in the cognitive mechanisms by which humans transmit and subtly imitate each other’s speech attributes, making sound change an emergent phenomenon. In order to identify and understand the roots of sound change, as well as to predict future emerging accents for a particular language community, complex computational models are required. Agent-Based Models (ABMs) make it possible to simulate the evolution of spoken accent in a community of artificial agents, each endowed with fully specified (probabilistic) rules for the perception, production, and mental representation of speech sounds, together with global (stochastic) rules governing interactions among agents.

This talk presents the latest version of the ABM of sound change developed at the Institute of Phonetics and Speech Processing at LMU Munich. Acoustic and (sub-)phonemic levels are implemented in the ABM by general-purpose machine learning algorithms, namely Gaussian Mixture Models (GMMs) and Non-negative Matrix Factorisation (NMF). Each agent organises and continuously adapts both levels of representation in full autonomy. Simulated acoustic and/or (sub-)phonemic changes, at the individual as well as at the population level, are tracked separately, directly compared to real (corpus) data, and their origin interpreted on the basis of the known mechanisms governing the ABM. In the talk, the case of /u/ fronting in Southern British English will serve as an example to showcase the architecture and workings of the ABM.
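To give a feel for the agent-interaction loop that such models are built around, the sketch below implements a deliberately minimal exemplar-based ABM, not the LMU system described above: each agent stores /u/ F2 exemplars, production samples around the agent's category mean with a small hypothetical fronting bias, and perception accepts tokens close to the listener's own category while forgetting the oldest exemplar. All parameter values (memory size, bias, noise, acceptance threshold) are illustrative assumptions.

```python
import random
import statistics

class Agent:
    """Toy agent: its /u/ category is just a list of remembered F2 exemplars (Hz)."""
    def __init__(self, f2_seed, memory=50, rng=None):
        self.rng = rng or random.Random()
        self.exemplars = [f2_seed + self.rng.gauss(0, 30) for _ in range(memory)]

    def mean_f2(self):
        return statistics.mean(self.exemplars)

    def produce(self, bias=10.0, noise=25.0):
        # Production: sample around the current category mean; the small
        # positive bias is a stand-in for an articulatory/channel pressure
        # toward fronting (higher F2). Both values are hypothetical.
        return self.mean_f2() + bias + self.rng.gauss(0, noise)

    def perceive(self, token, threshold=150.0):
        # Perception: accept the token only if it is close enough to the
        # listener's own category, then drop the oldest exemplar (memory decay).
        if abs(token - self.mean_f2()) < threshold:
            self.exemplars.pop(0)
            self.exemplars.append(token)

def simulate(n_agents=20, interactions=5000, seed=1):
    """Run random pairwise interactions; return population mean F2 before/after."""
    rng = random.Random(seed)
    agents = [Agent(f2_seed=1100.0, rng=rng) for _ in range(n_agents)]
    start = statistics.mean(a.mean_f2() for a in agents)
    for _ in range(interactions):
        speaker, listener = rng.sample(agents, 2)
        listener.perceive(speaker.produce())
    end = statistics.mean(a.mean_f2() for a in agents)
    return start, end

if __name__ == "__main__":
    start, end = simulate()
    print(f"population mean /u/ F2: {start:.0f} Hz -> {end:.0f} Hz")
```

Because every accepted token nudges the listener's category toward the (biased) production, the population mean F2 drifts upward over interactions, a cartoon of /u/-fronting as an emergent, population-level outcome of purely local perception-production rules. The actual LMU ABM replaces these lists and thresholds with GMM/NMF representations learned and adapted by each agent.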

SRPP: A phonetic approach to Campidanese Sardinian lenition

Campidanese Sardinian displays a complex system of obstruent lenition that has received much attention in theoretical phonology, motivating formal devices such as constraints on systemic contrast (Tessier 2004), local conjunction of markedness and faithfulness (Łubowicz 2002), perceptual ‘warping’ of faithfulness scales (Storme 2018), and *MAP constraints with ranking biases (Hayes & White 2015). All of these proposals are based on Bolognesi’s (1998) description of Campidanese. In this account, voiced and voiceless underlying (UR) series of stops (referred to as /D/ and /T/, respectively) contrast in post-pausal or utterance-initial position. The /T/ series lenites to voiced continuants following a vowel within a phrase; the /D/ series does not lenite. This pattern is phonologically problematic because /T/ undergoes a relatively radical change ([voi] and [cont]), while /D/ fails to undergo a less radical change (just [cont]).

This talk argues, based on phonetic results from Katz & Pitzanti (2019), that none of the phonological devices mentioned above are necessary or sufficient for describing Campidanese consonant lenition. Instead, I propose a model that derives manner-related lenition and fortition from prosodically-conditioned changes in duration, without changing phonological features at all. This phonetic approach captures core facts about the consonant system that are missing from Bolognesi’s (1998) description and all subsequent analyses based on that description: (1) rates of prosodically-conditioned lenition and associated changes in intensity are predictable from prosodically-conditioned differences in duration; (2) manner and intensity differences between different UR consonant series are not predictable from duration alone; and (3) duration- and intensity-based lenition and fortition affect all consonants, even extending to vowel-vowel transitions in hiatus. I will present the phonetic model, show how it derives the core properties described above, and speculate on the cross-linguistic uniformity or lack thereof in such intervocalic lenition processes.

SRPP: Articulator-Specific Performance Changes and their Acoustic Consequences: Findings on Typical Talkers and Talkers with Dysarthria

Antje Mefferd, PhD CCC-SLP (Assistant Professor, Vanderbilt University Medical Center)

Although imprecise articulation is the hallmark of dysarthria, little is currently known about the articulator-specific mechanisms that underlie these imprecisions or about how best to treat them. Speech behavioral interventions such as loud, slow, or clear speech cues are commonly used to improve the speech of talkers with dysarthria; however, the choice among these three treatments is often based on trial therapy rather than a scientific rationale.

Over the past few years, we have directed our research efforts towards an improved understanding of the articulator-specific mechanisms that underlie currently used therapeutic interventions. In this talk, I will report findings from a series of studies that examined tongue- and jaw-specific performance changes in response to loud, clear, and slow speech cues in typical talkers as well as talkers with Parkinson’s disease and amyotrophic lateral sclerosis (ALS). We recorded tongue and jaw articulatory movements using 3D electromagnetic articulography. In addition to cueing effects, I will report how cueing-related changes in tongue and jaw motor performance contributed to acoustic vowel contrast change in these talkers. Finally, I will share preliminary findings from a recent study that investigated articulatory tradeoffs between tongue retraction and lip protrusion and their consequences for acoustic vowel contrast in talkers with ALS. The overarching goal of our work is to provide a scientific basis for clinical decisions on speech treatment selection for talkers with dysarthria.

SRPP: Is language rhythm in the ear of the beholder? A sensorimotor synchronisation approach to the cross-linguistic study of rhythm

Tamara Rathcke (University of Konstanz), Chia-Yuan Lin (University of Kent), Simone Falk (University of Montreal), Simone Dalla Bella (University of Montreal)

Rhythmic properties of speech and language have been controversially debated since the 1980s (Dauer, 1983; Roach, 1982), and the quest for a rhythmic typology is sometimes viewed as Quixotic (Cummins, 2012:32). The main aim of the present study is to use a novel movement-based paradigm to provide evidence as to what extent rhythm perception is bottom-up (i.e. guided by the acoustic signal) or top-down (i.e. shaped by the native prosodic system of the listener). A series of sensorimotor synchronisation (SMS) experiments (Aschersleben, 2002; Repp, 2005) was run with French and English listeners. Thirty participants per language were asked to tap in synchrony with the subjectively perceived beat of 20 sentences in their native and non-native language. The sentences varied in length and syntactic complexity, and were presented in a loop consisting of 20 repetitions. If rhythm perception arose bottom-up, the SMS data were expected to show evidence of stable, acoustically defined events serving as synchronisation anchors in both groups of listeners. If rhythm perception was a top-down experience, the SMS data were expected to differ in a language-specific way.

The results of the experiments first highlight that beat tracking across languages is locked onto vowels. The cyclical production of vowel gestures in connected speech has previously been suggested as one of the main reasons why spoken language might be rhythmic in nature (Fowler & Tassinary, 1981). Our findings further indicate that linguistic rhythm may indeed be in the ear of the beholder, and that attempts to base a rhythmic typology entirely on acoustic properties of speech signals are likely to remain ill-advised (Cummins, 2012), whereas typological approaches that involve phonological features of prosodic systems (e.g. Jun, 2014) appear promising.