4TH PHONETICS AND PHONOLOGY IN EUROPE
Phonetics and Phonology: Real-world applications
21-23 June 2021

BOOK OF ABSTRACTS
https://pape2021.upf.edu/

CONTENTS

WELCOME
COMMITTEES
PROGRAM AT A GLANCE
INVITED SPEAKERS

ORAL SESSIONS
Day 1 - Oral Session 1: Second Language Acquisition I
Day 1 - Oral Session 2: Second Language Acquisition II
Day 1 - Oral Session 3: Prosody
Day 2 - Oral Session 1: Sound Change and Language Contact
Day 2 - Oral Session 2: Acoustic Phonetics and Phonology
Day 2 - Oral Session 3: Articulatory Phonetics and Phonology
Day 2 - Oral Session 4: Multimodal Phonetics
Day 3 - Oral Session 1: First Language Acquisition
Day 3 - Oral Session 2: Clinical Phonetics

POSTER SESSIONS
Day 1 - Poster Session A: SLA - Suprasegmental Features
Day 1 - Poster Session B: Phonetics and Phonology: Articulation/Acoustics & Phenomena
Day 1 - Poster Session C: Prosody: Information Structure
Day 2 (am) - Poster Session A: SLA - Vowel Training & Pronunciation Instructions
Day 2 (am) - Poster Session B: Prosodic Structure, Discourse & Pragmatics
Day 2 (am) - Poster Session C: Sound Change and Language Contact
Day 2 (pm) - Poster Session A: SLA - Acquisition of Vowels & Consonants
Day 2 (pm) - Poster Session B: Intonation, Rhythm &
Day 2 (pm) - Poster Session C: Phonological Processes
Day 2 (pm) - Poster Session D: Prosodic Cues and Questions

WORKSHOPS
Forms and Representations in Phonology: In Honour of Bob Ladd
Acquisition at the interface between prosody, gesture, and pragmatics
Creativity and Variability: Prosody and Information
From speech technology to big data phonetics and phonology: a win-win paradigm
Prosodic variation: the role of past and present contact in multilingual societies

WELCOME

We are delighted to welcome you to the 4th Phonetics and Phonology in Europe Conference (PaPE 2021), to be held virtually from Monday 21 to Wednesday 23 June 2021, jointly organised by the Universitat Pompeu Fabra and the Universitat de Barcelona, Catalonia. This fourth edition follows successful conferences in Lecce, Italy (2019), Cologne, Germany (2017), and Cambridge, UK (2015). Prior to 2015, a series of biennial PaPI (Phonetics and Phonology in Iberia) conferences dates back to 2003. PaPE 2021 has the support of other research groups in Barcelona institutions (UAB, UOC, IEC) working in the disciplines of Phonetics and Phonology.

PaPE 2021 aims to be an interdisciplinary forum for research in all areas of phonetics and phonology, and to promote discussion on the sound properties of language approached from multiple perspectives: neurolinguistics, psycholinguistics, computational linguistics and speech technologies. This year, PaPE is organised around the topic Phonetics and Phonology: Real-world applications, including contributions dealing with – but not limited to – acquisition and teaching research, multimodal communication, conversational avatars, and health and clinical research.

We are very happy with the enthusiastic response to this conference, which has allowed us to put together a rich programme of around 100 posters and 35 oral presentations. We will also count on the participation of four world experts in our fields (Drs. Natalia Kartushina, Julia Hirschberg, Wendy Sandler, and Stefanie Shattuck-Hufnagel) and four workshops running on the afternoon of June 23, following a special workshop in honour of Emeritus and Distinguished Professor Bob Ladd. We hope to build and further strengthen the bridges within our research community through this exciting programme, while at the same time providing an excellent opportunity for young scientists to present their work.

Unfortunately, under the extraordinary conditions created by the pandemic, the on-site PaPE 2021 had to be replaced with a virtual event this year. However, we have still sought to enable interaction between the participants and to create a friendly and scientifically inspiring atmosphere in which to share your research. We hope that we will have other opportunities to organise further events in our vibrant city of Barcelona. In the meantime, let us wish you a profitable, collaborative and exciting online conference.

The organising committee

Pilar Prieto, Prosodic Studies Group, Department of Translation and Language Sciences, ICREA-UPF
Mireia Farrús, Language and Computation Centre, Department of Catalan Philology and General Linguistics, UB
Mariia Pronina, Prosodic Studies Group, Department of Translation and Language Sciences, UPF
Patrick L. Rohrer, Prosodic Studies Group, Department of Translation and Language Sciences, UPF, and the Laboratoire Linguistique de Nantes (LLING - UMR 6310), UN

COMMITTEES

Organising Committee

Local Advisory Committee

Standing Committee

PROGRAM AT A GLANCE

Monday, June 21st
8h40 Opening Ceremony
9h Keynote Speech: Natalia Kartushina
Oral Session 1: Second Language Acquisition I
Coffee Break
Oral Session 2: Second Language Acquisition II
Lunch Break
Poster Session
Coffee Break
Oral Session 3: Prosody
17h Keynote Speech: Julia Hirschberg

Tuesday, June 22nd
9h20 Oral Session 1: Sound Change and Language Contact
Poster Session
Coffee Break
Oral Session 2: Acoustic Phonetics and Phonology
Lunch Break
Poster Session
Coffee Break
Oral Session 3: Articulatory Phonetics and Phonology
Oral Session 4: Multimodal Phonetics
18h Keynote Speech: Stefanie Shattuck-Hufnagel

Wednesday, June 23rd
9h Keynote Speech: Wendy Sandler
Oral Session 1: First Language Acquisition
Coffee Break
Oral Session 2: Clinical Phonetics
Closing Words and Student Awards
Lunch Break
Workshop: Forms and Representations in Phonology: In Honour of Bob Ladd (invited speakers)
Coffee Break
17h Parallel Workshops

INVITED SPEAKERS

Natalia Kartushina
Center for Multilingualism in Society across the Lifespan (MultiLing), University of Oslo

Bridging the gap between L2 speech sound acquisition and L2 teaching

L2 speech sound acquisition is challenging for late language learners, as even experienced L2 speakers often display difficulties in their perception and production of non-native speech sounds. Although the dominant theoretical perspectives attribute L2 speakers' sound mispronunciation to a lack of accurate perceptual representations and predict a tight relationship between the two domains, research provides no evidence for a consistent relationship and points to a number of factors modulating the strength of the perception-production link. Given this inconsistency in the results on the relationship between L2 perception and production, how can L2 learners and L2 teachers optimize L2 speech sound learning? Can L2 learners rely on their perception to guide their production? Can improved production fine-tune L2 perceptual representations?

Moreover, L2 learners' speech sound acquisition is further challenged by unstable accuracy in the self-assessment of their own L2 perception and production, suggesting that even when L2 production is accurate, L2 speakers might not systematically recognize it as such. The natural question then arises as to why L2 speakers' ability to assess their own L2 accuracy is unstable. Can L2 learners improve their self-assessment to fine-tune their L2 speech sound representations? Can L2 teachers reliably guide this process?

In my talk, I will attempt to bridge the gap between research on L2 speech sound acquisition and L2 teaching by (1) answering the above-raised questions and thereby providing guidelines for L2 teachers and L2 learners; and (2) highlighting some areas where knowledge is lacking and indicating future directions for research.

Julia Hirschberg
Department of Computer Science, Columbia University

Conversational Avatars and Empathetic Conversations

Dialogue systems today have become much more flexible and useful for accomplishing tasks online or on the phone. However, there is still considerable room for improvement to make them more like conversational partners: how can we adapt them to the needs of individual users more successfully? There has been considerable research on developing chatbots and avatars who empathize with users, identifying the user's feelings and taking action to help them deal with their problems. However, while identifying sentiment and emotion in speakers' language and other modalities has been much studied, the appropriate response to varied emotions is still a major challenge. We will discuss previous research in empathetic conversation and suggest some promising new approaches.

Stefanie Shattuck-Hufnagel
Speech Communication Group, Research Laboratory of Electronics, MIT

The Role of Individual Acoustic Cues in Speech Processing

For many decades, speech scientists have made extensive use of the concept of individual acoustic cues to the contrastive sound categories of a language, in constructing and testing models of the cognitive processes that undergird speech perception, speech production and the learning of these skills. Phenomena such as conversational convergence, covert contrast, cue substitution and cue weighting/reweighting provide ample illustration of the role of individual cues to segmental feature contrasts, and investigations have revealed an active role for individual acoustic cues to prosodic structure as well. Yet only recently have researchers begun to take seriously the evidence that individual cues are represented and manipulated by speakers, attended to by listeners, and selectively used by learners and by atypical speakers. The recognition of systematic context-governed phonetic variation (in the form of cue selection and cue values) as information rather than as noise is an important step, not only for modelling typical speech processing and its development, but also for understanding, modelling and remediating atypical speech. In this paper I review some of the extensive evidence that supports this view, draw out some of its implications for processing models, and describe some of the tools that have been created for capturing and analysing surface phonetic variation at the level of the individual acoustic cue.

Wendy Sandler
Multimodal Communication, University of Haifa

The Multimodal Composition of a Theatrical Sign Language

In sign languages, visible actions of bodily components often correspond directly to linguistic functions, comprising a Grammar of the Body. This property allows us empirically to trace the course of language emergence in new sign languages through bodily actions, exemplified by Al-Sayyid Bedouin Sign Language. I then take the Grammar of the Body approach one step further, into the realm of sign language theatre. I show how deaf actors recruit multimodal elements from sign language phonology, gesture, and mime, and weave them into complex compositional arrays that push the bounds of human communication in the service of art.


ORAL SESSIONS

In order of appearance in the programme

DAY 1 - Oral Session 1: SECOND LANGUAGE ACQUISITION I

Integration of vowel perception & production in Spanish-English bilinguals: an experimental study

Madeleine Rees1 and Brechtje Post1
1University of Cambridge

The nature and existence of the relationship between speech perception and production in the brain is heavily debated. Motor theories1,2 propose that gestural information is extracted directly from auditory input, rendering perception and production inextricable. Meanwhile, in auditory and sensorimotor models3,4,5, speech is produced according to auditory targets, and motor and auditory information inform and constrain each other through feedback loops. This dichotomy is reproduced in bilingual models. The Speech Learning Model (SLM)6,7 favours an auditory-target-centric and perception-first approach in L2 acquisition. Meanwhile, the Perceptual Model8 proposes that gestural acquisition lets perception and production develop simultaneously. This experiment aims to determine whether bilinguals habitually integrate auditory and gestural information in production and perception, and whether one strategy predominates in the L1 or the L2.

I present evidence for a sensorimotor relationship between perception and production in L2 vowel acquisition. It is assumed that the point equidistant between two vowel distributions (in 2D space) is the optimal location for a categorical perceptual boundary (CPB), the continuum point where listeners stop perceiving one vowel and start perceiving another. Contrary to previous studies, this experiment generates speaker-specific perceptual stimuli from resynthesised production data following Chao, Ochoa & Daliri (2019)9, providing closer focus on individual behaviour. First, 25 Spanish-dominant bilinguals (with B2 or equivalent level English) and 14 English monolingual controls took a word reading task containing English /æ/ and /e/ and Spanish /a/ and /e/, to generate an individual vowel-variability-based boundary (VBB), a source of gestural information. Next, participants completed a vowel identification test to define their CPB. Controls showed significant positive correlations between VBB and CPB in Bark (r=0.545, r2=0.297, p=0.04; Fig. 1), as did bilinguals in English (r=0.43, r2=0.182, p=0.03; Fig. 2) and in Spanish (r=0.41, r2=0.17, p=0.04; Fig. 3). Results in Hz are similar in bilinguals (English: r=0.528, r2=0.279, p<0.01; Spanish: r=0.45, r2=0.203, p=0.02), while the correlation in controls loses significance (r=0.474, r2=0.225, p=0.09), possibly due to the lack of participants. These findings suggest that monolinguals and bilinguals link perceptual boundaries with their vowel distribution boundaries. Such integration is potentially a sensorimotor feedback mechanism which aids perception regardless of language. Arguably, phonological working memory (PWM)11 helps integration of auditory and gestural data, as it appears to be connected to both perception and production systems11,12,13, allowing for the 'cycling'14 of information. Pink masking noise in trials did not affect the correlation, suggesting that a sensorimotor framework is also engaged in degraded stimulus processing (Moulin-Frier et al., 2012)10. The lack of strong correlation in some speakers may indicate that they relied also on auditory-only processing, as seen in Callan et al. (2004)15, and eschewed direct gestural information extraction as proposed by pure motor theories. The pilot study showed no significant effects of acquisition-based factors (e.g. time learning the L2) on the correlation: the role of acquisition factors is currently under investigation. Cross-linguistic vowel data tentatively support a perception-first SLM framework. Overall, these results indicate that speakers can link auditory (perceptual) and gestural (production) data during speech perception. Future experiments will test perceptual discrimination against production data. Furthermore, the addition of English-L1 Spanish learners and Spanish monolinguals to the sample will provide more information on whether perception precedes production in L2 acquisition.

Figure 1. CPB and VBB correlation in English monolingual controls, in Bark (right) and Hz (left).
Figure 2. CPB and VBB correlation in bilinguals listening to English, in Bark (right) and Hz (left).
Figure 3. CPB and VBB correlation in bilinguals listening to Spanish, in Bark (right) and Hz (left).
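The boundary-and-correlation logic described above can be sketched in a few lines of code; the Hz-to-Bark conversion shown (Traunmüller 1990) is one standard formula and, like all variable names and toy values below, is an illustrative assumption rather than the authors' actual pipeline.

    import numpy as np
    from scipy.stats import pearsonr

    def hz_to_bark(f_hz):
        # One common Hz-to-Bark conversion (Traunmueller 1990); assumed here for illustration.
        f_hz = np.asarray(f_hz, dtype=float)
        return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

    def vbb_midpoint(tokens_a, tokens_b):
        # Vowel-variability-based boundary (VBB): the point equidistant between the mean
        # (F1, F2) of two vowel distributions, i.e. their midpoint in 2-D formant space.
        return (np.mean(tokens_a, axis=0) + np.mean(tokens_b, axis=0)) / 2.0

    # Toy per-speaker tokens of two vowel categories as (F1, F2) pairs in Hz
    ae_tokens = np.array([[660.0, 1720.0], [680.0, 1690.0], [650.0, 1750.0]])
    e_tokens = np.array([[520.0, 1900.0], [540.0, 1880.0], [500.0, 1930.0]])
    print("VBB (F1, F2):", vbb_midpoint(ae_tokens, e_tokens))

    # Toy group-level data: one VBB and one CPB value (F1, Hz) per speaker
    vbb_f1 = np.array([560.0, 610.0, 585.0, 640.0, 600.0, 575.0])
    cpb_f1 = np.array([555.0, 620.0, 590.0, 655.0, 595.0, 580.0])
    r, p = pearsonr(hz_to_bark(vbb_f1), hz_to_bark(cpb_f1))
    print(f"VBB-CPB correlation in Bark: r = {r:.3f}, r^2 = {r**2:.3f}, p = {p:.4f}")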

References

[1] Liberman, A.M., & Mattingly, I.G. (1985). The motor theory of speech perception revised. Cognition, 21(1), 1-36.
[2] Liberman, A.M., & Mattingly, I.G. (1989). A specialization for speech perception. Science, 243, 489-494.
[3] Guenther, F. H., Hampson, M., & Johnson, D. (1998). A theoretical investigation of reference frames for the planning of speech movements. Psychological Review, 105(4), 611.
[4] Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393-402.
[5] Guenther, F. H., & Vladusich, T. (2012). A neural theory of speech acquisition and production. Journal of Neurolinguistics, 25(5), 408-422.
[6] Flege, J.E. (1999). The relation between L2 production and perception. In Ohala, J., Hasegawa, Y., Ohala, M., Granveille, D. & Bailey, A. (eds.), Proceedings of the 14th International Congress of Phonetic Sciences (pp. 1273-1276). San Francisco: Regents of the University of California.
[7] Flege, J.E. (2007). Language contact in bilingualism: phonetic interactions. In Cole, J., & Hualde, J. I. (eds.), Laboratory Phonology 9 (pp. 353-381). Berlin: Mouton de Gruyter.
[8] Best, C.T., & Tyler, M.D. (2007). Nonnative and second-language speech perception: Commonalities and complementarities. In Bohn, O.S. & Munro, M.J. (eds.), Second Language Speech Learning: The Role of Language Experience in Speech Perception and Production (pp. 13-34). Amsterdam: John Benjamins.
[9] Chao, S. C., Ochoa, D., & Daliri, A. (2019). Production variability and categorical perception of vowels are strongly linked. Frontiers in Human Neuroscience, 13, 1-7.
[10] Moulin-Frier, C., Laurent, R., Bessière, P., Schwartz, J. L., & Diard, J. (2012). Adverse conditions improve distinguishability of auditory, motor, and perceptuo-motor theories of speech perception: An exploratory Bayesian modelling study. Language and Cognitive Processes, 27(7-8), 1240-1263.
[11] Baddeley, A., Lewis, V., & Vallar, G. (1984). Exploring the articulatory loop. The Quarterly Journal of Experimental Psychology Section A, 36(2), 233-252.
[12] Thorn, A. S., & Gathercole, S. E. (2001). Language differences in verbal short-term memory do not exclusively originate in the process of subvocal rehearsal. Psychonomic Bulletin & Review, 8(2), 357-364.
[13] Dupoux, E., Peperkamp, S., & Sebastián-Gallés, N. (2001). A robust method to study stress "deafness". The Journal of the Acoustical Society of America, 110(3), 1606-1618.
[14] Jacquemot, C. & Scott, S. (2006). What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10(11), 480-486.
[15] Callan, D. E., Jones, J. A., Callan, A. M., & Akahane-Yamada, R. (2004). Phonetic perceptual identification by native- and second-language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory-auditory/orosensory internal models. NeuroImage, 22(3), 1182-1194.

L1-L2 coarticulatory adjustment of vocalic nasalization in Spanish-English bilinguals

Ander Beristain
University of Illinois at Urbana-Champaign

Background. Vowel nasalization is a type of coarticulatory effect where an inherently oral vowel is nasalized due to the early opening of the velopharyngeal port for a subsequent nasal consonant. [1] points out that while this type of nasalization is 'universal,' the intensity of such conditioned nasality varies across languages. For instance, [2, 3] studied the aerodynamics of vowel nasalization in Spanish and English across speech rates and found that while nasalized vowels underwent a low-level coarticulatory effect in Spanish, the effect was significantly greater in English. Most studies on vowel nasalization have focused on first language (L1) production and are limited by small sample sizes. The function of vowel nasalization in additional languages is thus an open question, as most work on second language (L2) and bilingual phonology has focused on individual consonants and vowels, but not on the interrelation of the two. Nevertheless, studies such as [4] and [5] investigated the acquisition of acoustic coarticulation in the L2 and found that coarticulatory patterns can be transferred from the L1 to the L2. Furthermore, L2 proficiency was a significant factor in both studies, where more advanced learners showed more 'native-like' patterns, yet not the same ones.

Research Question. Can new vowel nasalization patterns be acquired after childhood, or do L1 gestural features prevail in the L2? Does L2 proficiency show a significant effect?

Methodology. 26 participants were tested in the present study (10 native speakers of Peninsular Spanish, L1Sp, who were advanced L2 learners of English; and 16 native speakers of American English, L1En, who were intermediate L2 learners of Spanish). They took part in a Spanish and an English read-aloud task (each one week apart), from which aerodynamic data were collected using two TSD160A pressure transducers connected to a vented mask with one oral and one nasal cavity. Each participant produced 30 target words under three different conditions in each language (10 phonetically oral vowels, CVCV; 10 phonetically nasalized vowels in tautosyllabic condition, CVN$; and 10 phonetically nasalized vowels in heterosyllabic condition, CV$NV). From each vocalic segment, two different variables were analyzed: 1) onset of nasalization (OoN); and 2) degree of nasalization (DoN). OoN was obtained by analyzing 40 time-normalized points in the vocalic segment and comparing the functions of the oral and nasalized vowels across time; a total of 62,400 points were fit into a Generalized Additive Mixed Model (GAMM) for the statistical analysis. DoN was obtained by integrating the function of nasal airflow proportion to total (nasal + oral) airflow; a total of 1,560 points were fit into a linear fixed-effect model for the statistical analysis.

Results. Both groups showed later OoN values for Spanish than for English (in agreement with [2, 3], which proposed that nasalization is phonetic in Spanish and phonological in English). However, while the L1Sp group showed significantly later values for Spanish, the L1En group showed significantly earlier values for English; group yielded main effects (β = .006, p = .019). As for DoN, significantly lower values were found for Spanish. Regarding the L2, only the L1Sp group was able to adjust their nasalization patterns significantly in the L2 (English) (β = .138, p < .001). These results align with previous findings that linguistic proficiency plays a significant role in L2 coarticulatory acquisition, as L1En speakers are intermediate learners while L1Sp speakers are advanced learners (see summary in Table 1).
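The two aerodynamic measures defined in the Methodology (OoN from 40 time-normalised points per vowel, DoN as the integrated proportion of nasal to total airflow) can be sketched roughly as follows; the airflow traces, resampling details and function names are illustrative assumptions, not the study's processing script.

    import numpy as np

    def nasal_proportion(oral_flow, nasal_flow):
        # Proportion of nasal airflow to total (nasal + oral) airflow at each sample.
        oral_flow = np.asarray(oral_flow, dtype=float)
        nasal_flow = np.asarray(nasal_flow, dtype=float)
        return nasal_flow / (nasal_flow + oral_flow)

    def degree_of_nasalization(oral_flow, nasal_flow):
        # DoN: integrate the nasal-airflow proportion over the time-normalised vowel
        # (the mean over a unit-length interval), yielding a value between 0 and 1.
        return nasal_proportion(oral_flow, nasal_flow).mean()

    def nasalization_contour(oral_flow, nasal_flow, n_points=40):
        # Resample the nasal-airflow proportion at 40 time-normalised points per vowel,
        # the kind of per-token contour that a GAMM could compare across oral and
        # nasalised vowels to locate the onset of nasalization (OoN).
        prop = nasal_proportion(oral_flow, nasal_flow)
        t_old = np.linspace(0.0, 1.0, len(prop))
        t_new = np.linspace(0.0, 1.0, n_points)
        return np.interp(t_new, t_old, prop)

    # Toy vowel: nasal airflow ramps up towards a following nasal consonant
    t = np.linspace(0.0, 1.0, 200)
    oral = np.full_like(t, 0.8)
    nasal = 0.05 + 0.6 * t ** 2
    print("DoN =", round(degree_of_nasalization(oral, nasal), 3))
    print("first contour points:", np.round(nasalization_contour(oral, nasal)[:5], 3))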

Conclusion. Although both groups show some type of deviation from their L1 coarticulatory patterns in the L2, only the advanced group (i.e., L1Sp) succeeds in showing significant cross-linguistic coarticulatory adjustment. This may indicate that L2 coarticulatory acquisition might be tied to higher levels of linguistic proficiency.

Table 1. Summary of aerodynamic results (L2).

                 Spanish             English
                 OoN      DoN        OoN      DoN
L1Sp
  CV(CV)         --       0.05       --       0.05
  CVN            0.87     0.13       0.45     0.28
  CVNV           0.90     0.08       0.64     0.14
L1En
  CV(CV)         --       0.04       --       0.04
  CVN            0.37     0.27       0.16     0.28
  CVNV           0.60     0.23       0.23     0.27

*OoN: 0-1 point; *DoN: 0-1 percentage.

References

[1] Sampson, R. 1999. Nasal Vowel Evolution in Romance. Oxford University Press.
[2] Solé, M.-J. 1992. Phonetic and phonological processes: The case of nasalization. Language and Speech, 35(1-2), 29-43.
[3] Solé, M.-J. 1995. Spatio-temporal patterns of velopharyngeal action in phonetic and phonological nasalization. Language and Speech, 38, 1-23.
[4] Zsiga, E. 2003. Articulatory timing in a second language: Evidence from Russian and English. Studies in Second Language Acquisition, 25, 399-432.
[5] Oh, E. 2008. Coarticulation in non-native speakers of English and French: An acoustic study. Journal of Phonetics, 36, 361-384.

Teaching Mandarin tones to English speakers via intonational analogy

Siyu Chen1, Laurence White2, María Arche1 & Claire Monks1
1University of Greenwich; 2Newcastle University

Many Mandarin Chinese words are distinguished only by lexical tone. Thus, second language (L2) learners of Mandarin should aim to reliably recognise the four tones when listening and produce distinct tones while speaking. However, difficulties in perceiving and producing tones have long been considered major obstacles for speakers of non-tonal languages, with the teaching of Mandarin tones "problematic, neglected, or both" [1].

We compared two training methods for English speakers' acquisition of Mandarin tones: 1) observation of hand gestures that mimic tone contours; 2) assimilation of Mandarin tones to English intonational categories. Hand gestures may help learners visualise abstract L2 features and build up a physical sense of those features [2]. More specifically, tone-mimicking gestures may be effective for English learners of Mandarin in distinguishing words on the basis of the four principal Mandarin tones [3], although L2 learners need to translate that visual information into the auditory modality. Alternatively, an auditory-only method of tone training for native speakers of non-tonal languages is suggested by the Perceptual Assimilation Model, which proposes that learners assimilate L2 sounds to existing L1 categories at points of similarity in the two phonological inventories [4]. Thus, a within-modality method encouraging assimilation between novel L2 tone features and L1 intonational features has some theoretical support.

Native English speakers with no tonal language experience were trained to identify the four Mandarin tones. In a between-subjects design, 75 participants (47 female, 28 male; mean age 23.3 years) were randomly assigned to one of three training conditions – gesture observation (GO), tone assimilation (TA), baseline control (BC) – with four stages of training and testing.

1. Videos introduced the four Mandarin tones according to training condition (below).
2. Participants watched a tone-training video for Mandarin monosyllables, according to training condition, followed by a monosyllable-based tone identification test.
3. Participants watched a tone-training video for Mandarin disyllables, according to training condition, followed by a disyllable-based tone identification test.
4. Participants watched a word-meaning training video featuring monosyllables from the tone identification test (stage 2) and finally took a word-meaning test.

Training videos in the three conditions were as similar as possible except for the critical manipulation. In the GO condition, the instructor presented the Mandarin stimuli using her hand to mimic each tonal contour. In the TA condition, Mandarin tones were explicitly compared to English intonation contours: 1 – hesitation; 2 – question; 3 – uncertainty; 4 – statement. In the BC condition, the videos were similar to the GO and TA conditions, except that the instructor did not use hand gestures or make the English intonation comparisons.

As assessed by mixed-effects logistic regression models, tone identification for Mandarin monosyllables was better in both the gesture observation (GO: p<.05) and tone assimilation (TA: p<.01) conditions than in the baseline control. The TA group also showed better performance on Mandarin disyllable tone identification relative to baseline (p<.05), with a trend towards better performance than GO (p=.056). A similar pattern was found for accuracy in the word-meaning test (BC: 65%; GO: 67%; TA: 69%), but differences were not robust.

We conclude that both gesture observation and tone assimilation are helpful for native English learners in the initial stages of Mandarin lexical tone acquisition. However, when the lexical tone combinations became more complicated, in the training of disyllabic Mandarin words, the within-modality training method (TA) was more helpful than the cross-modality method (GO) for tone identification. Further research could explore whether a combination of manual gesture and tone assimilation methods is more advantageous than either method alone, and whether such approaches facilitate tone production, or word learning over the longer term.

References

[1] McGinnis, S. (1997). Tonal spelling versus diacritics for teaching pronunciation of Mandarin Chinese. The Modern Language Journal, 81(2), 228-236.
[2] McNeill, D. (2008). Gesture and Thought. Chicago: University of Chicago Press.
[3] Morett, L. M., & Chang, L. Y. (2015). Emphasising sound and meaning: Pitch gestures enhance Mandarin lexical tone acquisition. Language, Cognition and Neuroscience, 30(3), 347-353.
[4] Best, C. T., & Tyler, M. D. (2007). Nonnative and second-language speech perception: Commonalities and complementarities. In O.-S. Bohn & M. J. Munro (Eds.), Language Experience in Second Language Speech Learning: In Honor of James Emil Flege (pp. 13-34). Philadelphia: John Benjamins.

Mandarin Chinese listeners' perception of nonnative /r/-/l/s: phonetics vs. phonology

Yezhou Jiang1, Pierre Hallé1 and Rachid Ridouane1
1Laboratoire de Phonétique et Phonologie, CNRS and Paris 3 University

Nonnative perception of /r/-/l/ has been extensively studied, mainly with English /r/-/l/ presented to Japanese listeners [1, 2]. That Japanese has an /r/ but no /l/ would explain the Japanese difficulty with /r/-/l/ as phonological. But is it solely phonological? Various studies have instead suggested that nonnative speech perception is, at least in part, driven by the phonetic properties of both native and nonnative sounds [3, 4].

The present study aims at clarifying this issue further by examining the rather unexplored case of Mandarin-dominant Chinese listeners. Mandarin does have a lateral /l/, but whether it also has a rhotic /r/ is debated. Modern analyses favor the view that Mandarin has a rhotic /r/, transcribed as <r> in pinyin, and phonetically close to American English (AE) /r/ [5, 6]. On more traditional accounts, pinyin <r> stands for /ʐ/, the voiced counterpart of /ʂ/ [7].

Prior studies on Chinese perception of nonnative /r/-/l/ are scarce and yield mixed results: [8] found that Chinese listeners discriminate/identify AE /r/-/l/ quite well in several contexts; [9] found poor Chinese performance on Russian /r/-/l/. The issue thus needs to be reexamined. We ran Mandarin-dominant listeners on the perception of AE versus French /r/-/l/ for discrimination of word-initial /r/-/l/ (AXB task; 1 s ISI) and identification (free transcription task in pinyin; see below). Near-ceiling performance obtained for both AE and French /r/-/l/. This may support the view that Mandarin has a /r/-/l/ contrast, hence a rhotic /r/, and that Chinese discrimination performance reflects perception at a phonological level. But these data cannot exclude the concurrent view that it reflects perception at a phonetic level. Chinese subjects' relative proficiency in AE vs. French may also play a role.
The pinyin transcription data told a very different story. Let us first note that the AE or French spoken materials for pinyin transcription were monosyllabic items chosen so as to have a putative match in Mandarin. For AE items, we selected 6 /r/-/l/ minimal pairs such as ran-lan (see Appendix), expected to match Mandarin with various pinyin onsets, in particular, , , , and /x/ (, ignoring tone notation). For French items, we added the /ʒ/ onset and selected 9 triplets such as roue-loup-joue (see Appendix), expected to match Mandarin syllables, just like for the English items (). (The /ʒ/ onset was added for its possible match with Mandarin /r/.) For these simple onset items, the transcription data were very clear-cut: both AE and French /l/ were virtually always transcribed with ; AE /r/ was mostly transcribed with (93%) and French /r/ with /x/ (95%); French /ʒ/ was transcribed with (67%) or /tʂ/ (21%). Thus, whereas the pinyin transcriptions for AE /r/-/l/ fit with a phonological account whereby Chinese listeners assimilate a nonnative rhotic to their rhotic, those for French /r/-/l/ do not. Assimilation of French /r/ to Chinese /x/ must be phonetically rather than phonologically motivated. We further tested Chinese listeners on their perception of stop(+/r/)+/a, i, u/ monosyllables. Chinese listeners transcribed French /pr, tr, kr/ with /ph, th, kh/ (97%), without vowel as a rule. In contrast, they transcribed French /p, t, k/ simple onsets with /p, t, k/ (96%). In other words, they interpreted French /r/ after a voiceless stop as [+aspiration], far from their rhotic /r/. For AE, the labial and velar clusters often induced vowel epenthesis (81%) and the post-stop /r/ was transcribed mostly as (e.g., /pra/ > ). The /dr, tr/ clusters induced much fewer vowel epenthesis (14%) (e.g., /tra/ > , /dra/ > ) and assimilated most of the time (86%) to the retroflex /

tʂ, tʂʰ/ (e.g., /tra/ > , /dra/ > ). Thus, AE /r/ in the alveolar clusters was generally interpreted as a retroflex release rather than an /r/ phoneme.

Altogether, then, the transcription data call for a phonetic account of the Chinese perception of /r/-/l/s, dismissing a purely phonological account of the discrimination data.

Appendix. Simple onset items with a putative match in Mandarin (in tone 2 where possible); '*' stands for nonce words, '°' for loanwords.

English items – /r/: ran, rang, run, wrong, rue, row; /l/: lan, lang, *lun, long, loo, low.
Putative Mandarin matches – ran 然, rang 瓤, ren 仁, rong 容, ru 如, rou 柔; lan 兰, lang 狼, lun 轮, long 龙, lu 炉, lou 楼; wan 完, wang 王, wen 文, weng 翁, wu 无, wo 窝.

French items – /r/: rang, *reu, °run, *ranne, roue, rond, roi, Roanne, *roui; /l/: lent, leu, *leune, *lanne, loup, long, loi, *loanne, Louis; /ʒ/: gens, jeu, jeune, Jeanne, joue, jonc, joie, Joanne, jouis.
Putative Mandarin matches – rang 瓤, re 捼, ren 仁, ran 然, ru 如, rong 容, rua 挼, ruan 撋, rui 緌; hang 航, he 合, hen 痕, han 含, hu 湖, hong 红, hua 华, huan 环, hui 回; lang 狼, le 乐, lun 轮, lan 兰, lu 炉, long 龙, *lua, luan 峦, lei 雷; wang 王, *we, wen 文, wan 完, wu 无, weng 翁, wa 娃, wei 维.

References

[1] Goto, H. 1971. Auditory perception by normal Japanese adults of the sounds "L" and "R". Neuropsychologia, 9, 317-323.
[2] Miyawaki, K., Strange, W., Verbrugge, R., Liberman, A., Jenkins, J., & Fujimura, O. 1975. An effect of linguistic experience: The discrimination of [r] and [l] by native speakers of Japanese and English. Perception and Psychophysics, 18, 331-340.
[3] Hallé, P., Best, C. & Levitt, A. 1999. Phonetic versus phonological influences on French listeners' perception of American English approximants. Journal of Phonetics, 27, 281-306.
[4] Takagi, N. & Mann, V. 1995. The limits of extended naturalistic exposure on the perceptual mastery of English /r/ and /l/ by Japanese learners of English. Applied Psycholinguistics, 16, 379-405.
[5] Lin, Yen-Hwei. 2007. The Sounds of Chinese with Audio CD. Vol. 1. New York, NY: Cambridge University Press.
[6] Lee-Kim, S. 2014. Revisiting Mandarin 'apical vowels': An articulatory and acoustic study. Journal of the International Phonetic Association, 44, 261-282.
[7] Maddieson, I., & Disner, S. 1984. Patterns of Sounds. Cambridge: Cambridge University Press.
[8] Brown, C. 1998. The role of the L1 grammar in the L2 acquisition of segmental structure. Second Language Research, 14, 136-193.
[9] Smith, J., & Kochetov, A. 2009. Categorization of non-native liquid contrasts by Cantonese, Japanese, Korean, and Mandarin listeners. Toronto Working Papers in Linguistics, 34, 1-15.

DAY 1 - Oral Session 2: SECOND LANGUAGE ACQUISITION II

Perceptual unlearning? Exposure to L1-accented-like L2 words impacts perceptual categorization of difficult second-language contrasts

Miquel Llompart
Friedrich Alexander University Erlangen-Nuremberg

When learning a second language (L2), it is often difficult to differentiate between non-native phones in perception. This is the case, for instance, for native German speakers with the English /ɛ/-/æ/ vowel contrast [1]. To master this type of distinction, learners need to establish separate phonetic categories for the two phones and then accurately encode them into the phonological representations of L2 words [2]. It is widely established that initial shortcomings in the perception of difficult L2 contrasts lead to deficits in the phonological representations of L2 words containing these phones, which are often described as "weak" or "fuzzy" from a phonetic standpoint [3]. It is unknown, however, whether exposure to phonetic inconsistencies for such confusable non-native phones in L2 words may directly affect learners' perceptual categorization of the contrast. Note that this is especially relevant for L2 learning in L1-speaking countries, where learners are bound to be exposed to systematic mispronunciations by fellow learners. This study intends to be a first step towards answering this question.

Twenty-nine advanced German learners of English participated in a three-part perception task modelled after [4] and focusing on two English vowel contrasts: /ɛ/-/æ/ and /i/-/ɪ/ (an "easy" L2 baseline; shared with German). Each part consisted of an exposure phase and two 2AFC categorization phases. In the exposure phase, participants listened to a native English speaker produce a list of 60 words (12 words with each critical vowel + 12 filler words). In the categorization phases, they were asked to categorize the steps of two 11-step continua (bet-bat and sheep-ship) constructed using tokens of these words by the same native English speaker as endpoints. Crucially, in parts 1 and 3, all words in the exposure phases were pronounced correctly, whereas in part 2, words with /æ/ and /ɪ/ were mispronounced so that these vowels became [ɛ] and [i], respectively (e.g., dr[ɛ]gon, w[i]zard). Hence, the main question was whether exposure to mispronounced /æ/- and /ɪ/-words in part 2 would result in perceptual boundary shifts in the following categorization tasks relative to part 1. Additionally, part 3 allowed us to test whether, if shifts were observed, boundaries would shift back to their original locations after hearing correctly-produced /æ/- and /ɪ/-words once again.
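One way to make the boundary-shift analysis concrete is to estimate the 50% crossover point of each continuum from the categorization responses. The following is a minimal, purely illustrative sketch (a plain logistic fit on invented response proportions); the study itself reports predicted probabilities from statistical models, and none of the numbers or function names below come from the abstract.

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(step, boundary, slope):
        # Logistic psychometric function: probability of a "bat" response at a continuum step.
        return 1.0 / (1.0 + np.exp(-slope * (step - boundary)))

    steps = np.arange(1, 12)  # an 11-step bet-bat continuum
    # Invented proportions of "bat" responses per step, before (part 1) and after (part 2) exposure
    p_bat_part1 = np.array([0.02, 0.03, 0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.95, 0.98, 0.99])
    p_bat_part2 = np.array([0.01, 0.02, 0.03, 0.06, 0.15, 0.35, 0.60, 0.82, 0.92, 0.96, 0.99])

    (b1, _), _ = curve_fit(psychometric, steps, p_bat_part1, p0=[6.0, 1.0])
    (b2, _), _ = curve_fit(psychometric, steps, p_bat_part2, p0=[6.0, 1.0])
    print(f"50% crossover: part 1 = {b1:.2f}, part 2 = {b2:.2f}, shift = {b2 - b1:+.2f} steps")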

Results showed a sizeable shift from part 1 to part 2 for bet-bat and a smaller one for sheep-ship (see Figure 1). Critically, these shifts went in opposite directions. While for sheep-ship exposure to the mispronunciations resulted in marginally more ship responses, for bet-bat, part 2 triggered more bet responses than part 1. Finally, no differences were observed between parts 2 and 3.

These results indicate that exposure to mispronounced L2 words does indeed impact learners' perceptual categorization of L2 contrasts, but in different ways for difficult and easier distinctions. The robust shift observed for /ɛ/-/æ/ provides evidence that mispronunciations in L2 lexical items matching those often experienced in L1-accented English (e.g., dr[ɛ]gon) reduce the likelihood of learners' choosing the phonological category in the contrast that is not part of their L1 (/æ/) when subsequently categorizing the vowels. This suggests that extensive exposure to this type of mispronunciation may hinder the establishment of a reliable perceptual contrast in the long run [5]. The smaller shift for /i/-/ɪ/, by contrast, could be attributed either to selective adaptation effects after hearing many [i] tokens and no [ɪ] tokens in the exposure phase [6] or to learners treating the /ɪ/ > [i] mispronunciations as speaker idiosyncrasies and (weakly) adapting to them [7]. Follow-up experiments are now being conducted to ascertain whether the shifts observed – most importantly that for /ɛ/-/æ/ – are speaker-specific or generalize to different categorization speakers. If the latter were true, this would have additional implications for our understanding of perceptual learning/recalibration processes in the L2 as well as for L2 phonological acquisition as a whole.

Figure 1. Predicted probability of responding "bet" (left panel) and "sheep" (right panel) as a function of continuum step (1-11) and part (Part 1 in black, Part 2 in red and Part 3 in black with a dashed line). The horizontal bar signals the 50% crossover point.

References

[1] Bohn, O. S., & Flege, J. E. (1990). Interlingual identification and the role of foreign language experience in L2 vowel perception. Applied Psycholinguistics, 11(3), 303-328.
[2] Llompart, M. (2020). Phonetic categorization ability and vocabulary size contribute to the encoding of difficult second-language phonological contrasts into the lexicon. Bilingualism: Language and Cognition, 1-16.
[3] Darcy, I., Daidone, D., & Kojima, C. (2013). Asymmetric lexical access and fuzzy lexical representations in second language learners. The Mental Lexicon, 8(3), 372-420.
[4] Llompart, M., & Reinisch, E. (2019). Robustness of phonolexical representations relates to phonetic flexibility for difficult second language sound contrasts. Bilingualism: Language and Cognition, 22(5), 1085-1100.
[5] Samuel, A. G., & Larraza, S. (2015). Does listening to non-native speech impair speech perception? Journal of Memory and Language, 81, 51-71.
[6] Eimas, P. D., & Corbit, J. D. (1973). Selective adaptation of linguistic feature detectors. Cognitive Psychology, 4(1), 99-109.
[7] Samuel, A. G., & Kraljic, T. (2009). Perceptual learning for speech. Attention, Perception, & Psychophysics, 71(6), 1207-1218.

Are Non-Native Speakers "Deaf" to Lexical Pitch Accents?

Dusan Nikolic
University of Calgary

Previous research shows that non-native speakers display "a general insensitivity to word stress properties, depending on the status of stress in their language" [1] (p. 2). This insensitivity, termed "deafness", has been observed in the perception of both stress and tone contrasts [2, 3, 4, 5]. For example, French speakers could not distinguish between Spanish stress contrasts [6], while English speakers had difficulty perceiving Mandarin tone contrasts [3]. The "deafness" effect has even been documented with stress contrasts in one's native language. For example, Correia and colleagues [4] have shown that native speakers of European Portuguese (EP) display a certain level of insensitivity towards stress contrasts provided that the main acoustic parameter signalling stress in EP, vowel reduction, is absent from the signal. While previous research focused mainly on "deafness" to stress and tone contrasts, the "deafness" effect has not been explored with yet another word-prosodic category: lexical pitch accent.

The present study filled this gap by asking whether there is a "deafness" effect when non-native speakers listen to lexical pitch accents. In particular, the present study explored how speakers of English, a stress-accented language, perceived Serbian lexical pitch accent contrasts. To that end, four disyllabic CVCV non-words were recorded by a trained linguist and a native speaker of Serbian. Each non-word was produced with one of the Serbian lexical pitch accent categories. In Serbian, there are, in total, four lexical pitch accents: an H*+L pitch accent which can be either short or long depending on the vowel duration, and an L*+H which can also be either short or long (as per [7]). Thus, the main acoustic correlates of Serbian lexical pitch accents are F0 contour and duration [8]. In this study, eighteen English and ten Serbian speakers participated. The participants carried out a sequence recall task in which they were required to associate the lexical pitch accent contrasts with keyboard labels, and recall the sequences of four, five, and six non-words by pressing the keys in the appropriate order.

The findings revealed that non-native (English) speakers performed above chance level on each sequence (p < 0.01). In addition, the proportion test showed that Serbian and English participants did not significantly differ in their overall performance [χ2(1) = .05, p = 0.89] (Table 1). Serbian speakers did outperform English speakers on the sequences of six non-words, which were the sequences with the highest memory load that precluded acoustic retrieval [χ2(1) = 11.82, p < 0.01] (see [2]). The logistic regression analysis revealed that there was a strong effect of lexical pitch accent category on participants' accuracy rates [β = 0.85, SE = 0.12, z = 6.778, p < 0.01]. For example, both Serbian and English speakers were more sensitive to duration contrasts than to F0 contrasts, as their accuracy scores were significantly higher when listening to lexical pitch accents contrasted in duration rather than F0 (e.g. long H*+L vs. short H*+L) [Serbian: χ2(1) = 29, p < 0.01; English: χ2(1) = 17, p < 0.01].

The major finding of the study is that non-native (English) speakers were not "deaf" to lexical pitch accents. While this aligns with some of the approaches adopted in previous studies [5], the result is attributed to the fact that English speakers could recall the contrasts because of the "heavy" acoustic parameters of Serbian lexical pitch accents, F0 and duration.

Therefore, the study adopts the approach offered by Correia and colleagues [4], who claim that the "deafness" effect can be observed only in cases where non-native or native listeners are unable to perceive the phonetic details of the contrasts. Since this was not the case with non-native speakers in the present study, the "deafness" effect was not observed. In addition, the study revealed that duration is a more robust acoustic parameter of Serbian lexical pitch accents, as both native and non-native speakers were more sensitive to duration than to F0 contours. This finding calls for further research on the topic, as more traditional approaches to Serbian phonology [8] would claim that the lexical pitch accents in Serbian consist of the interplay of F0 and duration, both of which have an equal role in their phonetic make-up.

Table 1. Overall accuracy scores per group per sequence length (percent correct).

Sequence    English    Serbian
4           87         84
5           79         79
6           69         80

References

[1] Domahs, U., Knaus, J., Orzechowska, P., & Wiese, R. 2012. Stress "deafness" in a language with fixed word stress: an ERP study on Polish. Frontiers in Psychology, 3, 439.
[2] Dupoux, E., Peperkamp, S., & Sebastián-Gallés, N. 2001. A robust method to study stress "deafness". The Journal of the Acoustical Society of America, 110(3), 1606-1618.
[3] Braun, B., Galts, T., & Kabak, B. 2014. Lexical encoding of L2 tones: The role of L1 stress, pitch accent and intonation. Second Language Research, 30(3), 323-350.
[4] Correia, S., Butler, J., Vigário, M., & Frota, S. 2015. A stress "deafness" effect in European Portuguese. Language and Speech, 58(1), 48-67.
[5] Rahmani, H., Rietveld, T., & Gussenhoven, C. 2015. Stress "deafness" reveals absence of lexical marking of stress or tone in the adult grammar. PLoS ONE, 10(12), e0143968.
[6] Dupoux, E., Sebastián-Gallés, N., Navarrete, E., & Peperkamp, S. 2008. Persistent stress 'deafness': The case of French learners of Spanish. Cognition, 106(2), 682-706.
[7] Godjevac, S. 2005. Transcribing Serbo-Croatian intonation. In S.-A. Jun (Ed.), Prosodic Typology: The Phonology of Intonation and Phrasing (pp. 146-171). New York: Oxford University Press.
[8] Lehiste, I., & Ivić, P. (1973). Interaction between tone and quantity in Serbocroatian. Phonetica, 28(3-4), 182-190.

Individual differences in high-variability phonetic training: the role of attention control and proficiency

Ingrid Mora-Plaza, Mireia Ortega & Joan C. Mora
Universitat de Barcelona

High-variability phonetic training (HVPT) has been found to be effective at improving L2 learners' perception and production of difficult L2 phonological contrasts. Research has shown that this type of training not only facilitates L2 phonological acquisition [1], but also enhances efficient phonological processing in the recognition and production of L2 words [2]. Gains after HVPT have been shown to be robust, generalizing to new items and speakers [3] and across perception and production modalities [4]. However, no research to date has examined training gains both across and within training sessions. Such gains could be influenced by learners' individual differences (IDs) [5] in L2 proficiency [1], speech processing skills [6] and cognitive skills such as phonological memory [7], inhibition [8, 9] and selective attention [10]. The few existing studies on IDs in the context of HVPT have not investigated the role of attention control in mediating training benefits during task performance. The present study extends this line of research by analysing the extent to which HVPT improved learners' perception and lexical encoding of a difficult L2 vowel contrast (/æ/-/ʌ/). It further examines the role of IDs in auditory attention control and proficiency in explaining perceptual gains before and after training, and within and across training sessions.

L1-Catalan/Spanish advanced learners of English (N=57) took part in four 40-minute training sessions. Participants were trained on nonwords through AX discrimination, identification (ID) and immediate repetition (IR) tasks. The AX discrimination and ID training consisted of 96 and 32 trials (respectively) per session, with feedback on errors and response latency. The word repetition task consisted of 32 trials per session, each of which participants were asked to repeat twice. Participants discriminated (AX), identified (ID) and repeated (IR) 2 minimal pairs (4 nonwords) produced by 4 different voices twice in every session. An untrained group (N=12) served as control. Gains in perception and in the lexical encoding of the contrast were assessed through a categorical ABX discrimination task and a lexical decision task, respectively. Attention control was measured through an auditory selective attention (ASA) test [11] and L2 proficiency through an elicited imitation (EI) task [12]. For all perception training tasks (AX, ID), gains within training sessions were assessed by computing gain accuracy scores between the first half (block 1) and second half (block 2) of the test trials. Perception gains across sessions were obtained by subtracting an average within-session gain score and computing the difference between the averaged gain scores for sessions 1-2 and sessions 3-4. ASA and EI measures were correlated with perception gains before and after training, and within and across training sessions.
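The gain-score and partial-correlation logic described above can be sketched roughly as follows; the data are invented and the simple regression-residual approach to partial correlation is an assumption for illustration, not the authors' actual scoring or analysis.

    import numpy as np
    from scipy.stats import pearsonr

    def within_session_gain(block1_acc, block2_acc):
        # Within-session gain: second-half (block 2) accuracy minus first-half (block 1) accuracy.
        return np.asarray(block2_acc, dtype=float) - np.asarray(block1_acc, dtype=float)

    def partial_corr(x, y, control):
        # Correlation between x and y after regressing the control variable out of both.
        x, y, control = (np.asarray(v, dtype=float) for v in (x, y, control))
        def residuals(v):
            coeffs = np.polyfit(control, v, 1)
            return v - np.polyval(coeffs, control)
        return pearsonr(residuals(x), residuals(y))

    # Invented learner-level data
    rng = np.random.default_rng(0)
    asa = rng.normal(size=30)                          # auditory selective attention scores
    proficiency = rng.normal(size=30)                  # elicited imitation (proficiency) scores
    ax_gain = 0.4 * asa + 0.3 * proficiency + rng.normal(scale=1.0, size=30)  # AX accuracy gains

    print("example within-session gains:", within_session_gain([0.60, 0.55, 0.70], [0.72, 0.66, 0.75]))
    r, p = partial_corr(asa, ax_gain, proficiency)
    print(f"ASA ~ AX gain, controlling for proficiency: r = {r:.2f}, variance explained = {r**2:.1%}, p = {p:.3f}")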

28 29 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS blocks and sessions. Finally, Pearson-r correlations showed that ASA explained 14.13% of Acquisition of Second Language Phonetic Features: Development of voiced intervocalic variance in AX accuracy and 12.39% in RT gains across sessions, as well as 16.16% of variance stops in L2 Spanish in ID RT gains. L2 proficiency was also found to explain variance in accuracy gains for the lexical decision and AX discrimination tasks. Partial correlations controlling for L2 proficiency Daniel J. Olson and Alexis Tews revealed that ASA explained little amount of variance in AX discrimination accuracy gains Purdue University (5.56%) and AX RT gains (16%). Overall, findings suggest that auditory attention control may play a role during phonetic training, explaining inter-individual variation in L2 speech Previous research has shown that second language (L2) learners can successfully acquire learning. aspects of L2 phonology, although the degree of success is subject to a number of internal and external factors (e.g., age of acquisition [1]; exposure [2]; formal instruction [3]). Two References: general approaches can be seen in the literature regarding the mechanisms that underpin L2 phonetic acquisition. Some researchers have approached acquisition on a segment-by- [1] Iverson, P., Pinet, M., & Evans, B. G. 2012. Auditory training for experienced and inexperienced segment basis, considering each phoneme in isolation (e.g., [4]). Others, largely in perception second-language learners: Native French speakers learning English Applied Psycholinguistics, literature, consider the acquisition of “generalized features”, in which a phonological feature 33 (01), 145-160. (e.g., manner of articulation) is acquired simultaneously for multiple phonemes (e.g., [5,6]). [2] Bradlow, A. R. 2008. Training non-native language sound patterns: Lessons from training This study examined the potential for feature generalization in L2 acquisition following Japanese adults on the English /r/-/l/ contrast. In J. G. Hansen Edwards & M. L. Zampini (Eds.), explicit phonetic instruction. The training focused on intervocalic voiced stop production Phonology and second language acquisition (pp. 287–308). Amsterdam: John Benjamins. by L1 English learners of L2 Spanish. Intervocalic voiced stops in English are produced with [3] Thomson, R. I. 2012. Improving L2 listeners’ perception of English vowels: A computer-mediated a significant oral closure [b, d, g], while in Spanish they are realized as approximants [β, ð, approach. Language Learning, 62, 1231–1258. ɣ] (e.g., [7]), although this is subject to other language-internal factors [8,9,10]. The current [4] Carlet, A. 2017. Different high variability procedure for training L2 Vowels and consonants. In study examines whether English-speaking learners of Spanish acquire a specific phoneme Sasha Calhoun, Paola Escudero, Marija Tabain & Paul Warren (eds.) Proceedings of the 19th (e.g., [β]) or a feature (e.g., intervocalic spirantization) that generalizes across multiple places International Congress of Phonetic Sciences, Melbourne, Australia 2019, 944‒948. of articulation. [5] Perrachione, T. K., Lee, J., Ha, L. Y., & Wong, P. C. 2011. Learning a novel phonological contrast Twenty-three native English-speaking learners of Spanish (intermediate-level) recorded both depends on interactions between individual differences and training paradigm design. 
The words in utterances and words in isolation before training (i.e., pretest), immediately after Journal of the Acoustical Society of America, 130(1), 461-472. training (i.e., posttest), and approximately 4 weeks after training (i.e., delayed posttest). All [6] Lengeris, A., & Hazan, V. 2010. The effect of native vowel processing ability and frequency target tokens were real Spanish words containing /b/, /d/, or /g/ in intervocalic position. Target discrimination acuity on the phonetic training of English vowels for native speakers of Greek. tokens were controlled for preceding vowel (i.e., non-high), stress (i.e., target in unstressed The Journal of the Acoustical Society of America, 128(6), 3757–3768. position), and were normed for word familiarity [11]. All tokens were non-cognate. A total of [7] Aliaga-García, C., Mora, J., Cerviño-Povedano, E. 2011. L2 speech learning in adulthood and phonological short-term memory. Poznań Studies in Contemporary Linguistics, 47(1), 1–14. 54 unique tokens were used in the words-in-utterances task (3 stops x 6 tokens per stop x 3 [8] Darcy, I., Mora, J. C., & Daidone, D. 2014. Attention control and inhibition influence phonological sessions = 54 tokens). A total of 45 tokens were used in the words-in-isolation task (3 stops x development in a second language. In Proceedings of the International Symposium on the 5 tokens per stop x 3 sessions = 45 total tokens). A control group (n = 19) completed the same Acquisition of Second Language Speech. Concordia Working Papers in Applied Linguistics, 5, recordings, but received training on a different feature (i.e., word-initial VOT). 115–129. The training consisted of three iterations of a visual feedback paradigm [12] conducted over [9] Lev-Ari, S., & Peperkamp, S. 2014. The influence of inhibitory skill on phonological representations the course of 8 weeks. In the visual feedback paradigm, participants recorded themselves in production and perception. Journal of Phonetics, 47, 36–46. producing a different set of words containing the target phonemes and visually compared [10] Mora, J. C., & Mora-Plaza, I. 2019. Contributions of cognitive attention control to L2 speech images of their own spectrograms and waveforms with those produced by a native speaker. learning. In A. M. Nyvad, M. Hejná, A. Højen, A. B. Jespersen, & M. H. Sørensen (Eds.), A sound Crucial for analyzing whether participants acquired a single phoneme or a generalized feature, approach to language matters – In honor of Ocke-Schwen Bohn. (pp. 477–499). Denmark: Aarhus participants received training on only one of the three stops. The degree of spirantization University. of each token was measured via CV intensity ratio [13] (i.e., minimum consonant intensity [11] Humes, L. E., Lee, J. H., & Coughlin, M. P. 2006. Auditory measures of selective and divided (dB) / maximum vowel intensity (dB)). A higher CV intensity ratio corresponds to a more attention in young and older adults using single-talker competition. The Journal of the Acoustical approximant-like production. Analysis focused on the change in both trained and non-trained Society of America, 120(5), 2926–2937. phonemes. A total of 2277 tokens (experimental group) were included in the initial analysis. [12] Ortega, L., Iwashita, N., Norris, J. M., & Rabie, S. 2002, October. An investigation of elicited Preliminary results (Figure 1), using a linear mixed effects model with CV intensity ratio as imitation tasks in crosslinguistic SLA research. 
Paper presented at Second Language Research Forum, Toronto, Canada.
the dependent variable, and session (pretest, posttest, delayed posttest) and phoneme type (trained vs. not trained) as fixed effects, showed that visual feedback resulted in a significantly higher, more native-like, CV intensity ratio for the trained phonemes (e.g., words in isolation: pretest vs. delayed posttest t = 2.197). Importantly, the same pattern was found for the non-trained phonemes, and there was no significant interaction between session and

30 31 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS phoneme type (at delayed posttest, t = 0.165). Further correlational analysis addresses the Phonetic imitation of the acoustic realization of Spanish stress links between change in the trained and non-trained phonemes for individual participants. As a whole, the results suggest that acquisition occurs at the level of the phonetic feature Bethany MacLeod & Sabrina M. Di Lonardo Burr in this particular context. The discussion focuses on the implications for both models of L2 Carleton University phonetics and pedagogical approaches. Phonetic imitation is the process in which a talker’s pronunciation comes to sound more similar to another person’s after being exposed to their speech. Most previous work has focused on imitation of specific acoustic dimensions of individual sounds, such as vowel formants (e.g. [1], [4]). However, we know less about how imitation might manifest in acoustic measures that reflect properties of words that rely on relative measures, such as lexical stress. Previous work suggests that there are three acoustic correlates of stress in Spanish: F0 (most robust cue), duration, and intensity (least robust cue). In words in isolation, stressed vowels tend to be higher pitched, longer, and louder than unstressed ([7]), the latter of which are not reduced as they are in English [9]. Our study explores whether relative prosodic measures are imitated by exploring the imitation of the acoustic realization of Spanish stress. Figure 1. Comparison of CV Intensity Ratio for trained and non-trained voiced stops, across three sessions (pretest, We do this via two experiments: a shadowing task to explore how talkers produce imitation posttest, delayed posttest). Higher intensity ratio corresponds to more native-like speech. and a perceptual task to see how listeners perceive imitation. We have 3 research questions:

[1] Flege J. E. (1998). Age of learning and second-language speech. In D. Birdsong (Ed.), New 1. Do shadowers imitate the model talkers’ acoustic realization of lexical stress in terms of F0, Perspectives on the Critical Period Hypothesis for Second Language Acquisition. Mahwah, duration, and intensity? NJ: Lawrence Erlbaum, pp. 101–131. 2. Can listeners perceive that shadowers have imitated and does imitation of the three acoustic [2] Flege, J. E., & Liu, S. (2001). The effect of experience on adults’ acquisition of a second correlates of stress contribute to the listeners’ perception that the shadowers have imitated? language. Studies in Second Language Acquisition, 24(4), 527-552. [3] Thompson, I. (1991). Foreign accents revisited: The English pronunciation of Russian 3. If so, do listeners use imitation of the acoustic correlates to different extents depending on the immigrants. Language Learning, 41, 177–204. strength of those correlates as cues to stress (i.e. F0 > duration > intensity)? [4] Flege, J. E, Munro, M.J., & Skelton, L. (1992). Production of the word‐final English/t/– /d/ contrast by native speakers of English, Mandarin, and Spanish. The Journal of the In Experiment 1, 48 female native Mexican Spanish speakers participated in the shadowing Acoustical Society of America, 92(1), 128–143 task. The stimuli included 40 Spanish disyllabic words, half stressed on the first and [5] De Jong, K.J., Hao, Y. & Park, H. (2009). Evidence for featural units in the acquisition of half on the second. The participants read aloud the stimuli three times (baseline phase), speech production skills: Linguistic structure in foreign accent. Journal of Phonetics, 37, then listened to one of four pre-recorded female native Mexican Spanish-speaking model 357–373. talkers producing the same words and immediately repeated them (shadowing phase). [6] De Jong K.J., Silbert N.H. & Park H. (2009). Generalization across segments in second Vowel duration, mean F0, and mean intensity were measured for both vowels in all words language consonant identification.Language Learning, 59(1), 1–31. [7] Hualde, J. I. (2005). The sounds of Spanish. Cambridge, UK: University Press. in the model talker, baseline, and shadowed recordings using Praat ([2]). The difference [8] Shea, C. E., & Curtin, S. (2011). Experience, representations and the production of second between the values of the first and second vowels for each of F0, duration, and intensity language allophones. Second Language Research, 27(2), 229–250. was calculated for all recordings, generating variables we call differentials for each acoustic [9] Colantoni, L., & Marinescu, I. (2010). The scope of stop weakening in Argentine Spanish. correlate of stress ([5], [9]). In M. Ortega-Llebaria (Ed.), Selected Proceedings of the 4th Conference on Laboratory In Experiment 2, the recordings from the 48 shadower + model talker pairs comprised the Approaches to Spanish Phonology (pp. 100–114). Somerville, MA: Cascadilla Proceedings stimuli in a 4IAX perceptual experiment ([11]) which 87 Spanish-speaking listeners completed. Project. [10] Cole, J., Hualde, J. I., & Iskarous, K. (1999). Effects of prosodic and segmental context on Listeners heard two pairs of words (XA XB), where X was always the model talker’s token /g/- in Spanish. In O. Fujimura, B. D. Joseph, & B. Palek (Eds), Proceedings of the and A and B were either the baseline or shadowed token (counterbalanced) produced by a Fourth International Linguistics and Phonetics Conference (pp. 575–589). 
Prague, Czech Republic: The Karolinium Press.
[11] Nusbaum, H. C., Pisoni, D. B., & Davis, C. K. (1984). Sizing up the Hoosier mental lexicon: Measuring the familiarity of 20,000 words. Research on speech perception: Progress report, 10, 357–376.
[12] Olson, D. J. (2014). Phonetics and technology in the classroom: A practical approach to using speech analysis software in second-language pronunciation instruction. Hispania, 97(1), 47–68.
[13] Hualde, J.I., Simonet, M., & Nadeu, M. (2011). Consonant lenition and phonological recategorization. Journal of Laboratory Phonology, 2(2), 301–329.
shadower. The listeners’ task was to decide which of X and A or X and B were more similar to each other. The proportion of trials in which the listeners chose the pair containing the shadowed token is taken to reflect the proportion of trials in which the shadowers imitated. Bayesian mixed-effects models fit using the brms package ([3]) in R ([12]) were used to analyse the data, with linear models for the acoustic data and logistic models for the perceptual data.
Results for Question #1: The shadowers imitated the model talkers on all three differentials, shifting the most on the duration differential, followed by F0, and least on the intensity differential.
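To make the modelling approach described in the MacLeod & Di Lonardo Burr abstract more concrete, the sketch below shows how Bayesian mixed-effects models of this general kind can be specified in R with brms: a linear model for one acoustic differential and a logistic model for the 4IAX responses. This is an illustration only, not the authors' code; the data frames and column names (shadowing_df, perception_df, phase, chose_shadowed, and so on) are hypothetical.

```r
# Minimal sketch, assuming per-token acoustic differentials and per-trial 4IAX
# responses have already been assembled; all object and column names are hypothetical.
library(brms)

# Linear model: does the duration differential shift from baseline to shadowed?
m_duration <- brm(
  duration_diff ~ phase + (1 + phase | shadower) + (1 | word),
  data = shadowing_df, family = gaussian()
)

# Logistic model: probability that listeners choose the pair containing the
# shadowed token; an intercept above 0 on the logit scale corresponds to
# above-chance perceived imitation.
m_4iax <- brm(
  chose_shadowed ~ 1 + (1 | listener) + (1 | shadower),
  data = perception_df, family = bernoulli()
)

summary(m_4iax)
```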

32 33 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS Results for Questions #2 and #3: The listeners perceived imitation in 53.6% of trials, a DAY 1 - Oral session 3 proportion significantly higher than 50% (β = 0.14 [0.11, 0.17]) that falls in line with intercept PROSODY previous typically subtle results ([10], [13]). They used the shadowers’ imitation on all three differentials to make their judgements; however, the extent to which they used imitation Expecting the unusual: The prosodic interpretation of contextual information of the differentials did not align with the differentials’ strength as cues to stress. Instead, the listeners used imitation of the differentials in relation to how much the shadowers had Christine T. Röhr, Stefan Baumann and Martine Grice imitated them. IfL Phonetik, University of Cologne Furthering our understanding of phonetic imitation is important for developing accounts of second-dialect acquisition and sound change ([8]) and may also have implications for models This paper investigates the influence of prior context, generating different expectations, on of teaching and learning second language pronunciation ([6]). the prosodic expression of German utterances. In a production study we evaluated how speakers prosodically encode information that is intended to sound either unusual/exciting [1] Babel, M. (2012). Evidence for phonetic and social selectivity in spontaneous phonetic imitation. (1a) or ordinary/negligible (1b) to the listener. In a perception study, we investigated the Journal of Phonetics, 40(1), 178–189. effectiveness of the prosodic marking from the listeners’ perspective. [2] Boersma, P., & Weenink, D. (2019). Praat: doing phonetics by computer Version 6.1.08, retrieved 9 December 2019 from http://www.praat.org/ (D. Weenink, Ed.). (1)(a) Rate mal, was uns heute passiert ist! Wir haben Milena getroffen. [3] Bürkner, P.-C. (2017). brms: An R Package for Bayesian Multilevel Models Using Stan. Journal of Statistical Software, 80(1). ‘Guess what happened to us today! We met Milena.’ (target noun = Milena) [4] Clopper, C. G., & Dossey, E. (2020). Phonetic convergence to Southern American English: (b) Heute ist nichts Besonderes passiert. Wir haben Milena getroffen. Acoustics and perception. The Journal of the Acoustical Society of America, 147(1), 671–683. [5] Kim, J. Y. (2020). Discrepancy between heritage speakers’ use of suprasegmental cues in the ‘Today, nothing special happened. We met Milena.’ (target noun = Milena) perception and production of Spanish lexical stress. Bilingualism: Language and Cognition, 23(2), 233–250. In the production study, fourteen German native speakers (mean age = 25.8 years, SD = 2.0) [6] Lewandowski, N., & Jilka, M. (2019). Phonetic convergence, language talent, personality and produced the second/target sentence as an appropriate continuation of the first sentence. attention. Frontiers in Communication, 4(18), 1–19. The intention to make something sound unusual or exciting is reflected in a probabilistic [7] Llisterri, J., Machuca, M., de la Mota, C., Riera, M., & Ríos, A. (2003). The perception of lexical stress distribution of discrete phonological pitch accent types on the nuclear target noun (see in Spanish. In M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Fig.2): Unusual/exciting information triggers predominantly rising accents (L+H*, H*), while Congress of Phonetic Sciences (pp. 2023–2026). Barcelona: Causal Productions. 
ordinary/negligible information triggers increasingly more falling accents (H+!H*, H+L*). [8] Niedzielski, N., & Giles, H. (1996). Linguistic accommodation. In H. Goebl, P. Nelde, Z. Starý, & W. Moreover, a quantitative analysis of one continuous phonetic parameter that captures Wölck (Eds.), Kontaktlinguistik – Ein internationales Handbuch zeitgen¨ossischer Forschung (pp. 332–342). Berlin/New York: Mouton de Gruyter. the f0 movement towards the target for the pitch accent, the tonal onglide (measurement [9] Ortega-Llebaria, M., & Prieto, P. (2011). Acoustic correlates of stress in central Catalan and depicted in Fig.1; cf. [1]), revealed the following: Unusual/exciting information leads to higher Castilian Spanish. Language and Speech, 54(1), 73–97. tonal onglide values (greater rises) that were found in previous studies to be perceptually [10] Pardo, J. S., Urmanche, A., Wilman, S., & Wiener, J. (2017). Phonetic convergence across multiple more prominent (e.g. [2]), while ordinary/negligible information leads to lower tonal onglide measures and model talkers. Attention, Perception, and Psychophysics, 79(2), 637–659. values (shallower rises or (steeper) falls) that are perceptually less prominent. This effect [11] Pisoni, D. B., & House Lazarus, J. (1974). Categorical and noncategorical modes of speech of the tonal onglide holds both across and within pitch accent categories (see Fig.2), e.g. as perception along the voicing continuum. Journal of the Acoustical Society of America, 55(2), 328– compared to the ordinary/negligible condition, L+H* is found more often in the unusual/ 333. exciting condition, and the instances of L+H* with greater onglides are found more often in [12] R Development Core Team (2012). R: A language and environment for statistical computing. R the latter condition, too. Foundation for Statistical Computing. Vienna, Austria, ISBN 3-900051-07-0, URL http://www.R- A perception study, with sixty German native speakers (mean age = 28.7 years, SD = 9.5) project.org/. [13] Shockley, K., Sabadini, L., & Fowler, C. A. (2004). Imitation in shadowing words. Perception & using web-based rating tasks, assessed possible interpretations/expectations in the manner Psychophysics, 66(3), 422–429. of semantic differentials with visual analogue scales. Participants listened to and evaluated a selection of target utterances from the production study that were realized with either steep or shallow f0 rises and falls on the nuclear noun (see Fig.3). Results confirmed that a pronounced rising onglide (as in L+H*) on the target noun leads to the interpretation of unusual/exciting information, although shallow rising onglides (H*) do not. As expected, falling onglides (H+!H*, H+L*) also failed to evoke an unusual/exciting interpretation (see Fig.4). In sum, we show that the choice of pitch accent type may also be related to expectations of speakers and listeners as to whether the information deviates from an “unmarked” standard,

34 35 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS i.e. ordinary/negligible information, or not. While the unmarked standard commonly involves less prominent prosodic marking, an increase in the expectation for unusual/exciting information involves more prominent prosodic marking. This suggests that in German rising onglides participate in orienting attention (the greater the rise, the stronger the effect), and that there are less marked prosodic realizations for isolated (all-new, broad focus) utterances (e.g. H* and H+!H*/H+L* accents) which do not have this attention-orienting effect. However, individual speakers convey this information in different but systematically compatible ways, supporting a view of intonational phonology that integrates qualitative pitch accent categories and quantitative phonetic parameters (see [3]).
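For readers unfamiliar with the onglide measure, the short R sketch below shows one way such a value could be derived once f0 has been extracted at the onglide onset and at the accentual target (cf. Fig. 1 and [1]). It is an illustration under those assumptions, not the authors' script, and all object and column names are hypothetical.

```r
# Minimal sketch: tonal onglide as the f0 movement (in semitones) from the
# onglide onset to the accentual target; positive = rise, negative = fall.
# accents_df and its columns (f0_onset_hz, f0_target_hz, context) are hypothetical.
library(dplyr)

onglides <- accents_df %>%
  mutate(onglide_st = 12 * log2(f0_target_hz / f0_onset_hz))

# Mean onglide per context condition (unusual/exciting vs. ordinary/negligible)
onglides %>%
  group_by(context) %>%
  summarise(mean_onglide_st = mean(onglide_st, na.rm = TRUE))
```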

Figure 4. Mean listener ratings of each stimulus on two scales per prosodic condition.

Figure 1. Schematic depiction of onglide measurements for (left) rising (L+H* and H*) and (right) falling (H+!H* and H+L*) pitch accents (adapted from [1:93]).
[1] Grice, M., Ritter, S., Niemann, H., & Roettger, T. 2017. Integrating the discreteness and continuity of intonational categories. Journal of Phonetics 60, 90-107.
[2] Baumann, S., & Röhr, C. T. 2015. The perceptual prominence of pitch accent types in German. In Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS XVIII), vol. 298, pp. 1–5. Glasgow, UK: The University of Glasgow.
[3] Cangemi, F., Krüger, M., & Grice, M. 2015. Listener-specific perception of speaker-specific productions in intonation. In Fuchs, S., Pape, D., Petrone, C., & Perrier, P. (Eds.), Individual Differences in Speech Production and Perception. Frankfurt: Peter Lang, 123–145.

Figure 2. Distribution of GToBI accent types and tonal onglides on the nuclear target noun as a function of context, all speakers.

Figure 3. Individual f0 contours of perception stimuli per condition, superimposed and temporally aligned with the onset of the nuclear target noun (onset at vertical bar).

36 37 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS
Influence of lexical tones on calling melodies: a comparison between Metropolitan and Bàsàa-Cameroonian French
Fatima Hamlaoui1, Marzena Żygis2,3, Jonas Engelmann3 & Sergio Quiroz2
University of Toronto, Leibniz-ZAS, Humboldt University, Berlin
urgent context (t=5.50, p<.001). F0 range is larger in the vase condition for FR (t=7.56, p<.001) and in CM is only subject to a high inter-speaker variability, with contexts being perceptibly distinguished for a subset of speakers.

In Metropolitan French, the simple vocative or chanting contour (e.g. Marina! A table! “Marina! Dinner!”) consists of a F0 peak on the penultimate syllable and a sustained, mid- plateau on the final syllable of the utterance ([1], [2]). This is consistent with what Fagyal [3] observes in her study of 1 to 5-syllable proper names produced by four French speakers asked to “call sweetly or sweetly remind [someone of something]”. Phonologically, this contour has been represented as an LHM sequence ([4]), H*H-L% ([5]) or, more recently, as H+!H*!H% ([6]). In the present study, our aim is to establish whether context (here routine vs. urgent) affects contour choice in Metropolitan French (FR), as recently shown for Catalan [7]. Second, we are interested in whether a variety of French in contact with a tone language, Bàsàa- Figure 1. Frequency distribution of contours across contexts (Left : FR ; Right : CM) Cameroonian French (CM), makes use of the chanting contour too, as in this variety proper names are lexically specified for tone. Just like in Bàsàa (Bantu), CM names in isolation present an L/H-HL contour where HL either aligns with the last or the last two syllables, e.g. [àlîs], [màgdàlénà]. As Bàsàa is a tone language in which post-lexical information has little impact on F0 ([8]), a strong effect of our speakers’ L1 on their L2 predicts that lexical tones should be preserved in both contexts. 14 speakers (10 female, 4 male) of FR and 12 bilingual speakers (4 female, 8 male) of CM with Bàsàa as their L1, aged 20–41, took part in a Discourse Completion Task replicating the design from [9]. They were asked to produce 12 proper names (1–4 syllables) to call a child (i) for dinner or (ii) to reprimand him/her for breaking a vase. 1008 (FR) and 864 (CM) utterances were collected and then analysed acoustically. Utterances were manually annotated for contour shapes by two experts. In addition, the following parameters were examined: (i) tonal scaling and proportional alignment of H1 with the accented vowel and alignment of L and H2 with the last vowel for FR, (ii) F0 at the centre of each vowel (TBU), as well as the initial and final F0 for one-syllable names in CM and (iii) F0 range for both varieties. We also analysed the duration and amplitude (RMS and integral) of the stressed vowel, stressed syllable and word for both varieties. Our results show that the choice of contour is strongly context-dependent in FR, where speakers use three main intonational calling contours: the vocative chant (dinner: 71%, vase: 8.7%), a rising (interrogative-like) contour (dinner: 20.2%, vase: 1.9%) and a falling parabolic contour (dinner: 8.9%, vase: 76.2%; see Fig.1 left). The preference for the vocative chant in the dinner context confirms Fagyal’s [3] findings and the falling parabolic contour is reminiscent of what [6] describes as used to express insistence. By contrast, CM speakers show little effect of context on the choice of contours and predominantly preserve lexical tones (dinner: 55%, vase: 75%) with a final rise being the next most frequent pattern (dinner: References 20%, vase: 8%) and very little presence of the typical chanting contour (dinner: 12%, vase: 2%; see Fig. 1 right). The two varieties thus differ in that context greatly affects contour choice in [1] Di Cristo, A. (1999). French. In Hirst, D. & A. Di Cristo (eds.). Intonation Systems: a Survey of Twenty FR, while it has little effect on contour choice in CM, where lexical tones strongly determine Languages. Cambridge: CUP. 
the melody of the names. The linear mixed-effects models reveal that context also significantly affects intensity: callings are louder in the vase than in the dinner context in both varieties (FR: t=5.31 p<.001; CM: t=14.76 p<.001). Word duration is only affected in CM, being longer for routine than
[2] Ladd, R. (1996/2008). Intonational Phonology. Cambridge University Press.
[3] Fagyal, S. (1997). Chanting intonation in French. U. Penn W.P. in Linguistics 4.2: 77-90.
[4] Dell, F. (1984). L’accentuation dans les phrases en français. In F. Dell, D. Hirst & J.-R. Vergnaud (eds.). La forme sonore du langage: structure des représentations en phonologie. Paris: Hermann.

38 39 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS [5] Jun, S.-A. & C. Fougeron. (2000). A Phonological model of French intonation. In A. Botinis (ed.), H* and LH* in English and Greek Intonation, modelling and technology. Dordrecht: Kluwer. 209-242. [6] Delais-Roussarie, E., Post, B., Avanzi, M., Buthke, C., Di Cristo, A., Feldhausen, I., Jun, S.-A., Martin, Kathleen Jepson1, Cong Zhang1, Georg Lohfink2, Amalia Arvaniti1 P., Meisenbrug, T., Rialland, A., Sichel-Bazin, R., Yoo, Hiyon (2015). Intonational phonology 1Radboud University, Netherlands of French: Developing a ToBI system for French. In Frota S. & P. Prieto (eds.). Intonation in 2University of Kent, UK Romance. Oxford, UK: Oxford University Press. 63-100. [7] Borràs-Comes, J., R. Sichel-Bazin & P. Prieto (2015). Vocative intonation preferences are sensitive to politeness factors. Language and Speech 58: 68-83. In Greek, pitch accents that mark new vs. contrastive information (H* vs. LH* respectively) [8] Makasso, E.-M., F. Hamlaoui & S. J. Lee. (2016). Aspects of the intonational phonology of Bàsàa. are shown to be phonologically distinct [1]. For English, however, there has been a long In A. Rialland & L. Downing (eds.). Intonation in African tone languages. Berlin: De Gruyter. debate about whether H* and LH* are distinct accents (see [2] for a discussion). This pilot [9] Arvaniti, A., M. Żygis. & M. Jaskuła. (2016). The phonetics and phonology of the Polish calling study is a first step towards addressing this issue by using Functional Principal Component melodies. Phonetica 73, 155-162. Analysis (fPCA) to compare English and Greek (cf. [1]). The data were elicited from 16 speakers, 8 speakers of Greater London English (5 F, 3 M, aged 18-54; mean 29.25), and 8 speakers of Athenian Greek (4 F, 4 M, aged 22-27; mean 24.13). The main analysis presented here is based on a subset of our pilot corpus, first name- last name sequences as answers in question-answer pairs. The utterances were 2-5 syllables long in English (e.g., Lou Neil), and 3-6 syllables long in Greek (e.g., /maˈrina ɣaˈlani/). The utterances were either in broad focus (BF), or had narrow focus (NF) on the second word. Thus, the second word should carry either a H* accent (BF) or a LH* (NF). Participants were recorded at 44.1 kHz in quiet locations in Canterbury for English (Zoom H6N), and in Athens for Greek (AVR application on mobiles). They worked with an experimenter who posed scripted questions designed to elicit either a NF or BF rendering of the names (e.g., Is their name Anna Anderson? Anna MORRISON.). In total, 304 tokens were elicited (112 for English, 192 for Greek), of which 220 were suitable for analysis, the rest showing either uptalk or a pause between names. F0 was extracted using Voice Sauce [3]. The resulting F0 curves were normalized by speaker and submitted to fPCA using the following smooth parameters: k = 8 and lambda = 106 (see [1] and [4] for details). The analysis window was the entire utterance, and the onset of the stressed syllable of the second word was used for landmark registration. PCA captures the dominant modes of curve variation [4]; thus, the PCs can be seen as components that together determine the shape of each curve. In fPCA, the contribution of each PC to the shape of each input curve is represented by a coefficient or score; these scores can be statistically analysed to quantify their joint contribution to the realisation of tonal categories. 
Here, the scores of the first three PCs were modelled using lme4 in R ([5, 6]), with accent and language as fixed effects, and speaker as random effect with accent as random slope. Fig. 1 shows the mean curves by language and accent. The analysis showed that PCs 1-3 captured 78.9% of the data variance (Fig. 2). FPCA allowed us to observe co-varying dependencies between peak scaling and language; the shape of a curve was determined by accent and language. As shown in Fig. 1, the difference between the H* and LH* curves was clear in Greek, though less so in English. Both effects were captured by fPCA. Generally speaking, higher PC1 scores – which capture a high and late peak (Fig. 2) – were associated with LH* accents in both Greek and English. However, while the Greek LH*s had a positive PC1 score, the English scores remained negative for both accents, reflecting the minimal differences between them (Fig. 3). Language was the only significant factor for PC2 and PC3, capturing differences in phonetic detail between English and Greek (Fig. 3). In conclusion, for Greek our results replicate [1]. Since our data, like those of [1], were produced in response to distinct pragmatic contexts, they further support the existence of a phonological contrast between H* and LH* in Greek. For English, on the other hand, our results point to a subtler difference between the accents, a conclusion that our current, more extensive corpus of both scripted and spontaneous speech also reflects. Thus, the English data remain inconclusive as to the existence of the H*~LH* accentual contrast. Finally, our results demonstrate the validity and robustness of fPCA for intonation analysis and its ability

40 41 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS to capture both phonetic differences (such as those within English and between English and Prenuclear accents are not just ornamental: Effects of information structure on Greek) and phonological distinctions (such as that between Greek H* and LH*). prenuclear prominence in American English and German Figure 1. Mean normalised F0 curves (scaled by speak- 1 2 3 er) separated by language and accent. The gray vertical Eleanor Chodroff , Stefan Baumann , Jennifer Cole line represents the onset of the second accented syllable. 1University of York, 2University of Cologne, 3Northwestern University

Among Germanic languages, the final pitch accent of an intonational phrase is structurally prominent as the nuclear head of the phrase, and plays a role in conveying focus and givenness distinctions (IS: information structure). In contrast, prenuclear prominences are described as optional [9], ‘ornamental’ [4], or rhythmic [5], though empirical evidence of their status is mixed. For American English, [5] found no relationship between prenuclear pitch accent type and IS distinctions in a corpus analysis of conversational speech; yet perception studies show that prominence distinctions on prenuclear accented words are perceived in relation to accent type [3, 6], and the presence of a prenuclear accent restricts the interpretation of a nuclear accent as contrastive [2, 8]. Further, experimental studies on German detected subtle but meaningful changes in peak scaling and alignment in prenuclear accents [1, 7, 10]. We examine the relationship between prenuclear prominence (phonetic and phonological) and IS in native speaker productions of American English and German, which are expected to have comparable prosodic structure. We predict for both languages that prenuclear prominence increases with informativeness (given < accessible < new < contrast). Methods. For each language, 32 participants read aloud 20 narratives consisting of two context sentences and a final target sentence. Context sentences modulated the IS of the Figure 2: PC1, PC2, and PC3 curves. The black line is the average curve for the entire corpus (and thus the same for PCs 1-3). Curves subject NP of the target sentence as given, accessible, new, or contrastive (in the latter, with with + signs show the effect of +1 standard deviation of the coefficients to the shape of the curve; curves with – signs show the effect of -1 standard deviation. The vertical line represents the onset of the second accented syllable. double contrast on subject and object, see Table 1). Stories were read in two blocks, first in a neutral affect, then in a lively affect. Analysis. The target noun was labelled using the respective (G)ToBI system. Pitch accent types were collapsed into one of three broad categories common to both languages: L*+H, H*, L+H*, excluding L* accents in the German data (n = 75). We compared German and English for the relationship between IS and prenuclear prominence, under neutral and lively affects, based on pitch accent type and acoustic prominence (duration, intensity, f0 range, and f0 slope). Results. Analysis via Bayesian mixed-effects regression models revealed credible IS effects on prenuclear phonetic prominence in both English and German, but on prenuclear pitch accent type in English alone. As to pitch accent type, German exhibited no effect of IS, Figure 3: Box plots showing PC1, PC2 and PC3 scores by language and accent. whereas English was considerably less likely to use L*+H and numerically more likely to use L+H* for prenuclear contrastive focus (Fig.1). In both languages, speakers were more likely REFERENCES to use L+H* with lively affect, though these effects were stronger for English and modulated by IS for German. For phonetic prominence, the languages were largely comparable, with [1] Lohfink, G., Katsika, A., & Arvaniti, A. 2019. Variability and category overlap in the realization of intonation. Proceedings of the 19th International Congress of Phonetic Sciences 2019 (Melbourne, English occasionally having slightly more pronounced IS effects. Expected prominence Australia), 701-705. 
relations across IS conditions were observed for duration (Fig.2); unexpected effects primarily [2] Ladd, D. R. 2008. Intonational Phonology. 2nd ed. Cambridge: Cambridge University Press. involved contrastive focus (weaker intensity, reduced f0 range, esp. for German), which may [3] Shue, Y-L. 2010. The voice source in speech production: Data, analysis and models. PhD Thesis: be explained by its parallel syntactic structure already expressing contrast. Lively affect UCLA. corresponded with increased f0 range, especially for English, and steeper f0 slopes. [4] Gubian, M., Torreira, F., & Boves, L. 2015. Using Functional Data Analysis for investigating Discussion and Conclusion. IS effects were present for both English and German in multidimensional dynamic phonetic contrasts. Journal of Phonetics 49, 16–40. phonetic measures of prenuclear prominence, but only for English in the phonological [5] Bates, D., Mächler, M., Bolker, B., & Walker, S. 2015. Fitting Linear Mixed-Effects Models Using measure of pitch accent type. Thus, despite the overall similarity in the prosodic phonology lme4. Journal of Statistical Software 67(1), 1-48. of these languages, they differ in the use of a categorical encoding of IS meaning through [6] R Core Team. 2020. R: A language and environment for statistical computing. Computer program. prenuclear accent type. At the same time, they are remarkably consistent in the patterning of continuous phonetic variation associated with meaning related to IS and speaker affect.

42 43 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS
Overall, our findings show prenuclear accents convey meaning through phonological and/or phonetic patterning and are therefore not (merely) rhythmic or ornamental.
Context 1: Our sister Jamie spent all day Saturday in the kitchen.
Context 2a (given): She said that our nana stopped by to help her make some homemade preserves.
Context 2b (accessible): She said that she liked to bring her homemade goods to the nursing home.
Context 2c (new): She said that she likes to make everything from scratch.
Context 2d (contrastive): She said that our dad loved the strawberry jam, but
Target: Our nana loved the marmalade she had.
Table 1. English example of mini story for the sentence-initial target word nana.
References
[1] Braun, Bettina. 2006. Language and Speech 49(4), 451-493.
[2] Bishop, J. 2012. In Elordieta, G. & Prieto, P. (Eds.), Prosody and Meaning. Berlin: Mouton de Gruyter, 239-269.
[3] Bishop, J., Kuo, G. & Kim, B. 2020. Journal of Phonetics 82, 1–20.
[4] Büring, D. 2007. In Ramchand, G. & Reiss, C. (Eds.), The Oxford Handbook of Linguistic Interfaces. Oxford: OUP, 445-474.
[5] Calhoun, S. 2010. Language 86(1).
[6] Cole, J., Hualde, J. I., Smith, C. L., Eager, C., Mahrt, T. & Napoleão de Souze, R. 2019. Journal of Phonetics 75, 113–147.
[7] Féry, C. & Kügler, F. 2008. Journal of Phonetics 36(4), 680-703.
[8] Gussenhoven, C. 1983. Language and Speech 26, 61-80.
[9] Gussenhoven, C. 2015. Lingue E Linguaggio 14(1), 7-24.
[10] Petrone, C. & Niebuhr, O. 2014. Language and Speech 57, 108-146.
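As a rough illustration of how the categorical part of the Chodroff, Baumann & Cole analysis could be set up (the abstract reports Bayesian mixed-effects regression models), the sketch below fits a multinomial model of prenuclear accent type in brms. It is not the authors' code, and the data frame and column names are hypothetical.

```r
# Minimal sketch: prenuclear pitch accent type (L*+H / H* / L+H*) as a function
# of information structure and affect, with by-speaker and by-item intercepts.
# prenuclear_df, accent_type, info_status, affect, speaker, item are hypothetical.
library(brms)

m_accent <- brm(
  accent_type ~ info_status * affect + (1 | speaker) + (1 | item),
  data = prenuclear_df, family = categorical()
)

# Continuous prominence measures (duration, intensity, f0 range, f0 slope)
# can be modelled analogously with family = gaussian().
```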

44 45 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS DAY 2 - Oral session 1 SOUND CHANGE AND LANGUAGE CONTACT

Sound Change in Western Andalusian Spanish: Analysis of Phonemic Voiceless Stops

Amber Galvano and Nicholas Henriksen University of Michigan

We examine the interaction between two ongoing sound changes affecting phonemic voiceless stops in Western Andalusian Spanish (WAS), spoken in southern Spain. The goal is to explore how the sound changes are related to one another, and how phonemic contrasts are maintained in a context for which multiple acoustic properties are available to mark the Figure 1. Model predicted marginal means for Figure 2. Model predicted marginal means for contrast. each DIALECT per each PHONEMIC SEQUENCE each DIALECT per each PHONEMIC SEQUENCE * The two sound changes in question are the post-aspiration of phonemic voiceless stops * POA combination for CLOSURE DURATION POA combination for VOT in /sp st sk/ sequences (e.g., /pasta/ → [patha] ‘pasta’), and the voicing of intervocalic /p t k/ (e.g., /mediko/ → [meðigo] ‘doctor’). Both processes have been examined independently in previous research [1, 2, 3]. However, the interaction of the two processes, whereby /s/- deletion creates the environment for intervocalic voicing to apply, remains untested. Our research was thus driven by two predictions. Under Prediction 1, intervocalic voicing should apply uniformly in /p t k/ and /sp st sk/ sequences, since /s/-deletion creates an environment for voicing to apply (e.g., [th] is flanked by vowels). Under Prediction 2, the post-aspirated stops (i.e., [ph th kh]) should not undergo voicing; the voicing feature would thus facilitate the contrast with phonemic /p t k/ (in addition to longer VOT). Through a carrier-phrase reading task, we recorded data from two groups: 31 WAS speakers from Jerez, Spain; and 30 north-central Peninsular Spanish (NCPS) speakers from Salamanca, Spain. The speakers of NCPS, a variety which does not undergo /s/-deletion and which exhibits intervocalic voicing only variably [3, 4], serve as a comparison group in the Figure 3. Model predicted marginal means for Figure 4. Individual WAS speaker means for each DIALECT per each PHONEMIC SEQUENCE VOT and % VOICING per PHONEMIC SEQUENCE experiment. The carrier phrases contained bisyllabic words with intervocalic /p t k/, /lp lt lk/, * POA combination for probability of complete or /sp st sk/. The /lp lt lk/ sequences serve as a control to compare with /sp st sk/, to analyze voicing how speakers produce phonemic voiceless stops following another coda consonant. The acoustic measures were closure duration, VOT, and % voicing of the closure duration. The results show that: (i) WAS speakers distinguish /p t k/ from /sp st sk/ through closure References and VOT duration (Figs. 1 and 2), although the between-sequence differences are more robust for VOT than for closure duration; and (ii) WAS speakers show more complete voicing [1] Ruch, H., & Harrington, J. (2014). Synchronic and diachronic factors in the change from pre- of /p t k/ compared to /sp st sk/ (Fig. 3). We further visualized the data at the individual level aspiration to post-aspiration in Andalusian Spanish. Journal of Phonetics, 45, 12–25. to understand how individual speakers use VOT and voicing as acoustic markers of the /p t [2] Torreira, F. (2012). Investigating the nature of aspirated stops in Western Andalusian Spanish. JIPA, 42(1), 49–63. k/ - /sp st sk/ contrast. In Fig. 4, we plot each speaker’s mean VOT and % voicing values for the [3] Hualde, J. I., Simonet, M., & Nadeu, M. (2011). Consonant lenition and phonological two series of consonants (“/C/” and “/sC/”, respectively). WAS speakers primarily use VOT to recategorization. Laboratory Phonology, 2(2), 301–329. 
cue the contrast (more visual separation along the x-axis), whereas NCPS speakers do so via [4] Nadeu, M., & Hualde, J. I. (2015). Biomechanically Conditioned Variation at the Origin of voicing (more visual separation along the y-axis). The findings therefore support Prediction Diachronic Intervocalic Voicing. Language and Speech, 58(3), 351–370. 1, showing that voicing applies in phonemic /sp st sk/ sequences in WAS; in fact, for both [5] Coetzee, A., P. Beddor, K. Shedden, W. Styler, & D. Wissing. (2018). Plosive voicing in Afrikaans: series of phonemic sequences, WAS speakers show substantially more voicing than NCPS Differential cue weighting and tonogenesis.Journal of Phonetics, 66, 185–216. speakers. [6] Kirby, J. P., & Ladd, D. R. (2016). Effects of obstruent voicing on vowel F0: Evidence from “true These findings reveal an inverse trading relation between voicing and post-aspiration: voicing” languages. The Journal of the Acoustical Society of America, 140(4), 2400–2411. whereas NCPS speakers use voicing to distinguish the two series of phonemes (in addition to the realization of /s/), the WAS speakers do so primarily through post-aspiration. We will contextualize these outcomes within the literature on cue weighting and lexical contrasts [5, 6].

46 47 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS
A dynamic analysis of the sound change towards more pre-aspiration in Aberystwyth English
Johanna Cronenberg1, Michaela Hejná2 and Adèle Jatteau3
1LMU Munich, 2Aarhus University, 3Université de Lille
This study illustrates the benefits of using a dynamic analysis to quantify (pre-)aspiration: it provides a more natural approach by relying not on a superimposed segmentation [8] but on time-varying curves; and it gives more details than traditional analyses by quantifying the complex relation between temporal extent and amplitude that characterises aspiration phases.

Aspiration is a form of glottal friction that is often associated with voiceless plosives. According to articulatory phonology, any voiceless plosive is composed of a glottal gesture and a supraglottal closing gesture [1] whose temporal alignment determines whether aspiration occurs before and/or after the closure. Despite their dynamic nature, aspiration phases are almost always quantified in terms of static measures such as duration or frequency of occurrence (e.g. [2]). Here we investigate pre-aspiration in an accent of Welsh English spoken in Aberystwyth, where voiceless plosives are predominantly post-aspirated and frequently pre-aspirated, e.g. pack / pha(h)kh/ [3]. Using traditional measures, a sound change has been identified: younger speakers pre-aspirate more often and with longer durations than older speakers, and older females more often than older males [4]. The aim of this study is to provide a more fine-grained analysis of this change using the method from [5] where aspiration is quantified by means of two time- varying curves that represent articulatory gestures. Figure 1. Three main variation trends (one per row) in the HF and VP curve shapes. Areas that quantify post- and pre- aspiration were coloured. Values near or at 1 in VP (dashed) indicate where the vowel in the CVC sequence is. The dataset consisted of 12 speakers from Aberystwyth, equally divided amongst two sexes and two age groups (younger: 21-30, older: 61-89 years of age). The speakers produced up to 92 different hC V(h)Ch words in isolation or short sentences up to five times, resulting in a total of 2317 tokens. The burst of the second plosive and its post-aspiration were excluded from the analysis. Two continuous acoustic curves were extracted from each token. One was the voicing probability VP [6] as a proxy for the glottal gesture, the other was the (normalised) high-frequency energy HF as a proxy for the supraglottal closing gesture. We expect aspiration to correspond to time intervals where VP is low (= voicelessness) and HF is high (= no closure). To quantify aspiration we computed the area between the time-normalised HF and VP curves: the greater and longer the gap between a high HF and a low VP, the greater the area between the two curves, thus the more aspiration there is. The area before (resp. after) the temporal

midpoint is associated with the post-aspiration of C1 (resp. pre-aspiration of C2). The post- and pre-aspiration areas depend on the shape of the HF and VP curves. The three main independent variation trends in those curve shapes were determined by applying FPCA [7]. Fig. 1 shows these three main variation trends called PCs. PC1 (top row) mainly captured a variation in the amount of post-aspiration in C1 (blue area), while PC2 explained a mild trade-off (the more C1 post-, the less C2 pre-aspiration) that may hint towards a possible

process [3]. Importantly, PC3 explained the amount of pre-aspiration in C2 (yellow) with no change in post-aspiration, so we will focus on PC3 in the remainder of this study. FPCA assigned each input token a score for PC3 (called s3) which determines where the token is situated in the continuum of the shape variation shown from left (negative score) through centre (score = 0) to right (positive score) in the third row of Fig. 1. An LMER with s3 as dependent variable, age and sex of the speakers as fixed factors, and speaker and word as random terms showed a significant effect of age on s3 (F[1, 10.2] = 5.9, p < 0.05). This can also be observed in Fig. 2 where younger speakers generally have lower values of s3 than older speakers which corresponds to more pre-aspiration in C2 (see Fig. 1, row 3, left column). This finding supports the evidence for an ongoing sound change towards more pre-aspiration in Welsh English. It was however not confirmed that there was a sex difference in older speakers whereby females would be more advanced in this sound change than males.
Figure 2. Distribution of PC score s3 by age group and sex of the speakers.
[1] Browman, C. & Goldstein, L. 1986. Towards an articulatory phonology. Phonology Yearbook 3, 219-252.
[2] Parrell, B. 2012. The role of gestural phasing in Western Andalusian Spanish aspiration. Journal of Phonetics 40, 37-45.
[3] Jatteau, A., Hejná, M. 2016. Dissimilation can be gradient: evidence from Aberystwyth English. Papers in Historical Phonology 1, 359-386.
[4] Hejná, M. 2015. Pre-aspiration in Welsh English: A case study of Aberystwyth. PhD thesis, University of Manchester.
[5] Cronenberg, J., Gubian, M., Harrington, J. & Ruch, H. 2020. A dynamic model of the change from pre- to post-aspiration in Andalusian Spanish. Journal of Phonetics 83, 101016.
[6] Gonzalez, S. & Brookes, M. 2014. PEFAC - A Pitch Estimation Algorithm Robust to High Levels of Noise. IEEE/ACM Transactions on Audio, Speech, and Language Processing 22, 518-530.
[7] Gubian, M., Torreira, F., & Boves, L. 2015. Using Functional Data Analysis for investigating multidimensional dynamic phonetic contrasts. Journal of Phonetics 49, 16-40.
[8] Fowler, C. & Smith, M. 1986. Speech Perception as Vector Analysis: An Approach to the Problem of Invariance and Segmentation. In Perkell, J. & Klatt, D. (Eds.), Invariance and Variability in Speech Processes. Hillsdale: Lawrence Erlbaum Associates, 123-136.
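The area measure at the heart of this abstract can be made concrete with a short R sketch: given a time-normalised high-frequency energy curve (HF) and voicing-probability curve (VP) sampled at the same points over a token, aspiration is quantified as the area where HF exceeds VP, split at the temporal midpoint into the C1 post-aspiration and C2 pre-aspiration components. This is one way to operationalise the description above, not the authors' implementation; truncating negative differences at zero is an added assumption.

```r
# Minimal sketch: area between time-normalised HF and VP curves, split at the
# temporal midpoint (before = post-aspiration of C1, after = pre-aspiration of C2).
# hf and vp are numeric vectors of equal length sampled over [0, 1]; hypothetical inputs.
aspiration_areas <- function(hf, vp) {
  stopifnot(length(hf) == length(vp))
  t   <- seq(0, 1, length.out = length(hf))
  gap <- pmax(hf - vp, 0)                 # count only intervals where HF is above VP
  trap <- function(x, y) sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)  # trapezoid rule
  first <- t <= 0.5
  c(post_C1 = trap(t[first], gap[first]),
    pre_C2  = trap(t[!first], gap[!first]))
}

# Example with synthetic curves:
# aspiration_areas(hf = runif(100), vp = runif(100))
```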

48 49 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS
Gradient effects and contrast preservation in Spanish lenition
Karolina Brośa, Marzena Żygisb, Adam Sikorskia, Jan Wołłejkoa
aUniversity of Warsaw, bZAS Berlin
Given systematic contrast preservation in the data despite gradient effects, we argue that traditional featural distinctions based on voicing and continuancy are insufficient to address weakening phonologically.

The Spanish of Gran Canaria presents advanced weakening of stops that leads to 1) the voicing, approximantisation and even deletion of post-vocalic /p t k/, and 2) approximantisation and deletion of post-vocalic /b d g/. The table below presents the range of possible pronunciations of underlying /p b/ (with /t d k g/ following a similar pattern).
UR | Example | voiceless stop | voiced stop | approximant | deletion
/p/ | guapo ‘pretty’ | [gwá.po] | [gwá.bo] | [gwá.β̞o] | [gwá.o]
/p/ | se parece ‘is similar’ | [se.pa.ɾé.se] | [se.ba.ɾé.se] | [se.β̞a.ɾé.se] | [se.a.ɾé.se]
/p/ | después ‘afterwards’ | [de.pwé] | [de.bwé] | [de.β̞wé] |
/b/ | abuela ‘grandma’ | | | [a.β̞wéla] | [a.wéla]
/b/ | la vela ‘the candle’ | | [la.bé.la] | [la.β̞éla] | [la.éla]
/b/ | las velas ‘the candles’ | [la.pé.la] | [la.bé.la] | [la.β̞éla] |
Fig. 1. Relative duration of surface sounds in the database. Fig. 2. Intensity difference of surface sounds.

As shown in the table, the outputs of the observed changes partially overlap. They also depend on several factors, most importantly, on the deletion of a preceding consonant, which makes the resultant post-vocalic context derived rather than underlying. The aim of this study was to compare surface sounds with their underlying forms and to determine the degree to which underlying /p t k b d g/ are lenited and whether the underlying sound contrasts are preserved or neutralised. To this end, we explored a large corpus of 16,474 post-vocalic /p t k b d g/ produced by 44 native speakers from Gran Canaria by studying three parameters: relative duration, intensity difference and harmonics-to-noise ratio. The first two have been used in many previous studies on lenition [1-4]. Thethird parameter was successfully used for measuring periodicity in sounds which are only partly harmonic [5]. We decided to test it as a lenition marker with the assumption that the more lenited a segment is, the more vowel-like, and hence the more harmonicity is expected. The statistical results show a path of gradual sound shortening and opening with the Fig. 3. Harmonics-to-noise ratio of surface sounds advancement of weakening, from voiceless stops to open approximants (Fig. 1-2). There in the database. Fig. 4. Six surface variants of underlying /p t k b d g/ identified are differences between groups of sounds in relative duration values, with the duration in the study. Intensity difference (difference between minimum intensity of the target sound and maximum intensity of the decreasing from voiceless stops to voiced stops to approximants ([b d g] vs. [β̞ ð̞ ɣ̞ ]: t=23.51, preceding vowel) marks aperture (see [2] for a comparison of p<.001, [p t k] vs. [b d g] at the level of statistical tendency: t=1.96, p=.058). As for intensity acoustic intensity measures with articulatory data). difference, approximants show the lowest values ([β̞ ð̞ ɣ̞ ] vs. [b d g]: t=-51.15, p<.001), with an ascending trend toward more constricted voiceless [p t k] pronunciations ([b d g] vs. [p t k]: References t=-36.53, p<.001). We propose that these changes should be treated as continuity lenition [6] [1] Dalcher Villafaña, Christina. 2008. Consonant weakening in Florentine Italian: A cross-disciplinary leading to the flattening of the intensity contour across sounds in a speech stream. We also approach to gradient and variable sound change. [2] Parrell, Benjamin. 2011. Dynamical account of provide evidence that harmonics-to-noise ratio can be successfully used to predict lenition how /b, d, g/ differ from /p, t, k/ in Spanish: Evidence from labials. [3] Carrasco, Patricio; José Ignacio degree, which is a new finding. Our data show that the observed changes lead to the flattening Hualde; and Miquel Simonet. 2012. Dialectal differences in Spanish voiced obstruent allophony: of the harmonicity profile of the target segment with respect to the flanking sounds: HNR is Costa Rican versus Iberian Spanish. [4] Figueroa, Mauricio, and Bronwen G. Evans. 2015. Evaluation of lowest for [p t k], higher for [b d g] and highest for [β̞ ð̞ ɣ] (voiceless stops vs. voiced stops: t=- segmentation approaches and constriction degree correlates for spirant approximant consonants. ̞ [5] Bárkányi, Zsuzsanna, and Zoltán Kiss. 2010. A phonetically-based approach to the phonology of v: A 5.55, p<.001; voiced stops vs. approximants t=-15.55, p<.001, see Fig. 3). As for phonological case study from Hungarian and Slovak. [6] Katz, Jonah. 2016. 
Lenition, perception, and neutralisation. [7] Martínez-Celdrán, Eugenio, and Xosé Luís Regueira. 2008. Spirant approximants in Galician.
effects, we detected systematic use of six different variants depending on the UR and on the phonological context (consonant deletion). More specifically, there are two types of [p t k], two types of [b d g] and two types of [β̞ ð̞ ɣ̞] that vary in intensity difference, a parameter successfully used as a proxy of degree of constriction ([7], see Fig. 4).
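To illustrate how the three parameters could be assembled per token once the raw acoustic measurements exist, the R sketch below derives relative duration and intensity difference and summarises them, together with HNR, by surface category. The operationalisation of relative duration and every object and column name are assumptions made for the example, not the authors' procedure.

```r
# Minimal sketch; tokens_df and its columns (seg_dur, word_dur, min_int_db,
# prev_vowel_max_int_db, hnr_db, surface) are hypothetical.
library(dplyr)

lenition <- tokens_df %>%
  mutate(
    rel_duration = seg_dur / word_dur,                 # one plausible relative-duration measure
    int_diff     = min_int_db - prev_vowel_max_int_db  # intensity difference (aperture proxy)
  )

# Compare the three parameters across surface categories, e.g. [p t k], [b d g], approximants
lenition %>%
  group_by(surface) %>%
  summarise(across(c(rel_duration, int_diff, hnr_db), ~ mean(.x, na.rm = TRUE)))
```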

50 51 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS Context, position in word and duration as predictors of voicing alternation of stops: of the segment in question and the left context, whereas devoicing is reliably predicted by a large-scale corpus-based study in 5 Romance Languages the right context, followed by the left context, the duration of the preceding segment and the language. Interestingly, “language” is located towards the lower part of the tree in Figure 1c. This suggests that particular languages tend to subordinate to the historical movement Yaru Wu1,2, Mathilde Hutin1, Ioana Vasilescu1, Lori Lamel1, Martine Adda-Decker1,2 of the language family. Nevertheless, the languages split into two groups with Portuguese 1Université Paris-Saclay, CNRS, LISN, 91405 Orsay, France in one branch and the other four languages in the other, which is in line with the higher 2Laboratoire de Phonétique et Phonologie (UMR7018, CNRS-Sorbonne Nouvelle), France devoicing rate seen for Portuguese in Figure 1a. The overall correct prediction of voicing alternation on unseen data (round-robin experiments with a random selection of data) are How synchronic phonetic variation interacts with diachronic change has been a key focus 73.6% for voiceless stops and 68.2% for voiced ones (mean of 10 experiments). of variationist studies for over 50 years. Today, most linguists subscribe to Ohala’s proposal that synchronic variation is one of the preconditions for diachronic change [1]. The “big Table 1. Variation factors included in the automatic classification. data” revolution, which reached the humanities field in recent years, has added a new Factors Details dimension to this research path. Large multilingual collections of spoken data covering Language Spanish (spa), French (fre), Italian (ita), Portuguese (por) or Romanian (rom) various communication situations are now accessible for phonetic and laboratory phonology Position in word [9,10] Position of the stop in the word: word initial position (wInitial), word medial position (wMedial) or word final position (wFinal) Left context [10,11,12] Nature of the segment preceding the stop in question: pause including hesitation, breath and silence;vowel (V); sonorant (Son); research, and can be explored with digital approaches [2,3]. In line with this new research voiced obstruent (Ob+); voiceless obstruent (Ob-) direction, the present study explores two of the most common phonetic processes in the Right context [7,10] Nature of the segment following the stop in question: pause including hesitation, breath and silence; vowel (V); sonorant (Son), world’s languages: voicing and devoicing of stop consonants (i.e. /ptk/ realized as [bdg] voiced obstruent (Ob+); voiceless obstruent (Ob-) or /bdg/ realized as [ptk]). These processes are attested both as instances of synchronic Segment/Word duration [3,13,14] Duration of the stop / of the segment preceding the stop / of the segment following the stop / of the word variation and as processes provoking sound change. Using methods borrowed from speech technology, we seek to gain insight into the factors responsible for the phonetic precursors of these two phenomena. Corpus and methodology. First, language-specific automatic speech recognition systems [4] in forced alignment mode with voicing variants (+/- voiced) were used to provide reliable accounts of the voicing alternation phenomena specific to [5,6,7]. 
Machine learning techniques are used to study variation in 5 Romance languages for which we have had abundant data (~ 1000 hours): French (176h), Italian (168h), Spanish (223h), Portuguese (114h) and Romanian (300h). Second, decision tree-based classification [8] allows the factors having the largest contribution to the prediction of how the voicing feature of stops is realized to be determined. Contextual, positional and durational factors were included in this study (see Table 1). For each experiment, 70% of the data set was randomly selected (a) (b) (c) for training and the remaining 30% was used for test purposes in order to assess how well Figure 1. Voicing alternation rates of voiceless (/ptk/) and voiced (/bdg/) stops (a), and classification tree on the voicing of canonically voiceless (b) and voiced (c) stops. the tree generalizes to new data. Given that there are many fewer voicing alternations than unchanged observations, sampling techniques were used to balance the two groups (voiced vs voiceless) of each dataset. References Results. We present results from both the automatic alignment and automatic classification. Figure 1a shows the alternation rates obtained via our forced alignment allowing voiced [1] Ohala, J. J. (1989). Sound change is drawn from a pool of synchronic variation. Language change: and voiceless variants for both canonically voiceless and voiced stops. For instance, 9.4% Contributions to the study of its causes, 173, 198. of French voiceless stops /ptk/ were aligned with their corresponding non-canonical voiced [2] Adda-Decker, M., & Lamel L. (2018). Discovering speech reductions across speaking styles and variant [bdg] and 9.9% of voiced stops /bdg/ were aligned with a devoiced [ptk] variant. languages, in Rethinking reduction: Interdisciplinary perspectives on conditions, mechanisms, and Portuguese exhibits a much higher rate of devoicing of voiced stops (/bdg/ realized as [ptk]) domains for phonetic variation, F. Cangemi, M. Clayards, O. Niebuhr, B. Schup-pler, and M. than the opposite phenomenon (voiced realizations of /ptk/, i.e., aligned with [bdg]). The Zellers, Eds. De Mouton Gruyter. rates of devoiced realization of voiced stops are also slightly higher than voiced realizations [3] Ryant, N., & Liberman, M. (2016, September). Large-scale analysis of Spanish/s/-lenition using of voiceless stops for both French and Romanian, with the reverse tendency seen for Spanish audiobooks. In Proceedings of Meetings on Acoustics 22ICA (Vol. 28, No. 1, p. 060005). Acoustical Society of America. and Italian. As for the respective roles of context, position in word and duration parameters [4] Gauvain, J. L., Lamel, L., & Adda, G. (2002). The LIMSI broadcast news transcription system. correlated with voicing alternations, Figures 1b and 1c highlight the factors having the largest Speech communication, 37(1-2), 89-108. contributions to the prediction of voiced / voiceless segments (see texts in white rectangular [5] Adda-Decker, M., & Hallé, P. (2007). Bayesian framework for voicing alternation and assimilation boxes). The factors contributing the most to voicing (/ptk/ becomes [bdg]) are the duration studies on large corpora in French. In 16th Congress of Phonetic Sciences (pp. 613-616).

52 53 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 ORAL SESSIONS [6] Jatteau, A., Vasilescu, I., Lamel, L., Adda-Decker, M., & Audibert, N. (2019, September). “ Gra [f] DAY 2 - Oral session 3 e!” Word-Final Devoicing of Obstruents in Standard French: An Acoustic Study Based on Large Corpora. In INTERSPEECH (pp. 1726-1730). ACOUSTIC PHONETICS AND PHONOLOGY [7] Hutin, M., Jatteau, A., Vasilescu, I., Lamel, L., & Adda-Decker, M. (2020, October). Ongoing phonologization of word-final voicing alternations in two Romance languages: Romanian and Agreement and its acoustic reflexes in English French. In Interspeech 2020. [8] Quinlan, J. R. (1986). Induction of decision trees. Machine learning, 1(1), 81-106. 1Marcel Schlechtweg & 2Greville G. Corbett [9] Ségéral, P., & Scheer, T. (2008). Positional factors in lenition and . Lenition & fortition, ed. 1Carl von Ossietzky Universität Oldenburg (Germany), 2University of Surrey (UK) J. Brandao de Carvalho, T. Scheer, & P. Ségéral, 131-172. [10] Vasilescu I., Wu Y., Jatteau A., Adda-Decker M., & Lamel L. (à paraître). Alternances de voisement Whether or not specific variables trigger an acoustic distinction between phonologically et processus de lénition et de fortition: une étude automatisée de grands corpus en cinq langues romanes. In Revue TAL (Volume 61, Numéro 1). www.atala.org/content/varia-12. identical forms has been a hotly debated territory. A key example is word-final s in English, [11] Niebuhr, O., & Meunier, C. (2011). The phonetic manifestation of French /s#∫/ and /∫#s/ and the question whether it systematically varies with different grammatical characteristics sequences in different vowel contexts: On the occurrence and the domain of sibilant or functions. It has been demonstrated that affixal s (as in laps) differs from non-affixal assimilation. Phonetica, 68(3), 133-160. word-final s (as in lapse) in duration (see, e.g., Plag et al. 2017; Schwarzlose & Bradlow 2001; [12] Meunier, C., & Espesser, R. (2011). Vowel reduction in conversational speech in French: The Seyfarth et al. 2018; Song et al. 2013; Walsh & Parker 1983). Moreover, within affixal s, studies role of lexical factors. Journal of Phonetics, 39(3), 271-278. have revealed that several types differ in duration, such as plural (cars) and plural-genitive s [13] Vasilescu, I., Vieru, B., & Lamel, L. (2014). Exploring pronunciation variants for Romanian (cars’) (see, e.g., Hsieh et al. 1999; Plag et al. 2020). Finally, a further study showed that the s speech-to-text transcription. In Spoken Language Technologies for Under-Resourced Languages. of regular plural (e.g., toggles) and comparable pluralia tantum nouns (e.g., goggles) does not [14] Snoeren, N. D., Hallé, P. A., & Segui, J. (2006). A voice for the voiceless: Production and differ in duration (see Schlechtweg & Corbett 2021). We now focus on a case not investigated perception of assimilated stops in French. Journal of Phonetics, 34(2), 241-268. to date. We ask whether and how morphosyntactic agreement has an impact on the duration of word-final s. For this purpose, consider the examples in (1) and the Appendix. We see four different groups. When we have a present tense verb, one finds an overt distinction in agreement between singular and plural (see (1a) cabs break (plural) in comparison to cab breaks (singular)). In (1b), however, there is no overt agreement of the past tense verb (see cabs broke (plural) in comparison to cab broke (singular)). 
In (1c), we observe agreement not only between the plural noun and the present tense verb, as in (1a), but also between the determiner and the plural noun (see These cabs (plural) in comparison to This cab (singular)). In (1d), there is no agreement between the plural noun and the past tense verb, as in (1b), but there is agreement between the determiner and the plural noun. We examine whether the plural s on the noun differs in duration across these conditions. Several scenarios seem possible. First, agreement might not be reflected acoustically, giving no duration difference between the four conditions in (1). Second, the determiner-noun and/or the noun-verb agreement might be expressed in the duration of the s. On the one hand, agreement might lead to a shorter s. For instance, in (1b), the only plural marker is the nominal s suffix, and it might therefore be longer than the s in (1a), where the verb additionally signals the plurality of the noun. Put differently, the s is more important for expressing plurality in (1b) than in (1a) and might be lengthened for this reason in (1b). On the other hand, agreement might lead to a longer s. That is, for example, (an) additional plurality marker(s) on the determiner and/or verb might intensify the speaker's attention to the plurality value and, hence, lead to a lengthening of the s.

We tested 12 native speakers of North American English, using 16 nouns in a reading study conducted with Praat (Boersma & Weenink 2020). Each person was exposed to all items in all conditions (16 items per person x 4 conditions per item = 64 experimental cases per person). As can be seen in the examples, we carefully controlled for potentially confounding variables by using the same sentences in all conditions and by exposing all subjects to all items in the four conditions. All target nouns are regular plurals, singular-dominant, monosyllabic, inanimate, and contain the voiced /z/ in the plural. All verbs are irregular and have the same number of syllables in the present and past tense. The order of the four conditions was counterbalanced both within and across subjects. No significant differences across conditions were detected. The findings will be interpreted against the background of other studies that have shown acoustic variation of the English s.

(1) a. The blue cabs always break down.
    b. The blue cabs always broke down.
    c. These blue cabs always break down.
    d. These blue cabs always broke down.

Appendix
The/These large tags really make/made the price clear.
The/These ripe pears usually fall/fell from the tree.
The/These wide screens eventually become/became useless.
The/These old cars unfortunately have/had mechanical problems.
The/These thick nails easily hold/held up the picture on the wall.
The/These short rides regularly take/took an hour.
The/These cheap creams often sting/stung her skin.
The/These rough waves regularly shake/shook the beach house.
The/These soft plums already stink/stank.
The/These big stones immediately sink/sank in the lake.
The/These new trains clearly speak/spoke for themselves.
The/These deep ponds always freeze/froze during the winter.
The/These dried figs amazingly grow/grew sweeter and sweeter.
The/These small phones unexpectedly ring/rang very loudly.
The/These thin pads often fall/fell out.

References

Boersma, Paul & David Weenink. 2020. Praat: Doing phonetics by computer [Computer program]. Retrieved from http://www.praat.org.
Hsieh, Li, Laurence B. Leonard & Lori Swanson. 1999. Some differences between English plural noun inflections and third singular verb inflections in the input: The contributions of frequency, sentence position, and duration. Journal of Child Language 26(3). 531-543.
Plag, Ingo, Julia Homann & Gero Kunter. 2017. Homophony and morphology: The acoustics of word-final S in English. Journal of Linguistics 53. 181-216.
Plag, Ingo, Arne Lohmann, Sonia Ben Hedia & Julia Zimmermann. 2020. An ⟨s⟩ is an ⟨s⟩, or is it? Plural and genitive plural are not homophonous. In Lívia Körtvélyessy & Pavol Štekauer (eds.), Complex words: Advances in morphology, 260-292. Cambridge: Cambridge University Press.
Schlechtweg, Marcel & Greville G. Corbett. 2021. The duration of word-final s in English: A comparison of regular-plural and pluralia-tantum nouns. Submitted.
Schwarzlose, Rebecca & Ann R. Bradlow. 2001. What happens to segment durations at the end of a word? The Journal of the Acoustical Society of America 109. 2292.
Seyfarth, Scott, Marc Garellek, Gwendolyn Gillingham, Farrell Ackerman & Robert Malouf. 2018. Acoustic differences in morphologically-distinct homophones. Language, Cognition and Neuroscience 33(1). 32-49.
Song, Jae Yung, Katherine Demuth, Karen Evans & Stefanie Shattuck-Hufnagel. 2013. Durational cues to fricative codas in 2-year-olds' American English: Voicing and morphemic factors. The Journal of the Acoustical Society of America 133. 2931-2946.
Walsh, Thomas & Frank Parker. 1983. The duration of morphemic and non-morphemic /s/ in English. Journal of Phonetics 11(2). 201-206.

Domain-initial denasalized and oral stops in Korean: the role of voice quality

Jiayin Gao (U. of Edinburgh), Jihyeon Yun (no affiliation), Takayuki Arai (Sophia University)

In several varieties of Korean, domain-initial nasals have a tendency for denasalization or nasal weakening. Their typical realization is similar (though not identical) to a prevoiced stop, and they may even become devoiced [1,2]. This makes it necessary to examine the interplay of multiple phonetic properties of the denasalized stops along with the three oral stop series (lenis, fortis, aspirated) as a whole. How are they distinguished from one another? What are their concomitant properties? This study reports the phonetic distributions related to the laryngeal properties of the four stop series: VOT, f0, and voice quality as indexed by open quotient (OQ). First, with regard to the three oral stop series (all voiceless), VOT and f0 have been extensively studied, but results related to voice quality remain inconsistent. Some studies have shown a breathier vowel after aspirated than after lenis stops [3], while others have shown the opposite pattern [4]. We examined whether voice quality changes over the course of the vowel, which might explain inconsistent results restricted to fixed time points (Q1). Second, it has been shown that voice quality contributes to stop identification [5]. We thus assessed how the role of voice quality is mirrored in production (Q2). Third, nasality and breathy voice share acoustic similarities, which has led to sound change in one direction or the other [6]. We thus asked whether denasalization was accompanied by an enhancement of breathiness (Q3).

Acoustic and electroglottographic (EGG) data were recorded from nine (5F, 4M) native speakers of Seoul and Gyeonggi Korean, aged from 19 to 29. The speakers produced nonce phrases /(C)V-ta/, where C was a labial or alveolar stop (nasal, lenis, fortis, or aspirated), and V one of the 7 monophthong vowels, as if they were spelling out the syllable blocks in the Hangul chart. With 2 to 4 repetitions per speaker, a total of 1709 /CV/ tokens were analyzed. Zero-onset /V/ syllables were analyzed for comparison. We used Praatdet [7] to compute f0 and open quotient (Oq) based on the derivative of the EGG signals, duration-normalized to 9 time points over the vowel.
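The duration-normalization step just described (resampling each vowel's Oq track onto 9 equally spaced points) can be sketched as follows; this is an illustrative reimplementation, not the Praatdet pipeline itself, and the example measurement times and Oq values are made up.

```python
# Illustrative sketch: duration-normalize a per-token Oq (or f0) track
# to 9 equally spaced time points over the vowel (linear interpolation).
import numpy as np

def normalize_to_n_points(times, values, n=9):
    """Resample a contour measured at arbitrary times onto n points
    spread evenly over the vowel's duration."""
    grid = np.linspace(times[0], times[-1], n)
    return np.interp(grid, times, values)

# Example: an Oq track sampled at glottal-cycle times within one vowel (toy values).
times = np.array([0.012, 0.019, 0.027, 0.034, 0.042, 0.051, 0.060, 0.068])
oq    = np.array([0.62, 0.60, 0.58, 0.55, 0.54, 0.53, 0.53, 0.52])
print(normalize_to_n_points(times, oq))   # 9 duration-normalized Oq values
```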
13% of the "nasal" stops were devoiced (i.e., with short-lag VOTs). VOT results of the (voiceless) oral stops corroborate previous reports: shortest for fortis, longest for aspirated, and intermediate for lenis. F0 results are also in line with previous findings: aspirated and fortis stops co-occur with high f0, while lenis stops, nasal stops and zero onsets co-occur with low f0. Voice quality, on the other hand, is less straightforwardly conditioned by the onset (Fig. 1). Overall, lenis and aspirated stops are followed by breathier vowels (i.e., with higher Oq) than fortis and nasal stops, especially at the beginning of the vowel. As for (Q1), seven speakers show breathier voice following aspirated than lenis stops over the entire vowel, but two show other patterns. Thus, the discrepancy in previous results was not due to differences in the time course, but possibly to individual variability. As for (Q2), a classification tree (CART) analysis applied to the four stop categories validated only the importance of VOT and f0 in the classification; the relative role of breathiness is minor. However, in the short-lag-VOT, low-f0 zone, which may correspond to fortis, lenis, and devoiced nasal stops (Fig. 2), breathiness increases as VOT decreases in the production of lenis stops, which contributes to disambiguation from fortis stops (Fig. 3). As for (Q3), only two speakers show comparable open quotient for nasal and aspirated/lenis stops, suggesting a breathy vowel after nasal stops. For most speakers, open quotient for nasal stops is closer to that of fortis stops than to the other two stop series, suggesting modal voice.

Following the observation related to Q2, modal voice with nasal stops may also serve to disambiguate them from lenis stops. To conclude, compared with VOT and f0, voice quality overall contributes little to the distinction of the four stop series in Korean. However, it plays a disambiguating role in cases where the VOT-f0 space gets crowded. At a more general level, we will discuss how multiple cues interact, often in an asymmetrical way, to reconcile articulatory constraints and perceptual biases with contrast maintenance (see further discussion in [8]).

Fig. 1: Normalized Oq curves by onset and speaker.

Fig. 2: Scatterplot of f0 (st) against VOT (ms).
Fig. 3: Scatterplot of normalized Oq (first 3 timepoints) against normalized VOT.

References

[1] Kim, Y. S. (2011). An acoustic, aerodynamic and perceptual investigation of word-initial denasalization in Korean (Unpublished doctoral dissertation). University College London.
[2] Yoo, K., & Nolan, F. (2020). Sampling the progression of domain-initial denasalization in Seoul Korean. Journal of the Association for Laboratory Phonology, 11(1), 22. DOI: https://doi.org/10.5334/labphon.203
[3] Ahn, H. (1999). Post-release phonatory processes in English and Korean: acoustic correlates and implications for Korean phonology (Unpublished doctoral dissertation). University of Texas.
[4] Cho, T., Jun, S.-A., & Ladefoged, P. (2002). Acoustic and aerodynamic correlates of Korean stops and fricatives. Journal of Phonetics, 30(2), 193–228.
[5] Kim, M.-R., Beddor, P. S., & Horrocks, J. (2002). The contribution of consonantal and vocalic information to the perception of Korean initial stops. Journal of Phonetics, 30(1), 77–100.
[6] Matisoff, J. A. (1975). Rhinoglottophilia: the mysterious connection between nasality and glottality. In Nasálfest: Papers from a symposium on nasals and nasalization (pp. 265–287).
[7] Kirby, J. (2017). Praatdet: Praat-based tools for EGG analysis. https://github.com/kirbyj/praatdet
[8] Gao, J., Yun, J., & Arai, T. (2021). Korean laryngeal contrast revisited: An electroglottographic study on denasalized and oral stops. Journal of the Association for Laboratory Phonology, 12(1), 7. DOI: http://doi.org/10.5334/labphon.277

The rhotic at the edges: an acoustic and EPG study

Katerina Nicolaidis & Mary Baltazani
Aristotle University of Thessaloniki & University of Oxford

Rhotics exhibit phonetic variety within and across languages [1, 2]. Acoustic and electropalatographic studies on the Greek tap in several prosodic positions have documented the presence of a vocoid between the rhotic and the consonant in /Cr/ and /rC/ contexts and, more interestingly, in phrase-initial position when /r/ is followed by a vowel, e.g., [3, 4, 5]. Several studies have detected a vocoid in /Cr/ clusters and /rC/ sequences, e.g., in Spanish dialects, in Catalan, in Romanian, and in Hungarian [7, 8, 9, 10]. The vocoid's presence has been attributed to temporal adjustments between the gestures for the rhotic and the flanking consonant [8]. Few studies have reported tap production in phrase-initial /##rV/ or phrase-final /Vr##/ contexts with a flanking vowel [5, 8, 11]; the vocoid's presence in these contexts cannot be accounted for in terms of a gestural transition from one consonant to the next. Finally, not much attention has been paid to the tap in /VCr##/ contexts, where the /r/ is not immediately adjacent to a nuclear vowel and where, intriguingly, the presence of two vocoids has been reported [11], one before and one after the constriction phase of the tap. In this study we compare the realisation of /r/ (a) as a singleton tap accompanied by a vowel in phrase-initial (##rV) and phrase-final (Vr##) position, and (b) in /VCr##/ contexts. We examine tap production in diverse contexts to elucidate its gestural composition and its coordination with the gestures for adjacent sounds. In addition, we look into variation in production due to possible initial-strengthening vs. pre-boundary-lengthening effects as well as word-frequency effects, since /r/-final words in Greek are of low frequency.

The speech material consisted of disyllabic words with a /##rV/ or a /Vr##/ sequence (V = [i, e, a, o, u]) in two stress conditions (stressed and unstressed) and monosyllabic words with /VCr##/ in [i, e] contexts. Acoustic and EPG data were recorded from five Greek speakers (2M, 3F) repeating the speech material five times. Acoustic analyses included measurement of the duration of the rhotic constriction and vocoid(s), and of the F1 and F2 formant frequencies of the nuclear vowel and the vocoid(s). Articulatory analyses included examination of the place and degree of constriction as well as of variability due to position, context, and speaker.
Results reveal that the vocoid in phrase-initial position precedes the constriction, while in phrase-final position it follows the constriction (Figures 1A, 1B). In /VCr##/ contexts there are two vocoids, one prior to and another after the constriction (Figure 1C). The vocoid is typically longer in duration than the constriction phase. Its formant structure is typically similar to that of the nuclear vowel but more centralised. EPG data show that the production of the /r/ is typically alveolar, with variation in place due to context. Production varies from fully constricted to more open articulations.

Our findings suggest that the vocoid is an integral part of the gestural composition of the rhotic. They question cross-linguistic accounts of /r/, especially in clusters, which assume that the vocoid is part of the nuclear vowel which underlies the whole syllable and is briefly exposed between the consonants as the result of gestural overlap between the two consonantal gestures [7]. Overall, composite results from this and previous studies on the Greek /r/ in different contexts suggest that there is a vocalic gesture of the rhotic per se upon which the tap constriction is superimposed. Its complete structure is manifested in /VCr##/ contexts where the nuclear vowel is not immediately adjacent to the tap, cf. [11]. Both the constriction and the vocalic components interact with the gestures for flanking segments, displaying variable coarticulatory patterns, including temporal variability in different consonantal contexts or prosodic positions, place of articulation and degree of constriction variability, as well as phonation type variation.

Figure 1. (Clockwise from top left): A. Phrase-initial /ɾ/ in /ˈɾoða/; B. Phrase-final /ɾ/ in /ˈnektaɾ/; C. /VCɾ##/ context in /metɾ/; in all figures the red rectangle indicates the vocoid(s) location.

References

[1] Lindau, M. (1985). The story of /r/. In Fromkin, V. A. (ed.), Phonetic Linguistics: Essays in Honor of Peter Ladefoged. Academic Press, 157–168.
[2] Ladefoged, P. & Maddieson, I. (1996). The Sounds of the World's Languages. Cambridge: Blackwell.
[3] Nicolaidis, K. & Baltazani, M. (2011). An electropalatographic and acoustic study of the Greek rhotic in /Cr/ clusters. Proceedings of the 17th International Congress of Phonetic Sciences. Hong Kong, 1474–1477.
[4] Nicolaidis, K. & Baltazani, M. (2014). An investigation of acoustic and articulatory variability during rhotic production in Greek. Selected papers, 11th International Conference on Greek Linguistics. Rhodes, Greece, 1192–1207.
[5] Baltazani, M. & Nicolaidis, K. (2013). The many faces of /r/. In Spreafico, L. & Vietti, A. (eds.), Rhotics: New data and perspectives. Bozen-Bolzano: BU Press, 125–144.
[6] Bradley, T. G. & Schmeiser, B. (2003). On the Phonetic Reality of Spanish /r/ in Complex Onsets. In Kempchinsky, P. & Piñeros, C. E. (eds.), Theory, Practice, and Acquisition. Somerville, MA: Cascadilla Press, 1–20.
[7] Bradley, T. G. (2004). Gestural timing and rhotic variation. In Face, T. (ed.), Laboratory Approaches to Spanish Phonology. Berlin: Mouton de Gruyter, 195–220.
[8] Recasens, D. & Espinosa, A. (2007). Phonetic typology and positional allophones for alveolar rhotics in Catalan. Phonetica 63, 1–28.
[9] Vago, R. M. & Gósy, M. (2007). Schwa vocalization in the realization of /r/. Proceedings of the 16th ICPhS, 505–509.
[10] Savu, C. F. (2012). An Acoustic Phonetic Perspective on the Phonological Behavior of the Rhotic Tap. Presentation at ConSOLE XX, University of Leipzig.
[11] Savu, C. F. (2013). Another look at the structure of [ɾ]: Constricted intervals and vocalic elements. In Spreafico, L. & Vietti, A. (eds.), Rhotics: New data and perspectives. Bozen-Bolzano: BU Press, 145–158.

Acoustic correlates of epilaryngeal constriction in Levantine Arabic guttural consonants

Jalal Al-Tamimi
Newcastle University, UK

Background: Guttural consonants are assumed to form a natural class due to their phonological patterning and their use of a common oro-sensory zone in the pharynx (McCarthy, 1994; Sylak-Glassman, 2014). Members of this class are usually pharyngeals and uvulars, with pharyngealised and/or glottal consonants sometimes included (McCarthy, 1994; Sylak-Glassman, 2014). Coarticulatory effects on surrounding vowels are used to quantify group membership, with an increase in the first formant frequency often reported as a main correlate (McCarthy, 1994; Zawaydeh, 1999; but see Bin-Muqbil, 2006 for contradictory results). Following the predictions of the "Laryngeal Articulator Model" (LAM, Esling, 2005), a constriction in the pharynx leads to supra-laryngeal and laryngeal changes. LAM predicts that sounds produced in the epilaryngeal/pharyngeal area show lingual retraction, with a back-and-down gesture, and either larynx raising or constriction of internal/external muscles resulting in a tense voice. The acoustic consequences of a pharyngeal constriction are an approximation of the first two formants, with an increase in F1 and a decrease in F2 (Stevens, 1989), whereas those of a tense voice quality are an overall lowering of spectral tilt with an increase in energy around F3 (Kreiman et al. 2014). This study evaluates acoustically the combined effect of retraction and laryngeal changes in assessing the degree of separation between gutturals and non-gutturals.

Method: 10 Levantine Arabic speakers (5 females), aged 25-45, were recorded producing 21 Arabic consonants in a /ʔVVCVV/ frame with three subsequent repetitions (VV: symmetric /iː aː uː/; C (6 classes): plain /t d ð s z l/, velar /k ɡ x ɣ/, uvular /q/, pharyngealized /tˤ dˤ ðˤ sˤ zˤ lˤ/, pharyngeal /ħ ʕ/ and glottal /h ʔ/; n = 2034 items).

Analysis: The V1CV2 sequences were analysed with VoiceSauce (Shue et al., 2011) to extract various acoustic metrics (bark-difference metrics, noise, and harmonic differences) at multiple data points within each of V1 and V2. We look at the coarticulatory patterns within the last half of V1 and the first half of V2 to quantify the differences between and within the two categories (guttural = uvular, pharyngeal and pharyngealised; non-guttural = plain, velar and glottal). Principal Component Analysis (PCA) and Random Forests (RF) were used as data reduction/clustering and predictive tools, respectively.

Results: Starting with the PCA (Figure 1, a), the results show a clear separation and clustering of gutturals (i.e., uvular, pharyngealised and pharyngeal) in comparison with non-gutturals (plain, velar and glottal). Glottals are the closest to gutturals but show shared correlates with plains and velars. Our RF results showed variable classification rates (Figure 1, b, comparing results between V1 and V2), with the highest being for Bark-difference + VQ (Voice Quality) at 87% between guttural vs non-guttural and 69% between the six categories (VQ alone achieves 80% between guttural vs non-guttural, indicating a role for VQ). Most of the confusions were within the group of gutturals or within the group of non-gutturals. Finally, boxplots of the 10 best predictors based on variable importance scores are presented in Figure 1, c. The results point to a mixture of bark-difference and voice quality metrics correlating with the contrast, with primary spectral compactness (↓Z2-Z1) and divergence (↑Z3-Z2), openness (↑Z1-Z0) and pharyngeal constriction (↓Z4-Z3), and a secondary decrease in spectral tilt due to an increase in energy above F1 and around F3 (↓A2*-A3*, ↓H4*-H2kHz*, ↓A1*-A3*, ↓H1*-A3*) and a decrease in energy around F2 (↑H1*-A2* and ↑A1*-A2*).

Conclusion: Our results provide empirical evidence supporting the legitimacy of the guttural natural class. The primary changes observed, quantified through bark differences, correlate well with lingual retraction (following LAM). Secondary changes due to an increase in energy above F1 and around F3/F5 are associated with a tense voice quality causing a decrease in spectral tilt (Kreiman et al. 2014). The combination of metrics allowed us to evidence relationships between uvular, pharyngealised and pharyngeal consonants: gutturals share an epilaryngeal constriction with combined supra-laryngeal and laryngeal effects, with implications for formal accounts.
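The PCA-plus-Random-Forest workflow described above can be sketched as follows; this is an illustrative Python/scikit-learn version (not the author's script), and the file name and feature column names are assumptions for the example.

```python
# Illustrative sketch: PCA for clustering/visualisation and a Random Forest
# classifier with variable importance over acoustic metrics (assumed columns).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("v2_first_half_metrics.csv")          # one row per vowel token (assumed)
features = ["Z1_Z0", "Z2_Z1", "Z3_Z2", "Z4_Z3",        # bark-difference metrics
            "H1A2", "H1A3", "A1A2", "A1A3", "A2A3"]     # spectral-tilt metrics (assumed names)

X = StandardScaler().fit_transform(df[features])
y = df["guttural"]                                      # "guttural" vs "non-guttural"

# Data reduction / clustering view: first three principal components.
pcs = PCA(n_components=3).fit_transform(X)

# Predictive tool: cross-validated classification rate and variable importances.
rf = RandomForestClassifier(n_estimators=500, random_state=1)
print("CV accuracy:", cross_val_score(rf, X, y, cv=10).mean())
importances = pd.Series(rf.fit(X, y).feature_importances_, index=features)
print(importances.sort_values(ascending=False).head(10))
```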
Figure 1: 3D PCA on V2 (half) with clustering (a); RF classification rates (Bark = Bark difference; VQ = Voice quality) (b); boxplots (with means in dashed lines) of the 10 best predictors from variable importance scores in non-gutturals (plain, velar and glottal; in blue) vs gutturals (uvular, pharyngealised and pharyngeal; in red) with significance levels (***p<0.001, **p<0.01, *p<0.05) (c).

References:

Bin-Muqbil, M. S. (2006). Phonetic and phonological aspects of Arabic emphatics and gutturals (Unpublished doctoral dissertation). University of Wisconsin-Madison.
Esling, J. (2005). There Are No Back Vowels: The Laryngeal Articulator Model. The Canadian Journal of Linguistics/La revue canadienne de linguistique, 50(1), 13–44.
Kreiman, J., Gerratt, B. R., Garellek, M., Samlan, R., & Zhang, Z. (2014). Toward a unified theory of voice production and perception. Loquens, 1(1).
McCarthy, J. J. (1994). The phonetics and phonology of Semitic pharyngeals. In Keating, P. (Ed.), Phonological Structure and Phonetic Form: Papers in Laboratory Phonology III, 191–233. Cambridge University Press.
Shue, Y.-L., Keating, P., Vicenik, C., & Hu, K. (2011). VoiceSauce: A program for voice analysis. Proceedings of ICPhS XVII, 1846–1849.
Stevens, K. N. (1989). On the quantal nature of speech. Journal of Phonetics, 17, 3–45.
Sylak-Glassman, J. (2014). An Emergent Approach to the Guttural Natural Class. Proceedings of the Annual Meetings on Phonology, 1–12.
Zawaydeh, B. (1999). The phonetics and phonology of gutturals in Arabic (Unpublished doctoral dissertation). Indiana University, Bloomington.

Full and partial vowel devoicing in Turkish and the role of prosodic prominence

Janne Lorenzen
University of Cologne

Vowel devoicing (VD), especially in high vowels, is known to occur in several languages, including French, Japanese and Turkish [1, 2, 3]. It generally surfaces as a gradient rather than a categorical phenomenon, modulated, e.g., by segmental context, speech rate, and stress. Notably, even in Turkish, where the status of word-level stress is debated (e.g., [4, 5]) and phonetic cues are evidently weak [6], stressed vowels are less prone to VD than unstressed ones [3]. However, since only canonical finally stressed words were considered in a previous study [3], it is unclear whether VD is inhibited by syllable position or by word-level stress. A less studied form of gradience in VD pertains to the degree of VD within a language (see Fig. 1 for examples of fully devoiced, partially devoiced and fully voiced vowels). The present study thus examines new data to address two previously neglected issues: i) do finally (FS) and non-finally stressed (NFS) vowels inhibit VD to the same extent, and ii) are full and partial VD modulated by the same variables?

Twelve native speakers of Turkish were recorded reading 40 disyllabic and trisyllabic FS and NFS target words placed in carrier sentences. This yielded a corpus of 960 words including 2640 vowels, which was manually annotated on a segment tier. Mean fundamental frequency (F0) was extracted in four subsequent portions of the vowel using a Praat [7] script after adjusting pitch settings to each speaker's individual range. Vowels were classified as fully devoiced if no F0 measure could be extracted in any of the vowel portions and as partially devoiced if no F0 was detected in at least one portion of the vowel.
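The classification rule just described can be expressed compactly; the snippet below is an illustrative sketch (not the author's Praat script), treating missing F0 in a vowel portion as None.

```python
# Illustrative sketch: label a vowel as fully devoiced, partially devoiced,
# or voiced from mean F0 in four consecutive vowel portions (None = no F0).
from typing import Optional, Sequence

def devoicing_category(f0_portions: Sequence[Optional[float]]) -> str:
    """f0_portions: four mean-F0 values; None where no F0 was detected."""
    missing = sum(v is None for v in f0_portions)
    if missing == len(f0_portions):
        return "fully devoiced"       # no F0 in any portion
    if missing >= 1:
        return "partially devoiced"   # no F0 in at least one portion
    return "voiced"

print(devoicing_category([None, None, None, None]))      # fully devoiced
print(devoicing_category([212.4, None, 208.9, 210.1]))   # partially devoiced
print(devoicing_category([215.0, 212.3, 209.8, 207.5]))  # voiced
```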
longer than unstressed ones by 8.06ms on average in the present data, closed syllables are on [2] Tanner, J., Sonderegger, M., & Torreira, F. 2019. Durational evidence that Tokyo Japanese vowel average 22.23ms longer than open syllables, yet syllable structure is not a significant predictor devoicing is not gradient reduction. Frontiers in Psychology 10, 821. of VD. This finding appears incompatible with a duration-driven explanation. VD is instead [3] Jannedy, S. 1995. Gestural phasing as an explanation for vowel devoicing in Turkish. OSU Working Papers in Linguistics 45, 56-84. more likely to be facilitated by lower subglottal pressure in unstressed as opposed to stressed [4] Kabak, B., & Vogel, I. 2001. The phonological word and stress assignment in Turkish. Phonology 18, vowels. 315-360. A second GLMM was fitted with a subset of the data using degree of VD (levels: partial, full) [5] Inkelas, S., & Orgun, C. 2003. Turkish stress: A review. Phonology 20, 139-161. and the same random effects as before (Fig. 3). Partial VD is more likely than full VD to occur [6] Levi, S. 2005. Acoustic correlates of lexical accent in Turkish. Journal of the International Phonetic in typically disfavored environments, e.g., following or preceding voiced segments, in a low Association 35, 73-97. vowel or in a closed syllable. This finding suggests that Turkish VD is a phonetic phenomenon [7] Boersma, P., & Weenink, D. 2020. Praat: doing phonetics by computer. (Version 6.1.09). Computer concomitant of varying degrees of coarticulation. Finally, full VD is less likely than partial VD to program. occur in word-final syllables. This finding is unexpected as the final syllable is more prone to [8] R Core Team (2020). R (Version 1.2.5033) [Computer program]. Vienna, Austria: R Foundation for VD cross-linguistically [10] and should thus exhibit higher degrees of VD. Turkish is however Statistical Computing. known to make use of phrase-final boundary tones [11], which could serve as a weak inhibitor, [9] Bates, D., Mächler, Bolker, B., & Walker, S. 2015. Fitting linear mixed-effects models using lme4. resulting in lower rates of full VD in this position. Journal of Statistical Software 67, 1-48. To conclude, VD in Turkish is truly gradient with full VD relying more on facilitating variables [10] Gordon, M. 1998. The phonetics and phonology of non-modal vowels: A cross-linguistic perspective. such as voiceless adjacent segments, high vowels, and absence of stress than partial VD. Proceedings of the Twenty-Fourth Annual Meeting of the Berkeley Linguistics Society, 93-105. [11] Ipek, C., & Jun, S.‑A. 2013. Towards a model of intonational phonology of Turkish: Neutral intonation. Duration does not appear to fully account for the effect of stress. FS and NFS vowels inhibit VD Proceedings of Meetings on Acoustics (Montreal, Canada), 2-9. to the same extent, however, preliminary data suggest that there may be an additional effect of phrase-level prominence preventing full VD in final syllables.

DAY 2 - Oral session 3
ARTICULATORY PHONETICS AND PHONOLOGY

Idiosyncratic articulatory variability as evidence for individual differences in segmental representation

Sarah Harper
University of Southern California

The articulatory and acoustic properties of any one phonological segment are known to vary both between speakers and between tokens in an individual's speech (e.g., [1], [2]). This variation across speakers has been attributed to individual differences in speaker physiology (e.g., [3]), cognitive processing style (e.g., [4]) and prosody (e.g., [5]). However, relatively little research has systematically examined the extent to which this variation may also reflect idiosyncratic differences in how segments are represented, or in other mechanisms affecting the selection of production targets in speech (cf. [6], [7]). This paper presents experimental evidence suggesting that individual differences in production reflect not only the influence of predictable physiological and prosodic factors, but also interspeaker differences in the representation of segmental units. A computational model based on the data is presented to show how these findings support a theoretical account in which segmental units are represented as stochastic distributions of possible target values that differ across speakers.

Articulatory measurements from word-initial and -final tokens of /t/, /s/, /ʃ/, /l/, and /ɹ/ produced by 40 native speakers of American English in the Wisconsin X-ray Microbeam Corpus ([8]) were used to examine interspeaker differences in the variability observed in the production of phonologically relevant articulatory dimensions (e.g., the location and degree of the tongue-tip constriction) (Fig. 1). For each token, velocity trajectories of pellets placed on the upper and lower lips and on the tongue tip, blade, body and dorsum were used to automatically find the time of movement extremum for the articulatory gesture(s) used to form each segment. Constriction location, degree, and orientation were extracted for all gestures in each consonant at the time of movement extremum. For each speaker, the interquartile range for each measured dimension was calculated by segment within and across distinct phonetic contexts (defined as the unique combination of a prosodic position and a segmental environment) as a measure of variability. Measurements of palate doming, height, and length, vocal tract length, average speech rate, and pause frequency were also calculated for each speaker to evaluate whether individual differences in the articulatory realization of segments could be attributed to individual differences in factors previously shown to condition variation.

The results of this investigation indicate that interspeaker differences in variability are maintained for each dimension in a segment across multiple levels of linguistic structure, but are generally not related across different segments or dimensions. Analyses using Spearman's rank-order correlation demonstrate that the amount of variability exhibited by a speaker across contexts is linked to the extent of their within-context variability for a particular dimension in a particular segment (Fig. 2). However, no consistent evidence was found for relationships in production variability across different segments or different dimensions. Additionally, the results indicate that the majority of the variability observed across speakers is unaccounted for by predictable factors, with linear regression used to demonstrate that individual differences in production variability are not systematically related to individual differences in the measured physiological and prosodic variables. Taken together, these findings provide empirical support for a theoretical account in which speakers differ in their representation of segmental targets, as well as general support for models in which a speaker's representation of a segment consists of stochastic distributions of possible target values along articulatory dimensions (e.g., [9], [10]). Computational modelling is used to show how the observed interspeaker differences can be incorporated in a dynamical model of phonological planning as in [10], and how the resulting model generates the relationship between within- and across-context behavior observed in the corpus data.

Figure 1. Articulatory dimensions measured for each token in the XRMB data.
Figure 2. Comparison of within-context and across-context interquartile range values across speakers by segment for constriction location (top) and constriction degree (bottom).
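The within- vs. across-context variability comparison described above can be sketched in a few lines of Python; this is an illustrative reconstruction (not the author's analysis code), and the file name and column names ("speaker", "segment", "context", "constriction_location") are assumptions.

```python
# Illustrative sketch: per-speaker IQR of an articulatory dimension within and
# across phonetic contexts, plus a Spearman correlation between the two.
import pandas as pd
from scipy.stats import spearmanr, iqr

df = pd.read_csv("xrmb_tokens.csv")   # one row per token (assumed columns):
                                      # speaker, segment, context, constriction_location

# Within-context variability: IQR per speaker/segment/context, then averaged.
within = (df.groupby(["speaker", "segment", "context"])["constriction_location"]
            .apply(iqr)
            .groupby(["speaker", "segment"]).mean()
            .rename("within_iqr"))

# Across-context variability: IQR per speaker/segment over all contexts pooled.
across = (df.groupby(["speaker", "segment"])["constriction_location"]
            .apply(iqr)
            .rename("across_iqr"))

merged = pd.concat([within, across], axis=1).reset_index()
for seg, sub in merged.groupby("segment"):
    rho, p = spearmanr(sub["within_iqr"], sub["across_iqr"])
    print(f"{seg}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```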

References

[1] Johnson, K., Ladefoged, P., & Lindau, M. 1993. Individual differences in vowel production. JASA 94(2), 701-714.
[2] Whalen, D. H., Chen, W. R., Tiede, M. K., & Nam, H. 2018. Variability of articulator positions and formants across nine English vowels. JPhon 68, 1-14.
[3] Rudy, K., & Yunusova, Y. 2013. The effect of anatomic factors on tongue position variability during consonants. JSLHR 56(1), 137-149.
[4] Ou, J., & Law, S. P. 2017. Cognitive basis of individual differences in speech perception, production and representations: The role of domain-general attentional switching. Attention, Perception, and Psychophysics 79(3), 945-963.
[5] Byrd, D., Krivokapić, J., & Lee, S. 2006. How far, how long: On the temporal scope of phrase boundary effects. JASA 120, 1589-1599.
[6] Baker, A., Archangeli, D., & Mielke, J. 2011. Variability in American English s-retraction suggests a solution to the actuation problem. Language Variation and Change 23, 347-374.
[7] Smith, B. J., Mielke, J., Magloughlin, L., & Wilbanks, E. 2019. Sound change and coarticulatory variability involving English /ɹ/. Glossa: a journal of general linguistics 4(1), 63.
[8] Westbury, J. R. 1994. X-ray microbeam speech production database user's handbook. Madison, WI.
[9] Pierrehumbert, J. B. 2000. Exemplar dynamics: Word frequency, lenition and contrast. In J. Bybee & P. Hopper (Eds.), Frequency effects and the emergence of linguistic structure. Amsterdam: John Benjamins, 137-157.
[10] Roon, K. D., & Gafos, A. I. 2016. Perceiving while producing: Modeling the dynamics of phonological planning. Journal of Memory and Language 89, 222-243.

The role of aerodynamic factors in feature interaction and phonological patterns

Maria-Josep Solé
Universitat Autònoma de Barcelona, Spain

The aim of this paper is to show how aerodynamic factors allow us to better understand phonological patterns that are difficult to explain by articulatory or acoustic factors alone. Furthermore, aerodynamic factors may in part account for the way in which features interact with each other, that is, why some features tend to co-occur while others seldom combine (i.e. dependency relations [1]). We examine case studies which demonstrate the interdependence of different parts and actions of the speech mechanism – what happens at one site of the vocal tract (e.g., enlarging the oral cavity) affects the aerodynamic conditions at a distant site (sustaining voicing at the glottis) – and their reflexes in phonological patterns. The velum, the vocal folds, and the oral articulators (tongue and lips) act as valves that channel airflow and acoustic energy into the nasal and oral cavities, and the atmosphere. While these different mechanisms (nasal, glottal and oral) are anatomically independent, the outcome of their localized activity, e.g., voicing, frication, stop bursts, degree of aspiration, tongue-tip trilling, is dependent on activities or conditions at other, distant sites in the vocal tract. Prior work on dependency relations due to aerodynamic interaction between independent gestures includes the dependency between voicelessness and frication (i.e., increased transglottal flow generates supraglottal frication) ([2, 3] and references therein), and ways to circumvent the 'aerodynamic voicing constraint', resulting in the emergence of non-etymological nasals, spirantization, d-flapping/lateralization or retroflexion, and implosivization [4, 5, 6]. We further examine patterns involving loss of audible frication, stop bursts or tongue-tip trilling with variation in the aerodynamic conditions due to (i) the action of distant gestures (glottal action, nasal aperture), or (ii) coarticulation with adjacent segments or syllable position (affecting the size of the constriction).
Differences in glottal action have an impact on the presence of friction noise at the oral constriction. Vibration of the vocal folds for voiced obstruents reduces the rate of flow through the glottis and the oral pressure build-up, compared to voiceless obstruents, and hence results in a lower intensity of noise for fricatives and release bursts. Some phonological consequences are that voiced, but not voiceless, fricatives tend to be weakened into glides and approximants (example 1) or rhotics (example 2). Voiced fricatives are also lost earlier than voiceless fricatives (example 3). Along similar lines, voiced trills tend to weaken (i.e., they are realized as non-trilled variants, and alternate with taps and approximants), but glottal aperture – i.e., trill devoicing utterance-finally (found in American Spanish, Brazilian Portuguese, Farsi) – facilitates tongue-tip trilling with lowered subglottal pressure [7]. These patterns illustrate that what happens at the glottis has an impact on supralaryngeal frication and tongue-tip trilling.

Changes in the impedance of the nasal valve have an impact on supralaryngeal frication, stop bursts and tongue-tip trilling. Opening the nasal valve will vent the airflow through the nasal cavity, thus reducing the pressure difference across the oral constriction needed for intense frication, a noisy release burst or tongue-tip trilling. Thus nasalized fricatives are rare and nasal trills do not occur. Coarticulatory nasalization also impairs strong frication or high-intensity stop bursts. Fricatives preceding nasals tend to be weakened or lost (example 4) [8, 9], and voiceless stops, which rely on a strong release burst, tend to be replaced by a glottal stop, become voiced, or assimilate before a nasal (example 5).

In sum, the cases examined show that local articulatory gestures have consequences for the air pressure and flow at other sites of the vocal tract, thus accounting for the interaction of features involving anatomically distant and independent articulators. Understanding the trade-offs between articulator movements and aerodynamic forces, and their acoustic result, is essential for accounting for phonological patterns, for speech pathology (compensatory gestures), and for articulatory modelling and synthesis.

Examples cited

References

[1] McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45, 84-108.
[2] Ohala, J. J. 1997. Aerodynamics of Phonology. Proceedings of the 4th Seoul International Conference on Linguistics. Seoul: Linguistic Society of Korea, 92-97.
[3] Ohala, J. J. & Solé, M. J. 2010. Turbulence and phonology. In Susanne Fuchs, Martine Toda & Marzena Zygis (eds.), Turbulent sounds: An interdisciplinary guide. Berlin: Mouton de Gruyter, 37-97.
[4] Ohala, J. J. 2011. Accommodation to the aerodynamic voicing constraint and its phonological relevance. Proceedings of the 17th ICPhS, Hong Kong, 64-67.
[5] Solé, M. J. 2012. Natural and unnatural patterns of sound change? In M. J. Solé & D. Recasens (eds.), The Initiation of Sound Change. Amsterdam: John Benjamins, 123-146.
[6] Sprouse, R., Solé, M. J., & Ohala, J. J. 2008. Oral cavity enlargement in retroflex stops. Proceedings of the 8th International Seminar on Speech Production. Strasbourg, France, 425-428.
[7] Solé, M. J. 2002. Aerodynamic characteristics of trills and phonological patterning. Journal of Phonetics 30(4), 655-688.
[8] Solé, M. J. 2009. Acoustic and aerodynamic factors in the interaction of features: The case of nasality and voicing. In M. Vigário, S. Frota & M. João Freitas (eds.), Phonetics and Phonology: Interactions and Interrelations. Amsterdam: John Benjamins, 205-234.
[9] Solé, M. J. 2010. Effects of syllable position on sound change: An aerodynamic study of final fricative weakening. Journal of Phonetics 38(2), 289-305.

Prominence relations and supra-articulatory modifications in habitual and loud speech

Lena Pagel, Simon Roessig and Doris Mücke
IfL Phonetics – University of Cologne

Vocal effort has been reported to vary as a function of prosodic structure. Syllables associated with prosodic prominences are produced with higher vocal effort [1]. The modulation of prosodic prominence in an utterance is in turn tightly connected to the marking of focus structure. In this case, vocal effort is manipulated on the local level of syllable- or word-sized units. In addition, variations of vocal effort are related to speaking styles (e.g. loud speech) and can therefore span longer stretches of speech. Here, modifications of vocal effort apply at a global level. Both local and global modifications of vocal effort lend prominence to units of speech. Hence, we use the terms local and global prominence for the two domains. Importantly, modulations of local and global prominence are correlated with changes in supra-laryngeal articulation [2; 3]. In particular, for local modifications, two strategies have been reported. A greater opening of the vocal tract is related to sonority expansion (SE), increasing the acoustic energy radiating from the mouth and resulting in the enhancement of syntagmatic contrasts between syllables with different degrees of prominence [4]. Furthermore, a more extreme articulation of vocalic targets is described as the strategy of localised hyperarticulation (HA), leading to an enhancement of paradigmatic contrasts between place features [5]. While in some cases these strategies can go hand in hand, in others, such as high vowels, they may conflict [6].

The present study investigates two kinds of interactions: that of local and global prominence, and that of the two prominence-lending strategies SE and HA. We explore variations of tongue and lip movements associated with focus structure (local prominence) and speaking style (global prominence). This abstract presents preliminary results from ongoing work on a corpus of 20 German speakers using 3D Electromagnetic Articulography (AG510). Subjects are engaged in an interactive task and produce utterances containing the target syllables /ba, ma, bi, mi/ in a habitual and a loud speaking style. Here, we restrict our analysis to the target vowel /i/. The game-like experiment employs question-answer pairs to embed the target word either in broad or contrastive focus. We examine extrema of the tongue body position on a vertical (high–low) and a horizontal (front–back) dimension, and of the Euclidean lip distance during /i/ (cf. Fig. 1). These measures are compared between focus types (broad vs. contrastive, i.e. local prominence) and speaking styles (habitual vs. loud, i.e. global prominence).

The preliminary results are presented in Fig. 2, which displays averaged trajectories of lip distance and tongue body for one speaker. The data are time-normalised and the window is chosen such that it includes the articulatory gesture towards the respective extremum in /i/. We can observe a general trend to open the lips wider in loud speech compared to habitual speech as well as, in both speaking styles, from broad to contrastive focus. This suggests that SE is used as a prominence-enhancing strategy on both the local and the global level. Furthermore, the maximum vertical tongue body position becomes higher from habitual to loud speech and from broad to contrastive focus. This modification can be interpreted in terms of HA, enhancing the place feature [+high] in order to lend local and global prominence. The horizontal tongue body position is not modified in a consistent way and will be subject to further investigation.

Concerning the interaction of SE and HA, the preliminary data suggest that both strategies are applied to enhance prominence on both levels. They are carried out by different articulators and can thus complement each other. Turning to the interaction of local and global prominence, the results show that the marking of local prominence is preserved under global prominence – in fact, the differences between focus types appear even stronger in the loud style. Crucially, loud broad focus is produced with more extreme articulatory positions than habitual contrastive focus. The data thus provide evidence that invariance is found in the relation between distributions of phonetic parameters rather than in the direct mapping of meaning to signal [7; 8].
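The two kinematic measures described above (Euclidean lip distance and the extremum of the vertical tongue-body position) can be computed directly from EMA sensor trajectories; the snippet below is an illustrative sketch under assumed array shapes and toy values, not the authors' processing pipeline.

```python
# Illustrative sketch: Euclidean lip distance from upper/lower-lip EMA sensors
# and the extremum of a trajectory within the target-vowel window.
import numpy as np

def euclidean_lip_distance(upper_lip, lower_lip):
    """upper_lip, lower_lip: (n_samples, 2 or 3) sensor positions in mm."""
    return np.linalg.norm(upper_lip - lower_lip, axis=1)

def extremum_in_window(signal, start_idx, end_idx, maximum=True):
    """Index and value of the max (or min) of `signal` within the vowel window."""
    window = signal[start_idx:end_idx]
    i = np.argmax(window) if maximum else np.argmin(window)
    return start_idx + i, window[i]

# Toy trajectories (values are made up, four samples only for illustration):
ul = np.array([[0.0, 12.1], [0.1, 12.4], [0.2, 12.9], [0.1, 13.2]])
ll = np.array([[0.0, -9.8], [0.1, -10.6], [0.2, -11.9], [0.1, -12.5]])
lip_dist = euclidean_lip_distance(ul, ll)
idx, peak = extremum_in_window(lip_dist, 0, len(lip_dist))
print(lip_dist, idx, peak)
```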

Figure 1. Overview of analysed articulatory dimensions in sagittal section. A: maximum Euclidean lip distance. B: maximum tongue body position (vertical). C: maximum tongue body position (horizontal).
Figure 3. Averaged movement trajectories of three articulatory dimensions (A, B, C) in two focus conditions (local prominence) and two speaking styles (global prominence). Presented is at least the articulatory gesture approaching the extremum during /i/.

References

[1] Ladd, B. (2008). Intonational phonology (2nd ed.). Cambridge University Press.
[2] Cho, T. (2006). Manifestation of prosodic structure in articulatory variation: Evidence from lip kinematics in English. Laboratory Phonology, 8, 519-548.
[3] Geumann, A. (2001). Vocal intensity: acoustic and articulatory correlates. In B. Maassen et al. (Eds.), Speech Motor Control in Normal and Disordered Speech. Proceedings of the 4th International Speech Motor Conference, 13-16 June, Nijmegen, Netherlands (pp. 70-73).
[4] Beckman, M. E., Edwards, J., & Fletcher, J. (1992). Prosodic structure and tempo in a sonority model of articulatory dynamics. In G. J. Docherty & D. R. Ladd (Eds.), Papers in Laboratory Phonology: Vol. 2. Gesture, Segment, Prosody (pp. 68-89). Cambridge University Press.
[5] De Jong, K. J. (1995). The supraglottal articulation of prominence in English: Linguistic stress as localized hyperarticulation. The Journal of the Acoustical Society of America, 97(1), 491-504.
[6] Harrington, J., Fletcher, J., & Beckman, M. E. (2000). Manner and place conflicts in the articulation of accent in Australian English. In M. B. Broe & J. B. Pierrehumbert (Eds.), Papers in Laboratory Phonology: Vol. 5. Acquisition and the Lexicon (pp. 40-51). Cambridge University Press.
[7] Gafos, A., Roeser, J., Sotiropoulou, S., Hoole, P., & Zeroual, C. (2020). Structure in mind, structure in vocal tract. Natural Language and Linguistic Theory, 38, 43-75.
[8] Roessig, S. & Mücke, D. (2019). Modeling Dimensions of Prosodic Prominence. Frontiers in Communication, 4, 44.

DAY 2 - Oral session 4
MULTIMODAL PHONETICS

Using gestures in L2 prosody acquisition training: The role of individual differences

Lisette van der Heijden, Lieke van Maastricht, and Marieke Hoetjes
Centre for Language Studies, Radboud University, The Netherlands

Previous studies have demonstrated the tight relationship between speech and co-speech gestures, as well as the beneficial role that gesture can play in L1 development [1]. However, the use of gestures in L2 acquisition is under-investigated. Findings on gestural training in L2 phonology learning specifically present contrasting results: some studies find an effect of seeing or performing gestures during phonology training on L2 performance [2][3], yet others do not (e.g., [4]), or only in certain contexts, dependent on task and gesture complexity [5].

Moreover, prior work varies in the type of gesture that is used (e.g., beat, iconic, or metaphoric), the training modality (i.e., learners see or also produce gestures), the L2 (supra)segment that is being taught, and the dependent measures used to quantify acquisition. Hence, comparing previous work and determining why effects occur is challenging. Also, several studies suggest that individual differences might be relevant in this context (e.g., [6]), but to date, no study has disentangled the effect of different types of gestures and training while also considering relevant individual factors, such as working memory (WM) capacity and musical aptitude.

In our study, sixty Dutch natives without previous experience with Spanish received training on the lexical stress rules of Spanish in between a pretest and posttests. Dutch learners of Spanish generally struggle with Spanish lexical stress, especially in Dutch-Spanish cognates that are highly similar except for the position of the stressed syllable (e.g., 'piramides', 'horizon', and 'ventilator' in Dutch, but 'pirámides', 'horizonte', and 'ventilador' in Spanish). The training consisted of written instructions about the three lexical stress rules in Spanish and, per rule, a video of a native speaker presenting an example and two practice items with feedback to the participant. Participants were randomly assigned to one of five gesture conditions:
1) AV: The native speaker in the video did not produce gestures (control);
2) AVB-Perc: The participant saw the native speaker produce a beat gesture;
3) AVB-Prod: The participant saw and reproduced a beat gesture;
4) AVM-Perc: The participant saw the native speaker produce a metaphoric gesture;
5) AVM-Prod: The participant saw and reproduced a metaphoric gesture.
The metaphoric gesture was temporally aligned with the stressed syllable and produced by moving the hands apart horizontally, to stress that duration is the primary cue of lexical stress in Spanish [7]. The stroke of the beat gesture was temporally aligned with the stressed syllable and produced by moving both hands together vertically. In a pretest and an immediate and a delayed posttest, participants read aloud identical yet randomly ordered, short, easy-to-parse Spanish phrases containing words that differ in lexical stress from their Dutch cognates (Figure 1).
In between the two posttests, all participants performed a musical aptitude and a WM task. In the acoustic analysis, lexical stress was coded as on-target or not.

We found that, irrespective of gesture condition, participants significantly improved in their L2 lexical stress productions from pretest to first and second posttest (Figure 2). While differences between gesture conditions were non-significant, there were several significant three-way interactions between WM capacity or musical aptitude on the one hand and testing moment and gesture condition on the other hand. Hence, the effectiveness of gesture type and training modality in teaching L2 lexical stress was significantly influenced by WM capacity and musical aptitude. Therefore, the present findings underline the importance of considering individual factors and methodological choices in determining the effect of gestures in L2 acquisition.
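The accuracy measure summarised in Figure 2 (proportion of on-target lexical stress productions per gesture condition and testing moment) could be computed along the following lines; this is an illustrative sketch, not the authors' analysis, and the file and column names are assumptions.

```python
# Illustrative sketch: on-target proportion per gesture condition and testing moment.
import pandas as pd

df = pd.read_csv("stress_codings.csv")   # columns (assumed): participant,
                                         # condition, test_moment, on_target (0/1)

accuracy = (df.groupby(["condition", "test_moment"])["on_target"]
              .mean()
              .unstack("test_moment"))
print(accuracy.round(2))                 # rows: AV, AVB-Perc, ...; columns: pre/post1/post2
```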

Figure 1. Screenshots of the native speaker of Spanish producing the metaphoric gesture (left) and the beat gesture (middle), and an example of a stimulus item (right).
Figure 2. Ratio of correct L2 lexical stress productions per gesture condition, separated by testing moment. Error bars represent 95% confidence intervals. ***: p < .001

References

[1] Iverson, J. M., & Goldin-Meadow, S. (2005). Gesture paves the way for language development. Psychological Science, 16(5), 367-371. doi: 10.1111/j.0956-7976.2005.01542.x
[2] Baills, F., Suárez-González, N., González-Fuente, S., & Prieto, P. (2019). Observing and producing pitch gestures facilitates the learning of Mandarin Chinese tones and words. Studies in Second Language Acquisition, 41(1), 33-58. doi: 10.1017/S0272263118000074
[3] Li, P., Baills, F., & Prieto, P. (2020). Observing and producing durational hand gestures facilitates the pronunciation of novel vowel-length contrasts. Studies in Second Language Acquisition, 1-25. doi: 10.1017/S0272263120000054
[4] Zheng, A., Hirata, Y., & Kelly, S. D. (2018). Exploring the effects of imitating hand gestures and head nods on L1 and L2 Mandarin tone production. Journal of Speech, Language, and Hearing Research, 61(9), 2179-2195. doi: 10.1044/2018_JSLHR-S-17-0481
[5] Hoetjes, M., & Van Maastricht, L. (2020). Using Gesture to Facilitate L2 Phoneme Acquisition: The Importance of Gesture and Phoneme Complexity. Frontiers in Psychology, 11, 3178. doi: 10.3389/fpsyg.2020.575032
[6] Özer, D., & Göksun, T. (2020). Gesture use and processing: A review on individual differences in cognitive resources. Frontiers in Psychology, 11, 2859. doi: 10.3389/fpsyg.2020.573555
[7] Ortega-Llebaria, M., & Prieto, P. (2011). Acoustic correlates of stress in Central Catalan and Castilian Spanish. Language and Speech, 54(1), 73-97. doi: 10.1177/0023830910388014

Synchronisation of Multimodal Cues with Intonational Categories in Turkish

Olcay Turk and Sasha Calhoun
Victoria University of Wellington

An ongoing challenge for intonational phonology is the identification of intonational categories [1, 2]. This paper argues for the use of multimodal cues in the identification of intonational categories, with evidence from (manual) gesture apex synchronisation in Turkish. Gestures and speech are implemented together as one unified communicative system, and are temporally coordinated (i.e., synchronised). Many studies have found consistent synchronisation of atomic prominence markers, i.e., pitch accents and gesture apexes [3, 4, 5, 6, 7]. This is prima facie evidence that gestures and prosody are implemented together, and therefore the former can inform the identification of the latter through apex-tone synchronisation. However, these studies have considered only a limited number of languages with similar intonational structures. In addition, they have not considered prosodic and gestural contexts (e.g., other surrounding tonal events and gesture types) when explaining gesture-prosody synchronisation.

Turkish presents a challenging case for apex-pitch accent synchronisation, as multiple tones can occur on a single prosodic word: L tones marking prosodic word (PW) onsets, H- (pre-nuclear) or L- (nuclear or post-nuclear) marking intermediate phrase (ip) offsets, L% or H% marking intonational phrase (IP) offsets, and pitch accents (L*, H* or !H*) associated with the stressed syllable of PWs [8, 9] (Figure 1). Without any prosodic contextual analysis and a clear definition of synchronisation, both of which have been neglected in earlier studies, any of these tonal events may show synchronisation with apexes. Furthermore, if apexes are indeed systematically synchronised with only certain tonal events, then this can be used for their identification. In particular, there is disagreement about whether all words bear a pitch accent in Turkish. Studies within the AM framework agree that words with non-final stress have pitch accents. However, some authors have argued that the final pitch rise in pre-nuclear ip-final words with final stress is just the ip-final H- phrase tone [8], while others argue that it also functions as an independent H* coinciding with the H- [9]. The present study argues that looking at coordination with gesture can shed light on the accenting of these words – if the apexes are synchronised with phrase-final rises, this suggests that the rises double-function as pitch accents.
Observing and producing durational hand gestures facilitates were filmed in a narrative task and three hours of Turkish natural speech data were acquired the pronunciation of novel vowel-length contrasts. Studies in Second Language Acquisition, 1-25. and ToBI annotated [8, 9]. Manual gestures were annotated for type (e.g., iconic, deictic, doi: 10.1017/S0272263120000054 metaphoric or beat) [10], and apexes (N=820) were marked at the dynamically prominent [4] Zheng, A., Hirata, Y., & Kelly, S. D. (2018). Exploring the effects of imitating hand gestures and points in time of the strokes, i.e., maximum extensions or changes in the direction of head nods on L1 and L2 Mandarin tone production. Journal of Speech, Language, and Hearing movement [3, 11]. The relationship between gesture and prosody was tested via mixed- Research, 61(9), 2179-2195. doi: 10.1044/2018_JSLHR-S-17-0481 effects regression models and synchronisation was defined via equivalence testing. [5] Hoetjes, M., & Van Maastricht, L. (2020). Using Gesture to Facilitate L2 Phoneme Acquisition: The Importance of Gesture and Phoneme Complexity. Frontiers in Psychology, 11, 3178. doi: The findings showed that gesturally prominent apexes were chiefly synchronised 10.3389/fpsyg.2020.575032 with prosodically prominent pitch accents (Figure 1a), indicating that prominence was the [6] Özer, D., & Göksun, T. (2020). Gesture use and processing: A review on individual differences in primary constraint for synchronisation, as in earlier studies. However, analysis of cases cognitive resources. Frontiers in Psychology, 11, 2859. doi: 10.3389/fpsyg.20 20.573555 with no prosodic prominence provides evidence for a hierarchical preference constraint on [7] Ortega-Llebaria, M., & Prieto, P. (2011). Acoustic correlates of stress in Central Catalan and synchronisation, since apexes were preferentially synchronised with the PW-initial L tones Castilian Spanish. Language and Speech, 54(1), 73-97. doi: 10.1177/0023830910388014 and not with the markers of prosodic constituents higher in the hierarchy such as phrase
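The synchronisation test can be illustrated with a minimal R sketch; the data frame, column names and the equivalence bounds below are illustrative assumptions, not the values used in the study:

```r
# Sketch only: apexes is assumed to hold one row per apex-tone pair,
# with lag_ms = signed distance in ms between apex and candidate tone.
library(lme4)

m <- lmer(lag_ms ~ 1 + (1 | speaker),
          data = subset(apexes, tone == "pitch_accent"))

# Equivalence logic: synchronisation is claimed only if the 90% CI of the
# mean lag lies entirely inside pre-set bounds (here +/- 100 ms, assumed).
ci <- confint(m, parm = "(Intercept)", level = 0.90, method = "Wald")
synchronised <- ci[1, 1] > -100 && ci[1, 2] < 100
```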

The findings showed that gesturally prominent apexes were chiefly synchronised with prosodically prominent pitch accents (Figure 1a), indicating that prominence was the primary constraint for synchronisation, as in earlier studies. However, analysis of cases with no prosodic prominence provides evidence for a hierarchical preference constraint on synchronisation, since apexes were preferentially synchronised with the PW-initial L tones and not with the markers of prosodic constituents higher in the hierarchy such as phrase-final H- tones (Figure 1b). This finding provides multimodal evidence in support of the claim that words with final stress are not always accented, since the apexes were coordinated with PW-initial L tones instead of the phrase-final H- in accentless pre-nuclear ips, indicating an absence of prosodic prominence in these phrase-final rises. This synchronisation pattern also showed that apexes can mark edges of prosodic phrases (PW onsets in Turkish, see [12] for similar results in French), indicating that prosodic phrasing is a constraint for synchronisation in addition to prominence. The findings were largely consistent across different prosodic and gestural contexts, although synchronisation was tighter with tones in the nuclear area, also confirming the validity of the division between nuclear and pre-nuclear areas in Turkish prosodic analysis [13].

Overall, these findings confirm earlier results showing that the prominences in prosody and gesture are synchronised, but also expand our knowledge of the typology of micro-level synchronisation, in cases like Turkish where edges of prosodic phrases can also be marked by apexes. In addition, the study shows how multimodal evidence from gesture is relevant to the identification and classification of phonological categories, given that these are planned and implemented together.

(a) A pre-nuclear intermediate phrase with pitch accents. The apex coordination is with the pitch accent in the final prosodic word.

(b) Two pre-nuclear intermediate phrases. The gesture apex is coordinated with the prosodic word initial L tone as the phrase is accentless.

Figure 1: The apexes pairing with the tones in accented and accentless pre-nuclear ips

References

[1] G. Lohfink, A. Katsika, and A. Arvaniti. 2019. Variation, variability and category overlap in intonation. Proceedings of the 3rd Phonetics and Phonology in Europe Conference, 87–88.
[2] S. Roessig and D. Mücke. 2019. Prosodic variation in a multi-dimensional dynamic system. Proceedings of the 3rd Phonetics and Phonology in Europe Conference, 91–92.
[3] D. P. Loehr. 2004. Gesture and intonation. Ph.D. dissertation, Georgetown University, Washington, DC.
[4] Y. Yasinnik, M. Renwick, and S. Shattuck-Hufnagel. 2004. The timing of speech-accompanying gestures with respect to prosody. Proceedings of From Sound to Sense, vol. 50. Cambridge: MIT, 97–102.
[5] T. Leonard and F. Cummins. 2011. The temporal relation between beat gestures and speech. Language and Cognitive Processes, vol. 26, no. 10, 1457–1471.
[6] N. Esteve-Gibert and P. Prieto. 2013. Prosodic structure shapes the temporal realization of intonation and manual gesture movements. Journal of Speech, Language, and Hearing Research, vol. 56, 850–864.
[7] S. Shattuck-Hufnagel and A. Ren. 2018. The prosodic characteristics of non-referential co-speech gestures in a sample of academic-lecture-style speech. Frontiers in Psychology, vol. 9, 1514.
[8] B. Kamali. 2011. Topics at the PF interface of Turkish. Ph.D. dissertation, Harvard University.
[9] C. Ipek and S.-A. Jun. 2013. Towards a model of intonational phonology of Turkish: Neutral intonation. Proceedings of Meetings on Acoustics, vol. 19, no. 1. Montreal, Canada: Acoustical Society of America, 060230.
[10] D. McNeill. 1992. Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.
[11] S. Shattuck-Hufnagel, Y. Yasinnik, N. Veilleux, and M. Renwick. 2007. A method for studying the time alignment of gestures and prosody in American English: Hits and pitch accents in academic lecture-style speech. In A. Esposito, M. Bratanic, E. Keller, and M. Marinaro (Eds.), NATO Security Through Science Series E: Human and Societal Dynamics. Washington, DC: IOS Press, vol. 18.
[12] P. L. Rohrer, P. Prieto, and E. Delais-Roussarie. 2019. Beat gestures and prosodic domain marking in French. In S. Calhoun, P. Escudero, M. Tabain, and P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences. Melbourne, Australia, 1500–1504.
[13] C. Fery. 2017. Intonation and prosodic structure. Cambridge University Press.

Towards understanding the multimodality of prominence production and perception

Gilbert Ambrazaitis1, Johan Frid2, and David House3

1Linnaeus University, Växjö, 2Lund University Humanities Lab, 3KTH Stockholm, Sweden

Previous research on the visual dimensions of speech suggests that prominent syllables or words are frequently co-produced with beat-like movements of various parts of the body, and the present work focuses on head movements (e.g., [1-2]). Several studies have shown for a number of settings that visually perceived head movements can contribute to the overall prominence impression (e.g., [3-4]), and our own ongoing research (in preparation) has confirmed this effect for head movements produced by Swedish television news presenters: We let two groups of naïve Swedish participants (44 in an audio-visual and 41 in an audio-only condition) perform prominence ratings in a selection of 16 news clips (218 words in total) using a three-level scale. The clips had previously been annotated for head movements. Words co-produced with a head movement were overall rated more prominent than words without, and this difference was significantly larger in the audio-visual than in the audio-only condition. In another study, based on an extended set of Swedish news data, we have shown that pitch accents on words co-produced with head movements (or head and eyebrow movements) tend to be realized with larger accentual fo movements than pitch accents on words without head movements [5]. Together, the two reported results suggest that, in order to make a word more prominent (as perceived audio-visually), pitch accents will be both strengthened acoustically and co-produced with a head movement. However, we still do not understand the details of the relationships between acoustic (such as fo and durations) and kinematic parameters (such as head movements) and how these in turn relate to perceived prominence. A goal for our subsequent research is to scrutinize these relations by means of extending our perceptual ratings to the larger dataset used in [5].

Meanwhile, as a preliminary account to be presented at the conference, along with the overall rating results presented above (in preparation), we study the relations between fo, head movements, and perceived prominence in two sets of carefully selected words (11 words in total) from the dataset rated so far (218 words). The five or six words in each set were selected in order to introduce a measure of experimental control over phonological-prosodic properties as well as fo realizations: All five words in set A are di-syllabic words with initial stress and the same word-accent category (Accent 2) from the same female news presenter; all six words in set B are compounds (with Accent 2 and two lexical stresses), uttered by the same male news presenter. The words in both sets vary critically in terms of fo realizations and head movements.
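The per-word comparisons reported below rest on rater-normalized scores; a minimal R sketch of that normalization and one such comparison (column names are illustrative assumptions, not the actual analysis script):

```r
# Sketch only: long-format ratings with columns rater, word, condition
# (audio vs. audio_video) and a 1-3 prominence score (all names assumed).
library(dplyr)

ratings <- ratings %>%
  group_by(rater) %>%
  mutate(score_z = as.numeric(scale(score))) %>%  # rater-normalized score
  ungroup()

# Per-word comparison of the two presentation conditions, cf. the F(1,83) tests
saknas <- filter(ratings, word == "saknas")
anova(lm(score_z ~ condition, data = saknas))
```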

Results from the audio-visual condition suggest that the presence of head movements might play a crucial role for the perception of prominence, as, in set A, words with head movements were rated clearly higher than words without, given similar fo-profiles (see Fig. 1 and Tab. 1); one word without a head movement (inte ‘not’) likewise received higher ratings, but that word was also produced with a larger fo range. If the high ratings of the words saknas and alla in Fig. 1 are related to the presence of the visually perceived head movement, then we should expect considerably lower ratings for these two words in the audio-only condition. However, this was not the result obtained. Fig. 2 compares ratings for the two conditions for the word saknas as an example (F(1,83)=1.317, p=0.254). Turning to set B, however, we find a difference between the conditions for at least one of the four words with head movements included in this set (Fig. 3, F(1,83)=4.22, p=0.043*). These results show that, even if there is an overall effect of visually perceived head movements on the prominence rating (in preparation), this effect might be virtually absent in individual words, and other factors than fo and head movements can have a rather strong influence on the prominence rating. This latter conclusion is well in line with our general knowledge on prominence, and not least on top-down effects (e.g., [6]). More detailed results will be presented and discussed at the conference.

Figure 1. Audio-visual prominence ratings (rater normalized) for five selected words by a female speaker (set A). See Tab. 1 for word semantics and fo characteristics.
Table 1. Characteristics of 11 selected words (set A: upper part; set B: lower part – see text); fo fall and rise in semitones (st) refer to the fall and the rise of a two-peaked pitch accent; ‘Head’ indicates if a head movement is co-produced.
Figure 2. Audio-visual vs. audio-only prominence ratings (rater normalized) for the word saknas ‘is missing’ (set A).
Figure 3. Audio-visual vs. audio-only prominence ratings (rater normalized) for the word skuldsatta ‘indebted’ (set B).

References

[1] Swerts, M., & Krahmer, E. 2010. Visual prosody of newsreaders: effects of information structure, emotional content and intended audience on facial expressions. Journal of Phonetics 38, 197-206.
[2] Ambrazaitis, G., & House, D. 2017. Multimodal prominences: Exploring the patterning and usage of focal pitch accents, head beats and eyebrow beats in Swedish television news readings. Speech Communication 95, 100-113.
[3] Mixdorff, H., Hönemann, A., & Fagel, S. 2013. Integration of acoustic and visual cues in prominence perception. In Proceedings of AVSP 2013. Annecy, France.
[4] Prieto, P., Puglesi, C., Borràs-Comes, J., Arroyo, E., & Blat, J. 2015. Exploring the contribution of prosody and gesture to the perception of focus using an animated agent. Journal of Phonetics 49, 41-54.
[5] Ambrazaitis, G., & House, D., submitted. Probing effects of lexical prosody on speech-gesture integration in prominence production by Swedish news presenters.
[6] Wagner, P. 2005. Great expectations – Introspective vs. perceptual prominence ratings and their acoustic correlates. In Proceedings of Interspeech 2005 (pp. 2381-2384). Lisbon, Portugal.

DAY 3 - Oral session 1
FIRST LANGUAGE ACQUISITION

How phonological knowledge shapes early word learning in (a)typical development: An eye-tracking study

Sónia Frota, Jovana Pejovic, Cátia Severino and Marina Vigário
University of Lisbon

Word learning requires associating sound sequences to meaning. It is well-known that infants demonstrate a perceptual sensitivity to speech sounds that attunes to the native language during the first year of life. However, knowledge about the sounds interacts with knowledge about the words in ways that are not yet fully understood. Early word learning seems to be sensitive to native phonotactic frequency patterns, with high frequency sequences being learned earlier, and trained phonotactic regularities shaping word learning ([1, 2]). Phonological sequences that do not occur in the language, even if they include non-native sounds, also seem to be learned if enough referential support is provided ([3, 4]), unlike sequences that combine native sounds in phonologically illegal ways ([5, 6]). Importantly, methodological differences across studies undermine previous findings on how phonology guides word learning. In addition, similar sounding word pairs seem to be especially challenging for the early learner ([7] for a review). Previous studies addressing legal or illegal phonotactics did not use similar sounding pairs, nor targeted atypical developing infants. The present study examines whether and how relative phonotactic frequency (high/low probability pseudowords-HP/LP) and phonological grammar (illegal sound sequences-IS) impact early word learning in typically developing (TD) and at-risk (AR) toddlers.

Eye movements of European Portuguese-learning toddlers were recorded while watching videos in a word learning task. All participants were raised in monolingual homes and had no hearing or visual problems. Those born full term and with no familial risk for language impairments constitute the TD group (N=31, mean age 20.4 mos); toddlers born premature (<37 weeks) or with familial risk for autism or language disorder form the AR group (N=29, mean age 20.7 mos). Speech stimuli were C1VC2V disyllabic sequences embedded in natural sentences uttered in child-directed speech (see (1-2) below). HP sequences included high frequency C1 and C2, whereas LP sequences included low frequency C1 and C2 ([8]); IS sequences had an illegal prosodic word onset. Vowels were the same across conditions. Object stimuli were the same as in [1]. Four common words that name familiar objects known by toddlers according to the CDI were the control items. The experiment included 8 blocks, each composed of a learning phase (where a label-object pair was presented 6 times, and another object was presented unlabelled), a test phase (where the familiar label and an unfamiliar label were presented and infants were required to look at one of the two objects), and the control (Figure 1). Target side, trial order and target object were counterbalanced. Looking time was defined as the net dwell time for each of the two dynamic AOIs in the test phase and control, in both the pre-naming (0-2000ms) and post-naming (2500-5000ms) phases. Data trimming followed the criteria in [1].

(1) Learning phase
Olá. Vamos dar um nome a este boneco. (Hi! Let’s give a name to this toy)
Este é o [ˈmaku]. (This is [ˈmaku])
Olha para o [ˈmaku]. (Look at [ˈmaku])
Vamos brincar com o [ˈmaku]. (Let’s play with [ˈmaku])
…
Eu tenho outro boneco. Também podes brincar com ele. (I have another toy. You can also play with it)
Test phase
Olha! (Look!)
Trial 1: Onde está o [ˈmaku]? (Where is [ˈmaku]?)
Trial 2: Olha para o [ˈpadu]! (Look at [ˈpadu]!)
Examples – HP [ˈmaku], [ˈtisu]; LP [ˈbaʎu], [ˈgiRu]; IS [ˈɲagu], [ˈɾibu]

Figure 1. Structure of a block.
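A minimal R sketch of the kind of mixed model reported below; the data frame and column names are assumptions, not the actual analysis script:

```r
# Sketch only: one row per trial and AOI, with net dwell time to the target,
# naming phase (pre vs. post) and learning condition (HP, LP, IS); names assumed.
library(lme4)
library(lmerTest)   # provides t- and p-values for the fixed effects

m <- lmer(dwell_time ~ phase * learning_condition +
            (1 | participant) + (1 | item),
          data = looking_td)   # e.g. fitted separately for the TD and AR groups
summary(m)
```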
The linear mixed-model analysis revealed that in control trials, for both groups, looking at the target significantly increased in the post-naming phase relative to the pre-naming phase, as expected (TD, t = 2.36, p = 0.02; AR, t = 2.15, p = 0.03). The data of the post-naming phase in test trials was explored to look for effects of the learning condition. Only the learning with HP sequences triggered a significant difference between looking at the familiar and unfamiliar sound-object pair, and this effect was only found for TD (t = 3.44, p = 0.0008). No other significant differences were found. These findings point to a word learning effect in the trained HP condition only, with LP being treated as a kind of IS, suggesting that phonotactic frequency guides toddlers’ early knowledge of the phonological grammar of words. The absence of such an effect in the AR group suggests either phonological delay or a different developmental path.

References

[1] Gonzalez-Gomez, N., Poltrock, S., & Nazzi, T. 2013. A “bat” is easier to learn than a “tab”: effects of relative phonotactic frequency on infant word learning. PLoS One 8:e59601.
[2] Bren, E., Pomper, R., & Saffran, J. 2019. Phonological Learning Influences Label-Object Mapping in Toddlers. Journal of Speech, Language, and Hearing Research 62, 1923-1932.
[3] May, L., & Werker, J. F. 2014. Can a click be a word? Infants’ learning of non-native words. Infancy 19, 281–300.
[4] Bijeljac-Babic, R., Nassurally, K., Havy, M., & Nazzi, T. 2009. Infants can rapidly learn words in a foreign language. Infant Behavior and Development 32, 476–480.
[5] Graf-Estes, K., Edwards, J., & Saffran, J. R. 2011. Phonotactic constraints on infant word learning. Infancy 16, 180–197.
[6] MacKenzie, H., Curtin, S., & Graham, S. A. 2012. 12-month-olds’ phonotactic knowledge guides their word-object mappings. Child Development 83(4), 1129–1136.
[7] Werker, J. F. 2018. Perceptual beginnings to language acquisition. Applied Psycholinguistics 39, 703-728.
[8] Frota, S., Vigário, M., Martins, F., & Cruz, M. 2010. FrePOP – Frequency Patterns of Phonological Objects in Portuguese: Research and Applications. Available at: http://frepop.letras.ulisboa.pt/

Phonological variability in child-directed-speech is not affected by recording setting: Preliminary results on Southern German and Swiss German

Katharina Zahner1, Moritz Jakob2, Monika Lindauer2, Bettina Braun2
1University of Trier, 2University of Konstanz

Analyses of child-directed speech (CDS) typically focus on prosodic parameters [1-3], but little is known on the extent of segmental variability in children’s input, particularly with respect to regional variation. The long-term goal of our project is to study how the extent of regional phonological variability in the input affects children’s early lexical representations [4]. Here, we present the phonological variability in the input of 4 children (0;6-2;10), 2 living in Southern Germany, 2 in Switzerland, see Table 1. These two Alemannic dialect areas [5] (Fig. 1) differ in the prestige and the usage of dialect [6-8]: In Switzerland, dialect is spoken across different contexts (e.g., also in politics and media). In Southern Germany, dialect is rather spoken in less formal situations and more strongly influenced by the standard language. We analysed CDS from 2 caregivers per child in 2 settings: (a) every-day input in home settings recorded over 2-3 days, which is less artificial and rather informal, and (b) parental descriptions of custom-made picture books in the presence of a researcher, a situation which is more targeted and controlled [9, 10] and thus may be more formal. Because of the different status, we expect variability to be reduced in (b) compared to (a) for children in Southern Germany, while for Swiss children, variability is expected to be comparable across the two recording settings.

For (a), we collected 2897 minutes of speech from the 4 families (of which 124 minutes were analysed, N = 7656 words). Families were asked to switch on a portable recorder as often as possible during a 2-3-day period. The recordings were chunked into sound files of ~10 sec. Using Praat [11], for each CDS word form (for both caregivers), we coded whether the realized form deviated segmentally from the standard citation form, i.e., either being a dialectal variant of a specific region (e.g., [nø:t] for [nɪçt] nicht ‘not’) or a general variant common across different regions (e.g., in spoken communication, [nɪç] for [nɪçt]). For (b), we constructed an electronic picture book for each child, in which each picture occurred 3 times. The objects were the 16 most frequent nouns from (a); the pictures were selected from [12, 13] following a naming-test. Parents were recorded via Zoom while looking through the picture book with their child and instructed to naturally interact with the child; the experimenter changed pages after 2-3 mentions of the targets. A picture-book recording session lasted between 5-15 min. Data (63 minutes of speech, N = 7130 words) were annotated as in (a).
Figure 2 presents the proportion of the two types of variability (general vs. dialectal) across country of residence and recording setting. There was an interaction between country of residence and type of variability (ß = -0.27, SE = 0.07, t = -4.12, p < 0.001), with more dialectal than general variability for Swiss German children, and, conversely, more general than dialectal variability for Southern German children. There was generally more variability in Switzerland than in Germany (ß = 0.25, SE = 0.05, t = 5.21, p < 0.001). Importantly, recording setting did not play a role (p = 0.68) and did not interact with the other factors. Hence, the difference in phonological variability in the input of children growing up in Southern Germany vs. in Switzerland appeared in both the naturalistic home setting and the picture-book task even though day-long recordings are supposedly less controlled and artificial than recordings done in lab-like situations. Rather, the degree of variability seems to depend on the prestige of the dialect [6-8].
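A hedged R sketch of the kind of model behind these estimates; the data layout and variable names are assumptions:

```r
# Sketch only: one row per child, recording setting and variability type,
# holding the proportion of word forms with that type of deviation from the
# citation form (all object and column names assumed).
library(lme4)
library(lmerTest)

m <- lmer(prop_variability ~ country * variability_type + setting +
            (1 | child),
          data = cds_props)
summary(m)  # reported: country x type interaction, more variability in CH, no setting effect
```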

We are currently analysing more data of families from more rural areas in Southern Germany (where dialect is expected to be used more frequently than in urban areas). We will also examine the type of phonological variability more closely (e.g., elisions, alternations) in order to derive hypotheses on the formation of early word form representations in children who grow up with a different extent of phonological variability.

Figure 1. Overview of dialects in the region around Lake Constance, adapted from [14].

Table 1. Overview of participants’ meta data, i.e., child age at time of recording (home recording to picture book, column 2), country of residence (column 3), linguistic background of parents (columns 4-5). G = Germany, CH = Switzerland; German federal states: BW = Baden-Wuerttemberg, BY = Bavaria, HH = Hamburg; Swiss cantons: TG = Thurgau, ZH = Zurich.

ID       Age at recording   Country of residence   Parent 1: Mum – raised (own dialect score, 1-5)   Parent 2: Dad – raised (dialect score)
Child-1  1;7 – 2;3          G (BW, Konstanz)       BW, Göppingen (score 2)                           BW/HH, Konstanz, Hamburg (score 2)
Child-2  0;10 – 1;5         G (BW, Konstanz)       BW, Konstanz (score 3)                            BW, Konstanz (score 2)
Child-3  2;8 – 2;10         CH (TG, Mannenbach)    TG, Kreuzlingen (score 4)                         BW, Konstanz (score 3)
Child-4  0;6 – 0;8          CH (ZH, Wetikon)       BY, Augsburg (score 3)                            ZH, Bubikon (score 5)

Figure 2. Proportion of variability (dialectal in white, general in grey) for different countries of residence (Southern Germany vs. Switzerland), split by setting (day-long home recordings on the left, and picture-book recording on the right).

References

[1] Zahner, K., Schönhuber, M., Grijzenhout, J., and Braun, B., “Konstanz prosodically annotated infant-directed speech corpus (KIDS corpus),” in Proceedings of the 8th International Conference on Speech Prosody, Boston, USA, 2016, pp. 562-566.
[2] Cristià, A., “Input to language: The phonetics and perception of infant-directed speech,” Language and Linguistics Compass, vol. 7, pp. 157-170, 2013.
[3] Kalashnikova, M., Carignan, C., and Burnham, D., “The origins of babytalk: Smiling, teaching or social convergence?,” Royal Society Open Science, vol. 4, pp. 1-11, 2017.
[4] Braun, B., Czeke, N., Rimpler, J., Zinn, C., Probst, J., and Zahner, K., “Replicating the familiar word effect using an App and automatic coding of fixations,” Frontiers in Psychology, submitted.
[5] Wiesinger, P., “Die Einteilung deutscher Dialekte,” in Dialektologie. Ein Handbuch zur deutschen und allgemeinen Dialektforschung, Besch, W. et al., Ed., Berlin: de Gruyter, 1983, pp. 807–900.
[6] Stępkowska, A., “Diglossia: A critical overview of the Swiss example,” Studia Linguistica Universitatis Iagellonicae Cracoviensis, vol. 129, pp. 199-209, 2012.
[7] Schwarz, C., “Conservative and innovative dialect areas,” Taal en tongval, vol. 66, pp. 65-83, 2014.
[8] Siebenhaar, B. and Wyler, A., Dialekt und Hochsprache in der deutschsprachigen Schweiz. Zürich: Pro Helvetia, Schweizer Kulturstiftung, 1997.
[9] Wagner, P., Trouvain, J., and Zimmerer, F., “In defense of stylistic diversity in speech research,” Journal of Phonetics, vol. 48, pp. 1-12, 2015.
[10] Xu, Y., “In defense of lab speech,” Journal of Phonetics, vol. 38, pp. 329-336, 2010.
[11] Boersma, P. and Weenink, D. (2016). Praat: doing phonetics by computer. Version 6.0.23 (version depended on labeller) [Computer program].
[12] Reber, K. and Steidl, M., “Zabulo. Individuelle Lernmaterialien,” Weiden: Paedalogis, 2016.
[13] Lindauer, M., “The acquisition of German by Italian and Turkish heritage language speaking children: Sociolinguistic aspects and production experiments on proficiency in German word stress, grammatical gender and verb placement,” PhD Thesis, in prep.
[14] Rabanus, S., “Morphological change in German dialects. Two cases of plural verbs in Alemannic,” in Language Variation in Europe. Papers from ICLaVE 2, Uppsala: 2004, pp. 339-352.

Do features of child-directed speech correlate with children’s success in building towers?

Bettina Braun1, Ursula Fischer2,3, Katharina Zahner1
1Department of Linguistics, University of Konstanz, Germany
2Department of Sport Science, University of Konstanz, Germany
3Thurgau University of Teacher Education, Kreuzlingen, Switzerland

The present contribution is part of a larger project that investigates the factors that guide the prosodic realization of infant- and child-directed speech (IDS, CDS). In previous work, we showed that mothers differ in prosodic realization in non-serious compared to serious, goal-oriented situations (Braun & Zahner, 2020). Specifically, in that study we trained a classifier to distinguish between non-serious and serious game situations. Mean f0 was a strong predictor for serious situations (serious game situations had lower mean f0 than non-serious situations). Here, we present an exploratory analysis on whether the prosodic realization of CDS and success in a tower-building task are related.

To this end, we analyzed video recordings of thirteen two-year-old girls who had to build a tower consisting of 9 blocks of successively smaller blocks (3 blue, 3 yellow, 3 red), cf. Friedlmeier & Trommsdorff (1999). Mothers were allowed to help their daughters if they wished but were not instructed to do so. Beside motor coordination, which is necessary to successfully stack one block upon the other, the task primarily measures children’s ability to compare the size of the blocks and sort them from the largest to the smallest. This skill is referred to as ‘seriation’ in the literature (e.g., Piaget, 1952) and has been argued to predict children’s cognitive development as well as their later mathematical skills (Malabonga et al., 1995; Pasnak et al., 1996).

On average, the children worked on 2.5 towers (range 2-3). We operationalized success by the number of blocks that the child correctly added to the tower. Here we differentiated a strict coding (correct only if a block of the correct size was added) and a lenient coding (correct if any smaller block was added), resulting in two proportional correctness scores. We further coded for which of the blocks that the child added the mother gave explicit verbal help (e.g., “No, not the red one! Take the blue one”) and derived the relative proportion. For prosodic analysis, we extracted all CDS utterances that occurred within a tower attempt (cf. Braun & Zahner, 2020). We averaged meanf0, sdf0, intended and realized syllables per second and number-of-words for each mother and tower attempt. Figure 1 shows the correlation between the two success variables (prop_corr_strict and prop_corr_lenient), proportion of explicit help (prop_verbal_helpblocks) and the aggregated prosodic variables (meanf0, sdf0, number of words, intended and realized syllables per second), cf. Wei & Simko (2017). We here focus on the strict correctness coding.

The strongest correlation occurred between performance and explicit verbal help (r=0.6, 95% confidence interval [0.3;0.8]) and with sdf0 (r=0.5, 95% CI [0.05;0.7]), see Fig. 1(a). To investigate the immediate influence of prosodic variables, we calculated a logistic regression model (Baayen, 2008; Bates et al., 2015) with correctness of the block (strict coding) as dependent variable and the prosodic features of the utterance occurring up to 1 second before the event. There was a significant effect of sdf0 only (ß=0.006, SE=0.003, z=2.2, p=0.03); the more variability in f0, the more likely a correct event, see Fig. 1(b).

Overall, verbal instruction resulted in the strongest overall correlation with success of the tower building phases, as measured by the proportion of correct blocks. Furthermore, the prosodic realization immediately before the block event showed beneficial effects of higher variability in f0 (i.e. more intonational movement). It is conceivable, of course, that parents adapt their style to comment on their children’s success afterwards. In future work we plan to differentiate between utterances whose content is relevant for the task.
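A minimal R sketch of the reported logistic regression; column and data-frame names are assumptions, not the actual analysis script:

```r
# Sketch only: block_events is assumed to hold one row per block placement,
# with correct_strict (0/1) and the prosodic features of the utterance ending
# up to 1 s before the event (all names assumed).
library(lme4)

m <- glmer(correct_strict ~ sdf0 + meanf0 + syll_per_sec + n_words +
             (1 | mother),
           data = block_events, family = binomial)
summary(m)  # reported effect: higher sdf0 -> higher odds of a correct placement
```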
Figure 1. (a) Correlation plot between success variables (prop_corr_strict and prop_corr_lenient; last two columns), verbal help (prop_verbalhelpblocks) and prosodic and linguistic variables (intended and realized syllables/sec, number of words, meanf0, sdf0). Dot size shows the strength of the correlation; blue indicates positive, red negative correlations. Blank cells signal non-significant correlations at α=0.05. (b) Predicted effect of standard deviation of f0 on correctness. The gray band shows the 95% confidence interval.

References

Baayen, R. H. (2008). Analyzing linguistic data. A practical introduction to statistics using R. Cambridge: Cambridge University Press.
Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1-48. doi:10.18637/jss.v067.i01
Braun, B., Zahner, K. (2020). Prosodic classification of serious and non-serious game situations using infant-directed-speech. Paper presented at Phonetik und Phonologie im deutschsprachigen Raum, Trier, Germany.
Friedlmeier, W., Trommsdorff, G. (1999). Emotion regulation in early childhood: A cross-cultural comparison between German and Japanese toddlers. Journal of Cross-Cultural Psychology, 30(6), 684-711.
Piaget, J. (1952). The Child’s Conception of Number. Routledge.
Malabonga, V., Pasnak, R., Hendricks, C., Southard, M., & Lacey, S. (1995). Cognitive gains for kindergartners instructed in seriation and classification. Child Study Journal, 25(2), 79–96.
Pasnak, R., Madden, S. E., Malabonga, V. A., Holt, R., & Martin, J. W. (1996). Persistence of gains from instruction in classification, seriation, and conservation. Journal of Educational Research, 90(2), 87–92. https://doi.org/10.1080/00220671.1996.9944449
Wei, T., & Simko, V. (2017). R package “corrplot”: Visualization of a Correlation Matrix (Version 0.84). Available from https://github.com/taiyun/corrplot

DAY 3 - Oral session 2
CLINICAL PHONETICS

Syllable coordination patterns in Parkinsonian speech: Effects of medication and deep brain stimulation on tongue and lip movements

Tabea Thies1,2, Richard Dano3, Michael T. Barbe1, Doris Muecke2
1University of Cologne, Faculty of Medicine and University Hospital, Department of Neurology, Germany; 2University of Cologne, Faculty of Arts and Humanities, IfL Phonetics, Germany; 3University of Cologne, Faculty of Medicine, Institute for Nursing Science, Germany

Parkinson’s disease (PD) is a movement disorder which affects non-motor and motor functions. Motor impairment is manifested in slower, smaller, imprecise and less extended movements of limbs and articulators. Speech deficits include a reduced modulation of F0 and intensity as well as an overall reduced vowel space. To increase motor ability, patients with PD are either treated with the drug levodopa or, in later stages of the disease, with deep brain stimulation (DBS). For DBS, electrodes are implanted in the brain, which stimulate a specific brain area with electrical pulses. While the positive effects of the treatment methods on motor skills (initiation, coordination) are uncontroversial, effects on speech remain inconclusive in the literature. This study investigates whether the treatment options result in improved speech motor control in addition to improved gross motor skills by focusing on articulatory coordination patterns in syllable onsets with high and low complexity.

10 patients with PD (7 male) were recorded with an Electromagnetic Articulograph (AG 501, Carstens) to track movements of the lips, tongue tip and tongue dorsum. Each patient was recorded in 4 conditions. To capture the effect of levodopa, patients were recorded in medication-OFF (drug treatment was interrupted for 12 hours) and medication-ON (intake of 200 mg levodopa). To capture the sole effect of DBS, the same patients were assessed 6–12 months after DBS implantation in DBS-OFF (med-OFF) and DBS-ON (med-OFF). In each condition, patients’ motor ability was determined by using part III of the ‘Unified Parkinson’s Disease Rating Scale’ (Goetz et al., 2008), a standard assessment for rating gross motor ability. For the speech production task, triples of disyllabic pseudowords and words were used with either simple onsets (C1V and C2V) or branching onsets (C1C2V), which were embedded in a carrier sentence (Table 1). In total 720 words were analyzed (9 words x 2 repetitions x 10 patients x 4 conditions). Targets of C’s and V’s in the articulatory trajectories of the lip and tongue movements were annotated following Hermes et al. (2019). The interval (ms) between the vocalic and the consonantal target was calculated in CV and CCV onsets (Goldstein et al., 2009).
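The interval measure can be illustrated with a short R sketch; the data frame and column names are assumptions:

```r
# Sketch only: targets is assumed to hold one row per annotated onset, with the
# time stamps of the consonantal and vocalic articulatory targets (names assumed).
library(dplyr)

cv_intervals <- targets %>%
  mutate(interval_ms = v_target_ms - c_target_ms) %>%
  group_by(onset_type, condition) %>%   # C1V/C2V vs. C1C2V; med/DBS ON vs. OFF
  summarise(mean_interval = mean(interval_ms), .groups = "drop")
```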

As it has been shown for typical speech, the interval between C1 and V is expected to increase in C1C2V compared to CV onsets, due to a leftward shift of C1 away from the V to make room for the added C2 (Figure 1). The interval between C2 and V is supposed to not change or to decrease (rightward shift of C2 towards the V). These shift patterns can be affected in a systematic way by segmental context (Bombien et al., 2013; Brunner et al., 2014).

Articulatory results indicate that coordination patterns differ across cluster types and treatment conditions (Figure 2). Shift positions and directions of the C’s in velar clusters are more similar compared to the labial one. The coordination in labial clusters does not seem to be influenced by motor condition, possibly due to a low degree of interarticulator dependencies. In contrast, patterns in velar clusters improve in therapeutic ON conditions accompanied by increased motor ability: whereas the rightmost C shifts in the wrong direction – to the left away from the V in OFF conditions, it moves towards the V in both ON conditions. Motor ability of the patients also significantly improves with both therapy options: motor ability is best in the med-OFF/DBS-ON condition and worsens from med-ON to med-OFF/DBS-OFF to med-OFF (Figure 3). The data show that complex speech patterns deteriorate due to PD. This is especially the case for the velar articulation region (Thies et al., 2020). DBS and medication do not only improve general motor control in patients with PD, but also improve the ability of syllable organization in terms of consonantal and vocalic timing in the oral vocal tract.

Figure 1: Schematized coordination patterns in syllable onsets (Hermes et al. 2019:9).
Table 1: Target words and carrier sentence.
Figure 2: Shift duration (+ SE) and direction of leftmost (LMC) and rightmost consonant (RMC).
Figure 3: Motor ability per condition. Lower values indicate better motor skills.

References

Bombien, L., Mooshammer, C., & Hoole, P. (2013). Articulatory coordination in word-initial clusters of German. Journal of Phonetics, 41(6), 546–561. https://doi.org/10.1016/j.wocn.2013.07.006
Brunner, J., Geng, C., Sotiropoulou, S., & Gafos, A. (2014). Timing of German onset and word boundary clusters. Laboratory Phonology, 5(4), 403-454.
Goetz, C., et al. & Movement Disorder Society UPDRS Revision Task Force. (2008). Movement Disorder Society-sponsored revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS): Scale presentation and clinimetric testing results. Movement Disorders: Official Journal of the Movement Disorder Society, 23(15), 2129–2170. https://doi.org/10.1002/mds.22340
Goldstein, L., Nam, H., Saltzman, E., & Chitoran, I. (2009). Coupled Oscillator Planning Model of Speech Timing and Syllable Structure. 239–249.
Hermes, A., Mücke, D., Thies, T., & Barbe, M. T. (2019). Coordination patterns in Essential Tremor patients with Deep Brain Stimulation: Syllables with low and high complexity. Laboratory Phonology: Journal of the Association for Laboratory Phonology, 10(1), 6. https://doi.org/10.5334/labphon.141
Thies, T., Mücke, D., Lowit, A., Kalbe, E., Steffen, J., & Barbe, M. T. (2020). Prominence marking in parkinsonian speech and its correlation with motor performance and cognitive abilities. Neuropsychologia, 107306. https://doi.org/10.1016/j.neuropsychologia.2019.107306

Syllable repetition speech rate and rhythm as a tool to distinguish types of fronto-temporal neurodegenerative diseases

Wendy Elvira-García1 and Victoria Marrero2
1Universidad Nacional de Educación a Distancia

Temporal variables are an effective tool to diagnose and distinguish among fronto-temporal neurodegenerative diseases (FTD) such as Alzheimer’s, several types of primary progressive aphasia, ALS or corticobasal syndrome (see [1] for a review). Specifically, previous research in English states that speech rate, percentage of vowels (%V), vowel duration and Pairwise Vocalic Index (PVIv) are the most useful parameters for characterizing FTD speech [2], [3]. However, given that English and Spanish have different rhythm types (stress-timed and syllable-timed), the performance of these metrics may not be the same in both languages.

This pilot study aims to point out which temporal features would be more useful to characterize FTD speech in Spanish repetition tasks by using a small cohort.

The study analyzed a syllable alternating motion rate task (repeating pa-ta-ka) included in the SpeechFTLD protocol [4]. The participants were, on the one hand, a sample of 34 patients suffering from several FTD (mean age 71.87, σ = 6.78). Specifically, the target group consisted of seven diagnostic groups: progressive primary aphasia in three variants (logopenic (lvPPA), semantic (svPPA) and non-fluent (nfPPA)) = 15; fronto-temporal dementia in two variants (FTD and ALS-FTD) = 9; supranuclear progressive palsy (PSP) = 3; and cortico-basal syndrome (CBS) = 2. On the other hand, 6 controls were recorded (mean age 71.83, σ = 5.03). “Pataka” repetitions (a minimum of 5 repetitions, mean = 16, σ = 6) were annotated in Praat to segment vowels and consonants separately. The temporal parameters analysed were speech rate, vowel duration, consonant duration, and nine rhythmic metrics (%V, %C, ΔV, ΔC, VarcoV, VarcoC, rPVI-V, rPVI-C, nPVI-V).
The duration data were analysed by means of Linear Mixed Models (fixed factors: diagnostic group and segment type, vowel vs. consonant; random effects: the interaction between subject and segment type). For the rhythmic parameters, we used univariate analyses replicating the methodology of [5] and applying a correction for multiple comparisons [6].
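A minimal R sketch of the duration model just described; object and column names are assumptions, and the random term is one way of coding the subject-by-segment-type interaction:

```r
# Sketch only: pataka is assumed to hold one row per segment, with its duration,
# the speaker's diagnostic group and the segment type (vowel vs. consonant).
library(lme4)
library(lmerTest)

m <- lmer(duration_ms ~ diagnostic_group * segment_type +
            (1 | subject) + (1 | subject:segment_type),
          data = pataka)
anova(m)
```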

Results show that speech rate is slower in patients with non-fluent primary progressive aphasia (nfPPA), amyotrophic lateral sclerosis (ALS-FTD) and cortico-basal syndrome (CBS) (Figure 1). These three clinical groups, as well as those suffering from progressive supranuclear palsy (PSP), have longer segmental durations (χ2(6) = 36.89, p < 0.0001), especially in consonants (Figure 2). Among the rhythm metrics, %C (F(7) = 5.79, p < 0.0001), ΔV (K(7) = 18.96, p = 0.008) and rPVI-C (K(7) = 15.73, p = 0.02) also differentiate between groups, specifically between controls and lvPPA, ALS-FTD and CBS.

Our results on speech rate are congruent with [7], [8], [9]. Regarding duration, although [2], [3], or [10] find vocalic parameters such as “vowel duration”, %V or PVIv to be relevant, in our data, consonant measures are the most useful.

In conclusion, this preliminary analysis highlights the need for specific studies in languages other than English, since typological differences may have an impact on the characterization of these clinical conditions’ speech.

Figure 1. Speech rate results split by diagnostic group.
Figure 2. Duration results split by diagnostic groups and segment type (vowel or consonant).

References

[1] Poole, M. L., Brodtmann, A., Darby, D., & Vogel, A. P. 2017. Motor speech phenotypes of frontotemporal dementia, primary progressive aphasia, and progressive apraxia of speech. Journal of Speech, Language, and Hearing Research, 60(4), 897–911.
[2] Ballard, K. J., Savage, S., Leyton, C. E., Vogel, A. P., Hornberger, M., & Hodges, J. R. 2014. Logopenic and nonfluent variants of primary progressive aphasia are differentiated by acoustic measures of speech production. PloS one, 9(2). DOI: 10.1371/journal.pone.0089864.
[3] Duffy, J. R., Hanley, H., Utianski, R., Clark, H., Strand, E., Josephs, K. A., & Whitwell, J. L. 2017. Temporal acoustic measures distinguish primary progressive apraxia of speech from primary progressive aphasia. Brain and language, 168, 84–94.
[4] Vogel, A. P., Fletcher, J., & Maruff, P. 2014. The impact of task automaticity on speech in noise. Speech Communication, 65, 1–8.
[5] Wilson, S. M., Henry, M. L., Besbris, M., Ogar, J. M., Dronkers, N. F., Jarrold, W., Miller, B. L., & Gorno-Tempini, M. L. 2010. Connected speech production in three variants of primary progressive aphasia. Brain, 133(7), 2069–2088. DOI: https://doi.org/10.1093/brain/awq129
[6] Hothorn, T., Bretz, F., Westfall, P., Heiberger, R. M., Schuetzenmeister, A., Scheibe, S., & Hothorn, M. T. 2016. Package ‘multcomp’. Simultaneous inference in general parametric models. R package.
[7] Ash, S., Evans, E., O’Shea, J., Powers, J., Boller, A., Weinberg, D., ... & Grossman, M. 2013. Differentiating primary progressive aphasias in a brief sample of connected speech. Neurology, 81(4), 329–336.
[8] Fraser, K. C., Meltzer, J. A., Graham, N. L., Leonard, C., Hirst, G., Black, S. E., & Rochon, E. 2014. Automated classification of primary progressive aphasia subtypes from narrative speech transcripts. Cortex, 55, 43–60.
[9] Cordella, C., Dickerson, B. C., Quimby, M., Yunusova, Y., & Green, J. R. 2017. Slowed articulation rate is a sensitive diagnostic marker for identifying non-fluent primary progressive aphasia. Aphasiology, 31(2), 241–260. doi: 10.1080/02687038.2016.1191054
[10] Vergis, M. K., Ballard, K. J., Duffy, J. R., McNeil, M. R., Scholl, D., & Layfield, C. 2014. An acoustic measure of lexical stress differentiates aphasia and aphasia plus apraxia of speech after stroke. Aphasiology, 28(5), 554–575.

Tongue compensatory gesture in Italian Hypokinetic Dysarthric Speech

Barbara Gili Fivela, Sonia Immacolata d’Apolito, Giorgia Di Prizio
University of Salento & CRIL-DReAM – Lecce, Italy

The type of dysarthria that often develops during the course of Parkinson’s Disease (PD) is hypokinetic dysarthria (HD), whose main features are hypokinesia and bradykinesia [1,2,3]. Its impact on speech is described as involving a reduction of the vowel acoustic space and of the amplitude of speech gestures [4,5], though segmental phonological distinctions are preserved as much as possible, thanks to compensation strategies [6]. Besides possible differences in methods, the realization of compensation strategies could account for the inconsistent results on wider, rather than reduced, amplitude in HD [7,8], such as in the case of PD speakers’ tongue gestures in the front-back dimension in Italian [9], where the wider tongue gesture compensates for the reduced lip gesture in the [i]-[u] vowel cycle [10].

The main goal of this paper is to reach a better understanding of vowel space use and compensation strategies with respect to anterior-posterior gestures in Italian HD, by investigating amplitude, duration and timing of lip and tongue gestures in the [a]-[u] vowel cycle. The tongue has relatively small freedom in the backward movement from a central [a] position to [u], so this vowel cycle, which makes gesture widening theoretically more difficult, is useful to verify whether the tongue is expected to compensate for lip movement reduction in any context or whether strategies may be context dependent. The main hypothesis is that measurements regarding PD speech in general show reduced values in comparison to those concerning healthy speech [4,5], though we also expect to find increased values related to the tongue backward gesture [10].

The corpus is composed of three-word nominal phrases, such as Lu/a X blu ‘The blue X’, where Xs are ˈCVCV disyllables, Vs are /a/ - /u/ or /u/ - /a/, and initial or intervocalic Cs may be voiced or unvoiced bilabial stops (/p/, /b/). Four mild-to-severe dysarthric PD speakers from Lecce – Southern Italy (age 65-80) and four peer healthy controls (CTR) participated in the study. The same severity level was reported for PD speakers according to clinical assessment; no relevant clinical issue was reported by CTR speakers. The corpus was read 7 times at a comfortable speech rate. Acoustic and articulatory data were collected synchronously by an EMA 3D AG501 (Carstens Med., GmbH) in a quiet room at the CRIL-DReAM Lab. Articulatory data were recorded by means of 7 sensors: 2 glued on the tongue mid-sagittal plane (dorsum and tip), 2 on the lips’ vermillion border (upper and lower), 3 for normalization (1 nose, 2 ears).

The acoustic signal was manually labelled in PRAAT for consonant (C) and vowel (V) boundaries. Segments’ duration was automatically extracted together with F1, F2 values in the central 50 ms of the vowels. Articulatory data labeling was performed by means of MAYDAY [11]. Gestural targets and velocity peaks were labeled for both tongue dorsum (TD) and lower lip (LL) tracks at the zero velocity, along both the vertical and the horizontal axes. The duration and amplitude of lower lip/tongue dorsum gestures were calculated. Statistical analyses were performed using mixed models in R (lme4 [12,13]). Residual plots did not reveal any obvious deviations from homoscedasticity or normality. Fixed effects, with interactions, were Voicing (voiced vs. unvoiced), Vowel cycle (Vcycle AU vs. UA), Population (PD vs. CTR) and Repetition (7 levels). The latter two were introduced to take into account the impact of impairment. Intercepts and random slopes for subjects were set for inter-subject variability in the realization of repetitions. A Likelihood Ratio Test was used to test the significance (p<0.05) of each fixed effect.
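A hedged R sketch of the likelihood-ratio-test procedure; variable names are assumptions, not the actual analysis script:

```r
# Sketch only: each fixed effect is tested by comparing the full model with a
# reduced model lacking that term, both fitted with ML (REML = FALSE).
library(lme4)

full    <- lmer(amplitude ~ Voicing * Vcycle * Population + Repetition +
                  (1 + Repetition | subject),
                data = ema, REML = FALSE)   # ema, amplitude: assumed names
reduced <- update(full, . ~ . - Repetition)
anova(reduced, full)                        # likelihood ratio test for Repetition
```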

Acoustic measures confirm expectations, showing that central low /a/ and back high /u/ vowels are significantly different in both PDs’ and CTRs’ productions and, in PDs, F1 is lower for both vowels, and F2 shows higher values for /u/. That is, the vowel acoustic space seems slightly higher and forward shifted as for the /u/ vowel (Fig. 1). Nevertheless, articulatory results show that the tongue backward gesture seems to be wider in PDs’ than in CTRs’ /u/s, and the lip protrusion gesture is less wide in PDs than in CTRs (Fig. 2, left and right respectively, where negative numbers correspond to backward gestures and positive to forward ones).

Results will be discussed to show that in PD speakers’ HD the acoustic space may be somewhat reduced, though it looks slightly forward shifted too. Interestingly, a wider backward tongue gesture is used by PDs to compensate for the reduced lip gesture (as in [10]), even if it involves the quite restrictive posterior part of the oral cavity. Thus, the compensation strategy taking place seems to be independent from the phonetic context.

Figure 1: F2 values for /u/ (AU cycle, left boxes) and /a/ (UA cycle, right boxes) in PDs (PAT, grey) and CTRs (CONTR, white)

Figure 2: Amplitude of tongue (left panel) and the lower lip (right panel) horizontal gestures: gesture to accented /u/ (AU cycle, left boxes) and to accented /a/ (UA cycle, right boxes) in PDs (PAT, grey) and CTRs (CONTR, white)

References

[1] Ackermann, H., W. Ziegler (1991) “Articulatory deficits in Parkinsonian dysarthria: an acoustic analysis”. Journal of Neurology, Neurosurgery, and Psychiatry, 54, 1093–8.
[2] Duffy, J.R. (2005) “Motor Speech Disorders: Substrates, Differential Diagnosis, and Management”. 2nd ed., Elsevier Mosby.
[3] Darley, F.L., Aronson, A.E., Brown, J.R. (1975) “Motor speech disorders”. W.B. Saunders and Co.
[4] Skodda, S., W. Gronheit, U. Schlegel (2012) “Impairment of Vowel Articulation as a Possible Marker of Disease Progression in Parkinson’s Disease”. PLoS ONE, 7(2), 1–8.
[5] Skodda, S., W. Visser, U. Schlegel (2011) “Vowel Articulation in Parkinson’s Disease”. Journal of Voice, 25, 4, 467–72.
[6] Iraci, M.M. (2017) “Vowels, consonants and co-articulation in Parkinson’s Disease”. Unpublished PhD Dissertation. University of Salento, Lecce, Italy.
[7] Wong, M. N., B. E. Murdoch, B. M. Whelan (2010) “Kinematic analysis of lingual function in dysarthric speakers with Parkinson’s disease: An electromagnetic articulograph study”. Int. J. S-L Pathology, 12, 414–25.
[8] Wong, M. N., B. E. Murdoch, B. M. Whelan (2011) “Lingual Kinematics in Dysarthric and Nondysarthric Speakers with Parkinson’s Disease”. Parkinson’s Disease, 1-8.
[9] Gili Fivela, B., Iraci, M., Sallustio, V., Grimaldi, M., Zmarich, C., Patrocinio, D. (2014) “Italian Vowel and Consonant (co)articulation in Parkinson’s Disease: extreme or reduced articulatory variability?”, in Fuchs et al. (eds), Proceedings of the 10th International Seminar on Speech Production (ISSP), Cologne, Germany, 146-149.
[10] Gili Fivela, B., d’Apolito, S., Di Prizio, G. (2020). and Prosodic Modulation in Italian Dysarthric Speech by Parkinsonian Speakers: A Preliminary Investigation, Speech Prosody 2020, May 24-28 at UTokyo, Japan, 828-832.
[11] Sigona, F., Stella, A., Grimaldi, M. & Gili Fivela, B. (2015) “MAYDAY: A Software for Multimodal Articulatory Data Analysis”. In A. Romano et al. (eds.) Aspetti prosodici e testuali del raccontare, Atti del X Convegno Nazionale AISV, Edizioni dell’Orso, pp. 173-84.
[12] R Core Team (2019) “R: A language and environment for statistical computing”. R Foundation for Statistical Computing, Vienna, Austria, URL: http://www.R-project.org/.
[13] Bates, D., Maechler, M., Bolker, B., Walker, S. (2015) “Fitting Linear Mixed-Effects Models Using lme4”. Journal of Statistical Software, 67(1), 1-48.

The role of automatic acoustic profiling of prosody in schizophrenia

Will Jones1, Jalal Al-Tamimi1 and Wolfram Hinzen2
1Newcastle University, Newcastle, UK; 2Universitat Pompeu Fabra, Barcelona, Spain

Background: Individuals with schizophrenia show abnormal speech prosody, such as increase in pausing, reduction in speech rate and reduced pitch range, usually termed “flat” affect (Kliper et al., 2010; Martínez-Sánchez et al., 2015; Çokal et al., 2019; Parola et al., 2019). However, variable phonetic correlates are reported, likely due to differences between studies and heterogeneity in the group, leading to a lack of generalisation in diagnosis (Parola et al., 2019). Heterogeneity in material (read vs spontaneous), type of analyses (manual vs automated), correlates used (temporal vs non-temporal) and statistical designs (traditional vs machine learning) limits generalisations (Parola et al., 2019). Here, we evaluate the efficacy of automated methods of prosodic profiling, with temporal and non-temporal predictors, obtained from three speech tasks, and the use of machine learning in identifying group differences.

Method: 24 participants (12 schizophrenia; 12 age-matched controls; 7 males and 5 females in each group), aged 22-68 (schizophrenia: Mean = 45.2, SD = 9.8; control: Mean = 44.6, SD = 15.1) were recruited. Various psychological measures were collected, e.g., Digit Span, National Adult Reading Test (NART) and Brief Psychiatric Rating Scale scores (BPRS), alongside years of education, gender and age. We used three speech tasks: constrained (grandfather passage, GF), and unconstrained: a) description of an image (the Cookie Theft, CT) and b) spontaneous speech (imagining and describing the “ruins of a derelict building”, SP).

Analysis: We used Prosogram (Mertens, 2004) to automatically 1) segment the speech outputs into acoustic syllables and nuclei and 2) generate prosodic profiles per speaker and task. A total of 27 acoustic measures were used and fall within temporal (e.g., speech rate, proportion of phonation and pauses, etc.) and non-temporal (e.g., pitch mean, range and trajectories, etc.) measures. Data were statistically analysed using Principal Component Analyses (PCA) as a data reduction/clustering technique and then via Random Forests (RF) as a predictive tool (66.7% training and 33.3% testing sets: 10-fold cross-validation).
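A hedged R sketch of this pipeline; object and column names are assumptions, not the actual analysis code:

```r
# Sketch only: profiles is assumed to hold one row per speaker and task, with
# the 27 Prosogram measures and a factor group (Control vs. Schizophrenia).
library(randomForest)

measure_cols <- setdiff(names(profiles), c("speaker", "task", "group"))

pca <- prcomp(profiles[, measure_cols], center = TRUE, scale. = TRUE)
summary(pca)                       # variance explained by the first dimensions

set.seed(1)
train <- sample(nrow(profiles), size = round(0.667 * nrow(profiles)))
rf    <- randomForest(group ~ ., data = profiles[train, c("group", measure_cols)])
pred  <- predict(rf, newdata = profiles[-train, ])
mean(pred == profiles$group[-train])   # classification accuracy on the test set
```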
Most of the pitch (mean, SD, range, trajectory), speech rate and proportion of Figure 1: PCA Biplot with group clustering (C = Control; SZ = Schizophrenia), variables and individuals, in the phonation measures were highly correlated with the first three dimensions; the psychological Grandfather passage (a, GF) in Cookie Theft (b, CT), and Spontaneous (c, SP); RF rates using acoustic metrics in measures with the second and third dimensions. Individuals with schizophrenia or controls the three tasks (GF, CT, SP), combined data of three tasks (3 tasks) and from the unconstrained tasks (CT and SP, 2 tasks; in d). were clustered differently depending on the task. Results of RF on the testing set are shown in Figure 1 (d) using acoustic data. A clear difference between tasks is seen: GF received the lowest classification accuracy at 67%, followed by 83% (in both CT and SP). When the three References passages were combined, we achieve a rate of 77% (3 tasks) that increases to 93% when using the unconstrained tasks (2 tasks). The variables that contributed the most to the models in Çokal, D., Zimmerer, V., Turkington, D., Ferrier, N., Varley, R., Watson, S., & Hinzen, W. (2019). CT, SP and the 2 unconstrained tasks were % phonation/pause, pitch measures with mean, Disturbing the rhythm of thought: Speech pausing patterns in schizophrenia, with and trajectories, falls/rises and duration of nuclei, etc. Conclusion: The results presented here without formal thought disorder. PloS one, 14(5), e0217404. https://doi.org/10.1371/ are promising: the high classification rates based on automatically segmented speech and journal.pone.0217404 automated prosodic profiles are a strong indicator of efficacy in detecting the differences Hinzen, W., & Rosselló, J. (2015). The linguistics of schizophrenia: thought disturbance as between groups. Clustering via PCA showed clear overlap and variation between the groups: language pathology across positive symptoms. Frontiers in psychology, 6, 971. https:// doi.org/10.3389/fpsyg.2015.00971 individuals within each group vary but show consistent use of specific acoustic measures. Classification rates obtained in this study using a fully automated method are closeand

Kliper, R., Vaizman, Y., Weinshall, D. and Portuguese, S. (2010). Evidence for depression and schizophrenia in speech prosody. Third ISCA Workshop on Experimental Linguistics. Available at https://www.isca-speech.org/archive/exling_2010/
Martínez-Sánchez, F., Muela-Martínez, J. A., Cortés-Soto, P., García Meilán, J. J., Vera Ferrándiz, J. A., Egea Caparrós, A., & Pujante Valverde, I. M. (2015). Can the Acoustic Analysis of Expressive Prosody Discriminate Schizophrenia? The Spanish Journal of Psychology, 18, E86. https://doi.org/10.1017/sjp.2015.85
Mertens, P. (2004). Prosogram: Semi-Automatic Transcription of Prosody based on a Tonal Perception Model. In B. Bel & I. Marlien (eds.), Proceedings of Speech Prosody 2004, Nara (Japan), 23-26 March. (ISBN 2-9518233-1-2)
Parola, A., Simonsen, A., Bliksted, V., & Fusaroli, R. (2020). Voice patterns in schizophrenia: A systematic review and Bayesian meta-analysis. Schizophrenia Research, 216, 24–40. https://doi.org/10.1016/j.schres.2019.11.031
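For readers who want to experiment with this kind of pipeline, the sketch below shows a PCA inspection followed by a Random Forest classifier with a 66.7%/33.3% split and 10-fold cross-validation, roughly as described in the abstract. It is not the authors' code: the file name, column names and settings are hypothetical, and scikit-learn is used purely for illustration.

```python
# Minimal sketch (not the authors' code) of a PCA + Random Forest pipeline
# like the one described above. "prosody_profiles.csv" and its columns are
# hypothetical: one row per speaker x task, 27 acoustic measures, a group label.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("prosody_profiles.csv")                    # hypothetical file
X = StandardScaler().fit_transform(df.drop(columns=["speaker", "task", "group"]))
y = df["group"]                                             # e.g. "SZ" vs "Control"

# Data reduction/inspection: variance captured by the first components.
pca = PCA(n_components=5).fit(X)
print("cumulative variance:", pca.explained_variance_ratio_.cumsum())

# Random Forest: 66.7%/33.3% split, 10-fold cross-validation on the training set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(rf, X_tr, y_tr, cv=10).mean())
print("held-out accuracy:", rf.fit(X_tr, y_tr).score(X_te, y_te))
```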

POSTER SESSIONS

In order of appearance in the programme

Poster Presentation

DAY 1 - Poster Session A
SLA - Suprasegmental features

Automatic and human evaluation of L2 prosody: difficulties and discrepancies

Lourdes Aguilar1, Eva Estebas-Vilaplana2
1 Departament de Filologia Espanyola, Universitat Autònoma de Barcelona
2 Departamento de Filologías Extranjeras y sus Lingüísticas, Universidad Nacional de Educación a Distancia

Despite the general acknowledgment of prosody as an important issue for L2 learners, the amount of prosodic training in L2 courses is poor and often non-existent. Consequently, learners, including those who are more proficient, preserve their L1 intonation when they speak the L2 (Cruz-Ferreira, 1989; Jenkins, 2000; Chen et al., 2016; de-la-Mota, 2019). This situation becomes worse in self-tuition environments, where students need well-defined methodologies to approach prosodic issues in an autonomous way (Estebas-Vilaplana, 2017). In this context, we propose to draw attention to the complexity of identifying prosodic errors in the automatic evaluation of prosody. Thus, the aim of this study is to compare the identification of prosodic errors delivered by an automatic system with that provided by human evaluators in a corpus of utterances produced by Japanese learners of Spanish. The sample includes a total of 3535 words in 60 sentences pronounced by 8 Japanese learners of Spanish that were transcribed with the speech segments identified as prosodic errors and the corresponding feedback message, first given by an automatic system, and later revised by two human evaluators. The labels used for the prosodic evaluation of the data followed the Sp_ToBI conventions (Prieto & Roseano, ed. 2010). The automatically assigned Sp_ToBI labels were obtained by the system described in González-Ferreras et al. (2012) and in Escudero-Mancebo et al. (2014). Nevertheless, the direct use of ToBI labels for informing learners about their prosodic limitations was discarded, since the interpretation of ToBI labels requires prior training in the phonetics and phonology of intonation and this goes beyond the needs of an ELE (Spanish as a Foreign Language) student. Thus, the ToBI prosodic labels were transformed into a set of feedback messages which could be easily understood by a wide spectrum of users: 1) Lack of accent; 2) Over-accentuation; 3) Lack of boundary; 4) Unexpected boundary; 5) Uncommon intonation.

The messages given by the automatic system were revised and validated by two human evaluators (R1 and R2) independently. The consistency of the human evaluators with respect to the system predictions is Kappa=0.525 (moderate agreement) for Reviewer 1 and 0.709 (substantial agreement) for Reviewer 2. The consistency of the two human assignments is Kappa=0.488 (moderate agreement).

Table 1 shows the number of different transcriptions. There is a high degree of coincidence in the categories that report the absence of tonal accent (LA) and boundary tone (LB), while uncommon intonation (UI), the unexpected appearance of a prosodic boundary (UB) and over-accentuation (OA) are the phenomena with the highest number of disagreements (88, 17 and 17 over 122, 55 and 29, respectively).

The analysis of the divergences among the reviewers shows that the feedback messages have simplified the content of the tonal categories, giving rise to different interpretations. This is the case for Uncommon Intonation: while R2 has interpreted it as a wrong production, R1 has interpreted it more broadly, including cases of unexpected intonation (for example, the difference between a suspensive tone and a rising tone). Another discrepancy is related to the interpretation of lengthening. The category Unexpected boundary has to do not only with the presence of boundary tones but also with differences in duration. Thus, the simplification of the categories, on the one hand, affects the coherence of the evaluators, since their judgments do not have a unique acoustic correlate. On the other hand, it also affects the students, since the feedback they receive is supposed to tell them which areas they should improve, and this is not achieved with generic content messages. Given the complexity of the prosodic phenomena, the boundary between wrong and unexpected needs further clarification so as to achieve a greater coincidence of judgments among the evaluators and to define prosodic well-formedness in a more precise way.

Table 1. Contingency table of the labels assigned by the human evaluators. In each cell, the columns present the R1 data and the rows the R2 data. NM=No Mistake; UI=Uncommon intonation; UB=Unexpected boundary; OA=Over-accentuation; LB=Lack of boundary; LA=Lack of accent.

R2\R1    NM    UI    UB    OA    LB    LA
NM      723    20    11    36     1     1
UI       79    34     4     4     1     0
UB       10     5    38     2     0     0
OA       13     2     1    12     1     0
LB       12     0     0     0     5     0
LA        2     0     0     0     0     2

References

Chen, N. F., Wee, D., Tong, R., Ma, B., & Li, H. (2016). Large-scale characterization of non-native Mandarin Chinese spoken by speakers of European origin: Analysis on iCALL. Speech Communication, 84, 46-56.
Cruz-Ferreira, M. (1989). A test for non-native comprehension of intonation in English. International Review of Applied Linguistics in Language Teaching, 27(1), 23-39.
de-la-Mota, C. (2019). Improving non-native pronunciation: Teaching prosody to learners of Spanish as a second/foreign language. In Key Issues in the Teaching of Spanish Pronunciation (pp. 163-198). Routledge.
Escudero-Mancebo, D., Aguilar, L., González-Ferreras, C., Gutiérrez-González, Y. & Cardeñoso-Payo, V. (2014). On the use of a fuzzy classifier to speed up the Sp_ToBI labeling of the Glissando Spanish corpus. In Language Resources and Evaluation Conference (LREC).
Estebas-Vilaplana, E. (2017). The teaching and learning of L2 English intonation in a distance education environment: TL ToBI vs. the traditional models. Linguistica, 57(1), 73-91.
González-Ferreras, C., Escudero-Mancebo, D., Vivaracho-Pascual, C. & Cardeñoso-Payo, V. (2012). Improving automatic classification of prosodic events by pairwise coupling. Audio, Speech, and Language Processing, IEEE Transactions on, 20(7), 2045–2058.
Jenkins, J. (2000). The phonology of English as an international language. Oxford: Oxford University Press.
Prieto, P. & Roseano, P. (2010). Transcription of intonation of the Spanish language. München: Lincom Europa.
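The inter-rater agreement figures above are Cohen's kappa values. As an illustration (not the authors' script), the snippet below computes unweighted kappa directly from a contingency table such as Table 1; the values reported in the abstract may rest on different data handling, so the output should be read as approximate.

```python
# Illustrative computation of Cohen's kappa from a two-rater contingency table.
import numpy as np

def cohen_kappa(confusion: np.ndarray) -> float:
    """Unweighted Cohen's kappa for two raters from an n x n contingency table."""
    total = confusion.sum()
    p_obs = np.trace(confusion) / total                              # observed agreement
    p_exp = (confusion.sum(0) * confusion.sum(1)).sum() / total**2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Rows = R2, columns = R1, label order NM, UI, UB, OA, LB, LA (Table 1).
table1 = np.array([[723, 20, 11, 36, 1, 1],
                   [ 79, 34,  4,  4, 1, 0],
                   [ 10,  5, 38,  2, 0, 0],
                   [ 13,  2,  1, 12, 1, 0],
                   [ 12,  0,  0,  0, 5, 0],
                   [  2,  0,  0,  0, 0, 2]])
print(round(cohen_kappa(table1), 3))   # approximate; not expected to match 0.488 exactly
```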

Longitudinal changes in fundamental frequency: a study of 20 untrained speakers' reading from a 10-year perspective

Tekla Etelka Gráczi1,2, Anna Huszár1, Valéria Krepsz1, Alexandra Markó2,3
1 Hungarian Research Centre for Linguistics, 2 MTA-ELTE "Lendület" Lingual Articulation Research Group, 3 Eötvös Loránd University

Fundamental frequency (f0) varies across the speaker's age due to physiological changes caused by primary and secondary ageing [1]. The major changes in f0 appear in older age. Ageing starts around the age of 30-35 years [2]. Previous studies reached contradicting results. A cross-sectional study [3] on 88 untrained male speakers and a longitudinal study of 10 professional male speakers [4] found a stable period after the age of 20 until the age of 60 and a decrease afterward. Other studies, however, found a decrease in the mean f0 of 15 out of 20 broadcasters' reading between their late 20s/early 30s and late 50s/early 60s [5], in 4 professional speakers' data between 29 and 50 years [6], and in professional females also in their early adult ages [7].

Our aim is to study f0 changes in untrained young (20s-40s) adult speakers over a ten-year time lag. This relatively short time lag is relevant for speaker verification, speaker identification and clinical phonetics. We hypothesized that, similarly to [5-7], small changes appear in f0 on average, but with large individual differences.

10 female and 10 male speakers' recordings (20-45 years old at the first recording [8] and, 10 years later, 30-55 years old at the second recording [9]) were analyzed. 3 out of 10 women were treated for thyroid disease. None of them reported any other voice-related health issues within the 10 years between the two recordings or before. The reading aloud of the same 15 sentences was analysed. F0 parameters were measured with Praat [10]. The To Pitch function was set to automatic time step; the f0 range was 50-300 Hz for male and 100-500 Hz for female speakers. Outliers were eliminated. The mean f0 and the f0 range were calculated for the entire recording (global mean f0, global f0 range) and at sentence level (local mean f0 and local f0 range). The global f0 mean and range were tested by Scheirer–Ray–Hare tests [11], the local mean f0 and f0 range by linear mixed effects models [12, 13, 14]. These four measures were set as dependent variables. Gender and recording session (first vs. 10 years later) were set as factors allowing for interaction. The linear mixed models included random slopes for the recording session over speakers. Backward model selection was used.

A general trend of decrease was found in the female speakers' reading, and larger variability in the male speakers' reading. The appearance of change did not depend on the speakers' age. The global f0 range did not change in the analysed time interval; however, the local f0 range was higher in the second recording. The data showed considerable interspeaker variability. The difference between the two genders might result from various causes: menopausal status [15], the menstrual cycle [16] and pregnancy [17] may be among the reasons, but sociolinguistic factors, such as work status [18, 19], may also play a role. In future work the results can be used in clinical phonetics, speaker verification, and speaker recognition.

This research was supported by the National Research, Development and Innovation Office of Hungary, project No. FK-128814.

References

[1] Busse, E. W. 2002. General Theories of Aging. In Copeland, J. R. M., Abou-Saleh, M. T., & Blazer, D. G. (Eds.), Principles and Practice of Geriatric Psychiatry, Second Edition. Oxford: Wiley-Blackwell.
[2] Lalley, P. M. 2013. The aging respiratory system: Pulmonary structure, function and neural control. Respiratory Physiology & Neurobiology 187, 199-210.
[3] Stathopoulos, E. T., Huber, J. E., & Sussman, J. E. 2011. Changes in acoustic characteristics of the voice across the life span: Measures from individuals 4–93 years of age. J. Speech Lang. Hear. Res. 54, 1011-1021.
[4] Fouquet, M., Pisanski, K., Mathevon, N., & Reby, D. 2016. Seven and up: individual differences in male voice fundamental frequency emerge before puberty and remain stable throughout adulthood. R. Soc. open sci. 3, 160395.
[5] Decoster, W., & Debruyne, F. 2000. Longitudinal Voice Changes: Facts and Interpretation. Journal of Voice 14(2), 184-193.
[6] Harrington, J., Palethorpe, S., & Watson, C. I. 2007. Age-related changes in fundamental frequency and formants: a longitudinal study of four speakers. 8th Annual Conference of the International Speech Communication Association, 1081-1084.
[7] Reubold, U., Harrington, J., & Kleber, F. 2010. Vocal aging effects on F0 and the first formant: a longitudinal analysis in adult speakers. Speech Communication 52(7–8), 638-651.
[8] Neuberger, T., Gyarmathy, D., Gráczi, T. E., Horváth, V., Gósy, M., & Beke, A. 2014. Development of a large spontaneous speech database of agglutinative Hungarian language. In Proceedings of the 17th International Conference, TSD 2014, September 8-12, 2014, Brno.
[9] Gráczi, T. E., Huszár, A., Krepsz, V., Száraz, B., Damásdi, N., Markó, A. 2020. Longitudinális korpusz magyar felnőtt adatközlőkről. [Longitudinal corpus of Hungarian adult speakers' speech] In Magyar Számítógépes Nyelvészeti Konferencia 16, 103-114.
[10] Boersma, P., & Weenink, D. 2020. Praat: doing phonetics by computer [Computer program]. Version 6.1.24, retrieved 29 September 2020 from http://www.praat.org/
[11] Mangiafico, S. 2021. rcompanion: Functions to Support Extension Education Program Evaluation. R package version 2.4.0. https://CRAN.R-project.org/package=rcompanion
[12] Bates, D., Mächler, M., Bolker, B., & Walker, S. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67, 1-48.
[13] Akaike, H. 1974. A new look at the statistical model identification. IEEE Transactions on Automatic Control 19(6), 716-723.
[14] Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. 2017. lmerTest Package: Tests in Linear Mixed Effects Models. Journal of Statistical Software 82(13), 1-26. http://doi.org/10.18637/jss.v082.i13
[15] Bryant, G. A., Haselton, M. G. 2009. Vocal cues of ovulation in human females. Biology Letters 5(1), 12-15. https://doi.org/10.1098/rsbl.2008.0507
[16] Abitbol, J., Abitbol, P., Abitbol, B. 1999. Sex hormones and the female voice. Journal of Voice 13(3), 424-446. 10.1016/S0892-1997(99)80048-4
[17] Pisanski, K., Bhardwaj, K., Reby, D. 2018. Women's voice pitch lowers after pregnancy. Evolution and Human Behavior 39(4), 457-463. https://doi.org/10.1016/j.evolhumbehav.2018.04.002
[18] Oleszkiewicz, A., Pisanski, K., Lachowicz-Tabaczek, K., Sorokowska, A. 2017. Voice-based assessments of trustworthiness, competence, and warmth in blind and sighted adults. Psychon Bull Rev. 24(3), 856-862. https://doi.org/10.3758/s13423-016-1146-y
[19] Sorokowski, P., Puts, D., Johnson, J. et al. Voice of Authority: Professionals Lower Their Vocal Frequencies When Giving Expert Advice. J Nonverbal Behav 43, 257–269. https://doi.org/10.1007/s10919-019-00307-0
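The local f0 measures were modelled with linear mixed effects models fitted in R (lme4/lmerTest, [12, 14]). As a language-neutral illustration, the sketch below sets up an analogous model with random slopes in Python with statsmodels; the data file and its column names are hypothetical.

```python
# A minimal sketch of the kind of mixed model described above, using Python's
# statsmodels rather than the R lme4/lmerTest packages cited in [12-14].
# "local_f0.csv" and its columns (speaker, gender, session, local_f0) are
# hypothetical: one row per speaker x sentence x recording session.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("local_f0.csv")   # hypothetical file

# Fixed effects: gender, session and their interaction;
# random intercept and random slope for session over speakers.
model = smf.mixedlm("local_f0 ~ gender * session",
                    data=df,
                    groups=df["speaker"],
                    re_formula="~session")
print(model.fit(reml=True).summary())
```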

QuiTO el PLAto: Exploring stress deafness "persistency" through a comprehension task involving contrastive pairs in L2 Spanish

Syrine Daoussi, Marta Estrada Medina and Lorraine Baqué

The paper addresses the question of whether the alleged stress deafness of L1 French learners of Spanish is persistent (as claimed in some parts of the literature) or whether it diminishes with increasing L2 proficiency. Here we assess phonological stress deafness by means of an oral comprehension task including morphological or lexical stress errors opposed to segmental errors.

The absence of contrastive stress in French, unlike other Romance languages, has led researchers to the hypothesis that French speakers are "deaf" to L2 stress contrasts because the stress pattern is not encoded in their own language at a phonological level [1], [2]. Moreover, further studies comparing French advanced learners of Spanish versus French learners with lower proficiency in L2 Spanish shed light on the fact that stress deafness also appears to a high extent in advanced learners and came to the conclusion that stress deafness is "persistent" [3]. Apart from discrimination tasks, the advanced learners were asked to perform a lexical decision task involving a shift in the stress pattern, i.e. goRRO instead of GOrro ('hat'), which advanced learners did not recognize as erroneous [3]. However, other studies in the literature do not share the view of stress deafness; in particular, [4] found that French learners of Spanish were able to perceive different stress patterns correctly. Interestingly, [5] suggested that exposure to the L2 makes French speakers more sensitive to stress.

To our knowledge, no studies have assessed the persistency of stress deafness for French learners through an oral comprehension task in L2 Spanish that requires understanding the whole meaning of the sentence in order to spot the stress or segmental errors, both available in the task, but not at the same time. After a pilot study focused only on morphological errors (i.e. lavo 'I wash' instead of lavó 'he washed'), the authors found that i) in line with the literature, stress errors were more difficult to process than segmental errors and ii) against the findings of [3], advanced learners performed almost like the control Spanish group. In order to see whether the type of stress values of the stimuli could affect stress deafness or show different processing in lexical access, the authors created a corpus and a task comparing lexical stress pairs in Spanish (i.e. PLAto vs plaTO) to morphological stress pairs (CORto vs corTO), and that is the aim of the present study.

To do so, the authors created a new corpus with two separate parts to evaluate French phonological performance regarding errors on lexical vs morphological stress contrasts with randomized items. The same number of stimuli appeared in three conditions: correct sentences, incorrect sentences (wrong stress pattern), and incorrect sentences (wrong vowel). An example of each type of stimulus is presented below. The listening task included three levels of decreasing difficulty: first, the participants heard complex sentences, then simple sentences, and finally, isolated words. All stimuli appeared after a sentence that provided a semantic context (e.g. Yo soy camarero y cada día pongo la mesa para los clientes. // Primero el plato. 'I am a waiter and every day, I set the table for the customers. // First, the plate.'). Three groups of participants took part in this experiment: 25 French speakers with an advanced level of Spanish (C1-C2 of the CEFR), 25 French speakers with an intermediate level of Spanish (B1-B2 of the CEFR), and 25 native Spanish speakers as a control group. The participants were asked to assess the linguistic acceptability of morphological or lexical stress errors inserted into sentences they heard.

The obtained data were analysed by means of linear mixed-effects regression models, in which participants were introduced as random variables. Independent factors were Group, Level of L2 Spanish (intermediate vs advanced), Type of error (segmental vs stress-related), Lexical vs morphological stimuli, Stress pattern and Item complexity. The dependent variable was the Signal Detection Theory measure of sensitivity, loglinear A' [6].

The results showed that the L2 advanced participants' performance was near-native and far better than that of the French intermediate learners of Spanish. In other words, "stress deafness" is not as persistent in advanced learners of Spanish as predicted by [3]. The findings of this study are that i) intermediate learners showed almost no sensitivity to any kind of error, ii) for advanced learners, segmental errors are easier to detect than stress errors, iii) errors are easier to detect on smaller units such as isolated words, and iv) there are only differences between groups on the stress pattern. The fact that the stress errors were lexical or morphological was not significant in this study.

Although there is not yet a psycholinguistic model predicting the acquisition of prosody, we share the view of [7], which suggests a gradual acquisition of segments, and this led us to suppose that prosodic acquisition is also gradual and follows segmental acquisition.

Key words: stress deafness, L2 acquisition, French, Spanish, lexical stress, morphological stress, contrastive stress, oral comprehension.

Examples of the stimuli created to test participants' performance on simple sentences:

LEXICAL STRESS CONTRASTS
Condition                                        Context + target sentence
Correct sentence                                 Yo soy camarero y cada día pongo la mesa para los clientes. // Primero el PLAto
Incorrect sentence with a wrong stress pattern   Yo soy camarero y cada día pongo la mesa para los clientes. // Primero el plaTO
Incorrect sentence with a segmental error        Yo soy camarero y cada día pongo la mesa para los clientes. // Primero el PLEto

MORPHOLOGICAL STRESS CONTRASTS
Condition                                        Context + target sentence
Correct sentence                                 Juan es camarero y ese día hizo lo mismo de siempre con los platos. // Primero los quiTO.
Incorrect sentence with a wrong stress pattern   Juan es camarero y ese día hizo lo mismo de siempre con los platos. // Primero los QUIto.
Incorrect sentence with a segmental error        Juan es camarero y ese día hizo lo mismo de siempre con los platos. // Primero los cuTO.
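The sensitivity measure used in the analysis is the loglinear A' of [6]. The snippet below is a hedged sketch of one common way of computing such a score from hit and false-alarm counts; it is not necessarily the exact procedure used by the authors.

```python
# A sketch of a loglinear-corrected A' sensitivity score; one common formulation,
# not necessarily the authors' exact procedure (see [6] for the measures).
def a_prime(hits, misses, false_alarms, correct_rejections):
    # Loglinear correction: add 0.5 to each count so rates of 0 or 1 are avoided.
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    # Nonparametric A' (Pollack & Norman style formula).
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Example: a listener who flags 18 of 20 error sentences and 3 of 20 correct ones.
print(round(a_prime(hits=18, misses=2, false_alarms=3, correct_rejections=17), 3))
```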

References

[1] Dupoux, E., Pallier, C., Sebastián, N., & Mehler, J. (1997). A Destressing "Deafness" in French? Journal of Memory and Language, 36, 406–421. Retrieved from https://pdfs.semanticscholar.org/0c8f/f9ce3660dae2d4aa58ebb89b390a5ea4f0c0.pdf
[2] Dupoux, E., Peperkamp, S., & Sebastián-Gallés, N. (2001). A robust method to study stress 'deafness'. Journal of the Acoustical Society of America, 110, 1606–1618.
[3] Dupoux, E., Sebastián-Gallés, N., Navarrete, E., & Peperkamp, S. (2008). Persistent stress "deafness": the case of French learners of Spanish. Cognition, 106(2), 682–706.
[4] Muñoz, M., Panissal, N., Billières, M., & Baqué, L. (2009). ¿La metáfora de la criba fonológica se puede aplicar a la percepción del acento léxico español?: Estudio experimental con estudiantes francófonos. In Carmen M. Bretones Callejas et al. (Ed.), Actas del XXIV Congreso Internacional de la Asociación Española de Lingüística Aplicada. Almería: Universidad de Almería.
[5] Schwab, S., & Llisterri, J. (2010). The perception of Spanish lexical stress by French speakers: Stress identification and time cost. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), New Sounds 2010. Proceedings of the 6th international symposium on the acquisition of second language speech (pp. 409–414). Retrieved from http://liceu.uab.cat/~joaquim/publicacions/Schwab_
[6] Stanislaw, H., & Todorov, N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31(1), 137–149. https://doi.org/10.3758/BF03207704
[7] Van Leussen, J.-W., & Escudero, P. (2015). Learning to perceive and recognize a second language: the L2LP model revised. Frontiers in Psychology, 6.

SWP vs. WSP in L2 word stress assignment: A case study of "Ungheriano"

Bálint Huszthy
Pázmány Péter Catholic University, Budapest

Foreign accents represent a priceless data source for theoretical phonology; in fact, numerous significant components of a foreign accent stem from interference with the L1's synchronic phonology: productive phenomena are transferred onto L2 pronunciation (cf. [1], [2], [3], [4], [7]). The present case study is built on an experiment made with Hungarian native speakers who fluently speak Italian as L2. Most phonological components of Hungarians' Italian pronunciation (called "Ungheriano") originate in the differences in syllable structure between L1 and L2.

Stop+liquid (TR) clusters are tautosyllabic in Italian [6], while they are heterosyllabic in Hungarian [8]. This fact is confirmed by various features of Ungheriano; for instance, Hungarians usually do not lengthen the stressed vowel before a TR cluster in Italian words. The stressed vowel is always long in Italian in an open syllable due to the Stress-to-Weight Principle (SWP); i.e., the stressed syllable must be heavy (cf. [5], [6]), which is obtained in Italian by stressed vowel lengthening in open syllables. Apparently, Hungarian speakers perceive syllables before TR clusters as closed ones because of the heterosyllabic syllabification of TR; that is, they do not feel the need to lengthen the stressed vowel, since the syllable is already heavy due to the coda consonant (for examples see Figure 1). This kind of missyllabification has further consequences for Ungheriano, among others in the selection of the stressed syllable.

Word stress assignment causes trouble for any learner of Italian as L2, since stress can be placed on any of the last three syllables of a polysyllabic word (final stress is marked by orthography, so in this study we only deal with penultimate and antepenultimate stress). The place of the word stress goes back to diachronic reasons in Italian; however, in Ungheriano it is based on synchronic phonological patterns. According to our proposal, SWP is replaced in Ungheriano by the opposite phonological pattern, the Weight-to-Stress Principle (WSP; cf. [5]); that is, Hungarians tend to place the stress on a syllable which is already heavy, which leads to pronunciation mistakes. We sort the occurring mistakes into four categories (see Figures 2, 3). If the antepenultimate syllable is closed (C) and the penultimate is open (O) in an Italian word (pattern CO), Hungarians follow WSP and place the stress on the closed syllable, even if in Italian it is not always the stressed one (Figure 2/a). If the penultimate syllable is closed and the antepenultimate is open (pattern OC), Hungarians also follow WSP (Figure 2/b). (Note that we use the labels C/O instead of H/L (heavy/light) because of the differences in L1/L2 syllabification, and because we take the written form of the words as inputs, which does not include vowel length.) In terms of Optimality Theory [5] this means that Dep-IO and WSP are high-ranked constraints in Ungheriano; that is, Hungarians place the stress on the closed syllable so as to avoid "unnecessary" insertion processes, like vowel lengthening. The missyllabification of TR clusters can also produce closed syllables for Hungarians, so it increases the number of inputs for wrong stress assignment (cf. the examples of Figure 2/b). If both relevant syllables are closed in the target words (pattern CC), Hungarians usually place the stress on the second closed syllable, probably by analogy, affected by the most typical Italian penultimate word stress pattern (Figure 3/a). If both syllables are open (pattern OO), WSP cannot be applied. In this case the typical Ungheriano stress assignment favours the first open syllable (Figure 3/b). This tendency goes back to SWP; i.e., if WSP is not an option, Ungheriano appeals to SWP. In other words, if Hungarians need to choose to lengthen a vowel, they prefer to do it with the first available one; e.g., It. canoa 'canoe' [kaˈnɔːa] > Ungh. [ˈkaːnoa], It. radice 'root' [raˈdiːtʃe] > Ungh. [ˈraːditʃɛ], etc. The findings about Ungheriano word stress assignment are also supported by a formal analysis in classical OT.

(1) Italian vs. Ungheriano
s[oː]-pra 'above'         s[o]p-ra
v[eː]-tro 'glass'         v[ɛ]t-ro
man[ɔː]-vra 'manoeuvre'   man[o]v-ra
m[aː]-dre 'mother'        m[a]d-re
d[uː]-plice 'double'      d[u]p-lice
Figure 1. Examples of Ungheriano syllabification mistakes in TR-clusters.

(2)
a. Pattern CO (Italian vs. Ungheriano)       b. Pattern OC (Italian vs. Ungheriano)
cornìce 'frame'          còrnice             ànatra 'duck'                 anàt-ra
graffìto 'graffiti'      gràffito            cèlebre 'famous'              celèb-re
pantèra 'panther'        pàntera             lùgubre 'lugubrious'          lugùb-re
nocciòla 'hazelnut'      nòcciola            màcabro 'macabre'             macàb-ro
appendìce 'attachment'   appèndice           pènetra 'to penetrate, 3sg'   penèt-ra
Figure 2. Examples of Ungheriano word stress assignment.

(3)
a. Pattern CC (Italian vs. Ungheriano)       b. Pattern OO (Italian vs. Ungheriano)
àlbatro 'albatross'      albàt-ro            canòa 'canoe'                 cànoa
àrbitro 'referee'        arbìt-ro            civìle 'civil'                cìvile
càttedra 'desk'          cattèd-ra           erède 'heir'                  èrede
màndorla 'almond'        mandòrla            radìce 'root'                 ràdice
Òtranto 'city in Apulia' Ot-rànto            sevèro 'strict'               sèvero
Figure 3. Examples of Ungheriano word stress assignment.

References

[1] Altenberg, E. P. & Vago, R. M. 1983. Theoretical implications of an error analysis of second language phonology production. Language Learning 33, 427-47.
[2] Archibald, J. 1998. Second Language Phonology. Amsterdam & Philadelphia: John Benjamins.
[3] Flege, J. E. 1981. The phonological basis of foreign accent: A hypothesis. TESOL Quarterly 15, 443-55.
[4] Hansen Edwards, J. G. & Zampini, M. L. 2008. Phonology and Second Language Acquisition. Amsterdam & Philadelphia: John Benjamins.
[5] Kager, R. 1999. Optimality Theory. Cambridge: Cambridge University Press.
[6] Krämer, M. 2009. The Phonology of Italian. Oxford: Oxford University Press.
[7] Major, R. C. 2001. Foreign Accent: The Ontogeny and Phylogeny of Second Language Phonology. Mahwah, NJ: Lawrence Erlbaum.
[8] Siptár, P. & Törkenczy, M. 2000. The Phonology of Hungarian. Oxford: Oxford University Press.

Rhythmic Transfer in the Process of Foreign Language Acquisition: The Case of Galician University Students of English

Rosalía Rodríguez Vázquez1 and Paolo Roseano2
1Universidade de Vigo, 2Universitat de Barcelona

Linguistic rhythm has been defined as isochrony of speech intervals [1]. A substantial amount of phonological research [2, 3, 4] has explored the rhythmic characterisation of languages as predominantly stress-timed, syllable-timed, and mora-timed. In recent years, researchers [5, 6, 7] have discussed and put into practice some quantifiable measures to prove the phonetic reality of such a classification and thus confirm the existence of objective, acoustically measurable differences between stress-timed and syllable-timed languages. As much as the research on the rhythmic differences between languages has evolved, the production of L2 in contrast to L1 rhythmic patterns has received little attention in the literature on L2 acquisition and linguistic transfer [8]. The existing literature on prosodic acquisition proves that the acquisition of L2 speech rhythm is demonstrably a challenge for language learners [9]. The potential influence of the rhythm of the L1 on the L2 must be carefully explored so as to come to a sound conclusion regarding the role of prosodic transfer in the acquisition of L2 rhythm.

The present study analyses the production of speech rhythm in the English language classroom by two groups of Galician-Spanish bilingual learners whose dominant language is Galician. The aim of the study is to verify a) whether the production of the foreign language is affected by rhythmic transfer from the native language of the learners, and b) whether the degree of rhythmic transfer decreases as the amount of formal instruction in the foreign language increases. In order to conduct this investigation, two groups of native speakers of Galician currently studying a degree in Translation and Interpreting at the University of Vigo, where English is one of the languages studied, were recorded. The first group comprised 6 subjects who, at the moment of recording, certified a B1 level of English (i.e. they had only just started their university degree). The second group comprised 6 subjects who certified a C1 level (i.e. they were in the last year of their degree and had done at least one semester in an English-speaking country). The data were obtained by means of a reading task where students had to read the text The North Wind and the Sun in English. In addition, one control group of 6 Southern British English (SBE) speakers, who also had to read the text in English, and one control group of 6 Galician (GAL) speakers (i.e. Spanish-Galician bilinguals who were Galician-dominant speakers), who had to read the Galician version of the text, were recorded. The acoustic analysis was carried out with Praat [10]. For each recording, the vocalic and consonantal intervals were annotated in a TextGrid, and the statistical analysis was carried out by means of Correlatore [11].

The results obtained with the metrics put forward by [12] and [13], which prove to be equally effective (Figures 1, 2, 3), show that a) GAL and SBE are located at the opposite corners of each plot, which is consistent with the fact that GAL is a syllable-timed language, while SBE is stress-timed, b) the EFL groups appear in an intermediate position between the speakers' L1 and those same speakers' foreign language, and c) taking into account the level of the EFL speakers, group A (subjects with a B1 level of English) is closer to L1, while group C (students with a C1 level of English) is closer to L2. If speakers are analysed separately (Figures 4, 5), a few more detailed considerations may be added: on the whole, the distribution along the diagonal does not change, although there is an area where the clouds of SBE speakers and C speakers overlap. This might mean that some C speakers display near native-like rhythm, which is consistent with the fact that the tested subjects had a C1 level of English and had spent at least one semester studying in the UK. Overall, this study proves that there is rhythmic transfer from L1 (Galician) to L2 (English) in both the consonantal and the vocalic intervals and that, in spite of small individual differences, the degree of rhythmic transfer from Galician L1 to English L2 decreases as the level of the foreign language increases.

Figure 1: ΔC, ΔV Figure 2: ΔC, %V Figure 3: CrPVI & VnPVI


Figure 4: ΔC, ΔV (individual values) Figure 5: CrPVI & VnPVI (individual values)

References

[1] Abercrombie, D. 1967. Elements of General Phonetics. Edinburgh: Edinburgh University Press.
[2] Pike, K. 1945. The intonation of American English. Ann Arbor: University of Michigan Press.
[3] Dauer, R. 1983. Stress-timing and syllable-timing reanalyzed. Journal of Phonetics 11, 51-62.
[4] Bertinetto, P. M. 1989. Reflections on the dichotomy "stress" vs. "syllable-timing". Revue de Phonétique Appliquée, 91-93.
[5] Ramus, F., Nespor, M., & Mehler, J. 1999. Correlates of linguistic rhythm in the speech signal. Cognition 73, 265-292.
[6] Grabe, E., & Low, E. L. 2002. Durational variability in speech and the rhythm class hypothesis. In Gussenhoven, C. & N. Warner (Eds.), Papers in Laboratory Phonology 7. Berlin: de Gruyter, 515-546.
[7] Dellwo, V. 2006. Rhythm and Speech Rate: A Variation Coefficient for deltaC. In Karnowski, P. & I. Szigeti (Eds.), Language and language-processing. Frankfurt: Peter Lang, 231-241.
[8] Ordin, M. & Polyanskaya, L. 2015. Acquisition of speech rhythm in a second language by learners with rhythmically different native languages. The Journal of the Acoustical Society of America 138(2), 533-545.
[9] Kinoshita, K., & Sheppard, C. 2011. Validating acoustic measures of speech rhythm for second language acquisition. In Lee, W. S. & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences, Hong Kong: City University of Hong Kong, 1086-1089.
[10] Boersma, P., & Weenink, D. 2019. Praat: Doing phonetics by computer. Computer program. Version 6.1.07, retrieved from http://www.praat.org/
[11] Mairano, P., & Romano, A. 2010. Un confronto tra diverse metriche ritmiche usando Correlatore. In Schmid, S., M. Schwarzenbach, & D. Studer (Eds.), La dimensione temporale del parlato. Proceedings of the V National AISV Congress (University of Zurich, Collegiengebaude, 4-6 February 2009), 79-100.
[12] Ramus, F., Nespor, M., & Mehler, J. 1999. Correlates of linguistic rhythm in the speech signal. Cognition 73, 265-292.
[13] Grabe, E., & Low, E. L. 2002. Durational variability in speech and the rhythm class hypothesis. In Gussenhoven, C. & N. Warner (Eds.), Papers in Laboratory Phonology 7. Berlin: de Gruyter, 515-546.
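For readers unfamiliar with the metrics plotted in Figures 1-5, the sketch below illustrates how ΔC/ΔV and %V [12] and the raw and normalised Pairwise Variability Index (rPVI/nPVI) [13] can be computed from vocalic and consonantal interval durations. It is an illustration only, not the Correlatore implementation, and the example durations are invented.

```python
# Illustrative computation of common rhythm metrics from interval durations (seconds).
import numpy as np

def delta_and_percent_v(vowels, consonants):
    """Standard deviation of vocalic/consonantal intervals and % vocalic duration."""
    v, c = np.asarray(vowels), np.asarray(consonants)
    percent_v = 100 * v.sum() / (v.sum() + c.sum())
    return v.std(ddof=0), c.std(ddof=0), percent_v          # deltaV, deltaC, %V

def rpvi(intervals):
    """Raw PVI: mean absolute difference between successive intervals."""
    d = np.asarray(intervals)
    return np.mean(np.abs(np.diff(d)))

def npvi(intervals):
    """Normalised PVI: differences scaled by the mean of each successive pair, x100."""
    d = np.asarray(intervals)
    return 100 * np.mean(np.abs(np.diff(d)) / ((d[1:] + d[:-1]) / 2))

# Hypothetical interval durations extracted from a Praat TextGrid.
vowels = [0.08, 0.12, 0.06, 0.15, 0.09]
consonants = [0.07, 0.11, 0.05, 0.14, 0.10]
print(delta_and_percent_v(vowels, consonants))   # deltaV, deltaC, %V
print(rpvi(consonants), npvi(vowels))            # CrPVI, VnPVI
```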

Pitch and Temporal Characteristics of L2 Spanish Speech by Chinese Speakers

Peizhu Shang
Universitat de Barcelona

Adult learners of second languages face great difficulties when trying to produce in a native-like way the speech patterns of the target language. According to previous literature, this hardship of correctly speaking a foreign language has been mainly correlated with the interaction between the phonetic and phonological systems of the two languages, specifically, through the transfer of L1 properties to the L2. Based on this idea, learners with different language backgrounds were expected to use different pitch and temporal skills in the L2, because different language communities have been reported to vary in the use of pitch and temporal patterns in speech. However, a growing number of recent studies found that L2 speakers with different language backgrounds consistently had a narrower pitch range, less variable pitch, and lower speech fluency compared to L1 speech [1–6]. These findings seem to suggest some universal constraints during the L2 learning process, irrespective of the typological distance between the L1-L2 language pairs. Nevertheless, many of the existing studies were focused on contact between intonational languages, and the results they have drawn might be less convincing due to the limited evidence from tonal languages. Therefore, with the present study, we want to fill in the existing research gap and extend previous hypotheses to a relatively new language pair, Chinese-Spanish. The objective of the present study was to investigate (1) whether the pitch and temporal patterns produced by Chinese learners of Spanish are highly dependent on the L1 properties or are more in support of an L2 general trend hypothesis; (2) whether the acquisition of L2 pitch and temporal values reflects different levels of language proficiency and (3) whether the L2 learning of Spanish pitch and temporal features shows different levels of difficulty depending on pragmatically different question types.

To this end, we recorded 37 participants who were divided into three language groups (L2 intermediate, L2 advanced, and L1 native group) and produced a total of 555 utterances that varied in question type and stress position. The data analysis was performed using two types of measurements: pitch and temporal measures. Specifically, pitch characteristics were evaluated using three parameters based on the F0 distribution (namely, 80% span on the utterance level, 100% span on the syllable level, and the pitch dynamism quotient, PDQ) [4, 7]. Temporal traits were quantified through the pitch change rate, the speech rate, and the articulation rate. The six dependent variables were entered respectively into a linear mixed model with the language group, gender, question type, stress position and all their possible interactions as fixed effects. The subject was set as random factor with all possible random intercepts.

Overall, the results of our study indicate that the L2 Spanish produced by Chinese speakers deviated from the target pitch patterns, mainly in the compression of pitch variability and pitch span (see Figure 1). Apart from these features, a strong reduction of pitch change rate, speech rate and articulation rate was also observed in the L2 speech (see Figure 1). These findings, as a whole, appear to add to the existing evidence of a universal developmental trajectory during the L2 pitch and temporal learning process [1–6]. Furthermore, the main effect of question type seems to lend support to previous studies that assumed a rank of difficulties among learners in acquiring the phonetic details of different sentence types [8–9]. The relatively faster acquisition rate may be partially correlated with the typological closeness between the L1 and the L2 or the perceptual salience of some pitch movements in the target language. Finally, it was often observed in our study that Chinese advanced learners performed comparatively better than the intermediate group, suggesting that there may be a general trend of improvement in L2 speech with increasing learner proficiency.

Figure 1. 80% pitch span (a), pitch variability (b), pitch change rate (c) and speech rate (d) of the 3 language groups depending on the question type (the abbreviations "YN", "WH", "DJ", "CYN", and "TAG" represent the "information-seeking yes/no question", "wh-question", "disjunctive question", "confirmation-seeking yes/no question" and "tag question", respectively).

[1] Mennen, I., Schaeffler, F., & Dickie, C. 2014. Second language acquisition of pitch range in German learners of English. Studies in Second Language Acquisition, 36(2), 303–329.
[2] Mennen, I., Schaeffler, F., & Docherty, G. 2007. Pitching it differently: a comparison of the pitch ranges of German and English speakers. In 16th International Congress of Phonetic Sciences, 1769–1772.
[3] Yuan, J. et al. 2018. Pitch characteristics of L2 English speech by Chinese speakers: A large-scale study. In Proceedings of the Annual Conference of the International Speech Communication Association (Hyderabad), 2593–2597.
[4] Shi, S., Zhang, J., & Xie, Y. 2014. Cross-language comparison of F0 range in speakers of native Chinese, native Japanese and Chinese L2 of Japanese: Preliminary results of a corpus-based analysis. In Proceedings of the 9th International Symposium on Chinese Spoken Language Processing, 241–244.
[5] Busà, M. G., & Urbani, M. 2011. A Cross Linguistic Analysis of Pitch Range in English L1 and L2. In XVII International Congress of Phonetic Sciences (Hong Kong), 380–383.
[6] Peters, J. 2019. Fluency and speaking fundamental frequency in bilingual speakers of High and Low German. In Proceedings of the 19th International Congress of Phonetic Sciences (Melbourne), 1–5.
[7] Hincks, R. 2004. Processing the prosody of oral presentations. In Proceedings of InSTIL/ICALL2004 - NLP and Speech Technologies in Advanced Language Systems (Venice, Italy), 63–66.
[8] Yuan, C., González-Fuente, S., Baills, F. & Prieto, P. 2019. Observing pitch gestures favors the learning of Spanish intonation by Mandarin speakers. Studies in Second Language Acquisition, 41(1), 5–32.
[9] Cortés Moreno, M. 2004. Análisis acústico de la producción de la entonación española por parte de sinohablantes. Estudios de Fonética Experimental, 13, 80–110.
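As a rough illustration of the pitch measures named above, the sketch below computes an 80% span, a full span and a pitch dynamism quotient from an f0 track. The exact definitions used in the study follow [4, 7] and may differ in detail; the f0 values and the percentile-based definition of the 80% span are assumptions made here for illustration.

```python
# Hedged sketch of pitch-distribution measures from an f0 track (one value per frame).
import numpy as np

def pitch_measures(f0):
    f0 = np.asarray([x for x in f0 if x > 0])                  # drop unvoiced frames
    span_80 = np.percentile(f0, 90) - np.percentile(f0, 10)    # middle 80% of the distribution
    span_100 = f0.max() - f0.min()                             # full span
    pdq = f0.std(ddof=0) / f0.mean()                           # pitch dynamism quotient
    return span_80, span_100, pdq

# Hypothetical utterance-level f0 values (Hz); zeros mark unvoiced frames.
print(pitch_measures([180, 0, 190, 210, 240, 0, 220, 200, 175]))
```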

Phonological Influence of L1 on the Perception of Mandarin Tones

Man Gao1 and Chun Zhang2
1Dalarna University College, Sweden; 2Aarhus University, Denmark

Although the perception of lexical tones in Mandarin Chinese by non-native learners has been well studied, much of the work is limited to learners whose native language is either another tone language, or some variety of English or French. Only a few studies have focused on the perception by learners with a Scandinavian background. Research on tonal perception by learners with a Swedish or Norwegian background may be potentially rewarding since these are categorized as pitch accent (or word accent) languages. Unlike non-tonal languages such as English, they use pitch variation to contrast word meaning, albeit in a more restricted manner than tone languages. One previous study [1] found that Swedish learners perceived Mandarin tones differently than English-speaking learners. Swedes were more likely to confuse Tone 1 and Tone 2, and, more interestingly, duration played a more important role for the Swedes in differentiating fine pitch variations. In the present study, we aim to expand on the results of that study [1] by recruiting both Swedish and Danish learners and investigating the effect of L1 prosody on the perception of Mandarin tones.

Although both Swedish and Danish are Scandinavian languages, their phonology differs in two significant ways: 1) Swedish has two pitch accents, known as Accent 1 ('acute') and Accent 2 ('grave'); in contrast, Danish has 'stød', a prosodic feature that is often described as a laryngealization or a type of creaky voice [2]. 2) Swedish has consonant quantity and displays complementary quantity, whereby stressed syllables must contain either a long vowel or a long consonant. However, Danish, like English and German, lacks this characteristic. These two distinct phonological features may potentially influence how pitch contours are perceived by Swedes and Danes, in terms of error patterns and/or perceptual cues utilized.

To explore this possibility, 30 Swedish and 30 Danish beginner-level learners of Mandarin Chinese participated in a four-way tone identification task and a perceptual judgement task. 15 native Chinese speakers were employed as a control group. Monosyllables produced by a female native speaker of Beijing Mandarin were used in the identification task and were furthermore utilized for resynthesizing speech stimuli for the perceptual judgement task. Specifically, four continua (Mandarin syllables /i/ and /ma/, with long and short duration respectively) were constructed using the PSOLA algorithm in Praat [3]. Each continuum includes 7 stimuli with incrementally varied pitch contours from a Tone 1 exemplar to a Tone 2 exemplar. These stimuli were presented in randomized order to all three groups of listeners, who were asked to categorize each stimulus as Tone 1 or Tone 2.

The results of the preliminary analysis of the perception data may be summarized thus. The Tone 2–Tone 3 confusion, which is most commonly observed among native English speakers ([4], [5], [6]), was found for Danish learners but not Swedish ones. Tone 1 and Tone 2 are challenging to Swedes, but only Tone 2 is most challenging for Danes. As to the perception of the Tone 1–Tone 2 continuum, data from approximately two-thirds of the listeners in the Swedish and Danish groups, those with a high accuracy percentage in the identification task, were subjected to statistical analysis and comparison with the native speaker group. Application of probit analysis [7] suggests that the Swedish learners are more sensitive to duration than are native Mandarin speakers or Danish learners. For the Swedes, the perceptual boundaries are further towards Tone 1 with longer syllables (meaning more tokens are identified as Tone 2), and a narrower boundary width (meaning more categorical-like perception) is also associated with longer syllables. The Danish group, however, showed no differentiation between short and long syllables regarding either the perceptual boundary or boundary width. The results of the present study suggest that Swedish and Danish listeners do not perceive the Tone 1 and Tone 2 contrast in the same way as native speakers, although they can identify them with a high accuracy rate. Furthermore, they add to the growing body of evidence that L1 phonology influences the perception of non-native lexical tones. The present study appears to be the first to investigate and contrast the perception of Mandarin tones by learners from two Scandinavian languages.

References

[1] Gao, M. (2019). Perception of Tone 1–Tone 2 contrast by Swedish learners of Mandarin Chinese. In FONETIK 2019, 10–12 June 2019 (Vol. 27, pp. 7–12).
[2] Fischer-Jørgensen, E. (1989). Phonetic analysis of the stød in standard Danish. Phonetica, 46(1–3), 1–59.
[3] Boersma, P. & Weenink, D. (2018). Praat: doing phonetics by computer [Computer program]. Version 6.0.42, retrieved 15 August 2018 from http://www.praat.org/
[4] Kiriloff, C. (1969). On the auditory perception of tones in Mandarin. Phonetica, 20(2–4), 63–67.
[5] Chen, Q. (1997). Toward a sequential approach for tonal error analysis. Journal of the Chinese Language Teachers Association, 32, 21–39.
[6] Hao, Y.-C. (2012). Second language acquisition of Mandarin Chinese tones by tonal and non-tonal language speakers. Journal of Phonetics, 40, 269–279.
[7] Finney, D. (1971). Probit analysis. Cambridge: Cambridge University Press, 50–80.
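The boundary and boundary-width estimates come from probit analysis [7]. The sketch below illustrates the idea by fitting a cumulative-normal psychometric function to made-up Tone 2 response counts along a 7-step continuum; it is not the authors' analysis, and the counts are invented.

```python
# Illustrative probit-style psychometric fit for a Tone 1 - Tone 2 continuum.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

steps = np.arange(1, 8)                          # continuum steps 1..7
n_trials = np.full(7, 20)                        # presentations per step (hypothetical)
n_tone2 = np.array([1, 2, 5, 11, 16, 19, 20])    # "Tone 2" responses (hypothetical)

def probit(x, mu, sigma):
    # Cumulative normal: mu = category boundary, sigma indexes boundary width.
    return norm.cdf(x, loc=mu, scale=sigma)

popt, _ = curve_fit(probit, steps, n_tone2 / n_trials, p0=[4.0, 1.0])
boundary, width = popt
print(f"boundary at step {boundary:.2f}, boundary width (sd) {width:.2f}")
```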

Native variety interference when imitating intonation in a non-native variety of Italian

Michelina Savino1, Caterina Ventura2, Simona Sbranna2, Aviad Albert2 & Martine Grice2
1Dept. of Education, Psychology, Communication, University of Bari, Italy
2IfL-Phonetics, University of Cologne, Germany

Background. Previous studies have shown that speakers of one variety of a language are able to imitate an F0 contour from another if the contours differ in the phonetic implementation of a shared phonological function and overall F0 shape, either approximating [1] or reproducing [2] the target contour. [2] involved imitation of polar question intonation across two varieties of Italian, Bari and Naples, both exhibiting a rise-fall tune but differing in peak alignment. In the present study speakers also imitate question contours, but this time Bari Italian (BI) speakers imitate Lecce Italian (LI), which, crucially, has only a simple rise [3]. BI has a rising intonation used to convey non-finality [4]. We investigate whether the non-final rise in their inventory influences BI speakers when imitating LI questions, and if so, whether word frequency plays a role, given previous findings suggesting an advantage for imitation with low frequency words ([5], [2]).

Methods. We compared the intonational realisations of 20 (10 high frequency, 10 low frequency) trisyllabic words with penultimate stress (preceded by a monosyllable), produced by 16 BI speakers as non-final items in a list in a reading task, and the same words as imitations of pre-recorded queries from a native LI speaker. Recordings were analysed in terms of synchrony (the lag between Centre of Gravity and Centre of Mass within each syllable), which estimates the shape of the F0 contour within syllables (positive values indicate a rise, negative values a fall, null values a level tone) [7, 8]. Statistical significance of the results was tested by fitting a linear mixed-effects model. Difference scores between synchrony values for the LI target and the BI imitated query and between the LI target and the BI non-final were computed and used as dependent variables in the fitted model. Type (BI imitated query and BI non-final), frequency (low and high) and syllable (Syll1, Syll2, Syll3 and Syll4, where Syll3 is the nuclear syllable) were used as fixed factors. Since the full random effect structure showed overparameterization, backward elimination based on likelihood ratio-tests was used [9]. Thus, random intercepts for subject and item as well as by-subject random slopes for type and syllable were included as random effects.

Results. Periograms [6] of representative examples are provided in Fig. 1. The distribution of synchrony values of all rising non-finals and imitated queries averaged across all BI speakers, and those of the LI query targets (Fig. 2), show that the rising contours produced in the three conditions tend to be different in shape: the non-finals have a fall (mostly negative values on Syll3) preceding the rise, whereas the LI targets are level before the rise. Interestingly, the imitated queries lie between the two (for both Syll3 and Syll4, the distribution shifts towards higher values than in non-finals). The difference between imitated queries and non-finals on Syll3 and Syll4 reaches significance independently of word frequency (β = 3.39, SE = 0.71, p < .001). This indicates that speakers do not completely resort to the non-final contour in their native variety to imitate the target. For Syll3 the difference between target and imitated queries (β = 3.71, SE = 1.02, p < .01) and between target queries and non-finals (β = 7.12, SE = 1.12, p < .001) is also significant. The intermediate nature of the imitated rise is apparent in the averaged F0 curves (Fig. 3).

Discussion. When asked to imitate the LI question contour, BI speakers produce a rise rather than a rise-fall, as in their native variety, but they do not manage to imitate the entire contour faithfully, possibly owing to interference from the rising F0 contour in the native inventory, even though it is phonetically different from the LI question contour and conveys a different communicative function. Interestingly, this interference is active regardless of whether high frequency or low frequency words are imitated; word frequency does not play a role. Finally, BI imitators perform better in reproducing the terminal rise than the preceding part of the target contour, possibly because the perceptual salience of the rise, combined with its final position (recency effect), plays a role in terms of working memory resources [10].

Figure 1. Periograms and periodic energy curves of "è baNAle", 'it is trivial' (nuclear syllable in caps = Syll3), produced by a BI speaker as a native non-final contour (yellow box) and as an imitation of the LI query contour (red box), compared with the query contour produced by a LI speaker (blue box). Vertical lines indicate Centre of Gravity (blue) and Centre of Mass (red) within syllables.
Figure 2. Violin plots for synchrony measures: BI speakers in non-finality and imitation conditions and LI speaker for the target condition. Black dots on violins represent mean values. Syll3 = nuclear syllable.
Figure 3. Averaged F0 curves of LI queries (blue), imitated queries by BI speakers (red) and BI non-finals (yellow). Grey areas = SEs; vertical line = Syll4 onset.

References

[1] German, J. S. 2012. Dialect adaptation and two dimensions of tune. Proceedings of Speech Prosody 2012 (Shanghai, China), 430-433.
[2] D'Imperio, M., Cavone, R. & Petrone, C. 2014. Phonetic and phonological imitation of intonation in two varieties of Italian. Frontiers in Psychology 5, 1-10.
[3] Savino, M. 2012. The intonation of polar questions in Italian: Where is the rise? Journal of the International Phonetic Association 42(1), 23-48.
[4] Savino, M. 2004. Intonational cues to discourse structure in a variety of Italian. In Gilles, P. & Peters, J. (eds.), Regional Variation in Intonation, Tuebingen: Niemeyer, 145-159.
[5] Goldinger, S. D. 1998. Echo or echoes? An episodic theory of lexical access. Psychol. Review 105, 251-279.
[6] Albert, A., Cangemi, F. & Grice, M. 2018. Using periodic energy to enrich acoustic representation of pitch in speech: a demonstration. Proc. of Speech Prosody 2018, 1-5.
[7] Cangemi, F., Albert, A. & Grice, M. 2019. Modelling intonation: Beyond segments and tonal targets. Proceedings of ICPhS 2019 (Melbourne, Australia), 572-576.
[8] Albert, A., Cangemi, F., Ellison, T. M. & Grice, M. 2020. ProPer: PROsodic Analysis with PERiodic Energy. OSF. doi:10.17605/OSF.IO/28EA5.
[9] Baayen, R. H., Davidson, D. J. & Bates, D. M. 2008. Mixed-effects modelling with crossed random effects for subjects and items. Journal of Memory and Lang. 59, 390–412.
[10] Savino, M., Winter, B., Bosco, A., Grice, M. 2020. Intonation does aid serial recall after all. Psychonomic Bulletin & Review, 27(2), 366-372.
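The synchrony measure is defined within the ProPer framework [7, 8]. The sketch below is only a rough, assumption-laden illustration of the idea, not the ProPer implementation: it treats the Centre of Mass as the periodic-energy-weighted time centroid and the Centre of Gravity as an additionally F0-weighted centroid, so that a rising contour yields a positive lag. The exact definitions should be taken from [7, 8]; the sample values are invented.

```python
# Rough, assumption-laden sketch of a synchrony-style measure (CoG minus CoM)
# within one syllable; treat purely as an illustration of why a late F0 peak
# (a rise) gives a positive lag and an early peak (a fall) a negative one.
import numpy as np

def synchrony(times, f0, periodic_energy):
    t, f, e = map(np.asarray, (times, f0, periodic_energy))
    com = np.sum(t * e) / np.sum(e)            # centre of mass of periodic energy
    cog = np.sum(t * e * f) / np.sum(e * f)    # F0-weighted centre of gravity (assumption)
    return cog - com                           # > 0: rise, < 0: fall, ~0: level

# Hypothetical 10-ms samples across one syllable with rising f0 and flat energy.
times = np.arange(0.0, 0.12, 0.01)
f0 = np.linspace(180, 240, len(times))
energy = np.ones_like(times)
print(round(synchrony(times, f0, energy), 4))
```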

116 117 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 POSTER SESSIONS [4] Savino, M. 2004. Intonational cues to discourse structure in a variety of Italian. In Gilles, P. & Factors Influencing Individual Variability in L2 Learning of Tone Peters, J. (eds.), Regional Variation in Intonation, Tuebingen: Niemeyer, 145-159. [5] Goldinger, S. D. 1998. Echo or echoes? An episodic theory of lexical access. Psychol. Review 105, Tim Joris Laméris and Brechtje Post1 251-279. 1Phonetics Lab, University of Cambridge [6] Albert, A., Cangemi, F. & Grice, M. 2018. Using periodic energy to enrich acoustic representation of pitch in speech: a demonstration. Proc. of Speech Prosody 2018, 1-5. [7] Cangemi, F., Albert, A. & Grice, M. 2019. Modelling intonation: Beyond segments and tonal It is well known that there is considerable individual variability in the outcome of adult targets. Proceedings of ICPhS 2019 (Melbourne, Australia), 572-576. L2 speech learning, and learning lexical tones is no exception to this. Previous research has [8] Albert, A., Cangemi, F., Ellison T. M. & Grice, M. 2020. ProPer: PROsodic Analysis with PERiodic shown that both linguistic factors (such as L1 experience with tone or L1 tone inventory) and Energy. OSF. doi:10.17605/OSF.IO/28EA5. non-linguistic factors (such as musical experience) of individual learners can account for this [9] Baayen, R.H., Davidson, D.J. & Bates, D.M. (2008). Mixed-effects modelling with crossed random variability [1]. However, most previous studies have only separately examined either linguistic effects for subjects and items.Journal of Memory and Lang. 59, 390–412. or non-linguistic influences on L2 tone acquisition, with a few exceptions [2], [3]. Moreover, the [10] Savino, M., Winter, B., Bosco, A., Grice, M. 2020. Intonation does aid serial recall after all. vast majority of work on individual variability in tone learning investigates tone processing at Psychonomic Bulletin & Review, 27(2), 366-372. a pre-lexical level. We argue that investigation at the lexical level, i.e. in word learning, both in the listening and the speaking modalities, will provide a more complete image of tone learning. To this end, our study investigated the combined effects of linguistic and non-linguistic factors in L2 tone word learning in order to propose a cohesive theoretical account of L2 tone acquisition that goes beyond existing models that describe the effect of linguistic factors on pre-lexical perception [4], [5]. Two groups of adult native speakers of English (a non-tone language, n=21) and Mandarin Chinese (a tone language, n=20) participated in two-day experiment. The English participants had no knowledge of a tone language. Musical experience, measured by years of formal instruction (cf. [6]), and working memory, measured by a backwards digit span task, were matched across both participant groups as proxies of individual non-linguistic factors. Participants were first tested on pre-lexical tone perception through atone categorization task of vowels carrying either a rising, a falling, a mid-level, or a low-level tone. They were then trained by means of a listen-and-repeat session to learn a set of 16 pseudolanguage words with a four-way segmental (/jar/, /jur/, /lɔn/, /nɔn/) and a four- way tonal contrast (rising, falling, mid-level, low-level). After training, they were tested on their Word Production in a picture-naming task, and subsequently on their Word

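A minimal sketch of the 4 x 4 stimulus design just described is given below; the segmental frames and tone labels come from the abstract, while the printed pairing is purely illustrative and not the authors' actual stimulus list.

```python
# Sketch of the 4 segmental frames x 4 tones = 16 pseudowords design (pairing hypothetical).
from itertools import product

syllables = ["jar", "jur", "lOn", "nOn"]              # /jar, jur, lɔn, nɔn/
tones = ["rising", "falling", "mid-level", "low-level"]

pseudowords = list(product(syllables, tones))
assert len(pseudowords) == 16
for syllable, tone in pseudowords:
    print(f"{syllable}\t{tone}")
```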
After training, they were tested on their Word Production in a picture-naming task, and subsequently on their Word Identification with picture-matching. Normalized f0 data were obtained to determine tone production accuracy. The training and the two word recall tests were repeated on day 2.

At the end of training, considerable individual variability persisted in both groups for Word Production and Word Identification. GLMMs revealed that although L1 tone experience was not a significant predictor for word recall, it did modulate the effect of non-linguistic factors. For instance, whereas musical experience and pre-lexical tone perception significantly predicted Word Identification performance in English participants, they did so to a lesser extent in Mandarin participants, similar to findings by [2], [7]. The results suggest that word recall performance in Mandarin speakers was instead largely driven by working memory. We also saw clear effects of the L1 tone inventory: Mandarin speakers predominantly confused low-level with mid-level tones in word recall (a contrast that does not exist in the Mandarin tone inventory). This particular finding is in line with predictions made by existing models that look at the effects of the L1 tone inventory on tone processing at the pre-lexical level [5], and further shows that the L1 tone inventory continues to affect lexical tone processing in both the listening and speaking modalities.

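The GLMMs mentioned above were fitted to trial-level accuracy; the sketch below stands in for that analysis with a plain fixed-effects logistic regression, as a hedged illustration only. The column names and data file are hypothetical and the random-effects structure of the full mixed model is omitted.

```python
# Simplified illustration (not the authors' GLMM): accuracy predicted by L1 group and
# its interaction with the non-linguistic predictors.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("word_identification_trials.csv")
# expected columns: correct (0/1), l1_group ("English"/"Mandarin"),
#                   music_years, digit_span, tone_perception

fit = smf.logit(
    "correct ~ C(l1_group) * (music_years + digit_span + tone_perception)",
    data=trials,
).fit()
print(fit.summary())
```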
We propose an 'L1-Modulated Domain-General Account' as a new theoretical account of the learning of tone in a second language. This account posits that individual performance in tone learning at the lexical level is affected dynamically by linguistic and non-linguistic factors, and may be additionally constrained by mismatches between the L1 and L2 tone inventories. Overall, these effects are largely similar in both the listening and speaking modalities.

References

[1] M. Antoniou and J. L. L. Chin, "What Can Lexical Tone Training Studies in Adults Tell Us about Tone Processing in Children?," Front. Psychol., vol. 9, pp. 1-11, Jan. 2018.
[2] A. Cooper and Y. Wang, "The influence of linguistic and musical experience on Cantonese word learning," J. Acoust. Soc. Am., vol. 131, no. 6, pp. 4756-4769, 2012.
[3] R. K. W. Chan and J. H. C. Leung, "Why are Lexical Tones Difficult to Learn?," Stud. Second Lang. Acquis., vol. 42, no. 1, pp. 33-59, Mar. 2020.
[4] C. K. So and C. T. Best, "Cross-language perception of non-native tonal contrasts: Effects of native phonological and phonetic influences," Lang. Speech, vol. 53, no. 2, pp. 273-293, 2010.
[5] J. Chen, C. T. Best, and M. Antoniou, "Native phonological and phonetic influences in perceptual assimilation of monosyllabic Thai lexical tones by Mandarin and Vietnamese listeners," J. Phon., vol. 83, p. 101013, Nov. 2020.
[6] A. R. Bowles, C. B. Chang, and V. P. Karuzis, "Pitch Ability As an Aptitude for Tone Learning," Lang. Learn., vol. 66, no. 4, pp. 774-808, 2016.
[7] W. Ling and T. Grüter, "From sounds to words: The relation between phonological and lexical processing of tone in L2 Mandarin," Second Lang. Res., 2020.

DAY 1 - Poster Session B
Phonetics and Phonology: Articulation/Acoustics & Phenomena

A preliminary investigation of stop VOT in Western Armenian spoken in Canada

Talia Tahtadjian & Alexei Kochetov
University of Toronto

Voice onset time (VOT) is a well-established acoustic property that can differentiate stop consonants in different languages, measured as the duration between the release of a stop's closure and the beginning of vocal fold vibration [1]. Languages differ in how they phonetically implement laryngeal contrasts: for example, English implements the prevocalic voiceless-voiced contrast as long vs. short VOT (aspirated vs. unaspirated), while French does it with lead voice vs. short VOT (voiced vs. voiceless). This variable has been shown to be highly sensitive to language contact, as high variability and drift in VOT values have been observed in the speech of bilinguals and heritage language speakers [2, 3, 4]. This preliminary study seeks to examine the extent of VOT variation observed in Western Armenian (WA), which is spoken primarily in the diaspora (Ethnologue, n.d.). The laryngeal contrast in the language has traditionally been described as involving voiceless aspirated and voiced stops [5]. However, the only previous investigation of WA stops [6] revealed that speakers living in Lebanon implemented the contrast somewhat differently, as short lag (unaspirated) vs. lead (and different from the realization of aspirated stops in Eastern Armenian [7, 8]). The authors attributed this effect to the influence of Arabic. Based on this finding, we predict that the VOT of WA speakers residing in English-speaking regions of Canada would be highly variable and subject to influences from Canadian English (CE) and the WA variety spoken in the Middle East (given the immigration patterns).
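As a minimal illustration of the VOT measure defined above, the sketch below derives VOT from annotated release and voicing-onset times. This is not the authors' procedure; the file and column names are hypothetical and the time stamps are assumed to come from manual annotation.

```python
# Illustrative VOT computation: voicing onset minus stop release, in milliseconds.
import pandas as pd

ann = pd.read_csv("stop_annotations.csv")
# expected columns: speaker, category (e.g. "aspirated"/"voiced"),
#                   release_time_s, voicing_onset_s

ann["vot_ms"] = (ann["voicing_onset_s"] - ann["release_time_s"]) * 1000
# Positive values = voicing lag (long/short lag); negative values = voicing lead.
print(ann.groupby("category")["vot_ms"].describe())
```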
To investigate this hypothesis, audio recordings were made of 16 native speakers of WA residing mostly in Ontario reading single words with word-initial stops /pʰ, tʰ, kʰ, b, d, g/ before non-high vowels /a, ɛ, ə/. Participants differed as to whether they were raised mostly in Canada or outside Canada. As the annotation and acoustic analysis of the data are still under way, here we present a subset of the data: 12 words produced by 6 speakers (5 female) with a mean age of 39.6. VOT values, extracted using Praat, were submitted to a mixed-effects model with fixed effects Laryngeal Category (voiceless aspirated, voiced), Place (labial, alveolar, velar), and Group (born in Canada, born outside), and random effects Word and Speaker. The results revealed a significant effect of Laryngeal Category (p < 0.0001). As shown in Table 1, voiceless aspirated stops had considerably higher values than voiced stops. Notably, both categories were highly variable. Voiceless aspirated stops fell within the ranges of both short and long lag, with a few voice lead tokens (likely an influence of Eastern Armenian). Realizations of voiced stops were bimodal, with values characteristic of typical voice lead and short lag. Somewhat unexpectedly, Place and Group did not turn out to be significant, likely due to the considerable within-speaker variation and other differences in speakers' backgrounds (e.g. formal education in Armenian). Based on the values in Table 1, voiceless aspirated stops produced by both groups did not reach the prototypical CE range of 60-90 ms [3, 9]. The amount of prevoicing was also considerably higher than for CE [2]. Some differences between the groups, however, can be discerned in Figure 1: Canadian-raised speakers showed less variation and somewhat higher VOT values for voiceless aspirated stops, while showing a higher incidence of short lag VOT for voiced stops. Both differences can be attributed to the influence of CE, and require further investigation. To conclude, these results partly confirm our hypothesis: stops produced by Western Armenian speakers in Canada are highly variable, yet this variation cannot be solely attributed

to the participants' examined background (born in Canada or outside). These results echo some of the earlier findings for heritage languages spoken in Toronto, where VOT drift was observed in some groups but not others [4]. Overall, this study sets the stage for a more systematic investigation of VOT in Western Armenian spoken in Canada and abroad.

Table 1. VOT (ms) of Western Armenian stops by participant group (median, with SD in parentheses).

                     Raised in Canada                  Raised outside of Canada
                     voiceless aspirated   voiced      voiceless aspirated   voiced
  median (sd)        50 (16)               -45 (58)    41 (38)               -47 (38)

Figure 1. Density plots for VOT (ms) of the two laryngeal categories produced by WA speakers raised in Canada and outside.
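A rough Python analogue of the mixed model reported above is sketched below. It is not the authors' code: the data file and column names are hypothetical, and the by-word random intercepts are omitted because statsmodels handles a single grouping factor directly.

```python
# Hedged sketch of a mixed-effects model for VOT with Laryngeal Category, Place and Group.
import pandas as pd
import statsmodels.formula.api as smf

vot = pd.read_csv("wa_vot.csv")
# expected columns: vot_ms, laryngeal, place, group, speaker, word

m = smf.mixedlm(
    "vot_ms ~ C(laryngeal) + C(place) + C(group)",
    data=vot,
    groups=vot["speaker"],   # by-speaker random intercepts; by-word effects omitted here
)
print(m.fit(method="lbfgs").summary())
```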
References

[1] Lisker, L., & Abramson, A. S. (1964). A cross-language study of voicing in initial stops: Acoustical measurements. Word, 20(3), 384-422.
[2] Sundara, M., Polka, L., & Baum, S. (2006). Production of coronal stops by simultaneous bilingual adults. Bilingualism, 9(1), 97-114.
[3] Fowler, C. A., Sramko, V., Ostry, D. J., Rowland, S. A., & Hallé, P. (2008). Cross language phonetic influences on the speech of French-English bilinguals. Journal of Phonetics, 36(4), 649-663.
[4] Nagy, N., & Kochetov, A. (2013). Voice onset time across the generations: A cross-linguistic study of contact-induced change. In P. Siemund, I. Gogolin, M. Schulz & J. Davydova (eds.), Multilingualism and language contact in urban areas: Acquisition - Development - Teaching - Communication, 19-38. Amsterdam: John Benjamins.
[5] Vaux, B. (1998). The phonology of Armenian. Oxford: Oxford University Press.
[6] Kelly, N. E., & Keshishian, L. (2019). The voicing contrast in stops and affricates in the Western Armenian of Lebanon. INTERSPEECH, 1721-1725.
[7] Hacopian, N. (2003). A three-way VOT contrast in final position: Data from Armenian. Journal of the International Phonetic Association, 33(1), 51-80.
[8] Seyfarth, S., & Garellek, M. (2018). Plosive voicing acoustics and voice quality in Yerevan Armenian. Journal of Phonetics, 71(1), 425-450.
[9] Caramazza, A., Yeni-Komshian, G. H., Zurif, E. B., & Carbone, E. (1973). The acquisition of a new phonological contrast: The case of stop consonants in French-English bilinguals. The Journal of the Acoustical Society of America, 54(2), 421-428.

Perceptual cue weighting of intervocalic velar plosives in Canadian English

Daniel Pape and Olivier Mercier

When producing speech, acoustic signals contain a number of extractable parameters, or acoustic cues, which can then, on the perceptual side, be used to categorize and classify different phonemes. The actual perceptual relevance of the different available acoustic cues and their interaction is known as perceptual cue weighting. We investigated how Canadian English listeners (n=38) from the Greater Toronto Area identify the voicing category of intervocalic velar plosives (i.e. /k/ versus /g/) for biomechanically generated stimuli differing systematically and orthogonally in a number of acoustic parameters known to influence stop voicing. To this end, we embedded the velar stops in intervocalic /a/ contexts to create an experimental stimulus set that varied the following acoustic cues to stop voicing along four continua: (1) voicing maintenance throughout consonantal closure (VM), (2) voice onset time (VOT), (3) duration of consonantal closure (CL) and (4) duration of the previous vowel segment (VL). We were interested in the importance, weighting and interaction of all four acoustic cues when tasking listeners to classify the VCV sequence in a forced-choice test (choice: /aka/ or /aga/). We used biomechanical modelling, a synthesis approach known to generate natural, highly controllable and very accurate stop consonants, as shown e.g. by accurate reproductions of important fine phonetic details like articulatory loops (see e.g. Perrier et al. 1998).

In the following, we present findings for three experiments. Experiment 1 investigates the cue weighting¹ of three concurrent acoustic cues (VM, VL, CL) in the absence of an acoustic VOT cue. Experiment 2 then determines an ambiguous VOT value (i.e. the perceptual categorical boundary between English /k/ and /g/ perceptions), which was specifically measured for our biomechanical stimuli. Additionally, this experiment examined how varying VOT values² from 0 ms to 100 ms would interact with the VM parameter to influence voicing perception when duration cues were held ambiguous (i.e. intermediate VL and CL durations). This experiment thus explored not only the effect of VOT on voicing perception, but also how VOT interacts hierarchically with other cues. Experiment 3 then adds the ambiguous VOT boundary (measured in Experiment 2) to the three cues already examined in Experiment 1 (VM, VL, CL). Experiment 3 therefore compares how the perceptual cue weighting of the three acoustic cues (VM, VL, CL) is affected when a perceptually ambiguous VOT value is either present (Exp. 3) or absent (Exp. 1) in the acoustic signal, in other words how the perceptual effect of the absence vs. presence of ambiguous VOT on the voicing distinction manifests itself when English listeners are faced with the task of judging the effects of the other acoustic cues (VM, VL, CL).

The results from the three experiments reveal that, for our native Canadian English listeners, VM has a significantly higher influence on voicing perception than all other cues (VM > VL > CL), and that only at low VM levels do other cues increase their influence, as can be seen in Figure 1. The absence vs. presence of ambiguous VOT values does not change perception significantly for Canadian English listeners (see Figure 1, blue lines): they still mostly rely on VM, to some extent on VL, but not on CL. The presence of ambiguous VOTs pulls the listener responses more towards voiceless identifications, but this interaction strongly depends on the presented voicing maintenance value: the interaction is strongest for low VM values but disappears for higher values (above 25 ms of voicing during closure). With respect to listener variability, one of the main results for Experiment 2 is that individual listeners utilized very different strategies in establishing their individual cue-weighting patterns for intervocalic voicing perception in Canadian English, as can be seen in Figure 2.

Our results add additional evidence that the perceptual cue weighting process for intervocalic plosives is highly complex and cannot be argued to rely dominantly on the classic main cue(s). Finally, the dominance of VM in our results is surprising, since it is often argued that voicing maintenance during stop closure is not an important parameter for English stop voicing classification.

Figure 1: Perception results for Experiment 1 (red lines) versus Experiment 3 (blue lines). Shown are the presented VM levels during the consonant duration (ranging from 0% to 100%) on the x-axis versus the probability that the listeners chose a voiced response (i.e. /aga/ instead of /aka/). The columns show the variation of consonant duration and the lines show the variation of the preceding vowel duration. As can be seen, the inclusion of an ambiguous VOT cue (red lines) compared to the absence of a VOT cue (blue lines) did not change listener responses in a relevant manner. Listeners were most sensitive to VM variation, especially in the lower VM regions (0%-25%), were less sensitive to vowel duration differences (more so in the lower VM region) and were barely sensitive to stop consonant duration.

Figure 2: Results for Experiment 2, comparing VOT (x-axis) with voicing maintenance levels (panels from left to right) for different listener response patterns, i.e. different group responses. Group A (in red) is clearly more sensitive to the VOT perceptual cue and rather insensitive to the simultaneously presented VM difference (ranging from completely unvoiced (VM=0%) to fully voiced stimuli (VM=100%)). In contrast, group C is rather sensitive to the presented VM differences but appears to be insensitive to the presented VOT differences.

¹ For this condition, no acoustic stop burst is present in the signals. The temporal relationship between articulatory vocal tract opening and the onset of the voicing of the following vowel is identical for all stimuli.
² The parameter VOT was introduced by adding identical acoustic burst signals with varying temporal distances with respect to the voicing onset of the following vowel. Thus, VOT here is defined as an acoustic cue consisting of varying temporal distances between the spliced burst (identical for all stimuli) and vowel voicing onset (also identical for all stimuli).

References

Perrier, P., Payan, Y., Perkell, J., Jolly, F., Zandipour, M. & Matthies, M. (1998): "On loops and articulatory biomechanics", In ICSLP-1998, paper 0112.
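One common way to quantify cue weighting from forced-choice responses like those described above is a logistic regression of voiced responses on the manipulated cue continua; the sketch below illustrates that idea only, and is not the authors' analysis. The data file and variable names are hypothetical.

```python
# Hedged illustration: logistic regression of "voiced" responses on the four cue continua.
import pandas as pd
import statsmodels.formula.api as smf

resp = pd.read_csv("cue_weighting_responses.csv")
# expected columns: voiced (1 = /aga/, 0 = /aka/), vm (% voicing maintenance),
#                   vl (vowel duration, ms), cl (closure duration, ms),
#                   vot_present (0/1), listener

fit = smf.logit("voiced ~ vm * vot_present + vl + cl", data=resp).fit()
print(fit.summary())
# Larger (standardized) coefficients suggest more heavily weighted cues; fitting per
# listener would expose the individual strategies mentioned above.
```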
Variability in read speech? - An acoustic experiment with two Portuguese varieties

Conceição Cunha
Institute of Phonetics, LMU Munich

Even though Portuguese has more than 200 million speakers in the world, in some respects it is not a well-studied language at all. For example, there are only a few acoustic descriptions of stressed vowels in both European (hereafter EP, e.g. [1, 2]) and Brazilian Portuguese (BP, [3]), and [4] compares both varieties with respect to the stressed oral vowels only, with unstressed and nasal vowels missing. The stressed vowel system includes the seven cardinal vowels /i, e, ɛ, a, ɔ, o, u/ in both varieties. In EP, unstressed vowels result from a further process of vowel reduction which includes the raising of all non-high vowels (/e a o/) and the centralization of the non-high front vowel (/a/), giving rise to the productions [i, ɨ, u, ɐ] ([5, 6]). BP still distinguishes the unstressed vowels /a, e, i, o, u/ [7], which can be reduced to [ɪ, e̤, ʊ, ɐ, ɜ]. [4] found some differences in duration and F2 values between the varieties, but the exact quality of, and differences in, the unstressed vowels are not yet acoustically attested. A reliable comparative description of the vowel space in both varieties is the first aim of this paper. Since it is not clear whether the unstressed high vowels are similar in quality to their stressed counterparts, and how central the centralized vowels in EP are, we predict that our results will follow [2]. In order to test variability in a controlled setup, we test the influence of prosody/information structure on the vowel space, with the target word repeated twice in each sentence. We expect the second occurrence of the token in each sentence to be more hypo-articulated than the first, and therefore expect shorter durations and centralization of the vowels in this position. A further aim is to capture variability in the vowel when we change the preceding consonant (nasal and oral stop). Overall, we predict that vowels will be longer in BP than in EP [4] and that stressed vowels will be less variable than unstressed ones.

To do so, a controlled corpus including all oral and nasal vowels and diphthongs was recorded from 28 speakers of European Portuguese (central region) and 16 speakers of Brazilian Portuguese (São Paulo region), in a sound-proofed box or a quiet room. All words were randomized and embedded in one of three carrier sentences (diga ('say'); leia ('read')), as in 'Diga pote, diga pote depois' ('Say pot, say pot afterwards'). After manual formant correction, the formant trajectories were linearly time-normalised to 11 points and z-score normalised in order to reduce, as far as possible, the influence of anatomical differences in the size and shape of different speakers' vocal tracts (see Fig. 1 for the difference between male and female speakers of European Portuguese). The trajectories were then Lobanov-normalized for each formant with respect to the stressed point vowels /i, u, a/ produced by the same speaker and presented in an F1xF2 vowel space for each variety. DCT coefficients were calculated on the entire F1 and F2 trajectories, and Euclidean distances were computed between adjacent vowel categories to quantify the distribution of the vowels in the 6-dimensional DCT space. The Euclidean distances were tested pairwise with a linear mixed model [8] in R. Vowel quantity was analysed with unnormalized (in Hz) and word-normalized durations.

A quadrilateral-like vowel space arising from the spatial distribution of all vowels showed that some of the unstressed vowels appear in different positions than expected (unstressed /u/ is fronted in EP and /ɐ/ appears mid-centralized). We also found a main effect of prosodic position (p < 0.001) but not of context. Some differences can be seen in Fig. 1, in which vowels seem to be more peripheral in EP when preceded by a nasal than in an oral context. In general, BP vowels were more peripheral than their EP counterparts, which is consistent with the main effects of variety on the pairwise Euclidean distances (with different values) and on duration (p < 0.001).

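The sketch below illustrates, under stated assumptions, the kind of pipeline described above: Lobanov-style z-scoring, DCT coefficients over a time-normalised formant trajectory, and a Euclidean distance between vowel categories. It is not the author's code; the trajectories shown are invented placeholders and only F1 is used for brevity (the full analysis concatenates F1 and F2 coefficients into a 6-dimensional vector).

```python
# Minimal, hypothetical sketch of Lobanov normalisation + DCT + Euclidean distance.
import numpy as np
from scipy.fft import dct

def lobanov(formant_values, speaker_mean, speaker_sd):
    """z-score a formant track relative to speaker-specific point-vowel statistics."""
    return (np.asarray(formant_values) - speaker_mean) / speaker_sd

def dct_coefficients(trajectory, n_coef=3):
    """First n_coef DCT coefficients of a time-normalised formant trajectory."""
    return dct(np.asarray(trajectory), norm="ortho")[:n_coef]

# Hypothetical 11-point F1 trajectories for two vowel categories (already time-normalised)
f1_a = lobanov(np.linspace(750, 700, 11), speaker_mean=550.0, speaker_sd=150.0)
f1_e = lobanov(np.linspace(480, 450, 11), speaker_mean=550.0, speaker_sd=150.0)

distance = np.linalg.norm(dct_coefficients(f1_a) - dct_coefficients(f1_e))
print("Euclidean distance between categories:", round(float(distance), 3))
```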
Overall, in a controlled laboratory setup, this paper was able to capture some internal variability caused by phonotactics and prosody, improving the comparative description of the oral vowel space of EP and BP.

Figure 1. Unnormalized vowel space after nasal (left) and oral stop (right) for female (upper panels) and male speakers (lower panels) of European Portuguese. The symbols [ɪ, ʊ, ɨ, ɐ] were used for the unstressed EP vowels to better distinguish unstressed and stressed high vowels. Euclidean distances were calculated after normalization to minimize the large differences between the formant values of the female and male speakers plotted here.

References

[1] Cruz-Ferreira, M. 1995. European Portuguese. Journal of the International Phonetic Association 25, 90-94.
[2] Cunha, C. 2019. Contextual variability in the native production of European Portuguese vowels. In Gabriel, C., Grünke, J. & Thiele, S. (Eds.), Romanische Sprachen in ihrer Vielfalt: Brückenschläge zwischen linguistischer Theoriebildung und Fremdsprachenunterricht. Stuttgart: Ibidem, 125-146.
[3] Barbosa, P. & Albano, E. 2004. Brazilian Portuguese. Journal of the International Phonetic Association 34, 227-232.
[4] Escudero, P., Boersma, P., Rauber, A. & Bion, R. 2009. A cross-dialect acoustic description of vowels: Brazilian and European Portuguese. Journal of the Acoustical Society of America 126, 1379-1393.
[5] Mateus, M. H. M. & d'Andrade, E. 2000. The phonology of Portuguese. Oxford: Oxford University Press.
[6] Vigário, M. 2003. The prosodic word in European Portuguese. Berlin: De Gruyter.
[7] Bisol, L. 2005. Introdução a estudos de fonologia do português brasileiro. Porto Alegre: Edipucrs.
[8] Bates, D., Maechler, M., Bolker, B. & Walker, S. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67(1), 1-48.
[9] Wood, S. N. 2011. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1), 3-36.

An articulatory study of mid-vowel neutralisation in Portuguese: the effects of stress, nasality and variable nasalisation

Michael Ramsammy
University of Edinburgh

Introduction. This poster presents the results of experimental work that examines the articulatory outcomes of mid-vowel neutralisation in Brazilian Portuguese (BP). Phonological descriptions note that BP contrasts two front-mid vowels, /ɛ, e/, and two back-mid vowels, /ɔ, o/, in stressed syllables (Câmara 1970; Mateus & d'Andrade 2009). In most dialects of BP, including the São Paulo variety that is the focus of this study, this contrast is neutralised in unstressed syllables to [e] and [o], respectively; and it is argued not to be present in the nasal vowel series, where only /ẽ/ and /õ/ occur. In addition, vowels in contact with nasal consonants can be variably nasalised (e.g. [hẽmu] 'oar'). Yet how this process interacts with height neutralisation has not been discussed in existing phonological or phonetic studies.

Experimentation. In view of these facts, a pilot experiment using ultrasound tongue imaging (UTI) was conducted to test the articulatory realisation of BP mid vowels. The participants were 5 native speakers of BP. UTI data were recorded using the AAA software package (Wrench 2003-2019) and an EchoBlaster 128 ultrasound machine. Participants read sentences containing disyllabic nonce words with six different phonological shapes (see Table 1): e.g. Diga pebe outra vez 'Say pebe again'. Nonce words were constructed with front-mid and back-mid vowels. This permitted front vs back mid-vowel realisations to be compared in oral vs nasal vs pre-nasal contexts under both stressed and unstressed conditions.

Table 1: Nonce word formulations

               Oral          Nasal         Pre-nasal
  Stressed     (a) ˈCV.CV    (b) ˈCṼ.CV    (c) ˈCV.mV
  Unstressed   (d) CV.ˈCV    (e) CṼ.ˈCV    (f) CV.ˈmV

Analysis. Ultrasound splines were semi-automatically fitted to visible tongue contours in each vowel realisation. Polar coordinate data (Mielke 2015) were extracted from ultrasound frames corresponding to the full acoustic duration of each vowel. These data were subjected to a Principal Components Analysis and subsequent testing using mixed-effects linear regression to compare tongue configurations across test contexts (cf. Bennett et al. 2018). Additionally, coordinates from single ultrasound frames corresponding most closely to the temporal midpoint of each vowel were compared using visualisations of fitted splines generated using loess smoothing (Turton 2017), which is similar to the SS-ANOVA technique commonly employed in the analysis of UTI data (e.g. Ahn 2018).

Results. The analysis confirms significant differences in tongue position across test contexts. Figure 1 below shows fitted splines for one female speaker. Here, the front body of the tongue is in its lowest position in stressed realisations of the oral vowels [ɛ] and [ɔ]. In unstressed oral [e], the tongue shows a higher configuration than in [ɛ]; and the front body is significantly higher in unstressed [o] compared to stressed [ɔ]. There are also clear differences between oral and nasal vowels. Whilst the results are consistent with the phonological descriptions mentioned above, stress also plays a significant role in determining tongue position in nasal vowel articulations. This is in part relatable to the occurrence of velar nasal stops following [ẽ] (Shosted 2006), which is particularly common in stressed contexts. Interestingly, vowels in pre-nasal contexts appear to have intermediate realisations: i.e. the tongue position is higher

than in oral vowels but lower than in fully nasal vowels, both in stressed and unstressed environments. This has not previously been documented for pre-nasalised vowels in BP. Whilst further data are necessary to confirm the preliminary findings, I discuss possible implications of these results for the phonological analysis of nasality and nasalisation in Portuguese.

Figure 1: Fitted splines for front-mid vowel realisations (upper panel) and back-mid vowel realisations (lower panel) produced by a female speaker of Brazilian Portuguese.

References

Ahn, S. 2018. The role of tongue position in laryngeal contrasts: an ultrasound study of English and Brazilian Portuguese. Journal of Phonetics 71, 451-467.
Bennett, R., Ní Chiosáin, M., Padgett, J. & McGuire, G. 2018. An ultrasound study of Connemara Irish palatalization and velarization. Journal of the International Phonetic Association 48(3), 261-304.
Câmara, J. M. 1970. Estrutura da língua portuguesa. Petrópolis: Vozes.
Mielke, J. 2015. An ultrasound study of Canadian French rhotic vowels with polar smoothing spline comparisons. Journal of the Acoustical Society of America 137, 2858-2869.
Shosted, R. 2006. Vocalic context as a condition for nasal coda emergence: aerodynamic evidence. Journal of the International Phonetic Association 36, 39-58.
Turton, D. 2017. Categorical or gradient? An ultrasound investigation of /l/-darkening and vocalization in varieties of English. Laboratory Phonology 8(1), 1-31.
Wrench, A. 2003-2019. Articulate Assistant Advanced (AAA). Articulate Instruments Ltd.

High vowel shortening in Turkish

Elif Esra Sarmış and Stefano Canalis
Boğaziçi University

This paper presents the preliminary findings of a quantitative study on high vowel shortening in Turkish, and discusses the phonological status of this process. The tendency for higher vowels to be, all other things being equal, shorter than lower vowels is so widespread that it is considered a phonetic universal [1: p. 622]. In most languages, shortening apparently results from inherent articulatory properties of speech mechanisms; lower vowels require more lowering of the tongue and jaw, thus also requiring more time than a higher vowel to reach their articulatory target (although this explanation is not totally uncontroversial; see e.g. [2], [3]). At the same time, while in most languages high vowel shortening appears to be purely phonetic (mechanical, intrinsic, articulatorily or aerodynamically based, gradient), in some languages it is at least in part governed by phonology: it is controlled and (semi)categorical. For instance, it has been argued that "[Japanese] CV [sequences] containing high vowels are substantially shorter before voiceless consonants, whilst non-high vowels do not exhibit comparable shortening" [4: p. 1], suggesting that duration is phonologically controlled in Japanese to favour high vowel devoicing between voiceless consonants. We want to argue that high vowel shortening is not (purely) phonetic in Turkish either, but an allophonic rule controlled by phonology.

We conducted a preliminary experiment (a larger study is underway, with more subjects and also analysing high vowel devoicing) with four Turkish native speakers to test high (/i, y, ɯ, u/) and non-high (/a, e, ø, o/) vowels in open and closed syllables, while controlling the preceding and following consonantal environment for voicing. All experimental items were bi- or tri-syllabic words with a final stressed syllable. The words were presented in a carrier sentence and all participants repeated them three times. The data were recorded via an external microphone and analysed using Praat [5].

The duration of stressed and unstressed vowels (high and non-high alike) was very similar (it is commonly agreed that F0 is the most reliable cue to Turkish stress, the role of duration being minor at best; see the review by [6]); vowels were longer in closed syllables than in open ones (a fact already reported for Turkish by e.g. [7]), and longer if surrounded by voiced consonants. As for the role of vowel height, high vowels were found to be shorter than non-high ones (Table 1). If Turkish vowel shortening only depended on articulation and/or acoustics, we would expect 1) a broadly linear correlation between it and vowel height, and 2) such a correlation to exist even when phonetic vowel height is not closely related to the phonological status of vowels. This is not uncommon in Turkish; the phoneme traditionally described as /ɯ/, a high back unrounded vowel, is actually slightly lower than /i/, /u/ and /y/ (as well as more central than back vowels; cf. [8]), and Turkish /e/ has a lower allophone [æ] when followed by a tautosyllabic non-glide sonorant consonant [9: p. 10] (its distribution is in fact slightly more complex, with some speakers contrasting [æ] and [e] in this environment). However, in our data the correlation between a vowel's F1 and its duration within the [-high] and [+high] categories is very poor, in fact non-existent. Fig. 1 shows the absence of a positive correlation between F1 and duration within non-high vowels, while Fig. 2 shows the absence of a positive correlation between F1 and duration within high vowels. In contrast, if the duration of [-high] and [+high] vowels is compared, a conspicuous difference between these two classes emerges, non-high vowels being rarely as short as, or shorter than, high vowels (Fig. 3).

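A small sketch of the kind of check described above is given below: within-class F1-duration correlations and the overall duration difference between the two height classes. It is only an illustration of the reasoning, not the authors' script; the data file and column names are hypothetical.

```python
# Illustrative within-class correlation check (hypothetical data file and columns).
import pandas as pd
from scipy.stats import pearsonr

vowels = pd.read_csv("turkish_vowels.csv")
# expected columns: vowel, high (True/False), f1_hz, dur_ms

for is_high, grp in vowels.groupby("high"):
    r, p = pearsonr(grp["f1_hz"], grp["dur_ms"])
    print(f"high={is_high}: F1-duration r={r:.2f} (p={p:.3f})")

print(vowels.groupby("high")["dur_ms"].mean())   # class-level duration difference
```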
This suggests that Turkish high vowel shortening is not a purely gradient phenomenon shaped only by physiological constraints, but the near-categorical output of an allophonic rule sensitive to phonological rather than phonetic factors.

Table 1. Average duration of individual vowels and of the different vowel heights in open syllables in all environments across all participants (in ms; standard deviations in parentheses).

  Vowel                      /a/       /e/       /o/       /ø/       /ɯ/       /i/       /u/      /y/
  Avg. duration              56 (14)   62 (14)   68 (13)   72 (14)   41 (11)   41 (11)   43 (9)   41 (12)

  Vowel height               Low       Mid       High
  Avg. across vowel height   56        67        41.6

Fig. 1: Non-high vowels, lowess line superimposed. Fig. 2: High vowels, lowess line superimposed. Fig. 3: High (red) and non-high (blue) vowels.

References

[1] Maddieson, I. 1999. Phonetic universals. In Hardcastle, W. J. & Laver, J. (eds.), The Handbook of Phonetic Sciences. Oxford, UK: Blackwell, 619-639.
[2] Solé, M.-J. & Ohala, J. 2010. What is and what is not under the control of the speaker: Intrinsic vowel duration. In Fougeron, C., Kühnert, B., D'Imperio, M. & Vallée, N. (eds.), Papers in Laboratory Phonology 10. Berlin: de Gruyter, 607-655.
[3] Toivonen, I., Blumenfeld, L., Gormley, A., Hoiting, L., Logan, J., Ramlakhan, N. & Stone, A. 2015. Vowel height and duration. In Steindl, U. et al. (eds.), Proceedings of the 32nd WCCFL. Somerville, MA: Cascadilla Proceedings Project, 64-71.
[4] Tanner, J., Sonderegger, M. & Torreira, F. 2019. Durational evidence that Tokyo Japanese vowel devoicing is not gradient reduction. Frontiers in Psychology 10.
[5] Boersma, P. & Weenink, D. 2020. Praat: doing phonetics by computer [Computer program]. Version 6.1.16, http://www.praat.org/
[6] Kabak, B. 2016. Refin(d)ing Turkish stress as a multifaceted phenomenon. Talk presented at the Second Conference on Central Asian Languages and Linguistics (ConCALL52), October 7-9 2016, Indiana University.
[7] Jannedy, S. 1995. Gestural Phasing as an Explanation for Vowel Devoicing in Turkish. OSU Working Papers in Linguistics 45, 56-84.
[8] Kilic, M. A. & Öğüt, F. A high unrounded vowel in Turkish: is it a central or back vowel? Speech Communication 43, 143-154.
[9] Göksel, A. & Kerslake, C. 2005. Turkish: A Comprehensive Grammar. London: Routledge.

Examining the cluster status of Kurdish consonant sequences: A perceptual approach

Dilgash M. Shareef Alsilevani, Juli Cebrian
Universitat Autònoma de Barcelona

Consonant clusters in Kurdish (particularly the Kurmanji regional dialect of Kurdish spoken in northern Iraq) are generally described as containing maximally two consonants in syllable-initial (onset) and syllable-final (coda) positions. However, most of the readily accessible literature on the phonology of this Kurdish dialect provides discrepant descriptions regarding the cluster status of a number of onset and coda combinations, that is, whether the sequences constitute actual clusters or whether they are produced with an epenthetic vowel [7] [1] [3] [4]. The aim of this study is to assess the cluster status of such sequences from the perceptual standpoint of native Kurmanji Kurdish speakers.

The organization of consonant elements in a cluster usually tends to be governed by the Sonority Sequencing Principle (SSP) [6]. According to the SSP, the sonority of consonants decreases the farther they are from the vowel or syllable nucleus (the sonority scale adopted in this study is Hogg and McCully's (1987), shown in Table 1). There are, however, cases of consonant sequences that violate the SSP, despite their universal markedness [8]. Many such examples that violate the SSP are reported for Kurmanji onset and coda clusters in [1]. They include onset sequences like fricative + stop (e.g. [spi] 'white'), approximant + fricative (e.g. [lvin] 'movement'), and nasal + fricative (e.g. [nveʒ] 'prayer'), and coda sequences like fricative + nasal (e.g. [ʤaӡn] 'celebration'), fricative + approximant (e.g. [bafɾ] 'snow'), stop + approximant (e.g. [kakl] 'core'), and stop + nasal (e.g. [tʰaqn] 'mud'). In contrast, Shokri's [7] extensive analysis of Kurdish consonant clusters involves only one combination that does not adhere to the SSP, namely /s/ + stop for onset clusters. This discrepancy in the literature is partly due to differences in the theoretical, impressionistic and orthography-based analyses of Kurmanji clusters given in these works. In fact, different sources also disagree regarding the status of clusters that adhere to the SSP. To date, no experimental research has been conducted to assess the actual status of onset and coda combinations in the Kurmanji dialect spoken in northern Iraq.

The perception of Kurmanji Kurdish onset and coda clusters was examined by means of a forced-choice goodness task with confidence ratings. In this task listeners were presented with two productions of the same word, one with a full epenthetic vowel and one without it. Participants were asked to select the option that sounded more Kurdish-like, and then to provide a confidence rating using a 6-point scale. The stimuli, which included 15 words, were recorded by a female native speaker of Kurmanji, who produced several repetitions of the two versions of all the words, and the best tokens were chosen by two trained phoneticians. A total of 15 native Kurmanji-speaking participants, residing in the Kurdistan region of Iraq, took part in the task. The results of the perceptual task were analyzed in terms of the percentage of times participants chose the option with an epenthetic vowel as the preferred pronunciation. In most cases, the option with an epenthetic vowel was consistently perceived as more natural/native-like (µ = 84%, SD = 0.36) than the option in which the vowel was elided. This consistency was supported by high confidence ratings (µ = 5.6 out of 6, SD = 0.91). On the other hand, and in contrast to what has been claimed for the Kurmanji regional dialect spoken in neighbouring Turkey, the consonant sequence /s/ + stop was perceived as more natural (100% of the time) when the epenthetic vowel was elided. This finding is in accordance with Öpengin & Haig's [5] proposal that the Kurmanji dialect lacks consistent strategies in handling initial clusters across its different regional variations.

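The descriptive analysis reported above amounts to tabulating, per cluster type, the percentage of trials on which the epenthetic-vowel version was preferred together with the mean confidence rating. A minimal sketch of that tabulation is shown below; the data file and column names are hypothetical.

```python
# Illustrative summary of the forced-choice goodness task (hypothetical columns).
import pandas as pd

resp = pd.read_csv("kurmanji_goodness_task.csv")
# expected columns: listener, cluster_type (e.g. "s+stop", "fricative+nasal"),
#                   chose_epenthetic (0/1), confidence (1-6)

summary = resp.groupby("cluster_type").agg(
    pct_epenthetic=("chose_epenthetic", lambda x: 100 * x.mean()),
    mean_confidence=("confidence", "mean"),
)
print(summary)
```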
Nevertheless, the overall results of this study support the prediction that Kurmanji speakers in the Kurdistan region of Iraq will perceive consonant clusters with epenthetic vowels as more natural/native.

Table 1: Hogg and McCully's sonority scale (adapted from Hogg and McCully, 1987)

  Sound                  Sonority index
  Low vowels             10
  Mid vowels             9
  High vowels            8
  Flaps                  7
  Laterals               6
  Nasals                 5
  Voiced fricatives      4
  Voiceless fricatives   3
  Voiced stops           2
  Voiceless stops        1

References

[1] Hasan, A. M. (2009). Consonant clusters in Kurdish. Duhok University Journal, (1), 1-8.
[2] Hogg, R. & McCully, C. (1987). Metrical Phonology: A Coursebook. Cambridge: Cambridge University Press.
[3] Keshavarz, M. H. (2017). Syllabification of final consonant clusters: A salient pronunciation problem of Kurdish EFL learners. Iranian Journal of Language Teaching Research, 5(2), 1-14.
[4] Omer, J. A. & Hamad, Sh. H. (2016). Kurdish EFL learners' strategies to break apart the different L2 onset consonant clusters. Raparin University Journal, 3(7), 187-196.
[5] Öpengin, E. & Haig, G. (2014). Regional variation in Kurmanji: A preliminary classification of dialects. Kurdish Studies, 2(2), 143-176.
[6] Selkirk, E. (1984). On the major class features and syllable theory. In M. Aranoff & R. T. Oehrle (Eds.), Language sound structure: studies in phonology presented to Morris Halle by his teacher and students (107-136). Cambridge, MA: MIT Press.
[7] Shokri, N. (2002). Syllable structure and stress in Bahdini Kurdish. STUF - Language Typology and Universals, 55(1), 80-97. Doi:10.1524/stuf.2002.55.1.80.
[8] Yavaş, M. (2013). What explains the reductions in /s/ clusters: sonority or [continuant]? Clinical Linguistics and Phonetics, 27(6-7), 394-403.

How does interacting with humans and machines shape speech production? Alternating between robot & human directed speech during the same conversation

Omnia Ibrahim1, Gabriel Skantze2, Volker Dellwo3
1Language Science and Technology dept., Saarland University, Germany
2Speech Music and Hearing dept., KTH Royal Institute of Technology, Sweden
3Computational Linguistics dept., University of Zurich, Switzerland

In everyday communication, the goal of speakers is to communicate their messages in an intelligible manner to their listeners. When they are aware of a speech perception difficulty on the part of the listener due to background noise, a hearing impairment, or a different native language, speakers will naturally and spontaneously modify their speech in order to accommodate the listener. In an attempt to make themselves more intelligible, speakers will change their speaking styles depending on their listeners' characteristics: foreigner-, infant- or computer-directed speech [1, 2]. Furthermore, in human-computer/robot interactions, participants have been shown to address computers/robots differently than humans. Speakers change their acoustic characteristics (e.g. raising their F0, slowing their speaking rate) when they are talking to a robot in comparison to a human [3]. According to the hypo- to hyper-articulation theory, those within-speaker variations are a reflection of the trade-off between clarity of speech (listener-oriented output) and economy of effort (talker-oriented output) [4]. In this respect, goal-oriented speaking styles such as infant- or computer-directed speech are those where speakers adjust their output on-line (consciously or unconsciously) to meet the demands of their target audience or the communicative situation [5, 6, 7]. The present study aims at exploring the extent to which interaction with humans and machines affects speakers' speech production in unstructured human-human-robot interactions. The interactions under study differ from those in previous studies, in the sense that the robot takes part in a role more equal to that of the other human, which leads to the speaker spontaneously alternating their addressee between human and robot.

The speech material: the data used for the analyses consist of a naturalistic game scenario with no instructions given, in which the participants were engaging in a collaborative task with a social robot. Ten conversations were extracted from a multi-party human-robot discussion corpus [8], which was collected at an exhibition in the Swedish National Museum of Science and Technology over nine days. The Swedish corpus was recorded during collaborative card games with a social robot (Furhat). The interactional setting of the game is illustrated in Figure 1. The two players were seated at a large table with a multi-touch screen, opposite the Furhat robot head. Both users were wearing unidirectional headset microphones, which allowed for the recording of two separate good-quality audio streams (given the noisy setting in the museum). The speech-to-noise ratio in the recordings is ≈ 38 dB. Each utterance in the conversation was manually labeled according to addressee type (robot or human).

The acoustic analysis: the following acoustic features were extracted using Praat scripts: fundamental frequency (mean, median, standard deviation, range, slope (how fast f0 changes during the utterance)), intensity, speaking rate (vowels per minute) and total utterance duration. For statistical analysis, we used linear mixed-effects models.

Our findings: our results show that there are significant differences between human- and robot-directed speech for f0, speaking rate and the total length of the utterance (Figure 2). These results demonstrate that speakers change and modulate their speech when speaking to the robot vs. when speaking to the human, for the sake of increasing their speech intelligibility to the robot [9]. Furthermore, the robot-directed speech effect is still robust when speakers spontaneously switch turns between human and robot.

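For illustration, the per-utterance feature extraction described above can be approximated with the parselmouth interface to Praat rather than Praat scripts; the sketch below shows that approach only, under stated assumptions. The wav file name is hypothetical, and speaking rate (vowels per minute) would additionally require a segmentation, which is omitted here.

```python
# Hedged sketch: per-utterance f0 statistics, intensity and duration via parselmouth.
import numpy as np
import parselmouth

snd = parselmouth.Sound("utterance_0001.wav")   # hypothetical utterance file

pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                  # drop unvoiced frames

intensity = snd.to_intensity()

features = {
    "f0_mean": float(np.mean(f0)),
    "f0_median": float(np.median(f0)),
    "f0_sd": float(np.std(f0)),
    "f0_range": float(np.max(f0) - np.min(f0)),
    "intensity_mean_db": float(np.mean(intensity.values)),
    "duration_s": snd.get_total_duration(),
}
print(features)
```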
Figure 1: Interaction settings of the human-human-robot conversation.
Figure 2: Human addresses (right) vs. robot addresses (left).

References

[1] Bradlow, A. R. (2002). Confluent talker- and listener-related forces in clear speech production. In Gussenhoven, C. & Warner, N. (eds.), Laboratory Phonology 7, 241-273. Berlin/New York: Mouton de Gruyter.
[2] Bjursäter, U. (2004). Speaking styles and phonetic variation.
[3] Kriz, S., Anderson, G., Bugajska, M. & Trafton, J. G. (2009). Robot-directed speech as a means of exploring conceptualizations of robots. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI '09), 271-272. New York: Association for Computing Machinery. DOI: https://doi.org/10.1145/1514095.1514171
[4] Lindblom, B. (1990). Explaining phonetic variation: A sketch of the H & H theory (pp. 403-439). Kluwer Academic Publishers.
[5] Kuhl, P. K., Andruski, J. E., Chistovich, L., Chistovich, I., Kozhevnikova, E., Sundberg, U. & Lacerda, F. (1997). Cross-language analysis of phonetic units in language addressed to infants. Science 277, 684-686.
[6] Skowronski, M. D. & Harris, J. G. (2005). Applied principles of clear and Lombard speech for automated intelligibility enhancement in noisy environments. Speech Communication 48, 549-558.
[7] Smiljanić, R. & Bradlow, A. R. (2009). Speaking and hearing clearly: Talker and listener factors in speaking style changes. Language and Linguistics Compass 3, 236-264. https://doi.org/10.1111/j.1749-818X.2008.00112.x
[8] Al Moubayed, S., Skantze, G. & Beskow, J. (2013). The Furhat back-projected humanoid head: lip reading, gaze and multi-party interaction. International Journal of Humanoid Robotics.
[9] Lu, Y. & Cooke, M. (2009). The contribution of changes in F0 and spectral tilt to increased intelligibility of speech produced in noise. Speech Communication 51(12), 1253-1262.

Artificial language learning of vowel reduction

Benjamin Storme1 and Francesc Torres-Tamarit2
1University of Lausanne, 2SFL, Université Paris 8, CNRS

One typologically common pattern of vowel reduction involves raising of unstressed vowels (e.g. Bulgarian). The opposite pattern, whereby vowels are lowered in unstressed syllables, is cross-linguistically rare [1, 2]. There is a phonetic motivation for this asymmetry: high vowels are generally shorter than lower vowels [3], and should therefore provide better candidates for unstressed vowels. Also, unstressed vowels, being shorter, undershoot their F1 targets as compared to stressed vowels, and therefore get more assimilated to surrounding consonants. In so far as consonants generally have lower F1 targets than vowels, this assimilation results in lower F1 realizations for unstressed vowels (i.e. in vowel raising; [4]). This study aims to test whether the preference for raising unstressed vowels is reflected in language learning, with an artificial language raising mid vowels in unstressed syllables (e.g. singular [péka], plural [pikáta]) being easier to learn than a language lowering high vowels in unstressed syllables (e.g. singular [píka], plural [pekáta]). We expect that participants will not infer a productive rule of lowering at the same rate as they will infer a productive rule of raising. If so, this study will provide further support for the hypothesis that phonetic naturalness plays a role in phonological learning, the so-called substantive bias (see [5, 6], and references cited therein).

Participants were randomly assigned to one of two groups, which differ in the rule they are taught in the learning phase: vowel raising in unstressed syllables or vowel lowering in unstressed syllables. For half of the participants, singular disyllabic CVC[a] stems with initial stressed [í, é] and [ú, ó] corresponded to plural stems with initial unstressed [i] and [u], respectively, after /-ta/ plural suffixation. This corresponds to the natural, raising pattern of vowel reduction. For the other half, singular disyllabic CVC[a] stems with initial stressed [í, é] and [ú, ó] corresponded to plural stems with initial unstressed [e] and [o], respectively, after /-ta/ plural suffixation. This corresponds to the unnatural, lowering pattern of vowel reduction. During the learning phase, 32 item pairs were presented auditorily: 16 pairs containing an unfaithful mapping between the singular and the plural form (e.g. singular [péka], plural [pikáta] for the natural condition) and 16 pairs containing a faithful mapping between the singular and the plural forms (e.g. singular [píka], plural [pikáta] for the natural condition). In the testing phase, participants were presented auditorily with a singular stem (e.g. [péka]) and with two possible plurals: one with a high vowel with the same [back] specification (e.g. [pikáta]), and one with the corresponding mid vowel (e.g. [pekáta]). Participants were then asked to choose the correct plural. For the participants in the raising group, the correct response was the plural with a high vowel (e.g. [pikáta]). For the participants in the lowering group, the correct response was the plural with a mid vowel (e.g. [pekáta]). The testing phase included two types of stimuli: (i) 32 stimuli on which participants had been taught the rule, and (ii) 32 new stimuli that participants had not heard before. All stimuli were read by a female native speaker of Italian. We followed [5] in using non-native stimuli, given the hypothesis that their processing might be more phonetic and less likely to be influenced by knowledge of orthography. A total of 131 participants (66 in the natural condition and 65 in the unnatural condition), who had French or Spanish as their native language, were recruited among students at French-speaking and Spanish-speaking universities. The experiment was run with LimeSurvey.

The results were analyzed using a hierarchical Bayesian logistic regression in R [7], with all experimental variables and their interactions as fixed effects and the maximal random-effects structure justified by the experiment's design as random effects. The posterior probabilities (mean and 95% credibility interval) of correct responses are shown in Figure 1 for all combinations of variables.

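The authors fitted their hierarchical Bayesian logistic regression with brms in R [7]. As a much-simplified, self-contained illustration of what a "posterior probability of a correct response" with a credibility interval means, the sketch below computes a conjugate Beta-Binomial posterior per condition; the condition labels and counts are invented placeholders, not the study's data.

```python
# Simplified, hypothetical illustration of posterior means and 95% credibility intervals
# for proportion correct, using a flat Beta(1, 1) prior (not the authors' brms model).
from scipy.stats import beta

conditions = {
    # condition label: (n_correct, n_trials) -- placeholder numbers only
    "raising, trained items": (800, 1000),
    "raising, new items": (760, 1000),
    "lowering, trained items": (720, 1000),
    "lowering, new items": (700, 1000),
}

for name, (k, n) in conditions.items():
    posterior = beta(1 + k, 1 + (n - k))
    lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
    print(f"{name}: mean={posterior.mean():.2f}, 95% CrI=[{lo:.2f}, {hi:.2f}]")
```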
Overall, vowel raising appears easier to learn than vowel lowering as a pattern of vowel reduction, as expected under the hypothesis that phonological learning reflects typological and phonetic asymmetries. However, this difference reached statistical significance in only one of the 16 contexts shown in Figure 1 (a hypothesis was considered to be significantly supported by the data if its posterior probability was higher than .95). Moreover, the (non-significant) advantage for raising appears to be largely limited to back vowels. We advance two tentative explanations for these inconclusive results. First, we noted that the asymmetry favoring raising appeared to be stronger for participants who declared having no educational background in linguistics (n=28), in particular for back vowels, as shown in Figure 2a. We speculate that linguistically informed participants relied on metalinguistic awareness more than naive participants when performing the task and were therefore less sensitive to phonetic detail. To control for this confound, we plan to rerun the same study and include only linguistically naive participants. Concerning the stronger advantage for raising with back vowels, we hypothesize that it is due to a phonetic asymmetry in our stimuli. As shown in Figure 2b, high back vowels were generally shorter than their mid counterparts in unstressed syllables, but high front vowels were not shorter than their mid counterparts in unstressed syllables. If high vowels are better candidates for unstressed vowels as long as they are clearly shorter than the corresponding mid vowels, then it makes sense that raising would be easier to learn for back vowels (where raising correlates with shortening) than for front vowels (where raising does not correlate with shortening).

References

[1] Gordon, M. K. 2016. Phonological typology. Oxford: Oxford University Press.
[2] Crosswhite, K. M. 2004. Vowel reduction. In Hayes, B., Kirchner, R. & Steriade, D. (Eds.), Phonetically based phonology. Cambridge: Cambridge University Press, 191-231.
[3] Lehiste, I. 1970. Suprasegmentals. Cambridge, MA: The MIT Press.
[4] Lindblom, B. 1963. Spectrographic study of vowel reduction. Journal of the Acoustical Society of America 35, 1773-1781.
[5] Martin, A. & Peperkamp, S. 2020. Phonetically natural rules benefit from a learning bias: A re-examination of vowel harmony and disharmony. Phonology 37, 65-90.
[6] Martin, A. & White, J. 2021. Vowel harmony and disharmony are not equivalent in learning. 52, 227-239.
[7] Bürkner, P.-C. 2017. brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software 80, 1-28.

A new interactive graphical approach to visualise children's phoneme acquisition over time

Briglia A., Sauvage J.
LHUMAIN, Université « Paul-Valéry » Montpellier (France)

Data. CoLaJE is a French open-access database (available in CHILDES) containing longitudinal recordings of in vivo child spoken language from seven children who have been regularly filmed for one hour every month from their first year of life until age 5. Each recording has been transcribed in CHAT; for two children an additional transcription in orthographic norm and IPA is provided. The data are coded in two tiers: "pho" represents what the child says and "mod" represents what the child should have said according to the adult norm. This sampling scheme meets child language representativeness standards (Stahl & Tomasello, 2004). The data are turned into a machine-readable format by converting them into .txt to make them suitable for Python.

Method. We wrote a set of scripts to extract every occurrence of each of the 36 phonemes specific to French. It would have been possible to code Child-Directed Speech too, as everything adults say has been transcribed in these corpora, but we decided to focus on child production only, especially considering that the amount of data was already huge. We ordered the 36 phonemes into a hierarchical list according to their articulatory effort (Sauvage, 2015). The aim was to obtain a global overview of phoneme acquisition over age and, at the same time, to have the possibility to focus on the specific development of any given phoneme in relation to every other one: to do so, we chose to adopt the Multiresolution Streamgraph (Cuenca et al., 2018). This graph has been conceived to overcome readability and scalability issues in the visualisation of multiple time series: it allows navigation through the different hierarchical levels, providing a way to aggregate and disaggregate its structure, as well as to filter and highlight a specific element. It is possible to know how many occurrences of any phoneme have been pronounced by any of the seven children in any of the recording sessions. The results are consistent with the current literature on consonant acquisition (McLeod et al., 2018) in the sense that they approximately fit the state-of-the-art research on the order of acquisition of consonants and vowels. A comparison has been made across the children of this dataset, allowing us to establish the amount of differences and similarities between them by overlapping graphs in a synoptic way. The results obtained were consistent with previous studies on French (Yamaguchi, 2012). The advantage of this graph is its versatility; the main limit is that it does not consider coarticulatory differences. As a number of phonemes differ in their pronunciation according to the place they occupy (onset, nucleus, coda), the visualisation is somewhat coarse-grained. Tackling the problem of coarticulatory differences with programming languages is currently one of the most difficult tasks to solve in NLP.

Results and discussion. Among current theories of phonological acquisition, in particular OT (Prince, 1993), we thought that Clements' « Theory of phonological features » was the closest to being complementary to the study of frequency effects in language acquisition, especially for its quantitative groundedness. We tested whether the occurrences in this graph fit the three pillars of this theory: according to the « markedness avoidance » principle, we found that marked phonemes occur most of the time less often than unmarked ones (e.g. « p » occurs more times than « b », « t » occurs more times than « d », « k » more times than « g », and so on).

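The counting behind observations of this kind can be sketched very simply; the example below tallies phoneme occurrences from a plain-text export of the "pho" tier. It is only an illustration: the file name and the one-phoneme-per-token line format are hypothetical simplifications of the CHAT/IPA transcriptions actually used.

```python
# Minimal sketch: counting phoneme occurrences in a (hypothetical) plain-text "pho" tier export.
from collections import Counter

phoneme_counts = Counter()
with open("adrien_pho_tier.txt", encoding="utf-8") as f:
    for line in f:
        # assumed simplified format: one utterance per line, phonemes separated by spaces
        phoneme_counts.update(line.strip().split())

for voiceless, voiced in [("p", "b"), ("t", "d"), ("k", "g")]:
    print(voiceless, phoneme_counts[voiceless], "vs.", voiced, phoneme_counts[voiced])
```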
136 137 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 POSTER SESSIONS segment tend to be used to define others - is coherent to what it is possible to observe from DAY 1 - Poster Session C the graphs : as this feature specifies more phonemes than any other one in French, those Prosody: Information Structure features appear more frequently and - a fortiori - children prioritize their acquisition. The same holds for principle of feature hierarchy : consonants whose defining traits are placed Prosodic expression of information status in Italian at the root are learnt most of the time before consonants whose defining traits are placed in subsequent branches (Yamaguchi, p130). These preliminary results are part of a Phd project : we are currently working on the analyses of the same corpora through features instead of Simona Sbranna, Caterina Ventura, Aviad Albert, Martine Grice phonemes. By visualising the hierarchy of features in the Multistream, we suppose to improve Universität zu Köln current knowledge on the path of phonological variations (i.e the sequence of temporary achieved stages toward the adult norm). To give an example, « tʁaktœʁ » is never pronounced Keywords: prosodic marking, information status, periogram, Italian. « dʁaktœʁ » but instead « kʁaktœʁ » as « k » is voiceless as « t ». In this way we would try to improve knowledge about the relation between feature hierarchy and acquisition. As far as Introduction. Previous research investigating prosodic marking of information status claims we know, the visualisation technique we propose is innovative and could be used as a tool that Italian tends to resist deaccentuation of given elements [1, 2, 3]. In particular, Italian to help researchers to answer questions about the L1 acquisition process in any language. reportedly accents post-focal given information within noun phrases (NPs), making it difficult to reconstruct the information status of items from the acoustic signal [4]. However, so far there have only been categorical descriptions of accent patterns, risking the loss of important information about the continuous phonetic parameters that contribute to pitch accent categorisation. Since subtle modulations of continuous prosodic parameters may play a role in differentiating information structures, this paper uses a novel approach to explore how speakers modulate prosodic cues for this purpose. The variety of Italian investigated is that spoken in Naples.

Method. We designed a board game to elicit utterance-final NPs with disyllabic nouns and adjectives evoking different information structures – given-new (GN), new-new (NN), new-given (NG) – in statements. A total of 774 analysable items were obtained from 54 speakers. We investigated continuous periodic energy and F0 trajectories to achieve enriched visual representations of pitch contours, i.e. periograms [5], and to quantify F0 contour shapes in temporal and spectral dimensions via synchrony and scaling [6, 7]. Synchrony is the distance between the tonal centre of gravity [8] and the centre of periodic mass within syllabic intervals; it reflects the perception of a rising or falling F0 shape within a syllable. Scaling reflects the difference between F0 values at the centre of periodic mass across two consecutive syllables. To validate the production data, three phoneticians, native speakers of the Neapolitan variety, listened to the analysed items in a context-matching perception study, indicating whether items were perceived as GN, NN, or NG.
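
For concreteness, the two measures can be sketched as follows in Python. The arrays, syllable boundaries and the exact weighting used here are illustrative assumptions only; the authors' ProPer implementation [7] differs in detail (e.g. in how periodic energy is smoothed and thresholded).

import numpy as np

def centre_of_mass(t, w):
    # Time point of the centre of mass of the weights w over the time axis t.
    return float(np.sum(t * w) / np.sum(w))

def synchrony_and_scaling(t, f0, energy, syllables):
    # t: times (s); f0: F0 (Hz); energy: periodic energy; syllables: list of (start, end) in s.
    # Synchrony (ms): tonal centre of gravity minus centre of periodic mass, per syllable.
    # Scaling (Hz): F0 at the centre of periodic mass minus the same value in the previous syllable.
    f0_at_com, synchrony = [], []
    for start, end in syllables:
        sel = (t >= start) & (t < end)
        tcog = centre_of_mass(t[sel], f0[sel] * energy[sel])  # F0- and energy-weighted time
        com = centre_of_mass(t[sel], energy[sel])             # energy-weighted time only
        synchrony.append((tcog - com) * 1000.0)
        f0_at_com.append(float(np.interp(com, t[sel], f0[sel])))
    scaling = [float("nan")] + [b - a for a, b in zip(f0_at_com, f0_at_com[1:])]
    return synchrony, scaling

if __name__ == "__main__":
    t = np.linspace(0.0, 0.4, 200)
    f0 = 200.0 + 40.0 * np.sin(2 * np.pi * t)   # toy F0 contour
    energy = np.ones_like(t)                     # toy (flat) periodic energy
    print(synchrony_and_scaling(t, f0, energy, [(0.0, 0.2), (0.2, 0.4)]))
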

Results. Example periograms and periodic energy curves for the three conditions [Fig. 1] reveal two patterns (GN and NN vs. NG), reflected in the averaged values for synchrony and scaling in the entire corpus [Fig. 2.a]. GN and NN items have a similar contour with a late F0 fall on the second syllable, while NG items have an earlier F0 fall, on the first syllable. In all conditions the F0 contour on the adjective (third and fourth syllables) is relatively flat, starting with a slight fall in the GN and NN conditions. These patterns are reflected in the synchrony and scaling values [Fig. 2.a]. In GN and NN the positive synchrony value on the first syllable signals a rising movement, and the negative value on the second syllable indicates a mostly falling movement. In addition, the positive scaling value on the second syllable confirms a rise from the first to the second syllable. By contrast, in NG the F0 fall already starts on the first syllable, reflected in the negative scaling value on the second syllable, suggesting a falling movement across the first and second syllables. In both GN and NN the F0 reaches its lowest point on the third syllable and remains stable across the fourth syllable, as indicated by scaling values approaching zero. Results from the context-matching study [Tab. 1] show that GN and NN items are frequently confused, whereas agreement was consistently more robust for NG items, reflecting the patterns in synchrony and scaling. Fig. 2.b displays the values of synchrony and scaling for the items that were correctly matched by all three listeners.

Here, the distribution of the values for NG corresponds more closely to the periogram in Fig. 1 than do the values shown in Fig. 2.a, which include all items. In particular, the broad distribution for synchrony on the first syllable [Fig. 2.a] is replaced by a narrower distribution with a tendency towards negative values [Fig. 2.b], indicating an earlier F0 falling movement compared to the other conditions. This is even clearer for the values of scaling on the second syllable, where a shift of the distribution towards negative values indicates a clearer distinction between the early falling F0 movement in NG and the later fall in NN and GN. This result suggests that those instances of NG realised with an F0 fall starting on the second syllable, i.e. later, were unlikely to be recognised as corresponding to an NG structure.

Conclusion. The variability in the acoustic correlates of prosodic prominence creates a picture as complex as it is informative and shows that a purely categorical approach in terms of accentuation and deaccentuation is inadequate, as many cues lie in the first, clearly accented word in the NP. The methods presented here, which allow us to observe subtle modulations of continuous prosodic parameters, indicate that speakers of the Neapolitan variety of Italian do mark information status prosodically within NPs. Although they do not make this distinction through the presence or absence of deaccentuation, they still distinguish between new and given items postfocally. This result is further validated by the fact that the intended information status, especially in the NG case, can be perceived entirely from the prosody, i.e. with no contextual cues.

Figure 2. Synchrony and scaling averaged across speakers. Violin plots for synchrony (left) and scaling (right), averaged across all speakers for all items (A) and for a subset of the data whose information structure was verified by being correctly recognised in a perception task (B). Colour codes denote the three conditions (GN, NN and NG). The x-axis displays the four syllables of each NP; the y-axis shows values for synchrony (ms) and scaling (Hz). Values above the horizontal line crossing zero are positive, below it negative. Black points on the violins represent mean values.

Bibliography

[1] Ladd, R. 1996. Intonational Phonology. Cambridge: Cambridge University Press.
[2] Avesani, C. 1997. I toni della RAI. Un esercizio di lettura intonativa. In Aa.Vv., Gli italiani trasmessi: la radio. Firenze: Accademia della Crusca, 659-727.
[3] Avesani, C. & Vayra, M. 2005. Accenting, deaccenting and information structure in Italian dialogues. In Proceedings of the 6th SIGdial Workshop on Discourse and Dialogue.
[4] Swerts, M., Krahmer, E. & Avesani, C. 2002. Prosodic marking of information status in Dutch and Italian: a comparative analysis. Journal of Phonetics, 30(4), 629-65.
[5] Albert, A., Cangemi, F. & Grice, M. 2018. Using periodic energy to enrich acoustic representations of pitch in speech: A demonstration. Proceedings of the 9th Speech Prosody Conference, June 2018, Poznan, 1-5.
[6] Cangemi, F., Albert, A. & Grice, M. 2019. Modelling intonation: Beyond segments and tonal targets. In Sasha Calhoun, Paola Escudero, Marija Tabain & Paul Warren (eds.), Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia 2019, 572-576.
[7] Albert, A., Cangemi, F., Ellison, T. M. & Grice, M. 2020. ProPer: PROsodic analysis with PERiodic energy. OSF. March 4. doi:10.17605/OSF.IO/28EA5.
[8] Barnes, J., Veilleux, N., Brugos, A. & Shattuck-Hufnagel, S. 2012. Tonal Center of Gravity: A global approach to tonal implementation in a level-based intonational phonology. Laboratory Phonology 3. 10.1515/lp-2012-0017.

Topic and focus accents in closely related varieties of Campania Italian

Violetta Cataldo1,2, Riccardo Orrico3 and Claudia Crocco2
1University of Salerno, 2Ghent University, 3Università degli Studi di Napoli "Federico II"

In the last two decades the literature has described the intonation of several varieties of spoken Italian (a.o., [1]), outlining an extremely complex picture characterised by pronounced variation both within and across varieties. However, the relation among geographically close varieties has scarcely been investigated. Recently, [2] considered dialectological isogloss boundaries as a starting point to study geographically close varieties of Italian. Along these lines, we propose a first experimental investigation exploring closely related varieties of Italian spoken in Campania, namely Salerno (SI) and Cilento (CI) Italian, which will be compared to Neapolitan Italian (NI). On a dialectological basis, these points are distinguished by the Eboli-Lucera isogloss ([3], [4]), running south of Salerno; the varieties have Campania (Naples and Salerno) and Lucania (Cilento) vernaculars as substrate. In this study we explore the realization of pitch accents in sentence-initial noun phrases (NP) in SI and CI. We consider statements under three pragmatic conditions: (1) Regular Topic (RT) in exhaustive answers, (2) Partial Topic (PT [5]) in non-exhaustive answers, (3) Contrastive Focus (CF) in corrective answers. Such conditions have been studied for NI in [6], [7], showing the presence of a phrase break between NP and the verb phrase (VP) in PT and CF, and differences in pitch accent alignment and span among the three conditions. Differences between RT and PT peak alignment were previously observed in a qualitative comparison between SI and NI. Here we carry out a replication study to enlarge the picture of Campania Italian to SI and CI and to identify similarities and differences across diatopically close varieties. SI and CI speakers performed a reading task eliciting NP + VP target sentences (ex. 1), with NP (CV'CVCV) in 3 conditions: RT, PT, CF. Due to the COVID-19 pandemic, data collection was carried out remotely via Zencastr (.wav, 44100 Hz, 16 bit; [9]). The final dataset consists of 480 items: 10 target utterances * 3 conditions (RT, PT, CF) * 2 renditions * 4 speakers * 2 varieties (SI, CI). The sample was segmented using WebMAUS [10]; a set of Praat scripts was used to extract duration and f0 measurements (in Hz) and to calculate peak latency (in ms), span (in ST) and slope. Data were analysed using mixed-effects linear regression models in R [11].

(1) Milena (NP) lo vuole amaro (VP).
    'Milena (NP) takes it (the coffee) black (VP).'
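
The accent measures mentioned above can be approximated with a few lines of Python, shown here only as a rough sketch with invented values; the actual measurements come from the authors' Praat scripts, and the choice of the F0 minimum preceding the peak as the reference point for span and slope is an assumption of this illustration.

import numpy as np

def accent_measures(t, f0, syllable_offset):
    # t: times (s); f0: F0 (Hz) over the accented region; syllable_offset: end of the stressed syllable (s).
    # Returns peak latency (ms, relative to the stressed-syllable offset),
    # span (semitones from the preceding F0 minimum to the peak) and slope (ST/s).
    i_peak = int(np.argmax(f0))
    i_min = int(np.argmin(f0[: i_peak + 1]))           # F0 minimum before the peak
    span_st = 12.0 * np.log2(f0[i_peak] / f0[i_min])   # span in semitones
    rise_time = t[i_peak] - t[i_min]
    slope = span_st / rise_time if rise_time > 0 else np.nan
    latency_ms = (t[i_peak] - syllable_offset) * 1000.0
    return latency_ms, span_st, slope

if __name__ == "__main__":
    t = np.linspace(0.0, 0.4, 81)
    f0 = 180.0 + 60.0 * np.sin(np.pi * t / 0.4)  # toy rise-fall contour
    print(accent_measures(t, f0, syllable_offset=0.15))
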

Our results show that the three conditions are phonetically encoded in a similar way in the SI and CI varieties (Figure 1) and in line with NI [7]. Both in CI and SI, RT and PT are realized as a rise, reaching the peak late in the vowel. The difference between these two conditions lies in the span, which is significantly wider in PT (Est: 2.9, p < .001), and in the slope of the rise, which is steeper in PT (Est: 9.9, p < .001). CF, on the other hand, is realized as a rise-fall movement, with the peak aligned earlier than in both PT (Est: -140.9, p < .001) and RT (Est: -122.8, p < .001). For the other parameters, CF is less clearly differentiated from the two other conditions: a steeper slope groups CF and PT together vs. RT (Est: 17.5, p < .001); however, along the dimension of span, CF is not significantly different from either PT or RT. As for duration, the NP-final syllable of PT is lengthened compared to RT (Est: 36.4, p < .05), suggesting the presence of a break in PT between NP and VP, although in [7] a lengthening of the NP-final vowel was also found for NI. A number of phonetic differences among the varieties (SI, CI and NI) can be noticed. Firstly, the difference between the early (CF) and the late (RT and PT) peak alignment is kept constant across SI and CI (Figure 2), though in CI the position of the peaks is overall earlier than in SI (Est: -33.2, p < .01). A second difference emerges from the comparison of SI and NI, and confirms the preliminary results of [8]. In SI, PT peaks are aligned in the post-tonic syllable (73.2%), while RT peaks are aligned within the stressed syllable. In contrast, NI data show that this realization is reversed. Overall, the three conditions are distinguished within each variety by combinations of different phonetic features. However, differences in peak alignment indicate diatopic variation, in that SI, CI and NI differ in the phonetic implementation of the contrast among PT, RT and CF.

Figure 1. F0 curves of the topic NPs according to the three information conditions of RT (red line), PT (green line) and CF (blue line) in CI (left panel) and SI (right panel).

Figure 2. Time in ms of pitch accent peak alignment, measured from the stressed open syllable offset, according to the topic condition (RT, PT, CF) and the variety (SI, CI).

References

[1] Gili Fivela, B., Avesani, C., Barone, M., Bocci, G., Crocco, C., D'Imperio, M., Giordano, R., Marotta, G., Savino, M. & Sorianello, P. 2015. Intonational phonology of the regional varieties of Italian. In Intonation in Romance. Oxford University Press, 140-197.
[2] Gili Fivela, B. & Nicora, F. 2018. Intonation in Liguria and Tuscany: checking for similarities across a traditional isogloss boundary. In Vietti, A., Spreafico, L., Mereu, D. & Galatà, V. (Eds), Il parlato nel contesto naturale, Studi AISV. Milano: Officinaventuno, 131-156.
[3] Avolio, F. 1989. Il limite occidentale dei dialetti lucani nel quadro del gruppo 'altomeridionale': considerazioni a proposito della linea Salerno-Lucera. L'Italia dialettale, 52, 1-22.
[4] Avolio, F. 2000. Ma nuje comme parlamme? Problemi di descrizione e classificazione dello spazio dialettale "campano". Romance Philology, 54(1), 1-28.
[5] Büring, D. 1997. The Meaning of Topic and Focus – the 59th Street Bridge Accent. London: Routledge.
[6] Brunetti, L., D'Imperio, M. & Cangemi, F. 2010. On the prosodic marking of contrast in Romance sentence topic: evidence from Neapolitan Italian. In Proceedings of the 5th International Conference on Speech Prosody, Chicago.
[7] D'Imperio, M. & Cangemi, F. 2011. Phrasing, register level downstep and partial topic constructions in Neapolitan Italian. In Gabriel, C. & Lleó, C. (Eds.), Intonational phrasing in Romance and Germanic: Crosslinguistic and bilingual studies. Amsterdam: John Benjamins, 75-94.
[8] Orrico, R. 2020. Individual variability in intonational meaning identification: The role of cognitive and sociolinguistic variables. PhD thesis.
[9] Magistro, G. (in prep.). Prosodic cues to syntactic reanalysis. Experimentally tracking Jespersen's cycle in progress. PhD dissertation, Ghent University.
[10] Kisler, T., Reichel, U.D. & Schiel, F. 2017. Multilingual processing of speech via web services. Computer Speech & Language, 45, 326–347.
[11] R Core Team 2019. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. URL https://www.R-project.org/
Focus marking by Spanish monolingual and heritage speakers

Ingo Feldhausen1 and Maria del Mar Vanrell2
1Université de Lorraine & ATILF-CNRS, 2Universitat de les Illes Balears

In this paper, we shed new light on the question of how narrow information and contrastive focus is prosodically and syntactically realized by monolingual (MS) and heritage (HS) speakers of Castilian Spanish (CS). Research on the syntax-prosody interface in the expression of focus by HS is relatively new (see [1], [2], i.a.). We thus add to this body of research by presenting the insights of a production test conducted with 12 Spanish MS and HS (with German as their dominant language). Our five main findings:
(a) Bilingual speakers realize both types of focus almost always by stress shift, (1a), and the pitch accent is predominantly realized as L+H* (see section Bilinguals);
(b) Monolingual speakers, in turn, realize information focus by different strategies (cf. (1b), (2), and (3)), but stress shift is not a relevant option (see section No stress shift);
(c) Cleft constructions are used by monolinguals for both focus types, even though there are certain preferences (see section Cleft and focus type);
(d) Focus does not always have to bear sentential stress: in clefts, prosodic alignment can be a sufficient correlate of focus (see section Focus without sentential stress);
(e) Monolinguals typically realize the pitch accents as L+H* for non-final focused constituents and L* for final focused constituents.

Methodology: We conducted a production test based on semi-spontaneous speech designed to elicit different focus readings by means of question-answer pairs from short picture stories. A total of 2508 contours were obtained (monolinguals: 1848; bilinguals: 660; all native speakers of Castilian Spanish). No stress shift: There is an ongoing discussion on how focus is realized in Spanish. Theoretical work (such as [3], [4]) argues that neutrally focused elements must be located in sentence-final position (via p-movement, (1b)) in order to receive main stress by means of the Nuclear Stress Rule. Empirical studies, in turn, show that neutrally focused elements can actually be realized in situ (1a) and that this option reflects the predominant strategy for focus realization (e.g. [5], [6], i.a.). Our empirical results for the monolingual speakers (N=7) show that stress shift is not an option in Castilian Spanish (1a). Cleft and focus type: While the cleft constituent (such as Juan in (3)) is generally considered to be the contrastively focused element in Spanish (see, e.g., [3]), [7] states that simple clefts on the one hand and (inverted) pseudo-clefts on the other hand have different information-structural properties. Our study – as far as we know – represents the first empirical verification of this claim and confirms it; see Table 2. Focus without sentential stress: It is generally accepted that focus in Spanish bears sentential stress (see, e.g., [3], i.a.). However, contrary to what has been claimed in the past, our results show that the contrastively focused constituent in clefts such as (3a) does not always bear sentential stress (in up to 80% of the cases), independently of the grammatical function of the clefted element. Bilinguals: The results for the German-Spanish bilingual speakers show a clear preference for stress shift in both informational and contrastive focus, see Table 1 (in line with other studies on bilinguals, e.g. [6]). The realized focal pitch accent is almost always L+H*, but the syllable bearing it is longer in contrastive contexts. Interestingly, the few instances of p-movement attested in the bilingual data occur with contrastive focus and not with information focus.

Thus, the bilinguals clearly differ from the monolinguals. Future research will show whether the differences might be due to the influence of German (a language allowing for stress shift) or whether stress shift is a default strategy of bilinguals.

(1) a. [F Los aLUMnos] se enfrentaron con la policía. (*CastSp. / AmSp.)
       'The students confronted the police.'
    b. Se enfrentaron con la policía [F los aLUMnos]. (CastSp. / okAmSp.)

(2) [CF ManZAnas] compró Pedro (y no peras). Contrastive focus
    'Pedro bought apples (and not pears).'

(3) a. Es Juan el que viene. Clefts
       'It is Juan who comes.'
    b. El que viene es Juan. Pseudo-clefts
    c. Juan es el que viene. Inverted pseudo-clefts

Table 1: Types and frequency of focus marking strategies in neutral focus and contrastive focus declaratives; types of clefts (see (3)) are not distinguished here.
[F S]   Monolinguals: Clefting 71.1%, P-movement 14.5%   Bilinguals: Stress shift 77%, Clefting 18%
[CF S]  Monolinguals: Clefting 61.4%, Focus fronting 15%  Bilinguals: Stress shift 72%, Clefting 23%
[F OO]  Monolinguals: P-movement 47.9%, Clefting 23.3%    Bilinguals: Stress shift 83%, Clefting 15%
[CF OO] Monolinguals: Clefting 61.8%, Focus fronting 23.6%  Bilinguals: Stress shift 63%, P-movement 27%
[F OOI] Monolinguals: Neutral WO 43.6%, Clefting 21.3%    Bilinguals: Neutral WO 99%
[CF OOI] Monolinguals: Clefting 41.2%, Focus fronting 23.7%  Bilinguals: Neutral WO 87%

Table 2: Types and frequency of cleft constructions attested in neutral and contrastive focus declaratives (monolingual speakers).
Neutral focus: Clefts 44.9%, Pseudo-clefts 13.4%, Inverted pseudo-clefts 41.5%
Contrastive focus: Clefts 70.98%, Pseudo-clefts 23.52%, Inverted pseudo-clefts 5.4%

[1] van Rijswijk, R., Muntendam, A. G. & Dijkstra, T. 2017. Focus in Dutch reading: an eye-tracking experiment with heritage speakers of Turkish. Language, Cognition and Neuroscience 32(8), 984-1000.
[2] Kim, J.-Y. 2018. Heritage speakers' use of prosodic strategies in focus marking in Spanish. International Journal of Bilingualism 23(5), 986-1004.
[3] Zubizarreta, M. L. 1998. Prosody, Focus, and Word Order. Cambridge, MA: The MIT Press.
[4] Gutiérrez-Bravo, R. 2002. Focus, Word Order variation and intonation in Spanish and English. In C. Wiltshire & J. Camps (eds), Romance Phonology and Variation. John Benjamins, 39-53.
[5] Muntendam, A. 2013. On the nature of cross-linguistic transfer. BLC 16(1), 111–131.
[6] Leal, T., Destruel, E. & Hoot, B. 2018. The realization of information focus in monolingual and bilingual native Spanish. LAB 8(2), 217–251.
[7] Moreno Cabrera, J.C. 1999. Las funciones informativas: las perífrasis de relativo. In I. Bosque & V. Demonte (eds.), Gramática Descriptiva de la Lengua Española. Madrid: Espasa, 4245-4302.

Fronted information focus in Catalan at the syntax-phonology interface

Gorka Elordieta1, Francesc Torres-Tamarit2 and Maria del Mar Vanrell3
1University of the Basque Country, 2CNRS-Université Paris 8, 3Universitat de les Illes Balears

It is generally assumed that Catalan resorts to syntactic movement to mark focus. The specific strategies used to alter the canonical word order are apparently determined by focus type. Since information focus must be in sentence-final position to receive phrasal stress ([1]), the non-focal material has to be either left- or right-dislocated ([2]). Other mechanisms such as focus fronting or clefting seem to correspond to a contrastive interpretation ([3], i.a.). However, [4], i.a., observe that constituent fronting in Majorcan Catalan is also a very common strategy to mark information focus, and that fronted information foci are characterized by a low nuclear tone followed by rising-falling boundary tones which extend to the postfocal region. No phonological analysis of this type of fronted information focus has been developed. In this paper we explore the prosodic realization of fronted information focus in Majorcan Catalan and offer a phonological analysis framed within Match Theory, a constraint-based model of the syntax-phonology interface put forward in [5], [6], [7] and [8], i.a. A production task has been designed to elicit fronted information focus sentences in Majorcan Catalan. The participants will be presented with written questions in a PowerPoint presentation which will be read by the researcher (e.g. Què venia na Joana de Sant Jordi? 'What did Joana from Sant Jordi sell?'). In the following slide, the participants will be presented with a written response, that is, the fronted information focus constituent (e.g. Ceràmica 'ceramic'). The participants will be asked to produce a response using all sentence constituents (e.g. including the verb plus all postfocal constituents), which appear below the fronted information focus constituent in randomized order.
All sentence constituents are controlled for size (short = one prosodic word, long = two prosodic words), syntactic function of the focus (S = subject, D = direct object, I = indirect object), and syntactic structure (SVDI, SVD, SV, S). The experiment consists of a total of 144 items (72 target sentences x 2 repetitions). The prosodic grouping of the utterances is analyzed. Segmental and tonal targets are labelled by hand and measurements with respect to F0 timing are collected automatically through a Praat script. The results of a pilot study on one participant show that the boundary tone H- associated with the focused constituent has a secondary association with the nuclear stressed syllable of a constituent to the right of the focalized constituent, that is, in the postfocal region. If the postfocal constituent after the verb is heavy enough to be parsed into its own phonological phrase, in compliance with the syntactic structure (1), then H- associates with the verb (2) (Fig. 1, left panel). Otherwise, if the postfocal constituent is not parsed into its own phonological phrase but instead into a phonological phrase together with the verb, H- associates with the nuclear stressed syllable of the postfocal constituent (4) (Fig. 1, right panel), producing an unfaithful syntax-prosody mapping (3). Our analysis of the observed pattern is that the H- boundary tone of the focalized constituent has a secondary association with the rightmost stressed syllable of the phonological phrase that contains the focused constituent. Under a theory of recursive prosodic structure (cf. [9], [8], i.a.), this is a maximal phonological phrase, containing the minimal phonological phrase that is the focalized constituent. Thus, a crucial assumption of our analysis is that prosodic structure is recursive, as the phonological phrases involved can be minimal and maximal. Another relevant aspect of our proposal is that phonological well-formedness (markedness) constraints on the size of prosodic constituents can override the effects of syntax-prosody interface constraints and thus have an effect on the secondary association of H-.

(1) [ForceP Force [TopP [Top' Top [FocP [NP ceràmica ] [Foc' Foc [FinP Fin [TP venia [vP v+V [VP V [NP ceràmica ]]]]]]]] [DP na Joana de Sant Jordi ]]]

(2) ((Ceràmica)φmin venia)φmax (na Joana de Sant Jordi)φmax
     L*          H- ____|

(3) [ForceP Force [TopP [Top' Top [FocP [NP ceràmica ] [Foc' Foc [FinP Fin [TP venia [vP v+V [VP V [NP ceràmica ]]]]]]]] [DP na Joana ]]]

(4) ((Ceràmica)φmin (venia na Joana)φmin)φmax
     L*          H- ______|

Figure 1. Waveform, spectrogram, f0 contour and orthographic, phonetic and prosodic transcription of [Ceràmica]F venia na Joana de Sant Jordi (left panel) and [Ceràmica]F venia na Joana (right panel), 'Joana (from Sant Jordi) sold ceramic'.

References

[1] Chomsky, N. & Halle, M. 1968. The Sound Pattern of English. New York: Harper & Row.
[2] Vallduví, E. 1991. The role of plasticity in the association of focus and prominence. Proceedings of the Eastern States Conference on Linguistics (ESCOL) 7, 295-306.
[3] Solà, J. 1990. L'ordre dels mots en català. Notes pràctiques. In Solà, J. (Ed.), Lingüística i normativa. Barcelona: Empúries, 91-124.
[4] Vanrell, M. M. & Fernández-Soriano, O. 2013. Variation at the interfaces in Ibero-Romance: Catalan and Spanish prosody and word order. Catalan Journal of Linguistics 12, 253-282.
[5] Selkirk, E. 2011. The syntax-phonology interface. In Goldsmith, J., Riggle, J. & Yu, A. (Eds), The handbook of phonological theory, 2nd edition. Hoboken, New Jersey: Wiley Blackwell, 435-484.
[6] Elordieta, G. & Selkirk, E. 2018. Notes on prosodic headedness and tone in Tokyo Japanese, Standard English, and Northern Bizkaian Basque. In Hana-Bana: A Festschrift for Junko Ito and Armin Mester [https://escholarship.org/uc/item/09m654fp].
[7] Kratzer, A. & Selkirk, E. 2020. Deconstructing information structure. Glossa 5(1), 113.
[8] Elfner, E. 2012. Syntax-prosody interactions in Irish. Doctoral dissertation, University of Massachusetts, Amherst.
[9] Ito, J. & Mester, A. 2013. Prosodic subcategories in Japanese. Lingua 124, 20-40.

Exploring the interaction between co-speech gestures and speech prosody in the marking of contrastive focus in French: a pilot study

Clara Lombart
University of Namur (NaLTT, LSFB-Lab)

Recent studies (e.g., [1], [2], [3]) have begun to define prosody not only on the basis of acoustic cues but also on the basis of co-speech gestures. Formal and functional relationships between these two levels of structure suggest that prosody is fundamentally audiovisual. A case in point is the marking of the overlap of two independent informational notions, contrast and focus, i.e., contrastive focus. Contrastive focus is defined as new information used to be opposed to given information (e.g., [4], [5]). In this poster presentation, I consider three major types of contrastive foci: correction, selection and discourse contrast (see examples (1)-(3)). Each type of contrastive focus can be marked by prosodic cues (i.e., F0 rise, higher pitch range, longer syllabic duration, and longer pauses) (e.g., [6], [7], [8] for French) and by co-speech gestures. For instance, an eyebrow raise and/or a head nod can co-occur with an F0 rise (e.g., [9], [10], [11], [12]). In this poster presentation, I put forward some aspects of my PhD project, which is guided by the following research questions: 1) are the prosodic cues of contrastive focus in French associated with co-speech gestures?, and 2) are these gestures formally and functionally different from the prosodic markers used in French Belgian Sign Language (LSFB)? For the French part, my study relies on videotaped data extracted from the FRAPé corpus ([13]). The selected data were produced by four people interacting with someone familiar during four spontaneous tasks of free composition (i.e., describing pictures, drawing faces, categorizing, explaining differences). The analysis of the data is still ongoing. Contrastive foci are delimited from an informational point of view. The Inter-Pausal Units (IPUs) that contain them, and 50% of the preceding and following IPU, are annotated at the level of speech (i.e., lengthening, syllabic duration, F0 per syllable, pitch range, prosodic prominence, and articulation rate) and of hand/head/eyebrow/body gesture (i.e., form, duration, and temporal coordination with speech). This methodology was elaborated after a pretest (outside the FRAPé corpus) during which a French native speaker interacted with a friend (both aged between 18 and 25 years old) during a "Guess Who?" game.
79 of the IPUs (i.e., 186 tokens, 262 syllables, and 519 phones) produced during 5 minutes of recording were analyzed, and 34 of them contained a contrastive focus.
The results of the pretest show that the prosodic marking of contrastive focus in French is different from that of non-contrastive (parts of) IPUs. It is characterized by 1) a lower pitch range, 2) a lower average syllabic duration, 3) more pitch compressions and expansions, 4) a slower articulation rate, and 5) different properties of prominent syllables (i.e., F0 of the vowels, duration, silent pauses, and rising tones). A different prosodic pattern is also associated with each type of contrastive focus. Furthermore, 82.3% of the contrastive foci are produced with at least one gesture, mainly a head movement (see example (4)). Some variation is noticeable in the form of the gestures, their duration, their syllabic position, and in the number of gestures produced during contrastive foci. However, some trends emerge from the corpus. In fact, the articulators (hands, head, eyebrows, or body) used for co-speech gestures are related to different types of contrastive foci and/or to different values of the prosodic cues. One example is that hand gestures are linked with the longest syllables (mean duration: 257 ms) of the pretest corpus (mean duration: 186 ms). These first results already allow us to highlight the correlation between gesture and prosody in the marking of contrastive focus in French. They need to be confirmed by the analyses of the corpus selected for the ongoing study. The integration of syntactic strategies,

the interaction between prosodic, syntactic, and gestural resources, as well as the consideration of task effects and task progression, are other aspects that I am going to address in my research.

1) Contrastive focus: correction type (within square brackets)
   A: Does she have a fringe?
   B: No, she [doesn't have a fringe]
2) Contrastive focus: selection type (within square brackets)
   A: Is your character a woman or a man?
   B: A [woman]
3) Contrastive focus: discourse contrast type (within square brackets)
   Does your character have [a pale] or [a rather tanned] skin color?
4) Contrastive focus (correction type, within square brackets) accompanied by a head movement
                                                ______head
   Does your hair... Does your [character] have brown hair?

[1] Guellaï, B., Langus, A. & Nespor, M. 2014. Prosody in the hands of the speaker. Frontiers in Psychology 5(700), 1-8.
[2] Loehr, D. 2014. Gesture and prosody. In Müller, C., Cienki, A., Fricke, E., Ladewig, S.H., McNeill, D. & Bressem, J. (Eds.), Body - Language - Communication. An International Handbook on Multimodality in Human Interactions 2. Berlin/Boston: De Gruyter Mouton, 1381-1391.
[3] Wagner, P., Malisz, Z. & Kopp, S. 2014. Gesture and speech in interaction: An overview. Speech Communication 57, 209-232.
[4] Lambrecht, K. 1994. Information structure and sentence form. Topic, focus, and the mental representations of discourse referents. Cambridge: Cambridge University Press.
[5] Navarrete-González, A. 2019. The notion of focus and its relation to contrast in Catalan Sign Language (LSC). Sensos-e 6(1), 18-40.
[6] Beyssade, C., Delais-Roussarie, E., Marandin, J.-M. & Rialland, A. 2004. Prosody and Information in French. In Corblin, F. & de Swart, H. (Eds.), Handbook of French Semantics. Chicago: University of Chicago Press, 477-500.
[7] Féry, C. 2001. Focus and Phrasing in French. In Féry, C. & Sternefeld, W. (Eds.), Audiatur Vox Sapientiae. Berlin/Boston: De Gruyter Mouton, 1-29.
[8] Lacheret, A. 2003. Focalisation et circonstance. Que nous dit la prosodie du français parlé ? Bulletin de la Société de Linguistique de Paris, 137-160.
[9] Ambrazaitis, G. & House, D. 2017. Multimodal prominences: Exploring the patterning and usage of focal pitch accents, head beats and eyebrow beats in Swedish television news readings. Speech Communication 95, 100-113.
[10] Esteve-Gibert, N., Loevenbruck, H., Dohen, M. & D'Imperio, M. 2017. The development of prosodic and gesture cues to focus in French preschoolers. Poster at Phonetics and Phonology in Europe (Cologne).
[11] Ferré, G. 2014. A multimodal approach to markedness in spoken French. Speech Communication 57, 268-282.
[12] Prieto, P., Puglesi, C., Borràs-Comes, J., Arroyo, E. & Blat, J. 2015. Exploring the contribution of prosody and gesture to the perception of focus using an animated agent. Journal of Phonetics 49, 41-54.
[13] Meurant, L., Lepeut, A., Gabarró-López, S., Tavier, A., Vandenitte, S., Lombart, C. & Sinte, A. Ongoing. The Multimodal FRAPé Corpus: Towards building a comparable LSFB and Belgian French Corpus. University of Namur: Laboratory of French Belgian Sign Language (LSFB-Lab).

On the variation of post-focal prosody – partial and complete register compression

Frank Kügler
Goethe University Frankfurt

A focused constituent is prosodically marked, highlighting or demarcating this constituent from other constituents [1]. Cross-linguistically, however, it remains an open issue to what extent post-focal, usually given elements and the prosodic variation found in that domain relate to the expression of focus marking. This paper explores the conditions of the variation found in post-focal prosody. Post-focal compression (PFC) [2] is widespread among the languages of the world, but is nevertheless a non-universal cue to encoding focus [e.g. 3, 4]. Typologically diverse languages like Estonian [5], Finnish [6, 7], French [8, 9], German [10–12], Greek [13, 14], Hindi [15, 16], Icelandic [17], Mandarin [2], Persian [18] and Swedish [19] show post-focal pitch events in a compressed pitch register.
Based on prior work, I argue for a theory of two distinct post-focal compression types. For instance, partial compression occurs in Finnish, Hindi and Mandarin, while complete compression occurs in German and Greek. The claim pursued here is that post-focal compression comes in different types for a clear functional reason. The phonology of a language governs the need to express the functional load of tones or accents located in the post-focal domain. Lexical and postlexical (intonational) tones have a functional load just as segments do. Their different functions are listed in Table 1. All phonological functions, such as lexical distinction or the demarcation of prosodic constituents, are prosodically expressed. However, pragmatic reduction of post-focal material goes hand in hand with a deletion of the pragmatic meaning of intonational tones. The prediction thus is that languages that express pragmatic meanings via intonation show complete PFC, while all other languages show partial PFC. The conclusion hence is that if the phonology of a given language demands a high need to maintain phonological contrasts post-focally, the post-focal register compression is partial.
If, on the other hand, the phonology demands a low need to express post-focal pitch events because the pragmatic meaning of intonational tones is cancelled out, register compression is complete.

Table 1. Comparison of the function of different types of tones, along with language examples for each tonal category and the type of PFC.
Type of tone           Function                                    PFC type   Languages
lexical tone           lexical distinction                         partial    Mandarin
lexical tone           grammatical                                 partial    ?
lexical pitch accent   lexical distinction                         partial    Swedish
phrase tone            demarcation, phrasing                       partial    Finnish, Hindi
pitch accent           pragmatic meaning & phrasing (headedness)   complete   German, English, Persian

References

[1] Kügler, F. & Calhoun, S. 2020. Prosodic encoding of information structure. A typological perspective. In C. Gussenhoven & A. Chen (eds.), The Oxford Handbook of Language Prosody, Chapter 31. Oxford: Oxford University Press.
[2] Xu, Y. 1999. Effects of tone and focus on the formation and alignment of f0 contours. Journal of Phonetics 27, 55–105.
[3] Xu, Y. 2011. Post-focus compression. Cross-linguistic distribution and historical origin. In E. Zee & W.-S. Lee (eds.), Proceedings of the 17th International Congress of Phonetic Sciences (ICPhS XVII), Hong Kong, 152–155.
[4] Xu, Y., Chen, S.-W. & Wang, B. 2012. Prosodic focus with and without post-focus compression. A typological divide within the same language family? The Linguistic Review 29, 131–147.
[5] Ots, N. 2017. On the phrase-level function of f0 in Estonian. Journal of Phonetics 65, 77–93.
[6] Mixdorff, H., Vainio, M., Werner, S. & Järvikivi, J. 2002. The Manifestation of Linguistic Information in Prosodic Features of Finnish. In B. Bel & I. Marlien (eds.), Proceedings of the Speech Prosody 2002 Conference, 515–518. Aix-en-Provence: Laboratoire Parole et Langage.
[7] Arnhold, A. 2016. Complex prosodic focus marking in Finnish. Expanding the data landscape. Journal of Phonetics 56, 85–109.
[8] German, J. & D'Imperio, M. 2010. Focus, Phrase Length, and the Distribution of Phrase-Initial Rises in French. In Proceedings of the Fifth International Conference on Speech Prosody 2010, paper 207. Chicago, USA.
[9] Féry, C. 2014. Final compression in French as a phrasal phenomenon. In S. K. Bourns & L. L. Myers (eds.), Perspectives on Linguistic Structure and Context. Studies in honor of Knud Lambrecht, vol. 244, 133–156. Amsterdam/Philadelphia: John Benjamins.
[10] Peters, J., Hanssen, J. & Gussenhoven, C. 2015. The timing of nuclear falls. Evidence from Dutch, West Frisian, Dutch Low Saxon, German Low Saxon, and High German. Laboratory Phonology 6.
[11] Kügler, F. & Féry, C. 2017. Postfocal downstep in German. Language and Speech 60, 260–288.
[12] Féry, C. & Kügler, F. 2008. Pitch accent scaling on given, new and focused constituents in German. Journal of Phonetics 36, 680–703.
[13] Gryllia, S. 2008. On the nature of preverbal focus in Greek. A theoretical and experimental approach. International series, vol. 200. Utrecht: LOT.
[14] Baltazani, M. & Jun, S.-A. 1999. Focus and topic intonation in Greek. In J. J. Ohala, Y. Hasegawa, M. Ohala, D. Granville & A. C. Bailey (eds.), Proceedings of the 14th International Congress of Phonetic Sciences (ICPhS), vol. 2, 1305–1308. San Francisco: University of California.
[15] Patil, U., Kentner, G., Gollrad, A., Kügler, F., Féry, C. & Vasishth, S. 2008. Focus, word order and intonation in Hindi. Journal of South Asian Linguistics 1, 53–67.
[16] Kügler, F. 2020. Post-focal compression as a prosodic cue for focus perception in Hindi. Journal of South Asian Linguistics 10, 38–59.
[17] Dehé, N. 2009. An intonational grammar for Icelandic. Nordic Journal of Linguistics 32, 5–34.
[18] Rahmani, H., Rietveld, T. & Gussenhoven, C. 2018. Post-focal and factive deaccentuation in Persian. Glossa: a journal of general linguistics 3, 13.
[19] Bruce, G. 1982. Developing the Swedish Intonation Model. In Working Papers 22, 51–116. Phonetics Laboratory, Department of General Linguistics, Lund University, Sweden.

Different pitch accents affect perceived focus position in German wh-questions

Sophie Kutscheid1, Katharina Zahner2, Adrian Leemann3, Bettina Braun1
1University of Konstanz, 2University of Trier, 3University of Bern

Pitch accent type has been shown to affect prominence perception [1]. Here, we investigate whether it also affects the perception of accent location (focus). To fill this gap, we used wh-questions, which can have focus on the wh-word, the verb, or on the final word, see example (1).

(1) a. [Wann]F zahlt Chris? b. Wann [zahlt]F Chris? c. Wann zahlt [Chris]F?
    Engl. 'When does Chris pay?'

Focused words are typically associated with pitch accents. We concentrate on wh-questions with focus on the verb, which are compatible with situations that question the lexical semantics of the verb (paying vs. claiming money) or its polarity (paying vs. not paying). From studies on lexical stress perception, we know that listeners tend to interpret high-pitched unstressed syllables as stressed [2-5] (high pitch = stress bias). We extend this work and study whether different nuclear pitch accent types (H+L* L-%, L+H* L-%, L*+H H-^H%, Fig. 1) also affect the interpretation of phrasal prominence, and thus the focus, in German wh-questions.¹ To this end, we ran a forced-choice perception study using FindingFive in which 24 German listeners (mean age: 28.8 years; 6 female, 17 male, 1 not indicated) judged the position of the accented word in German wh-questions that consisted of three monosyllabic words, see example (1). Responses were collected on a computer key: 'A' for focus on the wh-word (word 1), 'D' for the verb (word 2), 'G' for the last word (word 3). In the 36 experimental trials, the verb was accented and resynthesized into three different conditions: the pitch peak preceding the verb (H+L* L-%, Fig. 1a, resynthesized from an L+H* L-% recording on the verb), the pitch peak on the verb (L+H* L-%, Fig. 1b, resynthesized half from H+L* L-% and half from L* H-^H% recordings), and the pitch peak after the verb (L* H-^H%, Fig. 1c, resynthesized from an L+H* L-% recording on the verb). The intonation contour was rotated
across three different lists (Latin Square design; 12 items per condition) and combined with 72 filler trials, which had focus on either the first word (wh-element, half with L+H* L-% and half with L* H-^H%) or the third word (half L+H* L-%, half H+L* L-%). All stimuli were recorded by a male native speaker of German and subsequently PSOLA-resynthesized. Responses (word 1, 2, 3) for experimental trials were analysed using multinomial regression (mlogit package [6]). We entered intonation condition as predictor with L+H* L-% as reference level; word 2 (i.e., perceived focus on word 2) was chosen as reference for the response variable. For L+H* L-% (reference), there were more word 2 responses compared to word 1 (ß = -0.66, SE = 0.14, z = -4.8, p < 0.0001) and word 3 responses (ß = -1.14, SE = 0.16, z = -7.0, p < 0.0001). As expected under the high pitch = stress account, the H+L* L-% condition (i.e., word 1 high-pitched) led to significantly more word 1 responses than word 2 responses (ß = 0.85, SE = 0.19, z = 4.5, p < 0.0001). The L*+H H-^H% condition (i.e., word 3 high-pitched) led to more word 3 than word 2 responses (ß = 1.36, SE = 0.21, z = 6.5, p < 0.0001), see Fig. 2. Hence, participants preferably indicated the high-pitched word as focused, which replicates earlier findings on lexical stress, observed in offline and online tasks [2-5]. The position of the pitch peak is thus an important factor in listeners' judgements of focus position in German wh-questions (cf. [2-5]). Consequently, different pitch accent types may lead to misinterpretations of the intended focus structure, thereby potentially hampering the transmission of information structure.

(While L+H* is the prevailing focus accent [7], H+L* can occur in unenthusiastic statements with a recurrent notion [8, 9], and L* H-^H% in rising contours [6].) Our finding bears important implications for second language learners and for the annotation of intonation by researchers, which we will discuss in more detail in the talk.
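
The regression reported above was fitted with the R package mlogit [6]. Purely as an illustration of the model structure, a comparable multinomial logit can be sketched in Python with statsmodels, using invented responses and the same reference levels (word 2 for the response variable, L+H* L-% for the intonation condition); none of the data or names below come from the study itself.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data standing in for the 24 listeners x 36 experimental trials.
rng = np.random.default_rng(1)
data = pd.DataFrame({
    "condition": rng.choice(["H+L*", "L+H*", "L*+H"], size=300),
    "response":  rng.choice(["word1", "word2", "word3"], size=300),
})

# word2 as reference level of the response (code 0), L+H* as reference level of the
# predictor (its dummy is omitted), mirroring the reference levels described above.
y = pd.Categorical(data["response"], categories=["word2", "word1", "word3"]).codes
X = pd.get_dummies(data["condition"])[["H+L*", "L*+H"]].astype(float)
X = sm.add_constant(X)

result = sm.MNLogit(y, X).fit(disp=False)
print(result.summary())
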

1 Note that wh-questions in German can end in a final rise or a final fall [6].

Figure 1: Sound pressure wave, spectrogram and PSOLA-resynthesized intonation contours (smoothed with bandwidth 10) for one experimental word (capitals indicate focus / the accented word; a) L+H* L-%, b) H+L* L-% and c) L* H-^H%).

Figure 2: Average responses to words 1, 2, and 3 across intonation conditions.

References

[1] Baumann, S. and Winter, B., "What makes a word prominent? Predicting untrained German listeners' perceptual judgments," Journal of Phonetics, vol. 70, pp. 20-38, 2018.
[2] Zahner, K., Kutscheid, S., and Braun, B., "Alignment of f0 peak in different pitch accent types affects perception of metrical stress," Journal of Phonetics, vol. 74, pp. 75-95, 2019.
[3] Dilley, L. C. and Heffner, C. C., "The role of f0 alignment in distinguishing intonation categories: Evidence from American English," Journal of Speech Sciences, vol. 3, pp. 3-67, 2013.
[4] Zahner, K., Kember, H., and Braun, B., "Mind the peak: When museum is temporarily understood as musical in Australian English," in Proceedings of the 18th Annual Conference of the International Speech Communication Association (Interspeech), Stockholm, Sweden, 2017, pp. 1223-1227.
[5] Friedrich, C. K., Alter, K., and Kotz, S. A., "An electrophysiological response to different pitch contours in words," Neuroreport, vol. 12, pp. 3189-3191, 2001.
[6] Croissant, Y., "Estimation of random utility models in R: The mlogit package," Journal of Statistical Software, vol. 95, pp. 1-41, 2020.
[7] Welby, P., "Effects of pitch accent position, type, and status on focus projection," Language and Speech, vol. 46, pp. 53-81, 2003.
[8] Kohler, K., "Terminal intonation patterns in single-accent utterances of German: Phonetics, phonology and semantics," Arbeitsberichte des Instituts für Phonetik und Digitale Sprachverarbeitung der Universität Kiel (AIPUK), vol. 25, pp. 115-185, 1991.
[9] Zahner, K., "Pitch accent type affects stress perception in German: Evidence from infant and adult speech processing," PhD thesis, KOPS: University of Konstanz, 2019.

DAY 2 - Poster Session A
SLA: Vowel training & Pronunciation Instruction

Does initial/final accent facilitate the acquisition of the rounded vs unrounded contrast in L2 French oral vowels?

Fabian Santiago1 & Paolo Mairano2
1Université Paris 8 & CNRS (UMR 7023), 2Université de Lille & CNRS (UMR 8163)

Introduction & Goals. The acquisition of L2 prosody (intonation, accentuation, rhythm, etc.) and L2 segmentals (consonants and vowels) has mainly been studied separately. Most L2 phonology models, such as [1] or [4], do not make clear predictions concerning the acquisition of segmentals via their interface with the prosodic structure. Yet, some studies report effects of the L1 prosodic hierarchy on the production of L2 English consonants (e.g., [3]). To the best of our knowledge, no research has studied whether prosody (via the production of accented syllables or boundary tones) has a positive, negative or null effect on the accuracy of L2 French vowels. [2] claims that the production of accented syllables or final melodic contours (rising or falling) has positive effects for better perceiving and producing certain L2 French vowels. For instance, this author assumes that the French vowel /y/ is better perceived and produced by L2 learners when it occurs in strong prosodic positions (when final rising/high pitch movements at the right boundaries of Accentual Phrases (APs) or rising final contours in yes-no neutral questions/major continuations are produced). However, no experimental research has been conducted to validate these pedagogical assumptions. In this paper, we focus on the production of the L2 French rounded vs unrounded oral vowels /i/, /y/, /u/, /e/, /ɛ/, /ø/, /œ/ in two different prosodic conditions: accented vs unaccented. Our goal is to provide empirical evidence testing the alleged positive effects of prosody on the acquisition of these challenging sounds of L2 French.

Methods. We analyzed oral productions of 30 participants: 10 French native speakers (control group), 10 L2 French learners with L1 Spanish, and 10 L2 French learners with L1 English. The data come from read-aloud tasks of two different corpora designed for the analysis of L2 pronunciation [9]. We analyse different acoustic properties of 6.3k vowels. In order to examine the pronunciation accuracy of the vowels, we compute the degree of acoustic overlap via Pillai scores [7] for the following pairs: /i/~/y/, /e/~/ø/ and /ɛ/~/œ/. Pillai scores allow us to estimate the amount of overlap of these vowel categories in the acoustic space of the participants. This metric was obtained by running MANOVAs on F1, F2, F3 and duration as dependent variables, as a function of group and prosodic condition (accented vs unaccented). To determine the presence/absence of an initial/final accent associated with the left/right boundaries of APs in pre-nuclear positions, we use the automatic prosodic labels provided by Polytonia and Prosogram [8], as illustrated in Fig. 1. Vowels associated with the left/right boundaries of other, higher prosodic constituents in French were not analysed.
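
A Pillai score of the kind used here can be obtained from a one-factor MANOVA on the acoustic measures of a single vowel pair. The Python sketch below uses statsmodels and invented token values, and simplifies the design (the study additionally models group and prosodic condition); it is an illustration only, not the authors' analysis code.

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Toy tokens for one speaker and one vowel pair (/i/ vs /y/); all values are invented.
rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "vowel": ["i"] * n + ["y"] * n,
    "F1":  np.r_[rng.normal(280, 25, n),  rng.normal(290, 25, n)],
    "F2":  np.r_[rng.normal(2250, 120, n), rng.normal(1900, 120, n)],
    "F3":  np.r_[rng.normal(3000, 150, n), rng.normal(2450, 150, n)],
    "dur": np.r_[rng.normal(90, 15, n),   rng.normal(95, 15, n)],
})

# Pillai's trace for the vowel factor: values near 0 mean complete overlap,
# values near 1 mean well-separated categories.
fit = MANOVA.from_formula("F1 + F2 + F3 + dur ~ vowel", data=df)
pillai = fit.mv_test().results["vowel"]["stat"].loc["Pillai's trace", "Value"]
print(round(float(pillai), 3))
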
Results & Discussion. In a global perspective, our results show that the production of initial/final accents at the AP level results in a smaller acoustic overlap of L2 French vowels. This observation suggests that the prosodic strengthening observed in L1 [5] results in the production of more canonical L2 vowels as well, partially confirming the claims by [2]. However, this positive effect is not observed across all vowel pairs. We found that /e/~/ø/ is better distinguished in the presence of AP accents for the two groups of learners, i.e. the rounded feature is hyper-articulated in accented positions compared to unaccented positions. However, in the case of Spanish learners, AP accents do not seem to facilitate the distinction of /i/~/y/, contrary to English learners (cf. Figure 2). We discuss these differences in the light of the SLM [4], concerning the role of acoustic (dis-)similarity under different prosodic conditions, and the relevance of prosodic strengthening in the acquisition of new L2 features/sounds.

(Ma sœur)GA (a une peur bleue)GA (de l'obscurité)GA
Figure 1. Prosodic annotations via the Polytonia & Prosogram tools for the utterance Ma sœur a une peur bleue de l'obscurité ('My sister is afraid of darkness') produced by a Spanish learner of L2 French. Initial/final accents of APs (GA) were detected for rising/high pitch contours reaching the median/high pitch range levels of speakers (H or M labels).

Figure 2. F2xF3 vowel charts for the set of vowels /i/, /u/, /y/ according to the three groups and the two prosodic conditions (non accentuée = unaccented, accentuée = accented). Ellipses account for 68% of the vowel token dispersion.

References

[1] Best, C. T. & Tyler, M. D. (2007). Nonnative and second-language speech perception: Commonalities and complementarities. In M. Munro & O.-S. Bohn (eds.), Second language speech learning. Amsterdam, The Netherlands: John Benjamins, 13–34.
[2] Billières, M. (2008). Le statut de l'intonation dans l'évolution de l'enseignement/apprentissage de l'oral en FLE. Le Français dans le Monde, Recherches et applications, 43, 27-37.
[3] Choi, J., Kim, S. & Cho, T. (2016). Effects of L1 prosody on segmental contrast in L2: The case of English stop voicing contrast produced by Korean speakers. J. Acoust. Soc. Am. 139(3), EL76.
[4] Flege, J. E. (1995). Second language speech learning: Theory, findings, and problems. In M. Munro & O.-S. Bohn (Eds.), Second language speech learning. Amsterdam, The Netherlands: John Benjamins, 233–277.
[5] Fougeron, C. (2001). Articulatory properties of initial segments in several prosodic constituents in French. Journal of Phonetics, 29(2), 109–135.
[6] Jun, S.-A. & Fougeron, C. (2002). Realizations of accentual phrase in French intonation. Probus, 14, 147–172.
[7] Mairano, P. et al. (2019). Acoustic distances, Pillai scores and LDA classification scores as metrics of L2 comprehensibility and nativelikeness. Proc. of ICPhS 2019, Melbourne (Australia), 1104-1108.
[8] Mertens, P. (2014). Polytonia: a system for the automatic transcription of tonal aspects in speech corpora. Journal of Speech Sciences 4(2), 17-57.
[9] Santiago, F. & Mairano, P. (2019). Prosodic effects on L2 French vowels: a corpus-based investigation. In S. Calhoun, P. Escudero, M. Tabain & P. Warren (eds.), Proceedings of ICPhS 2019, Melbourne (Australia), 1084-1088.

L1-based perceptual individual differences in the acquisition of second language phonology: Investigating the compactness of native phonetic categories

Vita V. Kogan
University of Kent, Department of English Language and Linguistics, UK

Adult language learners often experience difficulty acquiring a new sound system. Native (L1) phonetic categories influence the perception of non-native sounds and cause a perceptual accent [1], [2]. In turn, not being able to perceive contrastive segment categories that do not exist in the learner's native language results in production problems and accented speech. Yet, some individuals are remarkably successful at perceiving non-native sounds (e.g., [3]). Such instances demonstrate that perceptual ability is subject to high variability. Whereas native language background has been the focus of much second language acquisition research seeking to explain such variability, little attention has been paid to the role of individual differences in perception within the same L1. This study investigated how individual differences in native perception affect the degree of perceived dissimilarity between two members of a novel contrast that does not exist in the participants' L1. We argue that not only are individuals with various L1s (e.g., Spanish, Japanese, Greek) equipped differently for the task of non-native perception, but individuals with the same L1 (e.g., Spanish) also vary in how their native phonological categories are represented in the perceptual space. Such variability is observable in measures of compactness of L1 phonetic categories [4], and its effects on non-native perception can be assessed by relating the degree of compactness to the perceived dissimilarity between novel contrasting sounds. We hypothesized that compact L1 categories give an initial advantage in distinguishing non-native contrasts. Because of the ethereal nature of speech perception and the methodological difficulties associated with measuring it, but also because of the notion that speakers (and listeners) of the same language vary minimally in the way they process their native language, only a handful of empirical studies have investigated L1-based individual differences in perception.
Sixty-eight Spanish monolinguals (average age = 41, females = 26) participated in the present study. The degree of compactness of their native category /i/ was measured through a goodness-of-fit rating task, in which participants listened to synthesized variants of the Spanish /i/ vowel (differing in F1, F2 or both) and rated them as either good or bad exemplars of their internal representation of the category /i/ on an intuitive scale. These ratings provided an individual /i/ compactness index for each participant, which was related to the individual perceived dissimilarity score for the novel Russian contrast /i/-/ɨ/. The results obtained confirmed the hypothesis. Even though L1-based individual differences in perception were small, the compactness of the L1 category /i/ contributed significantly to the participants' ability to perceive the acoustic distance between Russian /i/ and /ɨ/ (mixed-effects linear regression, β = 0.023, t-value = 2.014). Interestingly, true monolinguals (individuals who reported speaking no foreign languages) were more successful at distinguishing between Russian /i/ and /ɨ/ than functional monolinguals (individuals who had studied foreign languages before but reported not being able to hold a conversation in any of them). True monolinguals also had, on average, a more compact native category /i/ in comparison to functional monolinguals (an independent two-tailed Wilcoxon test, W = 193, p = .04). The findings suggest that having more compact native categories might facilitate the process of category formation for unfamiliar sounds. Native category compactness as a source of individual variability might also explain cases of ultimate attainment in L2 phonetics – so-called phonetic talent [5]. Overall, the concept of compactness in relation to L1 phonetic categories and perception might help to better understand individual differences in non-native perception.
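
To make the analysis pipeline concrete, the following Python sketch relates a per-participant compactness index to a dissimilarity score and compares the two monolingual groups with a two-tailed rank-sum test. All data, the particular operationalisation of compactness, and the use of a simple OLS fit (rather than the mixed-effects model over trial-level data used in the study) are assumptions of this illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Invented per-participant data: a compactness index for the native /i/ category
# and a perceived-dissimilarity score for Russian /i/ vs /ɨ/.
rng = np.random.default_rng(3)
n = 68
df = pd.DataFrame({
    "participant": np.arange(n),
    "compactness": rng.uniform(0.2, 0.9, n),
    "group": rng.choice(["true_mono", "functional_mono"], size=n),
})
df["dissimilarity"] = 1.5 + 0.8 * df["compactness"] + rng.normal(0, 0.3, n)

# Regression of perceived dissimilarity on compactness (simplified stand-in for the
# mixed-effects model reported in the abstract).
print(smf.ols("dissimilarity ~ compactness", data=df).fit().summary())

# Two-tailed rank-sum comparison of compactness in the two monolingual groups.
a = df.loc[df.group == "true_mono", "compactness"]
b = df.loc[df.group == "functional_mono", "compactness"]
print(stats.mannwhitneyu(a, b, alternative="two-sided"))
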
Studies in Second Language Acquisition, Sixty-eight Spanish monolinguals (average age = 41, females = 26) participated in 447-465. the present study. The degree of compactness of their native category /i/ was measured [4] Kartushina, N., & Frauenfelder, U. H. (2013). On the role of L1 speech production in L2 through a goodness-of-fit rating task, where participants listened to synthesized variants perception: Evidence from Spanish learners of French. In Proceedings of the 14th Interspeech of the Spanish /i/ vowel (differing in F1, F2 or both) and rated them as either good or bad Conference, 2118-2122. exemplars of their internal representation of the category /i/ on an intuitive scale. These [5] Jilka, M., Anufryk, V., Baumotte, H., Lewandowska, N., Rota, G., & Reiterer, S. (2007). ratings provided an individual /i/ compactness index for each participant that was related to Assessing individual talent in second language production and perception. In Proceedings of the individual perceived dissimilarity score for the novel Russian contrast /i - ɨ/. The results the 5th ISALSS (Florianópolis), 243–258. obtained confirmed the hypothesis. Even though L1-based individual differences in perception were small, compactness of the L1 category /i/ contributed significantly to the participants’ ability to perceive the acoustic distance between the Russian /i/ and /ɨ/ (mixed-effects linear regression, β = 0.023, t-value = 2.014). Interestingly, true monolinguals (individuals who reported speaking no foreign languages) were more successful at distinguishing between Russian /i/ and /ɨ/ than functional monolinguals (individuals who had studied foreign languages before but reported not being able to hold a conversation in any of them). True monolinguals also had on average a more compact native category /i/ in comparison to functional monolinguals (an independent two-tailed Wilcoxon test, W = 193, p = .04). The findings suggest that having more compact native categories might facilitate the process of category formation for unfamiliar sounds. Native category compactness as
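The reported effect (β = 0.023, t = 2.014) comes from a mixed-effects linear regression relating the /i/ compactness index to perceived dissimilarity of the Russian /i/-/ɨ/ contrast. Below is a minimal Python sketch of such a model on invented data; the column names and the random-effects structure (a single per-participant intercept) are assumptions for illustration, not the author's code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one dissimilarity rating per trial per participant,
# plus one compactness index per participant for the native /i/ category.
rng = np.random.default_rng(1)
participants = np.repeat(np.arange(68), 20)           # 68 listeners, 20 trials each
compactness = np.repeat(rng.normal(0, 1, 68), 20)     # hypothetical compactness index
dissim = 0.023 * compactness + rng.normal(0, 1, compactness.size)

df = pd.DataFrame({"participant": participants,
                   "compactness": compactness,
                   "dissimilarity": dissim})

# Mixed-effects linear regression: fixed effect of compactness,
# random intercept per participant (a simplification of the full design).
model = smf.mixedlm("dissimilarity ~ compactness", df, groups=df["participant"])
print(model.fit().summary())
```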

The role of auditory selective attention in learning to perceive and produce L2 vowels

Josh Frank and Joan C. Mora
Universitat de Barcelona

High Variability Phonetic Training (HVPT) is a method that utilizes multiple talkers in different phonetic contexts to train learners to perceive L2 speech sounds [1,2]. Individual differences in attentional resources, especially in the auditory domain, may explain learners' difficulties or advantages in learning to perceive and produce L2 sound contrasts. Although auditory attention skills have been associated with advances in L2 perception skills [3,4,5], the role of auditory selective attention (ASA) in training learners to perceive and produce L2 vowels through HVPT is currently under-studied (but see [6]), as is the extent to which phonetic training might help to improve L2 learners' ASA [7]. The current study has two aims: to examine whether individual differences in ASA explain gains in L2 vowel perception and production after HVPT, and to examine the potential of HVPT to improve ASA in the L2.
Participants (N=42) were native Catalan-Spanish learners of English as a foreign language. HVPT consisted of 4 x 35-minute training sessions focusing on the English /æ/-/ʌ/ vowel contrast, which is difficult for Catalan-Spanish learners of English [8]. The participants were trained in perception using identification and AX discrimination tasks, and in production with an immediate repetition task, all using minimal-pair non-words with /æ/ and /ʌ/ in a variety of phonetic contexts and spoken by four different native speakers. HVPT effects in perception were pre- and post-tested through ABX discrimination and lexical decision tasks, and in production through delayed word (DWR) and sentence repetition (DSR) tasks. The ASA task [9] consists of the identification of information contained in a sentence spoken by a single talker while a similar, competing sentence produced by a second talker is heard simultaneously. It was administered at two time points, before and after the training sessions, in order to examine (1) the correlation between ASA task performance and gains in perception and production of the vowels from pre- to post-test, and (2) whether the HVPT sessions improve ASA task performance from before to after training. To obtain a measure of L2 English proficiency, two tasks were used: the X/Y-Lex, which is based on knowledge of vocabulary from the 0-10,000-word categories, and elicited imitation, in which a sentence can be imitated only if the learner comprehends it. A control group (N=13) performed all pre- and post-tests but was not given any of the HVPT sessions.
Results from the ABX discrimination task revealed that HVPT leads to significant improvement in /æ/-/ʌ/ perception from pre- to post-test for the experimental group when compared to the control group. In addition, using K-cluster analysis, participants were placed into High and Low ASA groups; only the High ASA group that performed HVPT showed significant improvement in the ABX discrimination task. ASA scores did improve significantly from pre- to post-test for the experimental group, but they also did for the control group, suggesting that a practice effect may explain these results. The ASA task is difficult; for future experiments to assess the effects of HVPT on ASA more accurately, an additional pre-test may be needed to familiarize participants with the task. The lexical decision and DSR tasks did not show significant results. The DWR production data are currently being analyzed. This research has implications for understanding L2 perception and production gains from HVPT, including their relationship with auditory selective attention skills. The degree of improvement in auditory selective attention after the four sessions of HVPT, as well as future directions and pedagogical implications, will be discussed.

References
[1] Thomson, R.I. (2018). High variability [pronunciation] training (HVPT): A proven technique about which every language teacher and learner ought to know. Journal of Second Language Pronunciation, 4(2), 207–230.
[2] Barriuso, T. A., & Hayes-Harb, R. (2018). High Variability Phonetic Training as a Bridge from Research to Practice. CATESOL Journal, 30(1), 177-194.
[3] Darcy, I., Mora, J. C., & Daidone, D. (2014). Attention control and inhibition influence phonological development in a second language. Concordia Working Papers in Applied Linguistics, Volume 5, March 2014, pp. 115-129.
[4] Safronova, E. (2016). The Role of Cognitive Ability in the Acquisition of Second Language Perceptual Competence. Unpublished PhD Thesis. Universitat de Barcelona.
[5] Mora, J. C. & Mora-Plaza, I. (2019). Contributions of cognitive attention control to L2 speech learning. In Nyvad, A. M., Hejná, M., Højen, A., Jespersen, A. B., & Sørensen, M. H. (eds.), A Sound Approach to Language Matters - In Honor of Ocke-Schwen Bohn. Dept. of English, School of Communication & Culture, Aarhus University, Denmark, 477-499.
[6] Frank, J. (2020). The impact of auditory attention in L2 vowel perception and production by means of phonetic training. Unpublished MA Thesis. Universitat de Barcelona. http://hdl.handle.net/2445/17221.
[7] Laffere, A., Dick, F., & Tierney, A. (2020). Effects of auditory selective attention on neural phase: individual differences and short-term training. NeuroImage, 116717.
[8] Aliaga-Garcia, C. (2017). The effect of auditory and articulatory phonetic training on the perception and production of L2 vowels by Catalan-Spanish learners of English. Unpublished PhD Thesis. Universitat de Barcelona.
[9] Humes, L. E., Lee, J. H., & Coughlin, M. P. (2006). Auditory measures of selective and divided attention in young and older adults using single-talker competition. The Journal of the Acoustical Society of America, 120(5), 2926-2937.
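The abstract splits participants into High and Low ASA groups via a "K-cluster analysis"; one plausible reading is a two-cluster k-means on ASA scores. The sketch below illustrates that reading with made-up scores; it is not the authors' procedure, and the grouping it produces is purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical pre-test ASA accuracy scores (proportion correct) for 42 learners.
rng = np.random.default_rng(2)
asa_scores = np.clip(rng.normal(0.65, 0.15, 42), 0, 1).reshape(-1, 1)

# Two-cluster k-means; the cluster with the higher centre is the "High ASA" group.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(asa_scores)
high_label = int(np.argmax(km.cluster_centers_.ravel()))
groups = np.where(km.labels_ == high_label, "High ASA", "Low ASA")

for label in ("High ASA", "Low ASA"):
    print(label, "n =", int((groups == label).sum()),
          "mean score =", round(float(asa_scores[groups == label].mean()), 3))
```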

Acoustic-Orthographic Interface in L2 English Phonology: The Effect of Word Frequency, Concreteness and Grammatical Category

Sviatlana Karpava¹ and Elena Kkese²
¹University of Cyprus, ²Cyprus University of Technology

The present study investigated the acoustic-orthographic interface in L2 phonology by L1 Cypriot-Greek (CG) speakers. Orthographic forms, which represent the sounds and/or words of a language in writing, can affect language learners' perception, production, and acquisition of L2 phonology and morphology. The influence of orthography on L2 phonology can be positive, facilitating L2 acquisition and pronunciation (Escudero et al., 2008); it can be negative, leading to non-nativelike pronunciation (Bassetti and Atkinson, 2015; Young-Scholten and Langer, 2015); or it can have mixed or no effects (Escudero, 2015). This happens because L2 learners have already acquired the phonological system and orthographic properties of the L1 (first language) and may draw on this knowledge while acquiring the target language. Investigating this interface seems particularly interesting in the L2 classroom context. Orthographic forms are usually ignored in the classroom, especially when the native and target languages involve an alphabetic system, regardless of the type. Nonetheless, orthographic input can facilitate or hinder L2 phonological development; further, it could have mixed or no effects.
For this study, seventy L1 CG undergraduate students (40 male and 30 female; 17-27 years old; low intermediate to advanced L2 English proficiency) took part in a perception-production task, in which they had to listen to a set of words and record these on a given list in a lab using headphones, in order not to mishear the stimuli. The aim was to examine whether there is an effect of L1 transfer, based on the differences between the vowel and consonant inventories and orthographies of CG and English, as well as of age, L2 English proficiency and L2 language experience, on non-native speech perception and written production of L2 English sounds. With reference to the orthographies of the two linguistic varieties, CG uses the SMG (Standard Modern Greek) orthographic system, which is transparent: there are thirty-three graphemes that stand for separate phonemes (one-to-one correspondences), with only a few exceptions (Kkese, 2020), which were not used for the present study. English, on the other hand, uses an opaque orthographic system, since there may be many-to-one grapheme-phoneme correspondences; the orthography uses twenty-six graphemes to represent forty-four phonemes (Kkese, 2020). For this task, 120 test items were developed; sixty involved vowels and the rest consonants. There were 10 conditions for consonant sounds (6 test items each): [ð], [z], [θ], [v], [d], [ŋ], [h], [b], [g], [ɹ], and 10 conditions for vowel sounds (6 test items each): [æ], [ɜː], [ɔː], [i:], [u:], [ɑ:], [e], [ʌ], [ə], [ʊ]. The dictation task was split into 6 dictation sessions with 20 test items each (10 consonant and 10 vowel test items). The test items were randomised per condition, frequency (high- and low-word frequency), number of syllables in a word (one and two syllables), position of the sound in a word (initial, medial, and final), grammatical category (lexical: nouns, verbs, adjectives, adverbs), word concreteness and imagery, and the voice of the presenter (male, female). Differences in the dependent variable (percentage of correctness) were examined as a function of the independent variables (the seven factors mentioned above).
The findings suggest that there is an effect of the L1 CG phonological and orthographic systems on L2 English vowel and consonant sound perception and written production. Based on the findings, the most difficult consonants for perception appeared to be [ð], [θ], [ŋ], [g]. The long vowels most vulnerable for perception and production were [ɜː], [ɑ:], [i:]; in terms of short vowels, these were [æ], [ʌ], [ʊ]. The success of correct word transcription depends on word frequency rather than on the number of syllables and the position of the sound in the word, as well as on characteristics of the acoustic input. The participants showed better results for nouns than for other categories (nouns > verbs > adjectives/adverbs), and for concrete than for abstract words. This could be due to the fact that concreteness affects learning, recognition memory and the speed of visual recognition, reading and spelling (Spreen and Schulz, 1966; Brysbaert et al., 2014; Borghi et al., 2017). Age, L2 English proficiency, and the quality and quantity of L2 English input are important factors to be taken into consideration. Overall, the study has important teaching and learning implications, as language teachers and students can benefit from awareness that L2 orthographic input can support or interfere with L2 pronunciation development. L2 teachers should use L2 orthographic/written input to support students' perception and production abilities, especially for difficult, novel sounds and contrasts.

References:
[1] Bassetti, B., & Atkinson, N. 2015. Effects of orthographic forms on pronunciation in experienced instructed second language learners. Applied Psycholinguistics 36, 67-91.
[2] Borghi, A., Binkofski, F., Castelfranchi, C., Cimatti, F., Scorolli, C., & Tummolini, L. 2017. The challenge of abstract concepts. Psychological Bulletin 143, 263-292.
[3] Brysbaert, M., Warriner, A., & Kuperman, V. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods 46, 904-911.
[4] Escudero, P. 2015. Orthography plays a limited role when learning the phonological forms of new words: The case of Spanish and English learners of novel Dutch words. Applied Psycholinguistics 36, 7-22.
[5] Escudero, P., Hayes-Harb, R., & Mitterer, H. 2008. Novel second-language words and asymmetric lexical access. Journal of Phonetics 36, 345-360.
[6] Kkese, E. 2020. L2 Writing Assessment: The Neglected Skill of Spelling. Cambridge Scholars Publishing.
[7] Spreen, O., & Schulz, R. 1966. Parameters of abstraction, meaningfulness, and pronunciability for 329 nouns. Journal of Verbal Learning & Verbal Behavior 5, 459-468.
[8] Young-Scholten, M., & Langer, M. 2015. The role of orthographic input in second language German: Evidence from naturalistic adult learners' production. Applied Psycholinguistics 36, 93-114.
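The dependent variable is trial-level transcription correctness, analysed against factors such as word frequency, grammatical category and concreteness. A minimal way to inspect that kind of data is a per-factor accuracy breakdown plus a logistic regression on correctness; the sketch below uses invented data and column names and is not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented trial-level data: 1 = word transcribed correctly, 0 = not.
rng = np.random.default_rng(3)
n = 1200
df = pd.DataFrame({
    "correct": rng.integers(0, 2, n),
    "frequency": rng.choice(["high", "low"], n),
    "category": rng.choice(["noun", "verb", "adj_adv"], n),
    "concreteness": rng.choice(["concrete", "abstract"], n),
})

# Percentage correct by factor level.
print(df.groupby("frequency")["correct"].mean())
print(df.groupby("category")["correct"].mean())

# Logistic regression of correctness on the categorical predictors.
model = smf.logit("correct ~ C(frequency) + C(category) + C(concreteness)", data=df)
print(model.fit(disp=False).summary())
```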

Phono-lexical representations of old and new word forms in the mental lexicon: are both equally malleable?

Miren Adrian and Joan C. Mora
Universitat de Barcelona

Recent evidence has shown that L2 learners function with inaccurate phono-lexical representations [1, 2, 3]. Yet, Darcy and Thomas [2] found, for perception and in an L2 context, that L1 Korean learners of English could develop accurate representations for L2 English words (e.g., blue as [bluː]) that had previously been misrepresented in the mental lexicon (e.g. [bʊˈluː]). But how does the "updating" of inaccurate representations occur? No research to date has explored this question in a FL learning setting, nor has research in this context examined the extent to which the malleability of phono-lexical forms may vary as a function of when a word was learnt. The present study investigated the potentially differential malleability of previously-established and newly-established phono-lexical representations (old and new words) encoding a confusable L2 vowel contrast (English /æ/-/ʌ/).
Fifty-seven Spanish-Catalan bilingual learners of English were randomly assigned to either an experimental (N = 42) or a control group (N = 15). The experimental group engaged in 4 x 30-minute sessions of phonetically-oriented (non-lexical) high-variability phonetic training (HVPT) on English /æ/-/ʌ/ through AX discrimination, identification, and immediate repetition tasks. Non-lexical training based on nonwords was deemed to enhance training gains by avoiding the activation of inaccurately represented phonological forms [4, 5]. Since our study aims at comparing the malleability of old and new word forms, target known (old) and unknown (new) /æ/ and /ʌ/ words were first identified through a lexical knowledge test. Participants then engaged in a series of word-learning tasks with the unknown words to acquire and consolidate form-meaning mappings for them, on which they were tested before the onset of the HVPT to ensure the mappings had been acquired. These target old (16 tokens) and new (16 tokens) word forms were then pre- and post-tested before and after the HVPT, in perception through a lexical decision (LD) task and in production through a delayed sentence repetition (DSR) task. The LD test provided a measure of perceptual sensitivity to the /æ/-/ʌ/ contrast in a lexical context by measuring learners' ability to correctly identify nonwords (e.g. [dæk] for duck, or [mʌp] for map). We computed a measure of accuracy of nonword identification as an index of learners' ability to lexically encode the /æ/-/ʌ/ contrast for old and new words. As for the DSR test, it provided vowel frequency values (f0, F1, F2) for /æ/ and /ʌ/ in old and new words, allowing us to observe the malleability of phono-lexical representations at the production level. Production gains for each word type (old, new) were obtained by calculating spectral distance scores between participants' vowel productions and those of native speakers. Previously collected pilot data had suggested that phono-lexical representations of new words were easier to update in production than those of old words, which were more resistant to change. The data from this study, currently under analysis, will determine whether L2 learners' phono-lexical representations for new words are easier to update than those of old words, tested both in perception and production. Our working hypothesis is that initially inaccurate L2 phonological representations are likely to become robustly entrenched over time as L2 vocabulary grows, hindering pronunciation development. Consequently, for words that were learnt a long time ago, updating pronunciation might require the type of intensive exposure provided by L2 immersion contexts or high variability phonetic training.

Figure 1. Research design

References
[1] Darcy, I. & Holliday, J. 2019. Teaching an old word new tricks: phonological updates in the L2 lexicon. In J. Levis, C. Nagle, & E. Todey (Eds.), Proceedings of the 10th Pronunciation in Second Language Learning and Teaching Conference 2018 (Ames, IA: Iowa State University), 10-26.
[2] Darcy, I., & Thomas, T. 2019. When blue is a disyllabic word: Perceptual epenthesis in the mental lexicon of second language learners. Bilingualism: Language and Cognition, 1-19.
[3] John, P. & Cardoso, W. 2020. Are word-final consonants codas? Evidence from Brazilian Portuguese ESL/EFL learners. In J. Volin & R. Skarnitzl (eds.), Pronunciation of English by speakers of other languages. Newcastle upon Tyne: Cambridge Scholars Publishing, 117-138.
[4] Thomson, R. I. and Derwing, T. M. 2016. Is phonemic training using nonsense or real words more effective? In J. Levis, H. Le, I. Lucic, E. Simpson and S. Vo (Eds.), Proceedings of the 7th Pronunciation in Second Language Learning and Teaching Conference 2016 (Ames, IA: Iowa State University), 88-97.
[5] Ortega, M., Mora-Plaza, I. & Mora, J. C. Forthcoming. Differential effects of lexical and nonlexical HVPT on the production of L2 vowels. In Kirkova-Naskova, A., Henderson, A., & Fouz-González, J. (Eds.) (2021), English pronunciation instruction: Research-based insights. Amsterdam, The Netherlands: John Benjamins.
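The abstract does not specify how the spectral distance scores are computed; a common choice is the Euclidean distance between learner and native-speaker (F1, F2) values, with gains expressed as the pre-to-post reduction in that distance. The sketch below shows that reading with invented formant values; the reference values, any normalisation step and the gain definition are assumptions.

```python
import numpy as np

def spectral_distance(f1, f2, ref_f1, ref_f2):
    """Euclidean distance in the F1-F2 plane between a learner token and a
    native-speaker reference value (all in Hz)."""
    return np.hypot(f1 - ref_f1, f2 - ref_f2)

# Hypothetical native reference for English /ae/ and one learner's pre/post tokens.
native_ae = (660.0, 1720.0)
pre_token, post_token = (560.0, 1620.0), (630.0, 1700.0)

gain = (spectral_distance(*pre_token, *native_ae)
        - spectral_distance(*post_token, *native_ae))
print("production gain:", round(gain, 1), "Hz")  # positive = moved closer to native
```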

The Role of Speaking Anxiety on L2 English Speaking Fluency, Pronunciation Accuracy and Comprehensibility

Gisela Sosa-López
University of Barcelona

Psycho-social factors such as anxiety are an important source of individual differences in the acquisition of foreign languages [1]. Foreign Language Anxiety (FLA) is known to have a negative effect on language achievement [1], since it hinders learners' ability to perform successfully in a FL class [2]. Despite extensive previous research on FLA, speaking anxiety (the feeling of nervousness when speaking the target FL) and its potential influence on the fluency and pronunciation of L2 speech have so far received little attention, even in instructed FL classrooms, where learners report speaking-oriented activities, especially oral presentations, to be highly anxiety-evoking [3]. In addition, certain conditions during communicative interaction in an L2 may increase speakers' anxiety levels, such as speaking to a native L2 interlocutor [9, 10], perceiving negative attitudes from interlocutors [11, 12], or the type and complexity of the oral task to be performed in the L2 [3, 7, 15]. These conditions have been found to affect speech mainly in terms of speech rate and pause frequency and duration [7, 15], but to the best of our knowledge their effects on L2 pronunciation and comprehensibility have not been investigated.
The present study examined the effects of L2 speaking anxiety on L2 speaking fluency, pronunciation accuracy (accentedness) and comprehensibility by manipulating two interlocutor-related communicative conditions during two L2 oral narrative tasks: nativeness and collaborativeness. Nativeness (native, non-native) was operationalized as a within-subjects variable and was manipulated by having participants interact in an oral task with different interlocutors: a native English speaker (native) and a Spanish-accented L2 English speaker (non-native). Collaborativeness (collaborative, non-collaborative) was operationalized as a between-subjects variable, so that participants were assigned to the native and non-native interlocutors described above under one of two conditions during oral interaction and task performance. In one condition, interlocutors were very kind and helpful and did not generate communication breakdowns (collaborative). In the other condition, interlocutors were cold and unhelpful and generated communication breakdowns (non-collaborative). Thirty-four EFL learners, after having engaged in conversation with their assigned interlocutors, performed two oral narrative tasks consisting of the retelling of two excerpts from Charles Chaplin's film Alone and Hungry. L2 learners' oral narratives were assessed for speaking fluency through speed (speech and articulation rates) and breakdown (pause frequency and duration within and across AS-units, phonation-time ratio and mean length of runs) fluency measures. In addition, they were also assessed for pronunciation accuracy and comprehensibility through native speakers' ratings of accentedness and ease of understanding, respectively. Speaking anxiety levels were measured through two Likert-scale questionnaires and physiological measures of emotional arousal: heart rate (HR) and galvanic skin response (GSR). Proficiency was controlled for through an elicited imitation task [16]. Results showed non-significant effects of speaking anxiety on speaking fluency measures. HR measures were sensitive to the interlocutors' nativeness, with higher values for the non-native interlocutor condition. However, GSR was not sensitive to any of the experimental conditions. Further data analyses will show the extent to which individual stress levels across conditions are reflected (and how) in the pronunciation accuracy and comprehensibility ratings of L2 learners' speech.

References:
[1] Teimouri, Y., Goetze, J., & Plonsky, L. (2019). Second language anxiety and achievement: a meta-analysis. Studies in Second Language Acquisition, pp. 1-25.
[2] Horwitz, E. K., Horwitz, M. B. & Cope, J. (1986). Foreign Language Classroom Anxiety. The Modern Language Journal, 70(2), pp. 125–132.
[3] Young, D. J. (1990). An investigation of students' perspectives on anxiety and speaking. Foreign Language Annals, 23, pp. 539–553.
[4] Price, L. M. (1991). The subjective experience of foreign language anxiety: Interviews with highly anxious students. In Language anxiety: From theory and research to classroom implications, eds. E. K. Horwitz and D. J. Young, 101–108. Upper Saddle River, NJ: Prentice Hall.
[5] Mak, B. (2011). An exploration of speaking-in-class anxiety with Chinese ESL learners. System, 39, pp. 202–214.
[6] Liu, M. & Jackson, J. (2008). An exploration of Chinese EFL learners' unwillingness to communicate and Foreign Language Anxiety. The Modern Language Journal, 92, pp. 71-86.
[7] Pérez Castillejo, S. (2018). The role of foreign language anxiety on L2 utterance fluency during a final exam, pp. 1-19.
[8] Phillips, E. M. (1992). The effects of language anxiety on student oral test performance and attitudes. The Modern Language Journal, 76, pp. 14-26.
[9] Woodrow, L. (2006). Anxiety and speaking English as a second language. RELC Journal, 37(3), pp. 308-328.
[10] Çagatay, S. (2015). Examining EFL students' foreign language speaking anxiety: the case at a Turkish state university. Procedia - Social and Behavioral Sciences, 199(3), pp. 648-656.
[11] Bassett, R., Behnke, R. R., Carlile, L. W., & Rogers, J. (1973). The effects of positive and negative audience responses on the autonomic arousal of student speakers. The Southern Speech Communication Journal, 38, pp. 255-261.
[12] Hartanto, D., Kampmann, I. L., Morina, N., Emmelkamp, P.G.M., Neerincx, M.A. & Brinkman, W-P. (2014). Controlling social stress in virtual reality environments. PLoS ONE, 9(3), pp. 1-17.
[13] Goberman, A. M., Hughes, S. & Haydock, T. (2011). Acoustic characteristics of public speaking: Anxiety and practice effects. Speech Communication, 53(6), pp. 867–876.
[14] Christenfeld, N., & Creager, B. (1996). Anxiety, alcohol, aphasia, and ums. Journal of Personality and Social Psychology, 70(3), pp. 451–460.
[15] Mora, J.C. & Mora-Plaza, I. (2019). The effect of task complexity and speaking anxiety on L2 pronunciation and speaking fluency. 43rd AEDEAN Conference. November 13-15, 2019. University of Alicante (Spain).
[16] Ortega, L., Iwashita, N., Rabie, S. & Norris, J. M. (2002). An investigation of elicited imitation tasks in crosslinguistic SLA research. Conference Handout from Paper Presented at the Meeting of the Second Language Research Forum, Toronto, Canada.
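The speed and breakdown fluency measures named above are standard temporal quantities derived from a syllable count and a list of silent pauses. A minimal sketch of the usual operationalisations is given below; the pause threshold and exact definitions are assumptions for illustration, not the author's scripts.

```python
def fluency_measures(n_syllables, total_time, pause_durations):
    """Common temporal fluency measures for one narrative.

    n_syllables     -- number of syllables produced
    total_time      -- total speaking time in seconds, pauses included
    pause_durations -- durations (s) of silent pauses above some threshold
    """
    pause_time = sum(pause_durations)
    phonation_time = total_time - pause_time
    return {
        "speech_rate": n_syllables / total_time,            # syll/s, pauses included
        "articulation_rate": n_syllables / phonation_time,  # syll/s, pauses excluded
        "phonation_time_ratio": phonation_time / total_time,
        "mean_length_of_run": n_syllables / (len(pause_durations) + 1),
    }

print(fluency_measures(n_syllables=180, total_time=75.0,
                       pause_durations=[0.8, 1.2, 0.6, 1.5, 0.9]))
```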

Effects of task complexity on second language fluency and pronunciation

Joan C. Mora, Ingrid Mora-Plaza and Natalia Fullana
Universitat de Barcelona

Task-based language teaching (TBLT) research on task complexity has shown that more cognitively complex tasks enhance a focus on form and result in oral productions of increased linguistic accuracy and complexity, hence promoting L2 lexical, grammatical and pragmatic development [1,2,3]. According to Robinson's [4] Cognition Hypothesis, manipulating task complexity may trigger linguistic development by enhancing learners' attention to form while they are engaged in communicative tasks. Also, speaking fluency has been consistently shown to decrease as the cognitive complexity of a task increases [5,6]. However, whether increased task complexity enhances attention to phonetic form and positively impacts L2 pronunciation accuracy is still a largely under-researched issue in TBLT [7]. Whereas some recent studies indicate that increased cognitive complexity may benefit segmental accuracy in production [8] and perception [9], others have shown that overall pronunciation accuracy decreases as L2 task demands increase [10]. The aim of this study is to explore the quality of L2 learners' oral productions in terms of speaking fluency (speed and breakdown fluency measures) and objective and subjective global measures of pronunciation accuracy, such as pitch range [11] and accentedness and comprehensibility, in two versions of an oral narrative task differing in cognitive complexity.
The present study manipulated task complexity within subjects by asking 34 advanced L2 English learners to perform a simple and a complex version of comparable monologic problem-solving tasks [12] in L1 Spanish/Catalan (map tasks, used as a baseline) and L2 English (fire chief tasks). Languages and task versions were counterbalanced across participants. Task complexity effects on learners' L2 speech were assessed through temporal measures of speed and breakdown fluency (speech and articulation rates, pause frequency and duration, phonation-time ratio and mean length of run), objective measures of pitch range, and 7 English native speakers' judgements of accentedness (perceived pronunciation accuracy) and comprehensibility (ease of understanding). Individual differences in L2 proficiency (vocabulary size and elicited imitation) and working memory (digit span) were controlled for.
The results showed that language (L1 vs. L2) and cognitive complexity (simple vs. complex) had an effect on learners' fluency. As expected, participants spoke and articulated faster in the L1 than in the L2, and produced more and longer pauses, spending a higher proportion of time pausing, in the L2 than in the L1. In addition, increased task complexity appeared to negatively affect both L1 and L2 speaking fluency. Speech rate was lower and pause length was longer on complex than on simple tasks. The total proportion of speaking time and the mean length of runs were lower on complex than on simple tasks. Concerning L2 pronunciation, a main effect of language on pitch range was found, whereby pitch range was lower in the L2 than in the L1. Task complexity did not seem to have an effect on pitch range, but it affected accentedness and comprehensibility. That is, the complex-task oral narratives were judged to be more strongly accented and less comprehensible than those of the simple task, meaning that the increased demands of a more complex task might have prevented participants from paying attention to pronunciation and negatively influenced pronunciation accuracy and comprehensibility. The findings of this study contribute to incipient research on the role of task complexity in L2 pronunciation and point to the need for further investigation into how objective and subjective measures of L2 pronunciation may relate to task complexity.

References
[1] Kim, Y., & Taguchi, N. 2015. Promoting task-based pragmatics instruction in EFL classroom contexts: The role of task complexity. Modern Language Journal, 99, 656–677.
[2] Robinson, P. (Ed.). 2011. Second language task complexity: Researching the cognition hypothesis of language learning and performance. Amsterdam, The Netherlands: Benjamins.
[3] Robinson, P. & Gilabert, R. 2007. Task complexity, the Cognition Hypothesis and second language learning and performance. International Review of Applied Linguistics in Language Teaching, 45, 161-176.
[4] Robinson, P. 2005. Cognitive complexity and task sequencing: Studies in a componential framework for second language task design. IRAL-International Review of Applied Linguistics in Language Teaching, 43(1), 1-32.
[5] Baralt, M., Gilabert, R., & Robinson, P. (Eds.). 2014. Task sequencing and instructed second language learning. London, UK: Bloomsbury.
[6] Housen, A., Kuiken, F., & Vedder, I. 2012. Complexity, accuracy and fluency. In Dimensions of L2 performance and proficiency: Complexity, accuracy and fluency in SLA. Amsterdam, The Netherlands: Benjamins.
[7] Gurzynski-Weiss, L., Long, A. Y., & Solon, M. 2017. TBLT and L2 pronunciation: Do the benefits of tasks extend beyond grammar and lexis? Studies in Second Language Acquisition, 39(2), 213-224.
[8] Solon, M., Long, A. Y., & Gurzynski-Weiss, L. 2017. Task complexity, language-related episodes, and production of L2 Spanish vowels. Studies in Second Language Acquisition, 39(2), 347-380.
[9] Mora, J. C. & Levkina, M. 2018. Training vowel perception through map tasks: The role of linguistic and cognitive complexity. In J. Levis (Ed.), Proceedings of the 9th Pronunciation in Second Language Learning and Teaching conference, ISSN 2380-9566, University of Utah, September, 2017 (pp. 151-162). Ames, IA: Iowa State University.
[10] Crowther, D., Trofimovich, P., Isaacs, T. & Saito, K. 2018. Linguistic dimensions of second language accentedness and comprehensibility vary across speaking tasks. Studies in Second Language Acquisition, 40, 443-457.
[11] Mennen, I., Schaeffler, F. & Dickie, C. 2014. Second language acquisition of pitch range in German learners of English. Studies in Second Language Acquisition, 36, 303–329.
[12] Gilabert, R. 2007. Effects of manipulating task complexity on self-repairs during L2 oral production. IRAL-International Review of Applied Linguistics in Language Teaching, 45(3), 215-240.
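The abstract uses pitch range as an objective pronunciation measure without stating how it was computed; one common operationalisation is the span between low and high percentiles of the F0 track, expressed in semitones. The sketch below illustrates that option on a synthetic F0 track; the percentile choice is an assumption, not the authors' measure.

```python
import numpy as np

def pitch_range_semitones(f0_hz, low_pct=10, high_pct=90):
    """Pitch range as the 10th-90th percentile span of an F0 track, in semitones."""
    f0 = np.asarray(f0_hz, dtype=float)
    f0 = f0[f0 > 0]                          # drop unvoiced frames
    lo, hi = np.percentile(f0, [low_pct, high_pct])
    return 12 * np.log2(hi / lo)

rng = np.random.default_rng(4)
f0_track = rng.normal(210, 25, 500)          # hypothetical F0 values in Hz
print(round(pitch_range_semitones(f0_track), 2), "ST")
```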

For the inclusion of pronunciation instruction early in the L2 curriculum: evidence from the acquisition of L2 Spanish voiceless stops

Rebeka Campos-Astorkiza, Katriese DeLeon, Kendall Locascio and Shannon Sullivan
Ohio State University

There is growing evidence that explicit instruction of pronunciation has a positive impact on second language (L2) learners' production of the L2 sound system (e.g. for L2 Spanish and L1 English, see [1, 2, 3]). Authors have used these findings to advocate for the inclusion of explicit teaching of pronunciation for L2 learners. In addition, there is some evidence that pronunciation instruction can benefit different levels of L2 proficiency ([4]). However, it is unclear what types of learners might benefit most from this instruction, more precisely whether the impact might be different for beginners and for intermediate/advanced learners. Based on the concept of the "learning plateau" ([5, 6]), we hypothesize that beginners might present a more malleable system and exhibit greater improvements from pronunciation instruction, while more advanced learners, whose L2 categories are more formed, might show more resistance to improvement due to instruction. Furthermore, we argue that, in order to answer our research question, it is important to explore the acquisition of a particular sound or phonological process by considering potential differences in certain linguistic contexts versus others across L2 proficiency levels, rather than by just comparing the overall production rates across levels. This latter approach might not yield any significant differences across levels (e.g. in [3]) because the impact of pronunciation instruction might be apparent only in some linguistic contexts.
To explore our hypothesis, we focus on the acquisition of Spanish voiceless stops /p, t, k/ by learners whose L1 is English, and compare two types of learners who received pronunciation instruction as part of a Spanish college class at a US university. Spanish and English differ in the degree of aspiration or VOT duration for their voiceless stops: Spanish voiceless stops have a short VOT, while English ones have a long VOT in stressed contexts, especially word-initially. The data was gathered as part of a bigger project, See your Speech, which combines research and teaching at the college level. More precisely, the data comes from a teaching module developed for Spanish college classes, where students record themselves reading words in Spanish via a web-based interface and get instant feedback on their pronunciation via that interface. Students complete the module at two timepoints: the beginning (T1) and end of the semester (T2). This study includes two sets of participants: Group I (N=20; beginners) are third-semester students taking a language course that includes explicit phonetic instruction, and Group II (N=27; intermediates) are Spanish majors/minors taking an upper-level Spanish pronunciation course. For the data analysis, we measure the participants' VOT during /p, t, k/. We use linear regression to explore the effect of Group (GI, GII), Timepoint (T1, T2) and their interaction on the VOT duration. In addition, we also consider the effect of stress, word position and stop place of articulation as independent factors in the linear regression models. Results show that, when looking at the overall VOT duration, both groups decrease their durations in T2 vs. T1. However, when exploring the interaction between Timepoint and the linguistic factors, it becomes clear that the change or decrease for Group II is limited: VOT durations are the same in T1 and T2 for /k/ and for unstressed positions. For Group I, VOTs are shorter in all contexts in T2 vs. T1, showing that beginners decrease their VOTs across the board. This finding lends evidence to our claim that cross-level differences need to be examined in a variety of linguistic contexts, as the overall VOT duration did not capture these level-based patterns. Results also indicate that, while both groups present longer VOT averages than native Spanish speakers but shorter than for English (see ranges from [7]; [8]), Group I have longer VOTs than Group II in T1 but end the semester with overall shorter VOTs than Group II. All these results taken together suggest that explicit instruction of pronunciation has a greater impact on beginners' than on intermediate learners' sound production. This supports a Spanish curriculum that includes pronunciation teaching as early as possible. In addition, this study presents evidence supporting our hypothesis that beginners are prone to greater changes due to instruction because their sound categories are less solid than those of more advanced learners.

References
[1] Lord, G. (2005). (How) can we teach foreign language pronunciation? On the effects of a Spanish phonetics course. Hispania, 88(3), 557–567.
[2] Gonzalez Lopez, V. & Counselman, D. (2013). L2 Acquisition and Category Formation of Spanish Voiceless Stops by Monolingual English Novice Learners. In Selected Proceedings of the 16th Hispanic Linguistics Symposium, ed. Jennifer Cabrelli Amaro et al., 118-127. Somerville, MA: Cascadilla Proceedings Project.
[3] Camus-Oyarzún, P. A. (2016). The effects of pronunciation instruction on the production of second language Spanish: A classroom study. Doctoral dissertation: Georgetown University.
[4] Lee, J., Jang, J., & Plonsky, L. (2015). The effectiveness of second language pronunciation instruction: A meta-analysis. Applied Linguistics, 36(3), 345-366.
[5] Flege, J. E. (1988). Factors affecting degree of perceived foreign accent in English sentences. The Journal of the Acoustical Society of America, 84(1), 70-79.
[6] Munro, M. J. and Derwing, T. M. (2008). Segmental Acquisition in Adult ESL Learners: A Longitudinal Study of Vowel Production. Language Learning.
[7] Amengual, M. (2012). Interlingual influence in bilingual speech: Cognate status effect in a continuum of bilingualism. Bilingualism: Language and Cognition, 15(3), 517-530.
[8] Zampini, M. L. (2014). Voice onset time in second language Spanish. The handbook of Spanish Second Language Acquisition, 111-129.
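The VOT analysis described above is a linear regression with a Group by Timepoint interaction plus stress and place of articulation as additional predictors. A minimal sketch of such a model on invented measurements, with hypothetical column names (word position omitted for brevity), is:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented VOT measurements (ms) for two groups at two timepoints.
rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "group": rng.choice(["GI", "GII"], n),
    "timepoint": rng.choice(["T1", "T2"], n),
    "stress": rng.choice(["stressed", "unstressed"], n),
    "place": rng.choice(["p", "t", "k"], n),
    "vot": rng.normal(55, 15, n),
})

# Linear regression with the Group x Timepoint interaction and the
# linguistic factors as additional categorical predictors.
model = smf.ols("vot ~ C(group) * C(timepoint) + C(stress) + C(place)", data=df)
print(model.fit().summary())
```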

Can L2 Articulatory Phonetic Training Aid Perception?

Carrie A. Ankerstein
Saarland University, Saarbruecken, Germany

Learning the speech sounds of an L2 is a particular challenge for non-native speakers of a language. A common component of models that seek to explain L2 phonological acquisition is the influence that the L1 has on L2 sound perception and production. It has been argued that sounds that are similar but different in the L1 and L2 will be the most difficult to learn, even more difficult, generally speaking, than new sounds [1, 2].
A component of Flege's Speech Learning Model argues that in order to produce L2 sounds accurately, clear perception of the L2 sounds must already be in place. Flege argued that "L2 phonetic segments cannot be produced accurately unless they are perceived accurately" [3, p. 27]. A meta-analysis of the effect of perception training on production has found some support for that assumption [4].
Less studied, however, is whether production training can affect perception. Linebaugh and Roche trained native Arabic speakers on the production of a number of problematic English vowel contrasts and the /g, ʤ/ contrast and found significant improvement in the perception of the contrasts in comparison to control participants who received no articulatory training [5]. The current study continues along this less explored avenue. Native speakers of German (n = 13) were tested on their perception of generally difficult English contrasts, e.g., /gra, kra/, /v, w/, /s, θ/ and /z, ð/, before and after articulatory phonetic training.
Participants were recruited from the state of Saarland in Germany where in addition to the traditionally problematic sounds - the dental fricatives [6] and the v-w distinction [7] - voicing in onset consonant clusters is problematic due to the devoicing of the stops in such clusters in Saarlandian German [8]. Perception of the contrasts was tested in a forced-choice phoneme recognition task used as a pre-test and post-test in which participants heard a sound, e.g., /gra/, and had to make a choice between “gra” and “kra” on an answer sheet. Articulatory phonetic training was given as part of a class on American English phonetics at Saarland University and focus was on feeling the vibration of the larynx for voiced versus unvoiced sounds; feeling the vibration on the lower lip for the voiced labio-dental fricative /v/ as opposed to the labio-velar approximant /w/; and feeling the vibration between the tongue and the teeth for the dental fricatives /θ, ð/ as opposed to the alveolar fricatives /s, z/. Participants were instructed to feel the vibration of the larynx using their fingers and to focus on the sensation on their lower lips during voiced fricatives. Though preliminary results showed support for an effect of articulatory phonetic training on perception, full data analysis showed no statistically significant effects (all p > 0.05), which may be due to ceiling effects in some participants at both pre-test and post-test. The results do, however, lend support for further research to explore whether articulatory training can help L2 learners with poor phoneme recognition, i.e., at or around chance, prior to training. The results have important implications for the effect of articulatory training in the classroom as aiding not only production but also perception and add to the growing body of literature investigating the interplay between perception and production of speech sounds.

References

[1] Best, C.T., McRoberts, G.W. & Goodell, E. 2001. Discrimination of non-native consonant contrasts varying in perceptual assimilation to the listener's native phonological system. Journal of the Acoustical Society of America 109, 775–794.
[2] Flege, J.E. 1987. The production of 'new' and 'similar' phones in a foreign language: evidence for the effect of equivalence classification. Journal of Phonetics 15, 47-65.
[3] Flege, J.E. 2003. Assessing constraints on second-language segmental production and perception. In Meyer, A. & Schiller, N. (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities. Berlin: Mouton de Gruyter, 319–355.
[4] Sakai, M. & Moorman, C. 2018. Can perception training improve the production of second language phonemes? A meta-analytic review of 25 years of perception training research. Applied Psycholinguistics 39(1), 187-224.
[5] Linebaugh, G. & Roche, T. 2015. Evidence that L2 production training can enhance perception. Journal of Academic Language and Learning 9(1), A1-A17.
[6] Hanulikova, A., & Weber, A. 2010. Production of English interdental fricatives by Dutch, German, and English speakers. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds, Poznań, Poland, 1-3 May 2010 (pp. 173-178). Poznan: Adam Mickiewicz University.
[7] Iverson, P., Ekanayake, D., Hamann, S., Sennema, A. & Evans, B.G. 2008. Category and perceptual interference in second-language phoneme learning: An examination of English /w/-/v/ learning by Sinhala, German, and Dutch speakers. Journal of Experimental Psychology: Human Perception and Performance, 34, 1305–1316.
[8] Eckert, H. & Barry, W. 2005. The Phonetics and Phonology of English Pronunciation. Trier: Wissenschaftlicher Verlag Trier.

Embodied prosodic training enhances the learning of L2 pronunciation and specific vocalic features

Peng Li¹, Florence Baills¹, Lorraine Baqué² and Pilar Prieto¹,³
¹Departament de Traducció i Ciències del Llenguatge, Universitat Pompeu Fabra, ²Universitat Autònoma de Barcelona, ³Institució Catalana de Recerca i Estudis Avançats (ICREA)

Previous studies have shown that prosodic training (e.g., Alazard, 2013; Gordon & Darcy, 2016; Missaglia, 2007) and the use of hand gestures encoding suprasegmental (e.g., Gluhareva & Prieto, 2017; Yuan et al., 2019) or segmental features (e.g., Xi et al., 2020) improve L2 pronunciation. However, few previous studies have assessed whether prosodic training with gestures (i.e., embodied prosodic training) can jointly improve L2 pronunciation. The present study explores the effects of embodied prosodic training on the pronunciation of non-native French front rounded vowels (i.e., /y, ø, œ/) and on overall pronunciation proficiency.
Fifty-seven Catalan learners of French received three 15-minute training sessions in one of two conditions: No Gesture (n = 29) and Gesture (n = 28). In both groups, the participants watched short plays in French, read the script of the dialogues, and received sentence-by-sentence pronunciation training. The Gesture group repeated the training sentences after the instructor while watching hand gestures depicting pitch and rhythmic features of the sentences (see Figure 1). In contrast, the No Gesture group repeated the sentences without gestural aid. The target vowels had a high frequency in the training dialogues and appeared in various prosodic environments. The learning outcome was assessed in a pretest/posttest/delayed posttest paradigm with a dialogue-reading task and a sentence-imitation task. Overall pronunciation proficiency was assessed for comprehensibility, accentedness, and fluency by three native French teachers, while a formant analysis acoustically assessed the front rounded vowels. The results were submitted to a set of linear mixed-effects models with condition (no gesture vs. gesture), test (pre vs. post vs. delayed) and their interaction as fixed effects, and with participant and item as random intercepts. Post-hoc analysis was done through a series of t-tests with the significance level adjusted by the Bonferroni method. The results for overall pronunciation proficiency in both tasks showed that while both groups improved in comprehensibility and fluency at posttest and delayed posttest, embodied prosodic training triggered a more substantial improvement in accentedness. Specifically, the Gesture group continuously improved accentedness throughout the three tests, while the No Gesture group only improved from pre- to posttest (see Figure 2). As for the acoustic analyses, the Gesture group improved their F2 at posttest and maintained this improvement at the delayed posttest. By contrast, the No Gesture group did not show such an improvement. The enhanced F2 values observed in the Gesture group suggest that embodied prosodic training helps learners to produce the front rounded vowels in a more target-like fashion (see Figure 3 for /y/ as an example). To conclude, embodied prosodic training may improve both segmental accuracy and overall accentedness in an L2.

Figure 1. Gestures depicting pitch and rhythmic patterns on the sentence level.
Figure 2. Mean accentedness rating of the dialogue-reading task (left panel) and the sentence-imitation task (right panel) across groups and tests. Error bars indicate 95% confidence intervals. Asterisks indicate significant contrasts.
Figure 3. Scatter plot of F1 and F2 of /y/ produced by the participants in the dialogue-reading task (left panel) and sentence-imitation task (right panel) across groups (upper panel for No Gesture group; lower panel for Gesture group) and tests. The circles indicate 95% confidence intervals.

References
[1] Alazard, C. (2013). Rôle de la prosodie dans la fluence en lecture oralisée chez des apprenants de Français Langue Étrangère (Doctoral dissertation). Université Toulouse le Mirail.
[2] Gluhareva, D., & Prieto, P. (2017). Training with rhythmic beat gestures benefits L2 pronunciation in discourse-demanding situations. Language Teaching Research, 21(5), 609–631.
[3] Gordon, J., & Darcy, I. (2016). The development of comprehensible speech in L2 learners. Journal of Second Language Pronunciation, 2(1), 56–92.
[4] Missaglia, F. (2007). Prosodic training for adult Italian learners of German: The Contrastive Prosody Method. In J. Trouvain & U. Gut (Eds.), Non-Native Prosody: Phonetic Description and Teaching Practice. De Gruyter Mouton.
[5] Xi, X., Li, P., Baills, F., & Prieto, P. (2020). Hand gestures facilitate the acquisition of novel phonemic contrasts when they appropriately mimic target phonetic features. Journal of Speech, Language, and Hearing Research, 63(11), 3571–3585.
[6] Yuan, C., González-Fuente, S., Baills, F., & Prieto, P. (2019). Observing pitch gestures favors the learning of Spanish intonation by Mandarin speakers. Studies in Second Language Acquisition, 41(1), 5–32.
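The post-hoc analysis described above consists of paired t-tests across the three tests with Bonferroni-adjusted significance levels. A minimal sketch with invented per-participant accentedness ratings (not the authors' data or code) is:

```python
import numpy as np
from itertools import combinations
from scipy.stats import ttest_rel

rng = np.random.default_rng(6)
n = 28                                    # e.g. the Gesture group
scores = {                                # hypothetical mean accentedness ratings
    "pre": rng.normal(5.0, 1.0, n),
    "post": rng.normal(4.4, 1.0, n),
    "delayed": rng.normal(4.1, 1.0, n),
}

pairs = list(combinations(scores, 2))
alpha = 0.05 / len(pairs)                 # Bonferroni-adjusted threshold
for a, b in pairs:
    t, p = ttest_rel(scores[a], scores[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, significant: {p < alpha}")
```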

DAY 2 - Poster Session B
Prosodic Structure, Discourse & Pragmatics

Syllable structure, vowel length and mora-sharing in Lebanese Arabic

Niamh E. Kelly
American University of Beirut, Lebanon

In Levantine Arabic, duration has been shown to be a reliable cue for distinguishing long and short vowels [4] and singleton and geminate consonants [6]. The phonological concept of the mora has been shown to relate to duration [8,5,3], and phonetic measurements have been used to describe patterns of mora-sharing between long vowels and coda consonants in Arabic. [11] proposes that mora-sharing is allowed in order to avoid trimoraic syllables, and that in Levantine Arabic, mora-sharing is only permissible when the syllable rime contains a long vowel. [1] proposed that in non-final syllables in Levantine Arabic, a coda C has a mora in VC rimes but shares a mora in VVC, so both syllable types are considered bimoraic. This was supported by measurements in a small-scale study showing that long vowels in VVC rimes were shortened (because V and C share a mora), but short vowels in VC rimes were not (V and C each have their own mora). Similarly, in a recent phonetic study of Lebanese Arabic, phonologically long vowels were found to be shortened before intervocalic geminates (CVVG.GV), while short vowels were not affected [6]. (Intervocalic geminates are considered ambisyllabic, with the first half counted as a coda and the second half counted as an onset.) However, previous phonetic work on Lebanese Arabic only examined a very small number of tokens, involving just one speaker and one lexeme [2,1], or examined vowels only before geminates, not singletons or clusters [6]. There is no quantitative research on Lebanese Arabic specifically examining vowel duration in syllables closed by a non-geminate, that is, a singleton consonant that is part of an intervocalic cluster, CV(V)C.CV. The current investigation extends previous work by examining - on a larger scale - the duration of long and short vowels in open syllables, CV(V).CV, and those followed by an intervocalic cluster, and so closed by a singleton consonant (with the second consonant considered the onset of the following syllable), CV(V)C.CV, as well as short vowels followed by an intervocalic geminate, CVG.GV. (As in [6], half of the duration of the geminate was counted as the coda of the initial syllable.) Six native speakers aged 18-22 produced 472 tokens (disyllabic, initial stress) in sentence-medial position.
Linear mixed-effects regression tests in R [9] found that for long vowels, there is no difference in vowel duration based on whether the syllable is open or closed by a singleton (CVV.CV vs CVVC.CV) (Figure 1). For short vowels, those in syllables closed by a singleton (CVC.CV) are significantly shorter than those in open syllables (CV.CV) and those followed by a coda geminate (CVG.GV). Similar to [6], there is no difference in duration between short vowels that are followed by a coda geminate and those in an open syllable. A coda singleton (CVC.CV) was also found to be significantly longer than a coda geminate (CVG.GV). Results were the same across vowel qualities. An analysis is proposed (Table 1) whereby syllables containing a short vowel followed by a coda geminate count as bimoraic, while those closed by a coda singleton have their coda C weighing more than a coda geminate, so the vowel gets shortened in a compensatory way, to maintain bimoraicity.
The lack of a difference in duration between long vowels in open vs closed syllables is explained as follows: long vowels followed by an intervocalic consonant cluster (CVVC.CV) are actually parsed as open syllables, with the first consonant of the cluster forming a semisyllable on its own [7] (CVV.C.CV). This avoids onset clusters that violate the Sonority Sequencing Principle [10]. As previously thought, long vowels followed by geminate consonants (CVVG.GV) partake in mora-sharing [2,6]. In this way, trimoraic syllables are avoided, but by two different processes. These findings provide insight into the phonology of syllable weight in Arabic by providing quantitative evidence that questions previous assumptions about mora-sharing in VVC rimes, and they also support the claim that geminates and clusters behave differently [11].

Table 1. Summary of results and analyses
Figure 1. Vowel duration for short and long vowels based on whether the syllable is open, or closed by a geminate or a singleton consonant.

References
[1] Broselow, E., Chen, S., & Huffman, M. (1997). Syllable weight: convergence of phonology and phonetics. Phonology, 14(1), 47-82.
[2] Broselow, E., Huffman, M., Chen, S., & Hsieh, R. (1995). The timing structure of CVVC syllables. In M. Eid (Ed.), Perspectives on Arabic Linguistics VII (pp. 119-138). Amsterdam and Philadelphia: Benjamins.
[3] Cohn, A. (2003). Phonological structure and phonetic duration: the role of the mora. Working Papers of the Cornell Phonetics Laboratory, 15, 69-100.
[4] Hall, N. (2017). Phonetic neutralization in Palestinian Arabic vowel shortening, with implications for lexical organization. Glossa: a journal of general linguistics, 2(1), 48, 1-23.
[5] Hubbard, K. (1993). Mapping phonological structure to phonetic timing: moras and duration in two Bantu languages. Berkeley Linguistics Society, 19, 182-192.
[6] Khattab, G., & Al-Tamimi, J. (2014). Geminate timing in Lebanese Arabic: the relationship between phonetic timing and phonological structure. Laboratory Phonology, 5(2), 231-269.
[7] Kiparsky, P. (2003). Syllables and moras in Arabic. In C. Fery & R. Vijver (Eds.), The Syllable in Optimality Theory (pp. 147-182). Cambridge: Cambridge University Press.
[8] Port, R., Dalby, J., & O'Dell, M. (1987). Evidence for mora timing in Japanese. Journal of the Acoustical Society of America, 81, 1574-1585.
[9] R Development Core Team. (2008). R: A Language and Environment for Statistical Computing [Computer software manual]. Vienna, Austria.
[10] Selkirk, E. (1984). On the major class features and syllable theory. In M. Aronoff & R. Oehrle (Eds.), Language Sound Structure (pp. 107-136). Cambridge: MIT Press.
[11] Watson, J. (2007). Syllabification patterns in Arabic dialects: long segments and mora-sharing. Phonology, 24, 335-356.
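The duration analysis above was run as linear mixed-effects regression in R. An equivalent minimal sketch in Python, with a vowel length by syllable type interaction and a random intercept per speaker (a simplification of the full random-effects structure, using invented data), would be:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 472
df = pd.DataFrame({
    "speaker": rng.integers(1, 7, n),                         # six speakers
    "length": rng.choice(["short", "long"], n),               # phonological vowel length
    "syllable": rng.choice(["open", "geminate", "singleton"], n),
    "duration": rng.normal(95, 20, n),                        # vowel duration (ms), invented
})

model = smf.mixedlm("duration ~ C(length) * C(syllable)", df, groups=df["speaker"])
print(model.fit().summary())
```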

176 177 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 POSTER SESSIONS [9] R Development Core Team. (2008). R: A Language and Environment for Statistical The Role of Quantity in Arabic Syllable Structure Restrictions Computing [Computer software manual]. Vienna, Austria. [10] Selkirk, E. (1984). On the major class features and syllable theory. In M. Aronoff & R. Emily Lindsay-Smith Oehrle (Eds.), Language Sound Structure (p. 107-136). Cambridge: MIT Press. University of Oxford [11] Watson, J. (2007). Syllabification patterns in Arabic dialects: long segments and mora-sharing. Phonology, 24, 335-356. Previous research on Arabic syllable structure (Broselow 1992, Kiparsky 2003, Watson 2007, Farwaneh 2009 inter alia) suggests that modern varieties can be classified based on where they insert an epenthetic vowel into a triconsonantal cluster. In the following examples, the word is underlyingly /qult.lu/ where the full stop indicates a syllable boundary and the triconsonantal cluster is -ltl-. CV varieties like Cairene epenthesise to the right of the central consonant as in (1); VC varieties like Iraqi epenthesise to the left of the central consonant as in (2); and C varieties like Moroccan do not epenthesis at all and accept the triconsonantal cluster as in (3).

(1) ʔul-ti-l-u (Cairene)
(2) gil-it-l-u (Iraqi)
(3) qəl-t-l-u (Moroccan)
    say.Pfv Subj.1Sg-Dat-Obj.3Sg
    ‘I said to him’
(4) be:t.kum ‘your house’
(5) bad:.lat ‘she changed’
(6) ki.ta.bit.lu ‘I wrote to him’ (*ki.tabt.lu)
(7) ba:b+kum -> bab.kum ‘your door’
(8) kul:+kum -> kul.lu.hum ‘all of them’
(9) ʔult+lak -> ʔul.ti.lak ‘I told you’

This typology is incomplete. Not only do modern varieties differ in how they repair syllable structure violations, as shown above, but they also differ in terms of what syllable structures they permit, and this restriction is a matter of syllable structure rather than linear sequences of segments. The two halves of this typology do not covary, as shown in the table below, and both are needed to account for the variation found across modern varieties. In this paper, I focus on half of this typology: the tolerance of particular syllable structures, using data from 16 dialects ranging from Morocco to Jordan to Yemen, found in unpublished PhD theses written by native speakers as well as published research monographs.

I argue that there is disparity in the tolerance of quantity - that is, variation in the acceptance of geminate CVC: and CV:C versus CVC1C2 syllables word-medially. In the following examples, C stands for consonant, V for vowel, : for a long segment, and the full stop marks syllable boundaries. Baghdad Iraqi permits medial CV:C syllables (see 4) and medial CVC: (see 5) but not CVC1C2 (see 6); whereas Cairene permits none of these superheavy syllables (see 7-9). These preferences can be seen where syncope and epenthesis are permitted and blocked, such as whether postgeminate syncope occurs, the conditions for closed syllable shortening, or whether clusters created by affixation are repaired, as well as in what is found in underived words.

These patterns can be accounted for by restrictions at different levels of syllable structure. Varieties that permit long segments can have restrictions on the segmental level, whereas varieties that do not can have restrictions on the moraic level, and varieties that allow long consonants or clusters but not long vowels have restrictions on the X-slot tier. In addition to the canonical 2 or 3 segments or moras, individual varieties can also permit an extra element in their medial syllable according to variety-specific restrictions, thus allowing for some restricted consonantal clusters. These restrictions can include coronality, fast speech, or sonority, as shown in the table below.

Variety               Repair Type   Tolerance Type   No. Elements   Extra Element Restrictions
Oran Algerian         Coda          Segmental        2              Sonority
Palestinian           Coda          Segmental        2              Coronal
Lebanese              Coda          Segmental        2              Coronal
Baghdad Iraqi         Coda          Segmental        2              -
Wadi Ramm             Coda          Segmental        3              -
Muscat                Coda          Segmental        2              Sonority, Lexical, Fast Speech
San’ani               Onset         Segmental        2              Fast Speech
Makkan                Onset         Moraic           2              Branching Nucleus
Qatari                Coda          Moraic           2              Branching Nucleus
Cairene               Onset         Moraic           2              -
Tunisian              Coda          Moraic           3              -
Rufaidah              Onset         Moraic           3              -
Rwaili                Onset         Moraic           3              -
Ha:yil                Onset         Moraic           3              -
Moroccan Casablanca   Coda          X-slot           3              -
Libyan Tripoli        Coda          X-slot           3              Consonantal

As such, this addition to the Arabic syllable structure typology allows for mechanisms to account for the data in each and every dialect, rather than merely providing a canonical set of features that not all varieties fit.

Bibliography

Broselow, E. 1992. Parametric variation in Arabic dialect phonology. In Eid, M. and McCarthy, J. (eds.) Perspectives on Arabic Linguistics IV. Amsterdam & Philadelphia: John Benjamins Publishing Company, 7-47.
Farwaneh, S. 2009. Towards a typology of Arabic dialects: The role of final consonantality. Journal of Arabic and Islamic Studies 9, 82-109.
Kiparsky, P. 2003. Syllables and moras in Arabic. In Fery, C. and van de Vijver, R. (Eds.) The Syllable in Optimality Theory. Cambridge: Cambridge University Press, 147-182.
Watson, J. 2007. Syllabification Patterns in Arabic Dialects: Long Segments and Mora Sharing. Phonology 24, 335-356.

Word-Initial Voicing Alternations in Five Romance Languages

Mathilde Hutin1, Yaru Wu1, Ioana Vasilescu1, Lori Lamel1 and Martine Adda-Decker2
1Université Paris-Saclay, CNRS, LISN, 2Université Paris 3 Sorbonne Nouvelle, LPP

Over the last decade, access to increasingly large corpora processed with digital tools has led to new ways to revisit old questions, such as those about linguistic variation [1, 2]. In particular, it allows us to test whether synchronic phonetic variational patterns mirror linguistic evolution [3]. One phenomenon representative of this question is voicing alternation. From the observation of diachronic change, it is known that, when a segment gains a voiced feature, it is lenited, and when it loses it, it is fortified. The onset position (word-initially and after a heterosyllabic consonant) is usually a strong position, i.e. favoring fortition, in our case devoicing, while both the intervocalic and coda positions (word-finally and before a heterosyllabic consonant) are weak positions, i.e. favoring lenition, in our case voicing [4]. From a synchronic perspective, voicing alternation has been established at different degrees in several Romance languages in intervocalic position ([5] on Spanish; [6] on Spanish and Catalan; [7] on Italian) and in coda position ([5, 8] on Spanish; [9, 10] on French; [11, 12, 13] on Romanian). The strong position, however, has received far less attention. We thus propose an extensive account of voicing alternation for all voiced and voiceless stops in word-initial position in five Romance languages. Does this position really promote fortition over lenition?

For this study, five Romance languages were investigated: Portuguese, Spanish, French, Italian and Romanian. The corpora were acquired from the LDC or ELRA, or developed within international research projects: IST ALERT for part of the data in Portuguese and French, IRST for part of the data in Italian [14] and OSEO Quaero for all languages. In total, this study is carried out using ~1k hours of spoken data, i.e. 3.5M initial stops (cf. Table 1). The corpora were manually transcribed and automatically aligned using pronunciation variants for the laryngeal feature [15]: every /ptk/ could be aligned with either canonical [ptk] or non-canonical [bdg], and so could every /bdg/, depending on which phone model the system judged to be a better match to the particular instance of each stop (e.g. Fr. bas /ba/, ‘low’, could be transcribed as either [ba] or [pa]). Table 2 shows, among other things, that (i) voiced stops are more often devoiced than voiceless stops are voiced in word-initial position, except in Italian, and (ii) the rates of voicing are more steady than those of devoicing.

To investigate these results more finely, we took a closer look at the stop’s left context, i.e. the last phone of the preceding word. The data was split into five categories depending on whether the left context was a pause, a vowel, a sonorant, a voiced obstruent or a voiceless obstruent. Figure 1 shows that devoicing happens at high rates after pause in all five languages, thus indicating that these instances of devoicing are fortition. This is confirmed by the fact that /bdg/ are devoiced after any obstruent in French and in any context in Portuguese: in these two languages, an onset in post-coda position is fortified even in contexts where devoicing cannot be due to assimilation of the laryngeal feature.

Finally, a generalized linear model was conducted on the data to establish whether the selected factors have significant impacts on word-initial fortition (canonical vs non-canonical realization of the voiced stop onsets). The following variation factors were included in the model: language, phone, phone duration, left context and speaker gender. The model indicates that (i) voiced stops are less likely to devoice in French than in other languages; (ii) /b/ is more likely to devoice than /d/ and less than /g/; (iii) shorter stop durations favor devoicing; (iv) voiced stops are more likely to devoice when preceded by pause than when
preceded by other contexts; and (v) female speakers are more likely to devoice than male speakers. This study intends to answer various theoretical questions regarding fortition in synchrony and the disjunctive context favoring it in Romance languages, using large corpora of automatically aligned data of connected speech.

Language     Nb of Hours   p        t        k        b        d         g       Total
Portuguese   114           74520    33716    88199    9195     98065     7135    310830
Spanish      223           272802   122741   364009   106946   358161    34900   1259559
French       176           189701   80202    166491   44479    269455    17008   767336
Italian      168           46496    23537    60960    6611     67627     5276    210507
Romanian     300           230641   81480    224359   41403    301279    23457   902619
Total        981           814160   341676   904018   208634   1094587   87776   3450851
Table 1. Number of hours and of each onset stop (voiceless /p t k/ and voiced /b d g/) for each language.

Language     p      t      k      ptk    b       d       g       bdg     ptkbdg
Portuguese   6.55   7.37   7.55   7.14   11.87   21.77   12.75   20.42   12.03
Spanish      5.53   7.61   9.50   7.77   12.97   11.00   12.92   11.56   9.27
French       8.14   6.04   8.07   7.73   11.05   9.66    11.17   9.92    8.67
Italian      6.68   4.24   9.24   7.43   7.08    4.80    8.26    5.22    6.60
Romanian     5.67   4.50   5.46   5.40   4.68    6.34    8.42    6.28    5.76
Table 2. Rates of non-canonical realizations for each stop onset in each language.
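A minimal R sketch of a generalized linear model along the lines described above (column names are hypothetical; the authors' exact model specification is not given in the abstract):

# One row per word-initial /b d g/ token (hypothetical data).
stops <- read.csv("initial_voiced_stops.csv")

# Binary outcome: canonical (voiced) vs non-canonical (devoiced) realization,
# predicted by language, phone, phone duration, left context and speaker gender.
fit <- glm(devoiced ~ language + phone + duration + left_context + gender,
           data = stops, family = binomial)
summary(fit)

# Odds ratios for easier interpretation of the factor levels.
exp(coef(fit))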

Figure 1. Rates of devoiced realizations of /bdg/ depending on the left context.

References

[1] Adda-Decker, M. 2006. De la reconnaissance automatique de la parole à l’analyse linguistique des corpus oraux. Papier présenté aux Journées d’Étude sur la Parole (JEP2006), Dinard, France, 12–16 juin.
[2] Coleman, J., Renwick, M. E. L. & Temple, R. A. M. 2016. Probabilistic under-specification in nasal place assimilation. Phonology 33(3), 425-458.
[3] Ohala, J. J. 1989. Sound change is drawn from a pool of synchronic variation. Language change: Contributions to the study of its causes, 173-198.
[4] Ségéral, P. & Scheer, T. 2008. Positional factors in lenition and fortition. Lenition & fortition, 131-172.
[5] Vasilescu, I., Hernandez, N., Vieru, B. & Lamel, L. 2018. Exploring Temporal Reduction in Dialectal Spanish: A Large-scale Study of Lenition of Voiced Stops and Coda-s. Proceedings of Interspeech.
[6] Hualde, J. & Prieto, P. 2014. Lenition of intervocalic alveolar fricatives in Catalan and Spanish. Phonetica 71, 109-127.
[7] Hualde, J. & Nadeu, M. 2011. Lenition and phonemic overlap in Rome Italian. Phonetica 68, 215-242.
[8] Ryant, N. & Liberman, M. 2016. Large-scale analysis of Spanish /s/-lenition using audiobooks. Proceedings of Meetings on Acoustics 22ICA, Vol. 28, No. 1.
[9] Jatteau, A., Vasilescu, I., Lamel, L. & Adda-Decker, M. 2019. Final devoicing of fricatives in French: Studying variation in large-scale corpora with automatic alignment. Proceedings of the 19th ICPhS, 295–299.
[10] Jatteau, A., Vasilescu, I., Lamel, L., Adda-Decker, M. & Audibert, N. 2019. “Gra[f]e!” Word-final devoicing of obstruents in Standard French: An acoustic study based on large corpora. Proceedings of Interspeech, 1726-1730.
[11] Chitoran, I., Hualde, J. & Niculescu, O. 2015. Gestural undershoot and gestural intrusion – from perceptual errors to historical sound change. Proceedings of 2nd ERRARE Workshop, Sinaia (Romania).
[12] Hutin, M., Niculescu, O., Vasilescu, I., Lamel, L. & Adda-Decker, M. 2020. Lenition and fortition of stop codas in Romanian. Proceedings of SLTU-CCURL 2020, 226-234.
[13] Hutin, M., Jatteau, A., Vasilescu, I., Lamel, L. & Adda-Decker, M. 2020. Ongoing phonologization of word-final voicing alternations in two Romance languages: Romanian and French. Proceedings of Interspeech. Shanghai, China.
[14] Marcello, F., Giordani, D. & Coletti, P. 2000. Development and Evaluation of an Italian Broadcast News Corpus. Proceedings of LREC, 2000.
[15] Gauvain, J.-L., Lamel, L. & Adda, G. 2002. The LIMSI broadcast news transcription system. Speech Communication, 37(1-2), 89-108.

Dialogue Management in Autism Spectrum Disorder

Simon Wehrle1, Francesco Cangemi1, Kai Vogeley2 & Martine Grice1
1IfL-Phonetik, University of Cologne, Germany; 2University Hospital Cologne, Germany

Introduction. Dialogue management can be considered the most essential skill of human communication. Rapid, short-gap turn-taking, for example, has been shown to be remarkably similar across different languages [1]. Backchannelling (BC; producing listener signals such as “mmhm”, “okay”) is a related phenomenon that is less well investigated, but can be presumed to play an equally important role. Given the fact that Autism Spectrum Disorder (ASD; [2]) is at its core a disorder of social communication, it is surprising that very little is known about dialogue management in speakers with ASD. The little data we do have tend to be from children and produced in settings which are far from natural or spontaneous [3]. In the present contribution, we investigate the turn-taking and backchannelling behaviour of German adults with and without ASD engaged in semi-spontaneous dialogue.

Data and Methods. We recorded 14 native German speakers with high-functioning ASD diagnosed at the University Hospital Cologne and 14 control (CTR) speakers matched for age, verbal IQ and gender. Autism-Spectrum Quotient scores [4] were obtained for all speakers (ASD mean = 41.4; CTR mean = 15.7). Speech was elicited through Map Tasks [5] performed in matched dyads (ASD-ASD; CTR-CTR). Average dialogue duration was 20 minutes per dyad, with a total duration of ~5 hours. Turn-timing results are reported using the measure of floor transfer offset (FTO [6], where positive values represent gaps and negative values represent overlaps). For prosodic analysis, all tokens were hand-corrected and smoothed. F0 values were taken at 10% and 90% of token duration and the difference calculated in semitones.

Results. Overall, the timing of turn-taking was very similar across groups (and to results reported in previous research), with a clear preference for short gaps and an overall mean FTO of 240 ms for the CTR and 321 ms for the ASD group across all turn transitions (n = 5671). The longer mean FTO for the ASD group mainly reflects a higher proportion of long gaps (>700 ms [7]). A more detailed analysis reveals some subtle differences beyond the global similarities. Specifically, we see that the behaviour of most (but not all) ASD dyads markedly differed from that of CTRs, but only in the earliest stages of dialogue (i.e. up to the first mismatch between maps). During this early period, ASD speakers produced more long gaps (26% of all transitions), whereas they later achieved a rhythm of turn-taking closely mirroring that which CTR dyads performed throughout the task (with about 12% long gaps; see Fig. 1). The ASD group produced fewer backchannels per minute (7.32) than the CTR group (9.51) (total: n = 2371). As with turn-timing, this difference between groups holds true for most but not all dyads and is more pronounced at the beginning of the task. Further, CTR dyads produced a greater variety of BC types than ASD dyads. Prosodic analysis revealed that intonational realisation is specific to BC type, e.g. 80% of “mmhm” tokens were produced with rising intonation, whereas 64% of “genau” (“exactly”) tokens were produced with falling intonation. These mappings were clearest for the CTR group, whereas half of the ASD group produced mainly BCs with rising intonation, regardless of type (see Fig. 2).

Discussion. Our study is the first to extend the well-known finding that turn-taking is strikingly similar across languages and cultures to a group of speakers with different cognitive styles. Our results strengthen the notion that turn-taking is a truly fundamental aspect of human communication. Markedly different dialogue management by some but not all ASD dyads was found only in the earliest parts of conversation. This shows that speakers with ASD entrain to each other successfully, but not instantly [8]. However, given that listeners are very sensitive to even small differences in turn-timing [7] and backchannelling [9] and form personality impressions about speakers extremely rapidly [10], the communication style of the ASD group may still be perceived as odd or unusual, at least by typically developed listeners.

Figure 1. Turn-timing: Floor transfer offset values by group and part of dialogue. Negative values represent overlaps; positive values represent gaps.
Figure 2. Backchannels: Proportion of intonation contours by group and part of dialogue. Contours within the range of +/- 1 semitone are categorised as level. Width of bars represents rate of backchannels per minute.
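A minimal R sketch of the backchannel contour measure described in the methods (F0 taken at 10% and 90% of token duration, difference in semitones, contours within +/- 1 semitone counted as level); the data frame below is invented for illustration:

# Semitone difference between two F0 values.
semitone_diff <- function(f0_start, f0_end) 12 * log2(f0_end / f0_start)

# Hypothetical per-token F0 measurements at 10% and 90% of token duration.
bc <- data.frame(f0_10 = c(180, 210, 200), f0_90 = c(230, 205, 170))
bc$st_change <- semitone_diff(bc$f0_10, bc$f0_90)

# Contours within +/- 1 semitone are categorised as level (cf. Figure 2).
bc$contour <- cut(bc$st_change, breaks = c(-Inf, -1, 1, Inf),
                  labels = c("falling", "level", "rising"))
bc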
References

[1] Stivers, T., et al. (2009). Universals and cultural variation in turn-taking in conversation. PNAS, 106, 10587.
[2] American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders (5th ed.). Washington, DC.
[3] Heeman, P. A., et al. (2010). Autism and interactional aspects of dialogue. In Proc. 11th Meeting Special Interest Group Discourse and Dialogue (pp. 249-252).
[4] Baron-Cohen, S., et al. (2001). The autism-spectrum quotient (AQ). Journal of Autism and Developmental Disorders, 31(1), 5-17.
[5] Anderson, A. H., Bader, M., Bard, E. G., Boyle, E., Doherty, G., Garrod, S., ... & Sotillo, C. (1991). The HCRC map task corpus. Language and Speech, 34(4), 351-366.
[6] Levinson, S. C., & Torreira, F. (2015). Timing in turn-taking and its implications for processing models of language. Frontiers in Psychology, 6, 731.
[7] Kendrick, K. H., & Torreira, F. (2015). The timing and construction of preference: A quantitative study. Discourse Processes, 52, 255-289.
[8] Levitan, R., Benus, S., Gravano, A., & Hirschberg, J. (2015). Entrainment and Turn-Taking in Human-Human Dialogue. In AAAI Spring Symposia.
[9] Cutrone, P. (2005). A case study examining backchannels in conversations between Japanese–British dyads. Multilingua, 24(3), 237-274.
[10] McAleer, P., Todorov, A., & Belin, P. (2014). How do you say ‘Hello’? Personality impressions from brief novel voices. PloS One, 9(3), e90779.

Conversational prosody of people with depression and their therapists

Hannah Grimes and Laurence White
School of Education, Communication and Language Sciences, Newcastle University

People with clinical depression often have distinctive prosodic patterns, such as lower fundamental frequency (F0), reduced F0 range and lower articulation rate [1]. This view of depressed prosody is largely derived from read speech, however, and conversational dynamics between speakers with depression and their interlocutors are poorly understood. Prosodic convergence is often observed in the conversations of non-depressed interlocutors, who tend to become closer in articulation rate (sometimes also F0) as interactions proceed [2]; indeed, convergence may reflect successful interactions [3]. One of few studies of interactional prosody between depressed speakers and clinical workers suggests variable accommodation behaviour according to depression severity: clinical interviewers show convergence on F0 features with severely depressed speakers, but may actually diverge with less depressed speakers [4].

The present study aimed to assess: a) if consistent prosodic differences are seen between speakers with chronic depression and their therapists; b) whether the prosodic features of depressed speakers and their therapists change over the course of therapy; c) if prosody mirrors any alleviation of depression over time. To this end, we transcribed and measured patient-therapist exchanges collected for a project examining the efficacy of a behaviour therapy for refractory depression often associated with personality disorders [5]. Given typical sex differences in F0, we focused on female-female pairs (patients N=24; therapists N=8). Taking utterances randomly sampled from early and late within the first and last sessions of a course of (typically) 29 weekly sessions, we compared patient and therapist prosodic measures: articulation rate; median F0; 10-90% F0 range (i.e., omitting upper and lower deciles); F0 standard deviation (SD). A standard measure of depression severity, PHQ-9 [6], showed a significant reduction over the course of therapy (p < .0001), allowing us to examine how prosodic features varied within speakers as their depression levels changed. The effects of speaker (patient vs therapist), session (first vs last), and time-within-session (early vs late) were analysed using mixed-effects linear regression models.

Median F0 was consistently higher for therapists than for patients (p < .0001), with no interactions between speaker, session or time-within-session. As shown in Figure 1, therapists’ articulation rate was consistently higher than that of patients (p < .0001). There was also a three-way interaction between speaker, session and time-within-session (p < .05), attributable to an increase in patients’ articulation rate over the first therapy session. Pauses were also more frequent (p < .0001) and longer (p < .05) in patients’ speech than in that of therapists. There were no correlations between patient F0 measures and depression levels, but patients with greater depression had higher final-session articulation rate, as did their therapists (both p < .05). This may indicate relative anxiety towards the end of a less successful course of therapy, but needs corroboration. Finally, therapists showed lower first-session F0 range and SD with more depressed patients, possibly indicating empathetic accommodation [cf. 4].

Our findings contradict some previous studies [1] in not finding consistent relationships between severity of depression and patients’ articulation rate or F0. Indeed, our data indicate a greater heterogeneity in depressive prosody than sometimes suggested: in particular, speakers with underlying personality disorder may tend to exhibit persistent depressive-style prosody even when depression itself has alleviated. The potential negative impact of such speaking styles on social engagement points to possible therapeutic work directly targeting speech. Finally, patients’ articulation rate increased over the course of the first session. This may be attributable, in part, to the attenuation of patients’ prosodic expression of depression with increasing task engagement [7]. The additional possibility of prosodic convergence with the therapists’ higher articulation rate requires further investigation.
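A rough R sketch of the mixed-effects analysis described above (hypothetical column names; the authors' exact random-effects structure is not stated):

library(lme4)
library(lmerTest)

# Assumed utterance-level data: one row per sampled utterance.
d <- read.csv("patient_therapist_prosody.csv")

# Articulation rate as a function of speaker role (patient vs therapist),
# session (first vs last) and time-within-session (early vs late),
# with a random intercept per patient-therapist dyad.
m_rate <- lmer(articulation_rate ~ role * session * time_within_session +
                 (1 | dyad),
               data = d)
summary(m_rate)   # the text reports a three-way interaction for articulation rate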

Figure 1: Mean articulation rate for female patient/therapist pairs, within therapy sessions (early vs late) and between sessions (first vs last).

References

[1] Horwitz, R., Quatieri, T. F., Helfer, B. S., Yu, B., Williamson, J. R., & Mundt, J. (2013). On the relative importance of vocal source, system, and prosody in human depression. In 2013 IEEE International Conference on Body Sensor Networks (pp. 1-6).
[2] Levitan, R., Gravano, A., Willson, L., Benus, S., Hirschberg, J., & Nenkova, A. (2012). Acoustic-prosodic entrainment and social behavior. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 11–19).
[3] Manson, J. H., Bryant, G. A., Gervais, M. M., & Kline, M. A. (2013). Convergence of speech rate in conversation predicts cooperation. Evolution and Human Behavior, 34(6), 419-426.
[4] Yang, Y., Fairbairn, C., & Cohn, J. F. (2013). Detecting depression severity from vocal prosody. IEEE Transactions on Affective Computing, 4(2), 142–150.
[5] Lynch, T. R., Hempel, R. J., Whalley, B., Byford, S., Chamba, R., Clarke, P., Clarke, S., Kingdon, D. G., Stanton, M., Swales, M., Watkins, A., & Russell, I. T. (2020). Refractory depression – mechanisms and efficacy of radically open dialectical behaviour therapy (RefraMED): Findings of a randomised trial on benefits and harms. British Journal of Psychiatry, 216, 204–212.
[6] Kroenke, K., Spitzer, R. L., & Williams, J. B. W. (2001). The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9), 606–613.
[7] Alghowinem, S., Goecke, R., Wagner, M., Epps, J., Breakspear, M., & Parker, G. (2013). Detecting depression: A comparison between spontaneous and read speech. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 7547–7551).

Age and migratory background in the realization of rhotics by Italian schoolchildren

Chiara Meluzzi1 and Nicholas Nese2
1University of Milan, Italy, 2University of Pavia, Italy

This work explores the influence of age and migratory background in shaping the sociophonetic variation of rhotics as produced by Italian L1 schoolchildren. Rhotic phones have been found to be especially hard to acquire cross-linguistically and to be prone to misarticulation, due to the delicate aerodynamics required for multiple rapid occlusions and their complex articulation (cf. [1], [4] and, for Italian, [8]). For Italian, the phonological opposition between singleton and geminate rhotics is particularly interesting (cf. [6]). This opposition is conveyed only by phonetic durational cues (e.g., rhotic and preceding vowel duration) and/or by rhotic type, with singletons commonly produced as taps and geminate rhotics appearing as trills, similar to what happens in other languages such as Spanish (cf. [5]). Due to the lack of phonetic studies on Italian schoolchildren, our main aim is to test whether different rhotic types are associated with specific phonological contexts, with a basic opposition between taps in singleton and trills in geminate context, and when this opposition is systematically acquired. Furthermore, we wanted to test whether the durational cues associated with gemination also affect rhotics in singleton vs. geminate contexts. Finally, we aim at investigating whether other factors like children’s migratory background could play a role in shaping variation in the production of rhotics.

We recorded 81 Italian children from North-West Italy, aged between 6 and 11 years old, with different internal migratory backgrounds (i.e., from different parts of Italy). Children performed a naming task of pictures presented on a map, with 20 rhotics occurring as singletons, geminates, and in pre- and post-consonantal position. To avoid the production of lexical items different from the ones expected, we added the target word under the picture. Recordings were conducted in a separate room within the elementary school the children attended, during normal lesson hours, one pupil at a time. Recordings were made with a TASCAM DR-40 recorder with an external Sony microphone, in .wav format with a sampling rate of 44.1 kHz. Besides phonetic data, we also collected sociolinguistic questionnaires that pupils compiled at home with their parents, in order to map each participant’s social status and personal origins from different areas of the Italian peninsula. The 1600 tokens were manually isolated and annotated in PRAAT by following the protocol provided in [3]: rhotics are considered to be produced as sequences of gestures of constrictions and apertures (see Fig. 1), being either monophasic, in the case of intervocalic taps, or multi-phasic, as in the case of pre- and post-consonantal singleton rhotics or trills.

Mixed-model analyses performed in IBM SPSS 20 determined the impact of the different social and linguistic variables. Speakers’ age was shown to affect rhotic type production, especially in the case of trills in geminate position, which are notoriously quite difficult to acquire in many languages (e.g., [2] and [7]). Indeed, children aged 6 y.o. produced trills, and also ‘heavy trills’ characterized by 8 phases of closure. However, a systematic opposition between geminate and singleton trills appears only around age 10, in association also with durational cues associated with both the rhotic and the preceding vowel. Finally, the pupils’ dialectal background derived from their family origins played a role only for the youngest children, but disappeared in the production of the older ones (age 10-11).

Fig. 1. An example of annotation of a rhotic following the protocol in [3]: the geminate rhotic in the word “burro” (Eng. ‘butter’) is produced as a trill composed of 4 closure and 4 aperture phases.

References

[1] Boyce, S. E., Hamilton, S. M., & Rivera-Campos, A. 2016. Acquiring rhoticity across languages: An ultrasound study of differentiating tongue movements. Clinical Linguistics & Phonetics, 30(3-5), 174-201.
[2] Carballo, G. & Mendoza, E. 2000. Acoustic characteristics of trill productions by groups of Spanish children. Clinical Linguistics & Phonetics, 14(8), 587-601.
[3] Celata, C., Meluzzi, C., & Ricci, I. 2016. The sociophonetics of rhotic variation in Sicilian dialects and Sicilian Italian: corpus, methodology and first results. Loquens, 3(1), 025.
[4] Chung, H., Farr, K., & Pollock, K. E. 2019. Rhotic vowel accuracy and error patterns in young children with and without speech sound disorders. Journal of Communication Disorders, 80, 18-34.
[5] Hualde, J. I. 2004. Quasi-phonemic contrasts in Spanish. In WCCFL 23: Proceedings of the 23rd West Coast Conference on Formal Linguistics (Vol. 23, p. 374). Somerville, MA: Cascadilla Press.
[6] Payne, E. M. 2005. Phonetic variation in Italian consonant gemination. Journal of the International Phonetic Association, 153-181.
[7] Rafat, Y. 2008. The acquisition of allophonic variation in Spanish as a second language. In Annual Conference of the Canadian Linguistic Association, Vancouver, BC.
[8] Zmarich, C., van Lieshout, P., Namasivayam, A., Limanni, A., Galatà, V., & Tisato, G. 2011. Consonantal and vocalic gestures in the articulation of Italian glide /w/ at different syllable positions. Contesto comunicativo e variabilità nella produzione e percezione della lingua, Atti del, 7, 9-24.

Organization of conversations depending on the number of participants

Valéria Krepsz1, Viktória Horváth1, Anna Huszár1 and Dorottya Gyarmathy1
1Hungarian Research Centre for Linguistics

The success and implementation of conversations depends not only on how information is passed on, but also on many other factors, such as timing, which is strongly determined by, e.g., the number of participants. Previous results suggest that the basic mechanisms of conversation change fundamentally as the number of speakers increases, and that they affect the structure of conversation (turn-takings, backchannel responses etc.). As the number of participants increases, the duration of gaps increases and the duration of silent pauses decreases, however not equally [1]. The system of turn-taking becomes more complex and the potential turn-taking points and signals increase exponentially [2, 3], while the number of backchannel responses declines.

The aim of the present research is to investigate how the number of participants influences the static descriptive data of conversation, along with its dynamically changing structure, focusing on the evolution of turn-taking. 20 recordings from the BEA (Hungarian Spontaneous Speech Database) database (women speakers, aged between 20 and 50 years) were used for the research (in total 4 hours of material). 10 recordings consist of conversations between fieldworker 1 (Fw1) and the subject (S). In the other 10 recordings, a third speaker (fieldworker 2, Fw2) joined the same two speakers. The protocol was the same in both cases: Fw1 provided a topic to be discussed, then they talked to each other as equal partners in the conversation. Subsequently, the conversation was organized spontaneously, not preceded by any preparation. In the two- and three-participant speech recordings, we analysed and compared the frequency, types, and Floor Transfer Offset [4] of altogether more than 600 turn-takings (TTs). The following parameters were also analysed: i) total duration of speaking per person, ii) duration and proportion of gaps, iii) duration of silent pauses, and frequency and duration of iv) backchannel responses (BCs) and v) overlapping speech. The occurrence of turn-takings and other characteristics of speech was described along the time course of the conversation, based on an automatic Praat script [5] created for this purpose. To examine the dynamic change, we took into account the centre of the interval of the BCs and silent pauses and compared it with its position in the conversation. For statistical analysis, GLM tests, regression analysis, and Tukey post-hoc analysis were performed in R.

Turn-takings occurred after silent pauses in two-thirds of the cases, while TTs occurred after overlapping speech in almost one-third of the cases in both two- and three-participant conversations. Negative FTO values were greater in the two-participant conversations than in the three-participant recordings. In contrast, positive FTO values were greater in the three-participant conversations. The ratio of gaps was almost the same regardless of the number of speakers. The duration of silent pauses decreased in the vicinity of TTs and over time as well. BCs occurred in a greater ratio in three-participant conversations than in two-participant ones. Furthermore, the frequency of BCs increased preceding TTs and over time as well. In addition, the occurrence of overlapping speech increased progressively over the conversations. The ratio of overlaps was greater in two-participant conversations than in three-participant recordings. However, the overlaps occurred at TTs in the majority of cases in the three-participant recordings. Although there were statistically corroborated and trend-like differences between two- and three-person conversations, significant differences could be observed between individual speakers. The impact of competition for speaking rights and of different communication strategies has also been observed as the number of speakers increases.
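A minimal R sketch of the kind of comparison described above (GLM plus Tukey post-hoc); the data frame and column names are hypothetical:

# One row per turn transition (hypothetical data).
tt <- read.csv("turn_takings.csv")
tt$n_participants <- factor(tt$n_participants)   # two- vs three-participant conversations

# Does FTO differ between two- and three-participant conversations,
# controlling for position in the conversation?
m <- glm(fto_ms ~ n_participants + position_in_conversation, data = tt)
summary(m)

# Tukey post-hoc comparison across conversation types.
TukeyHSD(aov(fto_ms ~ n_participants, data = tt))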

References

[1] Bonito, J. A., & Hollingshead, A. B. 1997. Participation in small groups. In Burleson, B. R. & Kunkel, A. W. (Eds.), Communication Yearbook 20, Sage Publications, Inc., 227–261.
[2] De Ruiter, J. P., Mitterer, H., & Enfield, N. J. 2006. Projecting the end of a speaker’s turn: a cognitive cornerstone of conversation. Language 82, 515–535.
[3] Riest, C., Jorschick, A. B., & de Ruiter, J. P. 2015. Anticipation in turn-taking: Mechanisms and information sources. Frontiers in Psychology 6(376).
[4] Stivers, T. et al. 2009. Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America 106(26), 10587-92.
[5] Boersma, P., & Weenink, D. 2008. Praat: doing phonetics by computer. Computer program.

Keywords: conversation, turn-taking, timing patterns, database

The research was supported by the National Research, Development and Innovation Office of Hungary, project No. K-128810.

Phonetic Register Coding in Formal and Informal Speech

Erika Brandt, Gediminas Schüppenhauer, Melanie Weirich and Stefanie Jannedy
Leibniz-ZAS Berlin

The effect of an addressee’s social role has not been thoroughly explored as a factor in multi-modal speech production. This includes the social constellation of the interlocutors in an interaction and the functional-situational aspects of the conversation. It has long been observed that speakers are often so socially skilled as to alter their speech production in dependence on their addressee ([1] Bell, 1984), specifically in child- or animal-directed speech ([2] Burnham et al., 2002), or in dependence on the referee ([3] Hay, Jannedy, et al., 2010). In this study, we have explored the effect of the perceived and suggested social role of the addressee on speech production while varying the function of the discourse. The study of intra-speaker variation as a field has gained traction with the Third Wave in sociolinguistics ([4] Eckert, 2012; [5] 2018), where studies focus on the speech styles of individuals as they maneuver social situations. It is our intent to understand the factors that lead to gauging a social situation which triggers specific verbal behavior, and to understand the factors that affect listeners’ person perception ([6] Osgood et al., 1957).

We created six pictures of the same individual, roughly keeping the same body posture but all varying in clothing styles (from formal jacket to T-shirt) and accessories (glasses to piercing). 86 participants rated these pictures for perceived formality. Fig. 1 shows the pictures that were rated to be the most formal (right panel) and the least formal (left panel). In parallel to these pictures, we created short choreographed – and as much as possible identical – movies in which the person in the video listens and occasionally lightly nods or slightly changes posture. The two films corresponding to the two levels of formality were used in the experiment. So far, nine speakers (German university students) engaged with the person seen on a computer screen in a presumed video-call. They were told that the other person would only listen but not speak and that they would have to do all the speaking. They were given a context (i.e. this is your new neighbor [informal] or this is your boss [formal]) and were then told the function of the discourse (i.e. report on potential activities in your town [nothing at stake] or ask for a pay raise [at stake]). To get the greatest possible contrast in formality, speakers were asked to converse with the formal persona while having something riding on the conversation (at-stake situation) or just converse with the informal persona without having anything riding on it (nothing-at-stake).

An analysis of the F0 height and F0 variation by speaker revealed an on average higher F0 in the formal (F) at-stake situation compared to the informal situation (I). Also, there was more F0 variation in the informal compared to the formal situation (see Table 1). A preliminary analysis of the differences in vowels between the formal and informal conditions reveals individual differences. There is a tendency towards an overall expanded vowel space for speaker 4 (Fig. 3, right panel) in the formal condition. Speaker 1 (left panel), though, only shows a higher F2 for [i], which may be indicative of more clarity or smiling (friendliness) and a shortening of the front tube. Thus, we are now investigating the individual differences as to how formality is expressed in fine phonetic detail. Further spectral and temporal analyses are currently being carried out.
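As a simple illustration of the per-speaker F0 analysis described above, a hedged R sketch using hypothetical token-level data (not the authors' script):

library(dplyr)

# Hypothetical per-token F0 measurements with speaker and situation labels.
f0 <- read.csv("f0_by_token.csv")

# Mean F0 and F0 variation per speaker and situation (formal at-stake vs informal
# nothing-at-stake), as summarised in Table 1.
f0 %>%
  group_by(speaker, situation) %>%
  summarise(mean_f0 = mean(f0_hz), f0_sd = sd(f0_hz), .groups = "drop")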

References:

[1] Bell, A. (1984). Language style as audience design. Language in Society 13.2, 145–204.
[2] Burnham, D., C. Kitamura & U. Vollmer-Conna (2002). What’s new, pussycat? On talking to babies and animals. Science 296.5572, 1435–1435.
[3] Hay, J., S. Jannedy & N. Mendoza-Denton (2010). Oprah and /ay/: Lexical frequency, referee design, and style. In: The Routledge Sociolinguistic Reader. M. Meyerhoff & E. Schleef (eds.), pp. 53–58.
[4] Eckert, P. (2012). Three waves of variation study: The emergence of meaning in the study of sociolinguistic variation. Annual Review of Anthropology 41.1, 87–100.
[5] Eckert, P. (2018). Meaning and Linguistic Variation: The Third Wave in Linguistics. Cambridge: Cambridge UP.
[6] Osgood, C. E., G. J. Suci & P. H. Tannenbaum (1957). The measurement of meaning. 47. University of Illinois Press.

(1) Variation in Formality
Figure 1. These two pictures triggered the most extreme ratings in a cluster analysis of personality ratings for the factor “formality” (left: informal; right: formal).

(2) Variation in F0 and F0 Mean
Subject   Mean F0 (Hz)   Variation F0 (SD)   Situation (nothing vs. at stake)
1 (m)     F=I            I>F                 recipe & lowering rent
2 (f)     F>I            I>F                 online semester & deadline extension
3 (f)     I>F            F>I                 town activities & salary increase
4 (m)     F>I            I>F                 town activities & salary increase
5 (m)     F>I            I>F                 online semester & deadline extension
6 (f)     F>I            F>I                 town activities & salary increase
7 (f)     I>F            I>F                 online semester & deadline extension
9 (f)     I>F            I>F                 recipe & lowering rent
Table 1. Mean F0 (column 2) and variation in F0 (column 3) by subject for the formal at-stake (F) vs. informal and nothing-at-stake (I) situations (column 4).

(3) German Tense Vowels in Two Registers in F1-F2 Formant Space
Figure 2. Vowel space in the at-stake/formal (red) and nothing-at-stake/informal (green) condition for speaker 01 (left) and speaker 04 (right).

Pragmatic and Acoustic-Phonetic Aspects of Motivating Speech

Jana Voße1,2, Oliver Niebuhr3, Petra Wagner1,2
1Phonetics Work Group, Bielefeld University, Germany
2CITEC, Bielefeld University, Germany
3Centre for Industrial Electronics, University of Southern Denmark, Sonderborg/DK

A motivating way of speaking is useful for many different communication situations, e.g., during teaching, in the area of sports or in clinical settings. Accordingly, motivating language has been a research topic for many years. Besides the analysis of content- and structure-related characteristics of motivating language (e.g., [1]), recent studies began to investigate the prosodic and acoustic-phonetic properties of motivating speech (e.g., [2], [3]).

The current study investigates whether motivating speech consists of different pragmatic elements and whether these elements have specific acoustic-phonetic characteristics. The framework for this investigation is the Motivating Language Theory (MT) developed by Sullivan (1988) [4], who claims that motivating language consists of three core elements: meaning-making language (MM), empathetic language (E) and direction-giving language (DG). In the present work, we interpreted these elements with respect to communication settings in the field of sports and healthy nutrition and analysed them in the same corpus as [2]. The data is grouped into two conditions of either more motivational speech (MMS) or less motivational speech (LMS) on the basis of online ratings. Each condition is represented by three monologues. The following hypotheses were investigated:
• H1: All three elements of MT occur in the MMS condition, but not or to a lesser degree in the LMS condition.
• H2: Speech pertaining to the elements of MT shows element-specific, differentiated acoustic-phonetic characteristics between MMS and LMS.
The analysed features are based on [2]: duration of elements (s), RMS intensity (Pa), intensity variation coefficient, pitch median (st rel. to 200 Hz), and pitch variation coefficient. The data was annotated manually on the level of speech acts, which, in turn, were, based on content, projected onto the labels of the three elements of MT. The acoustic-phonetic analysis was conducted automatically using Praat [5] scripts, while the statistical analysis was carried out in R [6] on z-scored normalized data per speaker.

Comparing the frequencies of the elements MM, DG, and E in the conditions LMS and MMS, we found instances of all three elements in both conditions, albeit in different distributions, χ²(2, N = 442) = 64.386, p < 0.001. Figure 1 shows a more balanced distribution across all element labels in MMS, while in LMS the frequency of element DG is clearly lower than the frequencies of elements MM and E. Therefore, this study provides support for H1.

For the statistical assessment of H2, we constructed linear mixed effect models predicting the acoustic-phonetic features as a function of condition*label with the package lme4 [7]. “Speaker” was entered as random intercept. The model for RMS intensity yields an interaction effect between LMS and E (estimate = -1.0151, SE = 0.4793, t = -2.118), leading to a lower RMS intensity relative to DG labels (see Figure 2) in the LMS condition. The models for the other acoustic-phonetic features yield no significant effects. Therefore, these findings are in line with our hypothesis for one of the five investigated acoustic-phonetic parameters, with the caveat of having been observed on a relatively limited dataset (~53 minutes of speech material), and amount to the interpretation that a comparatively higher intensity in DG elements relative to E elements may have a detrimental effect on speech-based motivation. However, further research is needed to clarify whether the data sparsity in the present study affected the obtained results.

Figure 1: Distribution of labels per condition.
Figure 2: RMS Intensity (z scores) per label and per condition.

References:

[1] Mayfield, J., & Mayfield, M. (2018). Motivating language theory: Effective leader talk in the workplace. Cham: Springer.
[2] Voße, J., & Wagner, P. (2018). Investigating the phonetic expression of successful motivation. Proceedings of ExLing 2018, Paris, 117-120. doi: 10.36505/ExLing-2018
[3] Skutella, L. V., Süssenbach, L., Pitsch, K., & Wagner, P. (2014). The prosody of motivation: First results from an indoor cycling scenario. Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2014, 209-215.
[4] Sullivan, J. J. (1988). Three roles of language in motivation theory. Academy of Management Review, 13(1), 104-115.
[5] Boersma, P. & Weenink, D. (2018). Praat: doing phonetics by computer [Computer program]. Version 6.1.09, retrieved from http://www.praat.org/
[6] R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.R-project.org/.
[7] Bates, D., Maechler, M., Bolker, B. & Walker, S. (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1-48. doi:10.18637/jss.v067.i01.
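A minimal R sketch of the H2 models described above (one row per labelled speech act; column names are hypothetical, not the authors' data):

library(lme4)
library(lmerTest)

# Hypothetical speech-act-level data with condition (MMS/LMS) and MT label (MM/E/DG).
acts <- read.csv("motivating_speech_labels.csv")

# Acoustic feature (here z-scored RMS intensity) as a function of condition
# crossed with MT label, with a random intercept for speaker.
m_int <- lmer(rms_intensity_z ~ condition * label + (1 | speaker), data = acts)
summary(m_int)   # the text reports an interaction between LMS and the E label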

DAY 2 - Poster Session C: Sound Change and Language Contact

Revisiting assumptions about stress and ‘stress-shift’ in Munster Irish

Connor McCabe1
1Trinity College, Dublin

Munster Irish (MI) is the southernmost of the three regional macrodialects of Modern Irish (Gaelic). The variety’s most noted feature is its apparently weight-sensitive lexical stress, which diverges from the simple initial stress of other Irish varieties. 20th-century descriptions of MI stress [1,2], comprising scholars’ personal impressions, have been uncritically assumed as the basis for most formal phonological analyses [3,4,5,6,7]. Limited direct investigation of acoustic data has focussed on phonetic exponents of prominence in current speakers [8,9]. The reliability of traditional stress descriptions for multiple languages is increasingly in question, given ambiguity in terminology, personal usage-preferences, and perceptual biases [10]. An accurate picture of stress location is key for higher-level prosodic descriptions, e.g. an Autosegmental-Metrical-style pitch-accent inventory.

Recorded data contemporary to foundational dialect descriptions for MI fortunately exist, albeit limited in type and quality, in the Doegen Records [11]. Digitised 1928 recordings of 13 MI speakers re-telling The Prodigal Son were selected for analysis. To my knowledge, no instrumental phonetic work has previously been undertaken on recordings from this collection. Initial inspection suggested ‘stress-shift’ was not uniformly present within and across speakers, even in straightforward, prototypical cases such as light-heavy mionnán ‘kid’ (predicted /mʲəˈnˠɑːnˠ/), with variously distributed prominence(s) frequently emerging. As such, the relationship between phonetic parameters and syllable position was investigated independently of assumptions around stress location, focussing on disyllables. There were 1906 disyllabic words (3,812 syllables). 8 of 9 possible weight-type pairings were represented, with light-light (LL) vastly predominating (1276), followed by heavy-light (HL; 342), LH (118), L[ax] (68), HH (49), H[ax] (42), [ax]L (10), and [ax]H (1). Syllables were labelled in Praat [12] and measured for maximum, mean, and range of values for F0 (Hz) and intensity (dB). Measures were standardised (z-scored) by speaker to allow for pooling and comparison.

A binomial logistic regression was fitted in R using Bayesian methods [13,14], with intensity and pitch measures with normally distributed priors as predictors of syllable position. Bayesian methods allow greater flexibility in model interpretation, and can better cope with gaps in the data [15,16]. The model was specified to use 4 chains of 10,000 iterations each with a 2,000-iteration warm-up. Trace plots and R-hat values indicated satisfactory convergence.

Results indicate lower log-odds of a second-syllable prediction as standardised mean intensity and intensity range increase, while, more weakly, log-odds increase as maximum intensity and mean and range of F0 increase. In other words, this model favours an initial-syllable prediction given greater overall intensity, while pronounced F0 activity marginally favours a final-syllable prediction. This is particularly interesting given the LL majority in the data, as one would expect all parameter exaggerations typically associated with prominence to favour the initial syllable, unlike in historically ambiguous LH or L[ax]. The results suggest that the relationship between lexical stress and phonetic prominence, especially to an English-dominant ear, may not be, nor ever have been, straightforward. A final-stress percept associated with pitch excursion on an unstressed syllable would be reminiscent of the case of native- and non-native perceptions of, e.g., Welsh stress [17], and of non-native ‘stress-deafness’ [18].

There is much more to be investigated than this preliminary examination can cover. Refinement of the statistical analysis is currently underway, and ongoing doctoral research is considering the possible role of unstable initial-syllable weight tied to P-centre variation [19] driven by initial . Whether ambiguity in prosodic cues was truly reanalysed as a stress-shift by native speakers of this variety, or whether this was a misinterpretation by early-20th-century scholars remains an open question.
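A rough brms sketch of the Bayesian binomial regression described above (column names and the exact prior specification are illustrative assumptions, not the author's script):

library(brms)

# Hypothetical syllable-level data: one row per syllable of a disyllabic word.
syl <- read.csv("doegen_disyllables.csv")

# Standardised intensity and F0 measures as predictors of syllable position
# (first vs second), with normally distributed priors on the slopes,
# 4 chains of 10,000 iterations each and a 2,000-iteration warm-up.
fit <- brm(is_second_syllable ~ int_mean_z + int_max_z + int_range_z +
             f0_mean_z + f0_range_z,
           data = syl, family = bernoulli(),
           prior = set_prior("normal(0, 1)", class = "b"),
           chains = 4, iter = 10000, warmup = 2000)

summary(fit)   # inspect R-hat values; plot(fit) for trace plots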

Figure 1: Joint posterior distributions (left) and 95% credible intervals (right) of the slope of standardised acoustic parameters used as predictors of log-likelihood of a second-syllable prediction in this model (x-axis in all graphs) for a given constituent of a disyllabic word.

References

[1] O’Rahilly, T.F. 1932. Irish Dialects Past & Present. Dublin: Brown & Nolan.
[2] Ó Sé, D. 1989. “Contributions to the study of word stress in Irish”. Ériu 40, 147-178.
[3] Doherty, C. 1991. “Munster Irish stress” in Proceedings of the 10th West Coast Conference on Formal Linguistics, 115-126.
[4] Green, A.D. 1997. The Prosodic Structure of Irish, Scots Gaelic, and Manx. PhD dissertation, Cornell University.
[5] Bennett, R. 2012. Foot-conditioned phonotactics and prosodic constituency. PhD dissertation, University of California-Santa Cruz.
[6] Iosad, P. 2013. “Head-dependent asymmetries in Munster Irish prosody”. Nordlyd 40(1), 66-107.
[7] Windsor, J.W. 2017. From phonology to syntax – and back again: Hierarchical structure in Irish and Blackfoot. PhD dissertation, University of Calgary.
[8] Windsor, J.W., S. Coward & D. Flynn. 2018. “Disentangling Stress and Pitch Accent in Munster Irish” in Proceedings of the 35th West Coast Conference on Formal Linguistics.
[9] Kukhto, A. 2019. “Exceptional stress and reduced vowels in Munster Irish” in Proceedings of the 19th International Congress of Phonetic Sciences, 1565-1569.
[10] de Lacy, P. 2014. “Evaluating evidence for stress systems” in H. van der Hulst (ed.) Word Stress: Theoretical and Typological Issues. Cambridge: CUP, 149-193.
[11] The Doegen Records Web Project. URL: http://www.doegen.ie
[12] Boersma, P. & D. Weenink. 2020. Praat: doing phonetics by computer [Computer programme]. Version 6.1.34. http://www.praat.org/
[13] R Core Team. 2020. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
[14] Bürkner, P. 2018. “Advanced Bayesian Multilevel Modeling with the R package brms”. The R Journal 10(1), 395-411.
[15] Nicenboim, B. & S. Vasishth. 2016. “Statistical models for linguistic research: Foundational Ideas – Part II”. Language & Linguistics Compass 10(11), 591-613.
[16] Vasishth, S., B. Nicenboim, M.E. Beckman, F. Li & E.J. Kong. 2018. “Bayesian data analysis in the phonetic sciences: A tutorial introduction”. Journal of Phonetics 71, 147-161.
[17] Williams, B. 1985. “Pitch and duration in Welsh stress perception: the implications for intonation”. Journal of Phonetics 13, 381-406.
[18] Peperkamp, S., I. Vendelin & E. Dupoux. 2010. “Perception of predictable stress: A cross-linguistic investigation”. Journal of Phonetics 38, 422-430.
[19] Ryan, K. 2014. “Onsets contribute to syllable weight: statistical evidence from stress and meter”. Language 90(2), 309-341.

“Triggered” phonetic interaction: incremental shifts in English migrants to Austria within a code-switched paradigm

Ineke Mennen1, Robert Mayr2, Ulrich Reubold1, Sanne Ditewig1, Kerstin Endes1, and Sarah Melker1
1University of Graz, 2Cardiff Metropolitan University

While much research has examined phonetic interaction in L1 and L2 speech in contexts where bilinguals’ languages are carefully separated to avoid dual language activation [1], comparatively little research has investigated phonetic interaction in code-switched (CS) speech, where both languages are maximally activated (but see 2-9). This study deliberately uses a code-switched paradigm to maximally trigger language interaction [8], in order to study its influence on the occurrence and direction of phonetic influences. We present data from 25 (11 female, 14 male, age range: 24-71, all educated to tertiary level) native speakers of Standard Southern British English (SSBE) prior to post-pubescent emigration to Austria, where they have lived for between 2 and 37 years (henceforth EMIGRANT), and 10 monolingual SSBE controls (henceforth CONTROL) living in the UK. EMIGRANT participants read L1 (English) sentences with L2 (German) items inserted, containing segments that were potential sources of interaction (/l-ɫ/, /v-w/, /ʃt(ʁ)-st(ɹ)/; henceforth called sound pairs). These segments were varied to occur both before and after the German item, in order to determine whether the phonetic influence is progressive or regressive, e.g.

/v-w/: “I like Wurstel with mustard”, “He wants a Wiener Schnitzel”
/st-ʃt/: “She walked to Stadteck station”, “They stayed at Stiftung”

In order to capture language-mode (i.e. CS vs non-CS) and speaker-group (EMIGRANT vs. CONTROL) related differences, we additionally recorded sounds of the same types, produced by both speaker groups, but taken from non-CS (L1) contexts. We only report here on (a) comparisons within the sound pairs in non-CS speech (in order to capture potential attrition), and (b) potential shifts due to L2-to-L1 influences in the productions of the three sounds /s,w,ɫ/ in English words in CS speech, as we expected them to shift towards their counterparts /ʃ,v,l/ in the inserted German words in the same sentences. The sound productions were quantified by the second formant frequency F2 (for /v-w/), the first spectral moment (for the fricatives in /ʃt(ʁ)-st(ɹ)/), or the distance between the first and the second formant frequency (for /l-ɫ/).

Results of Linear Mixed Effects models and further Estimated Marginal Means analyses reveal that for (a) there are no effects of speaker group (EMIGRANT vs. CONTROL) in non-CS speech, and for (b) there are only subtle and insignificant shifts of /s,w,ɫ/ towards /ʃ,v,l/ on the group level. However, some individual speakers did show significant shifts (13/25 [s>ʃ], 5/25 [w>v], and 3/25 [ɫ>l] shifts). The shifts are fairly incremental, given the wide distributions and the CS sounds falling mostly between the two categories of sound pairs (cf. Figure 1). No effect was found for the relative position of the segments (progressive/regressive influence). Finally, multiple regression tests were carried out to test the impact of the predictor variables age of emigration, length of residence, amount of L1 use, amount of L2 use, contact with L1 in settings where code-mixing is likely, and contact with L1 in settings where code-mixing is not likely on the Estimated Marginal Means of the individual shift sizes. The variable amount of L2 use was the only variable that predicted the shift sizes significantly (adjusted R² = 0.13, p < 0.05), but only for the shifts of non-CS vs. CS /w/ (the more L2 use, the more /w/ shifts to /v/). None of the other predictor variables were found to explain the individual differences.
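A minimal R sketch of the final regression step described above (one row per speaker, with hypothetical column names for the estimated marginal mean shift sizes and background predictors):

# Hypothetical per-speaker data: individual shift sizes per sound pair.
shifts <- read.csv("emm_shift_sizes.csv")

# Multiple regression of /w/-to-/v/ shift size on the background predictors.
m_w <- lm(shift_size ~ age_of_emigration + length_of_residence +
            l1_use + l2_use + l1_contact_mixing + l1_contact_no_mixing,
          data = subset(shifts, sound_pair == "w_v"))
summary(m_w)
summary(m_w)$adj.r.squared   # adjusted R^2, as reported for /w/ in the text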
The variable amount of L2 use was the only variable that predicted the shift sizes significantly (adjusted R² = 0.13, p < 0.05), but only for the shifts of non-CS vs. CS /w/ (the more L2 use, the more /w/ shifts towards /v/). None of the other predictor variables were found to explain the individual differences.

References
[1] Grosjean, F. (2001). The bilingual's language modes. In J. Nicol (Ed.), One mind, two languages: Bilingual language processing (pp. 1-22). Malden, MA: Blackwell Publishers Inc.
[2] Antoniou, M., Best, C., Tyler, M., & Kroos, C. (2011). Inter-language interference in VOT production by L2-dominant bilinguals: Asymmetries in phonetic code-switching. Journal of Phonetics, 39, 558-570.
[3] Bullock, B., & Toribio, A. J. (2009). Trying to hit a moving target: On the sociophonetics of code-switching. In L. Isurin, D. Winford, & K. De Bot (Eds.), Multidisciplinary approaches to code-switching [Studies in Bilingualism 41] (pp. 189-206). Amsterdam: John Benjamins.
[4] Bullock, B., Toribio, A. J., Gonzalez, V., & Dalola, A. (2006). Language dominance and performance outcomes in bilingual pronunciation. In M. O'Brien, C. Shea, & J. Archibald (Eds.), Proceedings of the 8th Generative Approaches to Second Language Acquisition Conference (GASLA 2006) (pp. 9-16). Somerville, MA: Cascadilla Proceedings Project.
[5] Goldrick, M., Runnqvist, E., & Costa, A. (2014). Language switching makes pronunciation less native-like. Psychological Science, 25(4), 1031-1036.
[6] Muldner, K., Hoiting, L., Sanger, L., Blumenfeld, L., & Toivonen, I. (2019). The phonetics of code-switched vowels. International Journal of Bilingualism, 23(1), 37-52.
[7] Olson, D. J. (2012). The phonetics of insertional code-switching: Suprasegmental analysis and a case for hyper-articulation. Linguistic Approaches to Bilingualism, 2(4), 439-457.
[8] Olson, D. J. (2013). Bilingual language switching and selection at the phonetic level: Asymmetrical transfer in VOT production. Journal of Phonetics, 41, 407-420.
[9] Piccinini, P., & Arvaniti, A. (2015). Voice onset time in Spanish-English spontaneous code-switching. Journal of Phonetics, 52, 121-137.

Dialect in the City: Asymmetrical retention of traditional features of Gheg Albanian

Josiane Riverin-Coutlée1, Conceição Cunha1, Enkeleida Kapia1,2 & Jonathan Harrington1
1Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University Munich, 2Academy of Albanological Sciences

This study explores how the phonological system of the Gheg dialect spoken in and around Tirana, the capital of Albania, has been affected in heterogeneous and homogeneous communities as a result of language and urban planning policies during and after communism. Albanian is a lesser-studied language of the Indo-European family comprising two major dialects, Tosk and Gheg [1]. In 1972, a primarily Tosk-based standard variety (henceforth: the Standard) was introduced for use in education, writing, public speaking, and the media [2]. In addition, due to national policies of interregional allocation of the labor force in the context of industrial decentralization [3], major cities like Tirana experienced an influx of internal migrants from all around the country. Consequently, the Gheg dialect of the locally-born residents of Tirana was relegated to oral use in the private sphere and came into intense contact with Tosk and the other varieties of the internal migrants. By contrast, nearby rural areas were virtually unaffected by internal migration, and speakers there were exposed to the Standard mainly through education and the media. The issue considered here is the extent to which there is now divergence between city (Tirana) Gheg and rural Gheg as a consequence of intense pressure from the Standard and Tosk on the former, but not the latter [4].

To address this question, we selected two distinguishing features of Gheg: 1) morpho-phonological vocalic length contrasts present in Gheg, but not in Tosk/Standard [5], as in Example 1; and 2) monophthongal realizations in contexts where Tosk/Standard have phonological oppositions between diphthongs and monophthongs [6], as in Example 2.

Example (1)
Orthographic form                        Spoken
                                         Tosk/Standard   Gheg
miu 'the mouse'                          [miu]           [miu]
një mi 'a mouse'                         [mi]            [miː]
lidh 'tie' (imperative)                  [lið]           [lið]
(ti) lidh '(you) tie' (indicative)       [lið]           [liːð]

Example (2)
Orthographic form                        Spoken
                                         Tosk/Standard   Gheg
thua 'claw'                              [θua]           [θu]
hu 'picket'                              [hu]            [hu]

Two groups of Gheg-speaking adults participated in this study: 14 city Gheg speakers living in Tirana and 8 rural Gheg speakers living in the nearby village of Bërzhitë. They took part in a picture-naming task and produced 32 words 4 times. These include, for the length condition, 19 and 6 words that are distinguished by long and short vowels respectively in Gheg but not in Tosk/Standard, and 7 words that are monophthongal in Gheg but diphthongal in Tosk/Standard. Vowel duration was measured after hand-correction of the segment boundaries, then log-transformed and analyzed with a linear mixed effects model [7]. In order to test for monophthongization, two separate generalized additive mixed models (GAMMs) were fitted to the trajectories of F1 and F2 (11 measurement points, unnormalized formant frequencies; see Statistics 2) [8].
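The two models reported under Statistics 1 and 2 below could be fitted roughly as follows; this is an illustrative sketch assuming data frames dur_data and traj_data with the column names used in the abstract, not the authors' own script:

library(lme4)   # linear mixed effects models [7]
library(mgcv)   # generalized additive mixed models [8]

# Duration analysis (Statistics 1): log vowel duration by Length (short vs. long)
m_dur <- lmer(log(duration) ~ Length + (Group | Word) + (Length | Speaker),
              data = dur_data)

# Monophthongization (Statistics 2): one GAMM per formant, fitted to the
# 11-point trajectories with factor smooths for Speaker and Word
m_f <- bam(F ~ s(time_pts, k = 11) + Group_Diph + s(time_pts, by = Group_Diph) +
             s(time_pts, Speaker, bs = "fs", m = 1) +
             s(time_pts, Word, bs = "fs", m = 1),
           data = traj_data)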
The results for length (Figure 1) show a clear and significant distinction in duration between short and long vowels, but no effect of speaker group (Statistics 1), meaning that city Gheg and rural Gheg speakers both preserve the length contrasts. The results for monophthongization suggest a greater degree of diphthongization for city Gheg than for rural Gheg speakers (Figure 2). It is possible that daily interactions with speakers of Tosk and the Standard have led city Gheg to change, while exposure to the Standard only through the media and education has not caused rural Gheg to change. In city Gheg, the existing length contrast has not been lost, while a new contrast between mono- and diphthongs has been acquired; this makes the current phonological system of city Gheg more complex than that of rural Gheg, and of Tosk/Standard. Whether the length contrast will eventually be lost is an open question, but this challenges the idea that old features are lost before new ones are acquired, as hypothesized by [9].

Statistics 1
Length*Group: F(1, 30.92) = 0.31, p > .05
Length: F(1, 27.47) = 26.19, p < .001
Group: F(1, 22.52) = 3.56, p > .05
Best model found (in R syntax):
lmer(log(duration) ~ Length + (Group|Word) + (Length|Speaker))

Statistics 2
Best model found (in R syntax):
bam(F ~ s(time_pts, k=11) + Group_Diph + s(time_pts, by=Group_Diph) + s(time_pts, Speaker, bs="fs", m=1) + s(time_pts, Word, bs="fs", m=1))

Figure 1. Duration of short and long vowels
Figure 2. Estimated F1 and F2 trajectories for Tosk/Standard diphthongs /ye/, /ue/ and /ua/ in city (dashed) and village (solid) speakers

References
[1] Gjinari, J., Beci, B., Shkurtaj, Gj., Gosturani, Xh. & Dodi, A. (2007). Atlasi Dialektologjik i Gjuhës Shqipe. Università degli studi di Napoli L'Orientale, Napoli.
[2] Ismajli, R. (2005). Drejtshkrimet e shqipes: Studim dhe dokumente. Akademia e Shkencave dhe e Arteve e Kosovës, Prishtinë.
[3] Sjoberg, O. (1992). Underurbanisation and the Zero Urban Growth Hypothesis: Diverted migration in Albania. Geografiska Annaler. Series B, Human Geography 74(1), 3-19.
[4] Trudgill, P. (1986). Dialects in Contact. Blackwell, Oxford.
[5] Beci, B. (1978). Rreth tipareve karakteristike të dy dialekteve të shqipes. Studime Filologjike 15(4), 53-87.
[6] Gjinari, J. (1966). Sprovë për një ndarje dialektore të gjuhës shqipe. Studime Filologjike 3(4), 103-105.
[7] Bates, D., Maechler, M., Bolker, B. & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67(1), 1-48.
[8] Wood, S. N. (2011). Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1), 3-36.
[9] Chambers, J. K. (1992). Dialect acquisition. Language 68(4), 673-705.

Perceptual equivalence between prevoicing and pitch: implications for sound change

Jiayin Gao and James Kirby
University of Edinburgh

Coarticulation constitutes one source of variation in speech which may give rise to sound change. One prominent sound change model [1] postulates that at least some sound changes, such as the phonologization of contrastive vowel nasalization, arise from the reinterpretation of an acoustic cue as being associated with its coarticulatory effect. By hypothesis, this reinterpretation is possible only when, at a previous stage, the source and the effect are perceived as equivalent. For example, when hearing a ṼN sequence, listeners are more sensitive to the acoustic consequences of a lowered velum gesture than to its precise temporal alignment: that is, they may perceive the occurrence of nasality in the same way regardless of whether it occurs during the Ṽ or the N. Perceptual equivalence may provide the basis for a listener to reinterpret the intended signal, potentially giving rise to sound change.

While nasalization can be characterized by a single gesture of velum lowering giving rise to a constellation of acoustic cues over a stable temporal interval, changes such as those affecting onset voicing and f0 involve more complex gestural constellations and multiple acoustic cues in successive temporal intervals. In this study, we examine whether these differences affect the applicability of perceptual equivalence as an explanation of sound change. Specifically, we examine two cues to the perception of voicing (VOT and onset f0) in French, a language with the kind of "true voicing" laryngeal contrast found in languages like Afrikaans prior to sound change taking place [2]. Previous work has shown that VOT and f0 trade perceptually in at least some regions of the stimulus space [3]; however, if source and effect are perceptually equivalent, we might expect them to trade across the stimulus space [1]. To test this idea, 92 native French listeners completed an AX discrimination task to assess how they discriminated between stimuli differing in the way the two cues varied: congruent variation (ADDitive: e.g., -40 ms/130 Hz vs. -100 ms/90 Hz), incongruent variation (CANCELing: e.g., -40 ms/90 Hz vs. -100 ms/130 Hz), and variation of only one cue with the other held constant. To mitigate participant fatigue, we restricted the VOT continuum to the critical continuum steps: half of the stimulus pairs were unambiguously within the voiced category (-40 vs. -100 ms VOT), while for the other half, one stimulus was at the voiced-voiceless boundary (0 vs. -60 ms VOT). Following [1], if perceptual equivalence obtains, we should find that ADD > CANCEL, with ADD > VOT > CANCEL if perceptual equivalence is especially strong, i.e. if an incongruent pitch cue cancels the effect of longer negative VOT. As seen in Fig. 1, discrimination accuracy (as measured by d-prime score) depends on the ambiguity of VOT. Evidence for perceptual equivalence (ADD > CANCEL) was found only when VOT was ambiguous. Overall discrimination accuracy was lower for a majority of listeners when stimuli were drawn from the unambiguous VOT range, suggesting poorer within-category discrimination.
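The d-prime scores referred to here can be computed from hit and false-alarm rates in one standard way as sketched below in R (illustrative only, with hypothetical counts; the authors do not publish their analysis code):

# AX discrimination: a "hit" is responding "different" to a different pair,
# a "false alarm" is responding "different" to a same pair
dprime <- function(hits, misses, fas, crs) {
  # log-linear correction avoids infinite z-scores at rates of 0 or 1
  h  <- (hits + 0.5) / (hits + misses + 1)
  fa <- (fas + 0.5) / (fas + crs + 1)
  qnorm(h) - qnorm(fa)
}

# toy counts, not data from the study:
dprime(hits = 38, misses = 10, fas = 6, crs = 42)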
For stimuli with ambiguous VOTs, higher d-prime scores in the PITCH condition weakly predict the difference between the ADDitive and CANCELing conditions (Fig. 2, left), suggesting that listeners who are more accurate at discriminating stimuli differing only in pitch substitute pitch for VOT only when VOT is ambiguous. In summary, our study finds some evidence for perceptual equivalence between VOT and onset f0, but not across the entire stimulus space (cf. [4]). We suggest this is because the so-called trading relation observed between these two cues is not solely a function of gestural alignment in the way that nasalization is, and that within-category perceptual equivalence may not be a precondition for cue reweighting in all sound changes. In the specific case of the phonologization of onset f0, prior ambiguity in terms of the primary cue/coarticulatory source appears necessary for cue reweighting to occur; a plausible trigger of this ambiguity is the cross-linguistically widespread failure to circumvent the aerodynamic voicing constraint [5, 6].

Figure 1. Discrimination (d-prime) by condition and ambiguity of VOT.
Figure 2. Correlation of d-prime between the pitch-only condition and the differential between ADDITIVE and CANCELING conditions for ambiguous (L) and unambiguous (R) VOT.

References
[1] Beddor, P. S. (2009). A coarticulatory path to sound change. Language, 85(4), 785-821.
[2] Coetzee, A. W., Beddor, P. S., Shedden, K., Styler, W., & Wissing, D. (2018). Plosive voicing in Afrikaans: Differential cue weighting and tonogenesis. Journal of Phonetics, 66, 185-216.
[3] Whalen, D. H., Abramson, A. S., Lisker, L. & Mody, M. (1990). Gradient effects of fundamental frequency on stop consonant voicing judgments. Phonetica, 47, 36-49.
[4] Kingston, J., Macmillan, N. A., Dickey, L. W., Thorburn, R. and Bartels, C. (1997). Integrality in the perception of tongue root position and voice quality in vowels. The Journal of the Acoustical Society of America, 101(3), 1696-1709.
[5] Ohala, J. J. (1983). The origin of sound patterns in vocal tract constraints. In: MacNeilage, P. F. (ed.), The Production of Speech, 189-216. Springer, New York.
[6] Ohala, J. J. (2011). Accommodation to the aerodynamic voicing constraint and its phonological relevance. Proc. ICPhS XVII, Hong Kong, 64-67.

Sound change and rhythm in Altiplateau Mexican Spanish

Gillian Marchini and Michael Ramsammy
University of Edinburgh

The question of vowel compression in Spanish is much debated. In this paper, we present evidence from Altiplateau Mexican Spanish that phonologically constrained, coda-driven vowel compression is a variety-specific phenomenon in Spanish. Although once considered a cross-linguistic effect ([1]), language-specific research has shown that vowels in closed syllables are not universally shorter than those in open syllables ([2]). Particular to Spanish, Aldrich and Simonet ([3]) showed that onset complexity shortens vowels, ultimately concluding that "we are aware of no published finding suggesting that, in Spanish, coda presence (or complexity) drives compensatory vowel shortening" ([3]:268). In light of these compression effects, further questions are raised concerning the classification of Spanish as a syllable-timed language with relatively uniform vowel duration.

Although coda-induced shortening does not emerge in cross-dialectal findings ([3]), our analysis of Altiplateau Mexican Spanish reveals that coda-induced compression is potentially a variety-specific phenomenon. In doing so, it further concurs with conclusions concerning the inaccuracy of Spanish's rhythm classification. Our variable context was defined as the mid and low stressed and unstressed vowels in word-final syllables in elicited, continuously read speech. Although the materials varied slightly from Aldrich & Simonet's research (where elicited sentences were used), the experimental design and variable context parallel previous methods. The speech of five female speakers of Altiplateau Mexican Spanish was segmented using the Montreal Forced Aligner and manually checked following well-attested segmentation protocol ([4]) (see Figure 1 for an example).
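The abstract does not state which normalisation procedure was applied to the formant-frequency measurements; one widely used option for this kind of cross-speaker comparison is Lobanov (z-score) normalisation, sketched below purely for illustration (data frame and column names are hypothetical):

# Lobanov normalisation: z-score each formant within speaker, so that vowel
# quality measurements can be compared across the five speakers
library(dplyr)

vowels_norm <- vowels %>%
  group_by(speaker) %>%
  mutate(F1_z = (F1 - mean(F1)) / sd(F1),
         F2_z = (F2 - mean(F2)) / sd(F2)) %>%
  ungroup()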
Statistical analysis of normalised formant-frequency, duration and intensity measurements reveals that all vowels are centralised and shortened in closed syllables, but these effects are most extreme for unstressed vowels. Although onset-induced shortening is noted in open syllables, no such effects are noted in closed syllables, suggesting that onset-induced compression is either overridden or imperceptible when compared to that of the coda. Similarly, no phonetic correlates are noted: coda manner impacts negligibly on the extent of shortening. Intensity varies negligibly across syllable structure and stress.

Results therefore suggest that compression does occur in Altiplateau Mexican Spanish but is phonologically constrained: shortening and centralisation are coda-induced and most extreme on unstressed vowels. As such, we conclude that coda-induced compression is a feature of Spanish, although it is variety-specific. The existence of these compression effects supports claims that classifications of Spanish as a prototypical syllable-timed language lacking reduction are inaccurate ([3], [5]). They further suggest that, with regard to compression, dialect-specific phonetic-phonological interactions allow certain varieties of Spanish to behave in a way typically associated with stress-timed languages, i.e., English. It is therefore suggestive that dichotomies between syllable- and stress-timed languages are not as fixed as previously claimed and that languages, and varieties within them, exist along a rhythm continuum ([3]; [5]; [6]). What is more, the combination of both qualitative and quantitative changes for compression aligns with claims that phonological rhythm does not arise solely from timing, but rather from the marking of prominence. Although timing may contribute to this, other acoustic properties, e.g., formant frequency, may interact to indicate stress ([7], [8], [9]). Nevertheless, in order to substantiate these claims, it is necessary to examine compression across varieties in order to pinpoint which dialect-specific phonologies allow variety-specific phonetic realisations. Notwithstanding, this work is a small step towards highlighting the importance of a dialect-specific approach to analysing compression, rhythm and prominence within a language.

Figure 1. Waveform and phonetic transcription of the sentence 'ataques', uttered by a speaker of Altiplateau Mexican Spanish.

References
[1] Maddieson, I. (1985). Phonetic Cues for Syllabification. In: V. A. Fromkin (Ed.), Phonetic Linguistics: Essays in honor of Peter Ladefoged, 203-221. New York, NY: Academic Press.
[2] Katz, J. (2012). Compression Effects in English. Journal of Phonetics 40, 390-402.
[3] Aldrich, A. C. & Simonet, M. (2019). Duration of syllable nuclei in Spanish. Studies in Hispanic and Lusophone Linguistics 12(2), 247-280.
[4] Turk, A., Nakai, S. & Sugahara, M. (2006). Acoustic Segment Duration in Prosodic Research: A Practical Guide.
[5] Arvaniti, A. (2012). The usefulness of metrics in the quantification of speech rhythm. Journal of Phonetics 40, 351-373.
[6] Dauer, R. M. (1983). Stress-timing and syllable-timing reanalyzed. Journal of Phonetics 11, 51-62.
[7] Turk, A. & Shattuck-Hufnagel, S. (2013). What is speech rhythm? A commentary inspired by Arvaniti & Rodriquez, Krivokapić, and Goswami & Leong. Laboratory Phonology 4(1), 93-118.
[8] Frota, S. & Vigário, M. (2001). On the correlates of rhythmic distinctions: The European/Brazilian Portuguese case. Probus 13, 247-275.
[9] Smith, R. & Rathcke, T. (2020). Dialectal phonology constrains the phonetics of prominence. Journal of Phonetics 78, 1-17.

DAY 2 - Poster Session A
SLA: Acquisition of vowels & consonants

How Spanish natives realize the English vowel /ʌ/ in Anglicisms

Linda Bäumler1
1Universität Wien

Previous research [1, 2] on borrowings revealed high inter-speaker variability in the realization of phonemes that do not exist in the speakers' own phonological system. An example is the use of Anglicisms containing /ʌ/ (e.g. in brunch or bluff) by Spanish natives. The English vowel /ʌ/ differs unambiguously from any vowel of the Spanish phoneme system [3]. Regarding the English influence on Spanish, previous literature suggests a higher impact in Hispano-America than in Spain, due to geographic proximity to the US [4]. However, as globalization results in increased mobility and overwhelming availability of English media, geographic proximity might no longer be a relevant mediator of the impact of English on Spanish.

To analyze variation in /ʌ/ among Spanish natives, I analyzed speech samples of 71 Spanish natives from Mexico and Spain. They read a word list containing 100 Anglicisms apart from fillers. Among these were six Anglicisms containing /ʌ/ when pronounced in the source language English: the four proper Anglicisms club, brunch, bluff and puzzle, and the two derivates brunchear 'to brunch' and blufista 'bluffer'. I conducted an acoustic analysis with PRAAT [5], measuring the first and second formant frequencies in order to reveal the distribution of the frequency values. To allow a comparison between the different forms of realization, I used the k-means clustering method in R base [6] to form two groups of speech samples. Considering the gender differences in vowel frequencies, clustering was conducted separately for males and females. Subsequently, a mixed effects model was calculated to reveal significant influences on pronunciation according to the speaker's country of origin and further socio-demographic factors. Additionally, two-sample t-tests were used to analyze mean group differences in F1 and F2 by country and gender.
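Since clustering was run separately for each gender on the first two formants, the grouping step could look roughly like this in base R (illustration only; object and column names are hypothetical):

# k-means with k = 2 on F1/F2, run separately for female and male speakers
set.seed(1)
km_f <- kmeans(subset(tokens, gender == "f")[, c("F1", "F2")], centers = 2)
km_m <- kmeans(subset(tokens, gender == "m")[, c("F1", "F2")], centers = 2)

# cluster membership (group 1 vs. group 2) for each token
table(km_f$cluster)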
The two groups generated by the clustering method represented two ways of vowel realization (cf. Figure 1): the first group comprised speech samples in which speakers tried to imitate the English model, e.g. [blʌf] or [blaf] (group 1); the second group consisted of speech samples with formant frequencies corresponding to traditional Spanish /u/ (group 2), meaning that their realization was in line with Spanish grapheme-phoneme correspondence. The mixed effects model did not reveal a significant influence of the speaker's origin on the realization, whereas proficiency in English did have a significant effect (p < 0.05). Table 1 shows the mean formant frequency values of the different groups. For males in group 1, the frequencies of F1 and F2 were significantly higher in the speech samples of Spaniards compared to those of Mexicans (p = 0.01 for F1, p = 0.01 for F2), indicating that the pronunciation of the vowel by the Spaniards is more open and fronted. Within the females, only the F2 values showed a significant difference between Spaniards and Mexicans (p < 0.001), indicating that the pronunciation of the Spanish females is more fronted. Contrarily, when comparing formant frequencies in group 2, no significant differences were found, with the only exception being the F1 frequencies of male Spaniards and Mexicans (p = 0.02). The fact that no significant differences were found in the other subgroups of group 2 is not surprising, as no greater regional differences in the articulation of vowels in Spanish have been described [9]. Overall, compared to the American original, the vowels pronounced by both Mexicans and Spaniards are more fronted and more closed. These preliminary results suggest that geographic proximity might not be as important as stated in previous literature, whereas factors associated with globalization might be more relevant regarding the English influence on Spanish.

Figure 1: Groups generated by the clustering method (plots were created using the phonR 1.0.7 package by McCloy 2016)
Table 1: Mean values in Hertz of F1 and F2 in Group 1 (imitation of the English model) and Group 2 (Spanish grapheme-phoneme correspondence)

References
[1] Rodríguez González, F. 2018. Aspectos ortográficos del anglicismo. Lebende Sprachen 63(2), 350-373.
[2] Meisenburg, T. 1992. Graphische und phonische Integration von Fremdwörtern am Beispiel des Spanischen. Zeitschrift für Sprachwissenschaft 11(1), 47-67.
[3] Flege, J. & Wayland, R. 2019. The role of input in native Spanish late learners' production and perception of English phonetic segments. Journal of Second Language Studies 2(1), 1-44.
[4] Rodríguez González, F. 1999. Anglicisms in contemporary Spanish, an overview. Atlantis XXI, 103-39.
[5] Boersma, P., & Weenink, D. 2020. Praat: doing phonetics by computer. Computer program.
[6] R Core Team (2013). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/.
[7] Bradlow, A. 1995. A comparative acoustic study of English and Spanish vowels. Journal of the Acoustical Society of America 97(3), 1916-1924.
[8] Yang, B. & Whalen, D. 2015. Perception and Production of English Vowels by American Males and Females. Australian Journal of Linguistics 35(2), 121-141.
[9] Hualde, J. 2014. Los sonidos del español. Cambridge: Cambridge University Press.

Acoustic Analysis of Spanish Front Vowels Produced by Chinese Speakers in Spontaneous Speech

Yongfa Cao1 and Agnès Rius-Escudé2
1Universitat de Barcelona, 2Universitat de Barcelona

Currently, the relationship between the Spanish-speaking world and China is experiencing a period of high momentum, driven by China's economic growth and its greater global insertion (Esteban, 2018; Roldán et al., 2016), leading to a specific phenomenon called the "Chinese fever for studying Spanish" (Díez, 2013). Numerous studies and models attempt to describe and analyse the acoustic characterisation of Spanish, but only a few focus on that of Spanish L2 by Chinese speakers (Cao & Rius-Escudé, 2019; Igarreta Fernández, 2019; Pérez García, 2019; Jiménez & Tang, 2018). There is therefore a research gap in the field of the vocalism of Spanish L2 by Chinese speakers.

This poster presents the results of an investigation based on the acoustic characterisation of the front vowels /i/ and /e/ of Spanish L2 by Chinese speakers in spontaneous speech. The study set out to create an acoustic description of each vowel and to identify any significant differences between men and women, between stressed and unstressed vowels, and between the different levels of Spanish of the Chinese speakers. The results have also been compared with those of native Peninsular Spanish speakers in spontaneous speech.

For the experiment, 36 Chinese speakers were recorded (18 men and 18 women), aged between 19 and 30, from 16 provinces and 13 universities in China. They all study Spanish as a foreign language but have different levels according to the CEFR (Council of Europe, 2001) (25% basic users, 44% independent users, and 31% proficient users), with learning phases ranging from 9 months to 12 years. From this corpus (434 minutes), a total of 1489 vowel sounds were obtained: 726 /i/ and 763 /e/, both stressed and unstressed. From the corpus of native Peninsular Spanish speakers (Alfonso Lozano, 2010), a total of 420 vowel sounds were obtained: 175 /i/ and 245 /e/, both stressed and unstressed. First and second formant frequencies (F1 and F2) were examined for these vowels using the Praat programme (Boersma & Weenink, 2018).
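The group comparisons described below (Student's t-tests and Fisher's LSD across proficiency levels) could be run along these lines in R; this is a sketch with hypothetical column names, not the authors' script:

# t-test on F1 of /i/ between male and female speakers
t.test(F1 ~ gender, data = subset(vowels, vowel == "i"))

# Fisher's LSD is equivalent to unadjusted pairwise t-tests following an ANOVA
summary(aov(F1 ~ cefr_level, data = subset(vowels, vowel == "i")))
pairwise.t.test(subset(vowels, vowel == "i")$F1,
                subset(vowels, vowel == "i")$cefr_level,
                p.adjust.method = "none")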

350-373. /e/, there are differences in 1F by men and in F2 by women. These results coincide with [2] Meisenburg, T. 1992. Graphische und phonische Integration von Fremdwörtern am Beispiel those of the vowels produced by native Spanish speakers (Table 2). The results of Fisher’s des Spanischen. Zeitschrift für Sprachwissenschaft 11(1), 47-67. LSD indicate that there are differences in the Spanish front vowels /i/ and /e/ according to the [3] Flege, J. & Wayland, R. 2019. The role of input in native Spanish Late learners’ production and level of Spanish of the Chinese speakers. In the comparative analysis between the results perception of English phonetic segments. Journal of Second Language Sudies 2(1), 1-44. of Chinese speakers and those of Spanish speakers, our research found the following: [4] Rodríguez González, F. 1999. Anglicisms in contemporary Spanish, an overview. Atlantis XXI, male Chinese speakers have no difficulty in articulating the stressed vowels /i/, but they 103-39. articulate the unstressed vowels /i/ with a smaller mouth opening; female Chinese speakers [5] Boersma, P., & Weenink, D. 2020. Praat: doing phonetics by computer. Computer program. articulate the stressed vowels /i/ with a smaller mouth opening, and they articulate the [6] R Core Team (2013). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/. vowels /i/, both stressed and unstressed, with their tongue more towards the front of the [7] Bradlow, A. 1995. A comparative acoustic study of English and Spanish vowels. Journal of the mouth than native Spanish speakers; male Chinese speakers articulate the vowels /e/, both Acoustical Society of America 97(3), 1916-1924. stressed and unstressed, by advancing the tongue more towards the front of the mouth; [8] Yang, B. & Whalen, D. 2015. Perception and Production of English Vowels by American Males female Chinese speakers have no problem articulating the stressed vowels /e/, however, and Females. Australian Journal of Linguistics 35(2), 121-141. they articulate the unstressed /e/ with a greater mouth opening and with their tongue [9] Hualde, J. 2014. Los sonidos del español. Cambridge: Cambridge University Press. more towards the front of the mouth than native Spanish speakers. The results allow us to obtain an approximation of the features of the front vowels of Spanish

210 211 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 POSTER SESSIONS L2 by Chinese speakers in a context of spontaneous speech, with which we can propose Acoustic Analysis of English Vowel Production by L1 and L2 (Hijazi Arabic) Speakers teaching-learning activities that will enable Chinese learners to improve the pronunciation of these vowels. Wael Almurashi, Jalal Al-Tamimi and Ghada Khattab Newcastle University, United Kingdom

Dynamic cues—in particular, vowel-inherent spectral changes (VISC)—have been shown to play a major role in vowel identification and provide essential information not only for diphthongs but for monophthongs as well (e.g. Morrison & Assmann 2012). However, with the majority of work in this area being restricted to English, more work is required to evaluate the importance of dynamic cues across languages, as well as in second-language acquisition.

Emerging work from our lab on Arabic suggests that, while dynamic cues in the spectral properties of vowels improve the identification of Arabic vowels, their classification power is attenuated in this language compared with work on other languages (Almurashi et al. 2020). This raises the question of how or whether dynamic properties of vowels in a particular language are acquired by second-language learners. This type of investigation has been lacking, with the majority of work focusing on a static approach, measuring the first two formants (F1 and F2) at the monophthong vowel's midpoint and drawing conclusions about second-language attainment on this basis (e.g. Munro 1993). Dynamic cues, which have not yet been investigated in the literature on Arabic learners of English, may provide a more comprehensive account of second-language vowel production that would prove beneficial in predicting second-language difficulty and advance second-language research on what counts as 'similar' vowels in two languages (e.g. Jin & Liu 2013). This study's aim is to examine the production patterns of English vowels by Hijazi Arabic L2 (L2) learners compared with those of native English (NE) speakers and their L1 Hijazi Arabic (HA) patterns, using static cues (vowel midpoint) and dynamic cues (the direction of formant movement [direction], the spectral rate of vowel movement [slope], and the amount of vowel change [offset] models; e.g. McDougall & Nolan 2007; Almurashi et al. 2020). Using discriminant analysis, this study evaluates the role of static versus dynamic cues in HA, L2, and NE vowels, as well as vowel duration, the third formant (F3), and the fundamental frequency (F0) as additional cues (e.g. Hillenbrand et al. 1995).

Data were collected from 60 participants (20 HA, 20 L2, and 20 NE, balanced by gender). The target vowels were examined in a list with varied consonantal contexts. HA participants produced eight HA vowels, and L2 and NE participants produced 10 English monophthong vowels (with a total of 10,080 tokens). For the purposes of this research, vowel duration, the first three formants, and F0 values were extracted from one location (50%, for the static model) and from multiple locations (two [20% and 80%], three [20%, 50%, and 80%], and seven [20%, 30%, 40%, 50%, 60%, 70%, and 80%], for the dynamic models) over the course of the vowel duration.
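The discriminant analysis with leave-one-out cross-validation referred to here can be carried out with the MASS package in R; the snippet below is an illustrative sketch (hypothetical data frame and predictor names), not the authors' code:

library(MASS)

# Classify vowel category from midpoint F1/F2 (static model), with
# leave-one-out cross-validation (CV = TRUE)
lda_static <- lda(vowel ~ F1_50 + F2_50, data = ha_tokens, CV = TRUE)
mean(lda_static$class == ha_tokens$vowel)   # classification accuracy

# A dynamic model: formants sampled at 20% and 80% of the vowel, plus duration
lda_dynamic <- lda(vowel ~ F1_20 + F2_20 + F1_80 + F2_80 + duration,
                   data = ha_tokens, CV = TRUE)
mean(lda_dynamic$class == ha_tokens$vowel)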
Results show that dynamic cues provide insights into the production patterns of all three groups that are not normally gleaned from static measures alone. Importantly, however, dynamic cues were used differently by L2 and NE speakers. As well as main differences in the overall F0, F1, F2, or F3 patterns between the two groups of speakers, showing an influence of L1 HA on second-language production patterns, there were main differences in the direction, slope, and amount of formant change that must also contribute to the perception and identification of non-native production (e.g. Figure 1). Using discriminant analysis, the dynamic cues had relatively higher classification rates for all three groups than when using a single point (i.e. static cues) during the vowel duration (Table 1). In addition, vowel duration was found to play a significant role in the classification accuracy for HA and L2, while F0 was the most important additional cue for accurately classifying NE vowels. Our results are in line with dynamic approaches and highlight the importance of looking beyond static cues and beyond the first two formants for insights into what contributes to a non-native accent.

Figure 1. Results of the direction model (measured at seven points) of the /u:/ vowel produced by HA, L2, and NE participants.

Table 1. The average classification rates of the discriminant analysis for one, two, and three locations of the vowels produced by HA, L2, and NE participants, trained (with leave-one-out cross-validation) on various parameters, with and without vowel duration (higher scores are marked in bold).

References
[1] Almurashi, W., Al-Tamimi, J., & Khattab, G. 2020. Static and dynamic cues in vowel production in Hijazi Arabic. The Journal of the Acoustical Society of America 147, 2917-2927. https://doi.org/10.1121/10.0001004
[2] Hillenbrand, J. M., Getty, L. A., Clark, M. J., & Wheeler, K. 1995. Acoustic characteristics of American English vowels. The Journal of the Acoustical Society of America 97, 3099-3111. https://doi.org/10.1121/1.411872
[3] Jin, S. H., & Liu, C. 2013. The vowel inherent spectral change of English vowels spoken by native and non-native speakers. The Journal of the Acoustical Society of America 133, 363-369. https://doi.org/10.1121/1.4798620
[4] McDougall, K., & Nolan, F. 2007. Discrimination of speakers using the formant dynamics of /uː/ in British English. Proceedings of the 16th International Congress of Phonetic Sciences (Saarbrücken, Germany), 1825-1828.
[5] Morrison, G. S., & Assmann, P. 2012. Vowel inherent spectral change. Springer Science & Business Media. https://doi.org/10.1007/978-3-642-14209-3
[6] Munro, M. J. 1993. Productions of English vowels by native speakers of Arabic: Acoustic measurements and accentedness ratings. Language and Speech 36, 39-66. https://doi.org/10.1177/002383099303600103

The Acoustic Characteristics of Consonants and Vowels in an Understudied Context of Foreigner Directed Speech

Azza Al Kendi
Sultan Qaboos University

The acoustic features of vowels and consonants in clear speech and infant directed speech (IDS) have been extensively investigated, but a less explored area is that of foreigner directed speech (FDS). FDS research has hypothesized that FDS should be similar to IDS or clear speech, given that foreigners have linguistic needs in the target language. Indeed, a small body of research has found FDS to be hyperarticulated, having a larger vowel space and longer vowel duration, but not higher mean f0, compared to adult directed speech (ADS) [1], [2]. Hyperarticulated speech to nonnative speakers has been found to facilitate learning of new phonetic categories [3]. Differences in the methodologies used and the heterogeneity of FDS contexts point to the need for further research in this area. The current study extends existing research on FDS and examines speech to an understudied but significant population of 'foreigners', that of African and Asian domestic helpers in Oman. The study focuses on the acoustic characteristics of Arabic speech directed to foreign domestic helpers (FDH), expanding this area of research beyond work on English; it also provides a first report on consonants in FDS. The study also asks whether FDH social and linguistic characteristics (length of residence (LoR), degree of foreign accentedness and religion) contribute to the magnitude of adjustments in speech directed to them.

This study examined the production of Arabic /iː/, /uː/, /aː/ and /t/, /tˤ/, /s/, /sˤ/ in the speech of 22 native Omani speakers, either addressing their FDHs or a native speaker (NS). A questionnaire was used to elicit background information about the FDHs. A foreign accent rating experiment with 10 Omani native speakers as raters was used to establish the degree of foreign accentedness of the FDHs. To elicit comparable samples of FDS and ADS, a list of target words containing the target vowels and consonants was designed. The participants were then engaged in an interactive spot-the-difference task which consisted of six picture pairs with three different scenes. The scenes contained objects that represented the target words and some distracters. Data were analysed using the speech software Praat [4]. For consonants, spectral moments, duration, intensity, and F1 and F2 of the following vowels were compared in productions to FDH and NS interlocutors. For vowels, vowel space, F1 and F2 measures, f0, intensity and duration were examined. Mixed effects models in R were used to generate the results [5].

Speech to FDHs yielded greater vowel space expansion, higher F1 of all vowels, and higher pitch and intensity than speech directed to the NS, but there was no effect on F2 of the front or back high vowels. Moreover, all vowels in speech to FDHs were, surprisingly, shorter in duration than those in speech to the NS. Kurtosis and intensity values of /tˤ/ were higher in FDH-directed speech compared to speech directed to the NS. Skewness values of /s/ and /sˤ/ were higher in speech directed to FDHs. However, hyperarticulation of consonant contrasts was not attested. Despite the heterogeneity of the FDHs, their LoR, degree of foreign accentedness and religion did not affect the extent of adaptations in speech directed to them. Results are discussed in relation to the naturalistic learning setting of the FDHs. External factors relating to the peculiarity of the FDH context are also considered.
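As one concrete illustration of how "vowel space expansion" can be quantified, the area of the polygon spanned by the mean F1/F2 values of the vowel categories can be computed in base R as below; this is a generic sketch (the abstract does not specify the exact measure, and all values and names are hypothetical):

# Convex-hull area of the vowel space defined by mean F1/F2 per vowel category
vowel_space_area <- function(f1, f2) {
  hull <- chull(f2, f1)            # indices of the convex hull, in order
  x <- f2[hull]; y <- f1[hull]
  # shoelace formula for the area of the resulting polygon
  abs(sum(x * c(y[-1], y[1]) - c(x[-1], x[1]) * y)) / 2
}

# toy example values in Hz (not data from the study), e.g. for /i:/, /a:/, /u:/
vowel_space_area(f1 = c(320, 750, 360), f2 = c(2300, 1400, 900))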

References

[1] Scarborough, R., Dmitrieva, O., Hall-Lew, L., Zhao, Y. and Brenier, J. 2007. An acoustic study of real and imagined foreigner-directed speech. Journal of the Acoustical Society of America, 121(5), 3044.
[2] Uther, M., Knoll, M. and Burnham, D. 2007. Do you speak E-NG-LI-SH? A comparison of foreigner- and infant-directed speech. Speech Communication, 49(1), 2-7.
[3] Uther, M., Giannakopoulou, A. and Iverson, P. 2012. Hyperarticulation of vowels enhances phonetic change responses in both native and non-native speakers of English: Evidence from an auditory event-related potential study. Brain Research, 1470, 52-58.
[4] Boersma, P. and Weenink, D. 2009. Praat: doing phonetics by computer. Accessed September 30, 2016 from http://www.fon.hum.uva.nl/praat/.
[5] R Core Team. 2017. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.

The acquisition of the English /s-z/ voice contrast by learners of three L1s

Paolo Mairano1, Leonardo Contreras Roa1,2, Marc Capliez1, Caroline Bouzon1
1University of Lille, 2University of Rennes 2

Introduction. English has a high functional load voice contrast between /s/ and /z/, which is active word-initially (sing /s/ - zing /z/), word-medially (fussy /s/ - fuzzy /z/) and word-finally (rice /s/ - rise /z/). In this study we investigate the acquisition of the contrast /s/ - /z/ by French L1 (FR), Northern Italian L1 (NI) and American Spanish L1 (AS) learners. Given the status of [s] and [z] in their L1s, and based on predictions of current L2 phonology acquisition models (SLM [1] and MDH [2]), we expect that the three groups will show different voicing patterns for these sounds. /s/ and /z/ are phonemes in FR (casse /kas/ - case /kaz/) and obligatory allophones in varieties of NI ([z] before voiced sounds, [s] in front of voiceless consonants, cf. [3]), yet voicing contributes to opposing minimal pairs such as casa /ˈkasa/ [ˈkaza] - cassa /ˈkasːa/ [ˈkasːa] (where the primary feature is length). AS only has /s/, but partial or total voicing can occur in various contexts (esp. syllable coda) due to non-obligatory voice assimilation with the following consonant, especially in casual speech [4]. So, based on the SLM and despite potential difficulties encountered by all learners, we expect that (i) FR and NI learners will exhibit no difficulties in producing distinct realizations for /s/ and /z/, and (ii) AS learners will exhibit difficulties in producing distinct realisations for this contrast. Additionally, based on the MDH, we expect that (iii) AS learners will have difficulties producing /z/ (universally and cross-linguistically more marked) rather than /s/, and that (iv) AS and NI learners will show difficulties with /z/ word-finally (more marked than word-initially, and non-existent in NI).

Data and analysis. We analysed productions by 40 instructed learners from the IPCE-IPAC corpus of L2 English. Learners were 15 speakers of FR (12 F, 3 M; age = 24, SD = 6.59), 15 speakers of NI (11 F, 4 M; age = 22.5, SD = 2.38), and 10 speakers of AS (3 F, 7 M; age = 30.2, SD = 6.98) from Peru (n = 5), Chile (n = 3) and Colombia (n = 2). The protocol of the IPCE-IPAC corpus includes various tasks, but for the present study we only considered recordings of the read-aloud of a newspaper article (506 words). The recordings were transcribed orthographically, phonetized and aligned with WebMAUS, and manually verified. For each occurrence of /s/ and /z/, we extracted the proportion of periodic signal and the duration via a custom Praat script. We then excluded occurrences of inflectional /s/ or /z/ for plural, 3rd person, genitive and clitic forms of has and is (which are treated in a separate study). In total, we analysed 2110 realisations of /s/ and 1206 realisations of /z/. The results were analysed statistically in R with LME models.
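The LME analysis of the interaction between phoneme and learner group could be specified roughly as follows (an illustrative sketch with hypothetical object names; the authors' exact model structure is not given in the abstract):

library(lme4)
library(emmeans)

# Proportion of periodic signal as a function of target phoneme (/s/ vs. /z/)
# and L1 group (FR, NI, AS), with by-speaker and by-word random intercepts
m_per <- lmer(periodicity ~ phoneme * group + (1 | speaker) + (1 | word),
              data = sz_tokens)

# Pairwise comparisons of /s/ vs. /z/ within each L1 group
emmeans(m_per, pairwise ~ phoneme | group)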
Results and discussion. Figures 1 and 2 show the average proportion of periodicity and the duration in seconds for the target segments. Pairwise tests on the interaction of Phoneme x Group reflect hypotheses (i) and (ii) based on the SLM, for both periodicity and duration: AS learners tend not to produce any difference in periodicity or duration between /s/ and /z/, whereas FR and NI participants show distinct realizations for these two sounds. We also observe that periodicity for /s/ and /z/ realisations by AS learners is low, confirming (iii). However, the difference in /z/ periodicity word-medially vs. word-finally is not significant for AS or NI learners, disconfirming (iv). We argue that this may be due to the transfer of non-obligatory voice assimilation in syllable coda from AS [4], and to epenthetic word-final schwas in NI [5], which transform word-final consonants into word-medial ones. Finally, a by-item analysis of the data reveals interesting patterns, whereby voicing is sometimes influenced by spelling: low amounts of periodicity are found for /s/ words spelled with graphemes that unambiguously indicate /s/. Conversely, higher amounts of periodicity are found for /s/ words spelled with a grapheme exhibiting a less regular grapheme-to-phoneme correspondence with either /s/ or /z/ (e.g. release, disguise, various), thereby providing support for recent work on this topic [6].

Figure 1. Average periodic proportion for realisations of /s/ and /z/.
Figure 2. Average duration for realisations of /s/ and /z/.

References
[1] Best, C. T., & Tyler, M. D. (2007). Nonnative and second-language speech perception: Commonalities and complementarities. In M. Munro & O.-S. Bohn (Eds.), Second language speech learning (pp. 13-34). Amsterdam, The Netherlands: John Benjamins.
[2] Eckman, F. R. (2008). Typological markedness and second language phonology. In H. C. Hansen Edwards & M. L. Zampini (Eds.), Phonology and second language acquisition (pp. 95-115). Amsterdam: John Benjamins.
[3] Bertinetto, P. M., & Loporcaro, M. (2005). The sound pattern of Standard Italian, as compared with the varieties spoken in Florence, Milan and Rome. Journal of the International Phonetic Association, 35(2), 131-151.
[4] Hualde, J. I. (2005). The sounds of Spanish with audio CD. Cambridge University Press.
[5] Broniś, O. (2016). Italian vowel in loanword adaptation. Phonological analysis of the Roman variety of Standard Italian. Italian Journal of Linguistics, 28(2), 25-68.
[6] Bassetti, B., Escudero, P., & Hayes-Harb, R. (2015). Second language phonology at the interface between acoustic and orthographic input. Applied Psycholinguistics, 36(1), 1-6.

Voice assimilation of morphemic -s in the L2 English of L1 French, L1 Spanish and L1 Italian learners

Leonardo Contreras Roa1,2, Paolo Mairano2, Marc Capliez2, Caroline Bouzon2
1University of Rennes 2, 2University of Lille

Introduction. This study investigates the pronunciation of morphemic -s in L2 English by L1 French, L1 Italian and L1 Spanish learners. Morphemic -s can be an inflectional suffix expressing plurality, 3rd person or genitive, or a clitic form of is and has. Phonologically, it is subject to a progressive voice assimilation rule, and is therefore pronounced as /s/ after voiceless consonants, and /z/ after vowels and voiced consonants. A comparative analysis of L1 French, L1 Italian and L1 Spanish learner productions is especially interesting due to the phonological and morphological characteristics of these three L1s. From a morphological perspective, the -s suffix exists in French and Spanish to mark plurality (though it is often silent in French), but not in Italian, which marks the plural with a synthetic morpheme expressing number and gender (e.g. tavoli 'tables'). Phonologically, French has a phonemic contrast opposing /s/ and /z/, Northern Italian has [s] and [z] as contextual allophones (the latter being used before another voiced segment [1]), and Spanish only has /s/, with non-obligatory full or partial assimilation of voice from the following consonant in syllable coda, especially in casual speech [2]. From a phonotactic point of view, French allows word-final /s/ and /z/, Italian allows word-final /s/ (although infrequent) but not /z/, while Spanish allows /s/ (see above for non-obligatory assimilation). Given these facts, and given the higher relative markedness of /z/ in word-final position with respect to /s/ [3], we hypothesize that L1 French and L1 Italian learners will find it easier than L1 Spanish learners to reproduce the outcome of the voice assimilation rule.

Data and analysis. We analyzed data from the IPCE-IPAC corpus [4] obtained from a read-aloud task and a semi-guided interview produced by 15 L1 French learners, 15 L1 Northern Italian learners, and 10 L1 American Spanish learners. Recordings of both tasks were orthographically transcribed, phonetized and force-aligned via WebMAUS, and then manually corrected. We first analyzed the presence and absence of expected occurrences of morphemic -s, and found 1610 realizations. The proportion of periodicity for every occurrence was extracted via a custom Praat script and analysed in R with linear mixed-effects models.

Results. Our predictions were only partially reflected in the results. L1 French learners (who have /s/ and /z/ in their L1 as phonemes occurring in word-final position) were the most successful in producing the expected patterns of periodicity, as expected. By contrast, L1 Spanish learners were found to be better than L1 Italian learners at doing so, as shown in Figure 1. While the results for Italian learners reflect predictions based on the Markedness Differential Hypothesis [3], which states that voiced sounds are universally more marked in word-final position than in word-medial or word-initial position, the same does not seem to hold for L1 Spanish learners. This is all the more surprising in that the same learners showed little periodicity in the realization of /z/ in non-morphemic -s in our previous study. We therefore argue that such results may be due to the Spanish non-obligatory voice assimilation in syllable coda (and therefore also in word-final position); the existence of this rule in their L1 may promote syllable-coda voice assimilation in L2 English, even though with a change in directionality (from regressive to progressive assimilation). Additionally, we argue that the different behavior shown by L1 Spanish learners in morphemic vs. non-morphemic -s (cf. our previous study) may reflect recent findings on word-final -s in L1 English [5] and may have potential repercussions for models of L2 phonology acquisition.

Figure 1. Average periodic proportion for word-final inflectional /z/ by type of preceding phoneme (C[-voice], C[+voice], V) and following segment (voiced, unvoiced), by L1 of speakers.

References
[1] Bertinetto, P. M., & Loporcaro, M. (2005). The sound pattern of Standard Italian, as compared with the varieties spoken in Florence, Milan and Rome. Journal of the International Phonetic Association, 35(2), 131-151.
[2] Hualde, J. I. (2005). The sounds of Spanish with audio CD. Cambridge University Press.
[3] Eckman, F. R. (2008). Typological markedness and second language phonology. In H. C. Hansen Edwards & M. L. Zampini (Eds.), Phonology and second language acquisition (pp. 95-115). Amsterdam: John Benjamins.
[4] Andreassen, H., Herry-Bénit, N., Kamiyama, T., & Lacoste, V. (2015). The ICE-IPAC Project: Testing the Protocol on Norwegian and French Learners of English. 18th International Congress of Phonetic Sciences, 9-11.
[5] Plag, I., Homann, J., & Kunter, G. (2017). Homophony and morphology: The acoustics of word-final S in English. Journal of Linguistics, 53(1), 181-216.

Perceptual learning in vowel systems

Chelsea Sanker
Yale University

Speakers' perception of sounds can be shifted based on exposure to altered characteristics in these sounds. Shifts in one sound can also be extended to sounds with shared characteristics, e.g. VOT across place of articulation (Kraljic & Samuel 2006). What drives generalization: shared features, contrast preservation, or normalization to the speaker? The behaviour of vowels can help tease these apart. Perceptual training in vowels usually uses exposure to several shifted vowels (e.g. Ladefoged & Broadbent 1957; Maye, Aslin, & Tanenhaus 2008) and tests perception of only one or two additional contrasts (Mitterer 2006; Chládková, Podlipskỳ & Chionidou 2017), leaving ambiguity about which characteristics are crucial for generalization. Two perception studies tested how exposure to shifted F1 or F2 in a single vowel quality influences other vowels that either match in a feature realized in the manipulated dimension (height and backness/roundness, respectively) or do not. The results suggest that perceptual shifts generalize to vowels sharing phonological features in the domain of manipulation.

128 American English speakers participated in each study. They heard monosyllabic English words and identified each as matching one of two written options. First, listeners completed a training block in which they made consonant decisions between two response options with the same vowel; across listeners, there were two different formant manipulations. Second, listeners completed a testing block in which they made vowel decisions.

In Study 1, training stimuli were words with /ɪ/. Half of the participants heard these words with raised F1, and the other half heard them with lowered F1. In the testing block, participants made decisions between response options differing only in vowel quality. There were seven vowel contrasts, differing in height or tenseness, with a 4-step continuum from the speaker's mean F1 of one vowel quality to the speaker's mean F1 of the other vowel quality. In Study 2, training stimuli were words with /u/, and the manipulation conditions were raised F2 vs. lowered F2. In the testing block, participants made decisions about six vowel contrasts, most differing in backness and roundness, with a 4-step continuum from the speaker's mean F2 of one vowel quality to the speaker's mean F2 of the other vowel quality.

Statistical results are from mixed effects models for the log odds of each response in vowel identifications. The fixed effects were training condition (manip), continuum step, and vowel quality of the base recording. There was a random intercept for participant and for word pair.
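A mixed effects logistic regression of this kind could be specified as follows (an illustrative sketch; the column names are hypothetical and this is not the author's exact code):

library(lme4)

# Log odds of choosing the higher-F1 (or higher-F2) response option, as a
# function of training condition, continuum step, and base vowel quality
m_id <- glmer(response ~ manip + step + base_vowel +
                (1 | participant) + (1 | word_pair),
              data = test_trials, family = binomial)
summary(m_id)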
In Study 1, exposure to raised F1 in /ɪ/ resulted in fewer identifications of vowels in mid~high continua as mid, consistent with a lower realization of high vowels (see Fig. 1). This was significant for /ɪ-ε/ (β = -0.82, SE = 0.2, t = -4.2, p < 0.001) and the back lax mid~high pair /ʊ-ɔ/ (β = -0.48, SE = 0.16, t = -2.9, p = 0.0037), and marginally significant for /i-ei/ (β = -0.46, SE = 0.27, t = -1.7, p = 0.093). No other contrasts were significantly affected by the manipulation. Diphthongization differences in tense-lax pairs could impact the ambiguity of test items, but base recording was not a larger factor for these contrasts than for others.

In Study 2, exposure to raised F2 in /u/ resulted in fewer identifications of vowels in front~back continua as front (see Fig. 2). This was not significant for /u-i/ (β = 0.15, SE = 0.16, t = -0.96, p = 0.34), perhaps due to limited ambiguity of the test items. However, it was significant for most front~back continua: /ʊ-ɪ/ (β = -0.27, SE = 0.1, t = -2.6, p < 0.001), /ou-ei/ (β = -0.66, SE = 0.19, t = -3.5, p < 0.001), and /ʌ-ε/ (β = -0.6, SE = 0.18, t = -3.4, p < 0.001). The lack of an effect for /ɔ-ε/ is likely because an F2 intermediate between them aligns best with /ʌ/.

Both experiments suggest that perceptual learning generalizes based on shared phonological features in the domain of manipulation: altered F1 in /ɪ/ training shifts the high~mid boundary, and altered F2 in /u/ training shifts the front~back boundary. The restricted extension is not consistent with contrast maintenance or speaker normalization, which predict additional shifts.

Figure 1. Proportion of responses of the vowel with the higher F1, by manipulation and contrast. (Pooled raw data, not the model output.)
Figure 2. Proportion of responses of the vowel with the higher F2, by manipulation and contrast. (Pooled raw data, not the model output.)

References
[1] Chládková, K., Podlipskỳ, V. J., & Chionidou, A. 2017. Perceptual adaptation of vowels generalizes across the phonology and does not require local context. Journal of Experimental Psychology: Human Perception and Performance 43(2), 414-427.
[2] Kraljic, T., & Samuel, A. G. 2006. Generalization in perceptual learning for speech. Psychonomic Bulletin and Review 13, 262-268.
[3] Ladefoged, P., & Broadbent, D. E. 1957. Information conveyed by vowels. Journal of the Acoustical Society of America 29(1), 98-104.
[4] Maye, J., Aslin, R. N., & Tanenhaus, M. K. 2008. The weckud wetch of the wast: Lexical adaptation to a novel accent. Cognitive Science 32(3), 543-562.
[5] Mitterer, H. 2006. Is vowel normalization independent of lexical processing? Phonetica 63(4), 209-229.

Categorization of Californian English vowels by Catalan-Spanish bilinguals

Lucrecia Rallo Fabra, Mark Amengual and Michael D. Tyler [1] Cebrian, J. 2006. Experience and the use of duration in the categorization of L2 vowels. Journal of Phonetics, 34, 372-387. [2] Rallo Fabra, L. & Romero, J. 2012. Native Catalan learners’ perception and production of English Catalan-Spanish bilinguals learning English as a foreign language encounter many difficulties vowels. Journal of Phonetics, 40, 491–508. producing and discriminating English vowel contrasts [1] [2]. This can be partially accounted [3] Flege, J. E. (1995). Second language speech learning: Theory, findings and problems. In Strange, W. for by the cross-linguistic difference in the number of contrastive vowel monophthongs in (Ed.). Speech Perception and Linguistic Experience: Theoretical and Methodological Issues. Baltimore: the L1 (8 in Catalan and 5 in Spanish) and the L2 systems (12 in Californian English). Current York Press, 233–277. models of L2 speech learning, SLM [3], PAM-L2 [4] and L2LP [5], posit that predictions of ease/ [4] Best, C. T., and Tyler, M. D. 2007. Nonnative and second-language speech perception: difficulty in L2 phonological learning can be made on the basis of the perceptual mapping of Commonalities and complementarities. In Munro, M. & Bohn, O.-S. (Eds.), Language Experience in Second Language Speech Learning: In Honor of James Flege. Amsterdam: John Benjamins, 13–34. L2 phones onto L1 categories through a perceptual assimilation task. [5] Escudero, P. (2005). Linguistic Perception and Second Language Acquisition. PhD Dissertation. In the present study, 50 experienced Catalan-dominant bilinguals with an advanced level of Utrecht University. English, categorized Californian English vowels as instances of the Catalan vowel categories. [6] Strange, W., Weber, A., Levy, E. S., Shafiro, V., Hisagi, M. & Nishi, K. (2007). Acoustic variability Four Californian female speakers produced 10 instances of each vowel target embedded within and across German, French and American English vowels: Phonetic context effects. in the h_bba nonword and the best four tokens were selected on the basis of auditory Journal of the Acoustical Society of America, 122, 1111–1129. judgement. The bilabial context was chosen to reduce the effect of coarticulation [6] [7]. We [7] Tyler, M. D. , Best, C. T., Faber, A. & Levitt, A. G. 2014. Perceptual assimilation and discrimination focused on nine vowels /i ɪ ɛ æ ə ʌ ɑ ʊ u/, with 6 repetitions per token, and we also included of non-native vowel contrasts. Phonetica 71 (1), 4–21. [8] Faris, M. M., Best, C. T. & Tyler, M. D. 2016. An examination of the different ways that non-native one repetition per token of the remaining monophthongs /e o/, the r-coloured vowels and phones may be perceptually assimilated as uncategorized. Journal of the Acoustical Society of vowel + /r/ sequences /ɝ ɪr ɑr ɛr ɔr ʊr/ and the diphthongs /aɪ aʊ ɔɪ/ as fillers. The listeners America, 139, EL1–EL5. were provided with labels corresponding to the Catalan vowel monophthongs/i e ɛ a ə ɔ o u/ [9] Faris, M. M., Best, C. T. & Tyler, M. D. 2018. Discrimination of uncategorised non-native vowel and vowel sequences /ai au ɛi ɛu ei eu ia iɛ ie io iɔ iu oi ou ɔi ɔu ua ue uɛ ui uo uɔ əi əu/ plus contrasts is modulated by perceived overlap with native phonological categories. Journal of three extra response options: (1) other speech sound, (2) unknown speech sound and (3) not Phonetics, 70, 1–19. speech sound. 
Immediately after they rated the goodness of fit using a 7-point scale (1 = a [10] Cebrian, J., Mora, J.C. & Aliaga García, C. 2011. Assessing crosslinguistic similarity by means of rated discrimination and perceptual assimilation tasks. In Wrembel, M, Kul, M. & Dziubalska- poor fit, 7 = a perfect fit). Kołaczyk, K. (Eds.). Achievements and Perspectives in the Acquisition of Second Language Speech: New In line with previous research [7, 8, 9], the threshold to determine whether a given vowel was Sounds 2010, Volume I. Frankfurt am Main: Peter Lang, 41–52. categorized was set at 70%. The results (see Table 1) showed that only the three point vowels [11] Rallo Fabra, L. 2005. Predicting ease of acquisition of L2 speech sound sounds: A perceived /æ i u/ were consistently categorized as a Catalan vowel, with goodness ratings between 5.4 dissimilarity test. Vigo International Journal of Applied Linguistics, 2, 75–92. and 5.7, indicating moderately good fits to the native categories. The remaining vowels were [12] Amengual, M. 2016. Cross-linguistic influence in the bilingual mental lexicon: Evidence of cognate uncategorized in PAM’s terms. Recent studies on L2 vowel perception [8, 9] have defined effects in the phonetic production and processing of a vowel contrast. Frontiers in Psychology, 7: three possible patterns for contrasts involving uncategorized phones based on the degree 617. doi: 10.3389/fpsyg.2016.00617. of perceived phonological overlap. Each phone in a non-overlapping contrast is perceived as consistent with a different set of L1 categories, as was the case for the majority of pairwise combinations (e.g., /i/-/ɪ/ and /æ/-/ɛ/), which should be discriminated accurately. In partially- overlapping contrasts (e.g., /u/-/ʊ/, /æ/-/ʌ/ and /æ/-/ɑ/) there is at least one shared response category. In these cases, discrimination should be moderate to good. Finally, there was one completely overlapping contrast (/ɪ/-/ɛ/), which should be difficult to discriminate because the L2 phones are perceived as consistent with the same set of L1 categories. Our findings partially replicate and extend prior work investigating the perceptual assimilation of English vowels to Eastern Catalan vowels [10], [11]. For instance, /æ/, /ʌ/ and /ɑ/ were perceived as most similar to Catalan /a/ with close goodness ratings, but perceived phonological overlap with other Catalan vowel may support discrimination to a certain extent. The case of /ɛ/ is surprising because we expected a clearer mapping onto Catalan /ɛ/. We speculate that language experience with the L2 and cross-dialect differences may account for this discrepancy with previous research, which was based on inexperienced L2 listeners [10], [11]. Unlike the Eastern Catalan variety spoken in Catalonia, Majorcan Catalan has /ə/ in stressed position, which might induce a perceptual overlap of L1 /ə/, /e/ and /ɛ/ vowel categories. This overlap has also been found in a recent study of bilingual vowel production and processing by Catalan-Spanish bilinguals [12] and it shows the lack of stability of the /e/- /ɛ/ contrast in Catalan.
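As a concrete illustration of the categorization criterion described above, the sketch below applies the 70% threshold and averages goodness ratings over the modal responses for a single L2 vowel. The response data, label names and variable names are invented for illustration and are not the study's materials.

```python
import numpy as np

# Invented responses for one L2 vowel: the Catalan label chosen on each trial
# and the 1-7 goodness rating given immediately afterwards.
responses = ["a", "a", "a", "ɛ", "a", "a", "a", "ə", "a", "a"]
goodness  = [6, 5, 6, 3, 5, 6, 5, 2, 6, 5]

THRESHOLD = 0.70  # categorization criterion, following [7, 8, 9]

labels, counts = np.unique(responses, return_counts=True)
modal_label = labels[np.argmax(counts)]
modal_share = counts.max() / counts.sum()

if modal_share >= THRESHOLD:
    # mean goodness over the trials assimilated to the modal L1 category
    fit = np.mean([g for r, g in zip(responses, goodness) if r == modal_label])
    print(f"categorized as /{modal_label}/ ({modal_share:.0%}), goodness {fit:.1f}")
else:
    print(f"uncategorized (modal response only {modal_share:.0%})")
```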

The emergence of phonological processes in foreign language learning: The lexical status of French liaison consonants

Elisabeth Heiszenberger1 and Elissa Pustka1
1University of Vienna

French liaison is a phonological process in which a final consonant, mute in isolation or before a consonant, is realized when the following word begins with a vowel (e.g. des amis [de.za.mi] ‘friends’). Having inspired various theoretical frameworks for phonology, from (post-)generative models [1], [2] to usage-based approaches [3], [4], more recent studies on liaison take a psycholinguistic perspective to test these linguistic generalizations [5], [6], [7]. Previous studies on learners of French [8] revealed that, aside from omissions (e.g. des amis *[de.a.mi]), two particular phenomena can be observed which are absent in pre-literate children acquiring L1 French: learners may show non-resyllabified liaisons (e.g. des amis *[dez.a.mi]) and mispronunciations of liaison consonants in accordance with regular L1 grapheme-phoneme correspondences (e.g. des amis *[de.sa.mi]/*[des.a.mi]). Unlike in L1 acquisition, L2 acquisition of liaison might therefore be mainly based on orthography. However, given the unpredictability of the (non-)realization of French word-final consonants (e.g. fils [fis] ‘son’ vs. ils [il] ‘they’), it remains unclear whether orthography-induced liaison consonants can be considered as such or have to be interpreted as spelling pronunciations of word-final consonants.
To address this question, we investigate the (non-)realization of word-final consonants in the same words in eight categorical liaison contexts (e.g. ils ont [il.zɔ͂] ‘they have’) and in nine non-liaison contexts (e.g. ils veulent [il.vœl] ‘they want’), read aloud by 50 native German-speaking high school students (31 beginners and 19 intermediate learners of French). Since in L1 French liaison consonants are shorter than word-final consonants [9], [10], we also compare the duration of the liaison consonants with the duration of six fixed word-final [s] (e.g. tennis [tenis] ‘tennis’) and four fixed word-final [z] (e.g. chemise [ʃəmiz] ‘shirt’) realized by the students and by four native French-speaking teenagers from Paris. To account for speaker-dependent variability, we normalised the absolute durations against the absolute word duration.
The results suggest that even the beginners know that final consonants remain mute in non-liaison contexts, except for ils ‘they’ (31% of erroneous realizations). Interestingly, this is the only context in which non-resyllabified liaisons occur more frequently (44%) than omissions (29%; in all the other liaison contexts: 77%-87%). (Non-)realization patterns of word-final consonants in non-liaison contexts consequently seem to be overgeneralized and transferred to liaison contexts by the learners. One reason for this could be that the textbook used in the classroom follows a communicative approach and does not address liaison at all.
Durational measurements confirm the assumption that the beginners do not process word-final consonants differently from liaison consonants. As could have been expected, the size relation for the normalised duration of the consonants produced by the native speakers is the following: liaison consonants < word-final [z] < word-final [s] (see Figure 1). T-tests reveal that the durational differences between the consonants are significant (p < 0.01). Liaison consonants and fixed word-final consonants produced by the beginner learners, by contrast, do not differ significantly (p > 0.01). Even though the duration of word-final and liaison consonants approaches the target value as the students progress, omission errors persist (6%-56%). The findings consequently indicate that the ability to determine whether a liaison consonant has to be realized or not is a matter of explicit learning, whereas the nature of the consonant, namely its duration, is learned implicitly over the years of learning. The emergence of phonological processes in foreign language learning therefore seems to be linked to a psycholinguistic proceduralization process [6] rather than to a lexical model based solely on forms of linguistic representations influenced by orthography [7].

Figure 1. Comparison of normalized duration of fixed word-final [s] and [z] and liaison consonants (after the clitic pronoun ils ‘they’) realized by the beginners and intermediate learners as well as by the native speakers.

References

[1] Encrevé, P. 1988. La liaison avec et sans enchaînement. Paris: Seuil.
[2] Schane, S. A. 1967. La Liaison et l’élision en français. Langages 8, 37-59.
[3] Bybee, J. 2001. Phonology and Language Use. Cambridge: Cambridge University Press.
[4] Bybee, J. 2005. La liaison: effets de fréquence et constructions. Langages 158, 24-37.
[5] Chevrot, J.-P., Dugua, C., & Fayol, M. 2009. Liaison acquisition, word segmentation and construction in French: a usage-based account. Journal of Child Language 36(3), 557-596.
[6] Harnois-Delpiano, M., Cavalla, C., & Chevrot, J.-P. 2019. Comparing French liaison acquisition in L1 children and L2 adults: Methodological issues in exploring differences and similarities. Language, Interaction and Acquisition 10(1), 45-70.
[7] Wauquier, S. 2009. Acquisition de la liaison en L1 et L2 : stratégies phonologiques ou lexicales? AILE 2, 93-130.
[8] Racine, I., & Detey, S. 2015. Corpus oraux, liaison et locuteurs non-natifs : de la recherche en phonologie à l’enseignement du français langue étrangère. VALS-ASLA 102, 1-25.
[9] Spinelli, E., McQueen, J. G., & Cutler, A. 2003. Processing resyllabified words in French. Journal of Memory and Language 48, 233-254.
[10] Nguyen, N., Wauquier-Gravelines, S., Lancia, L., & Tuller, B. 2007. Detection of liaison consonants in speech processing in French: experimental data and theoretical implications. In P. Mascaro & M.-J. Solé (Eds.), Segmental and prosodic issues in Romance phonology (pp. 3-25). Amsterdam: John Benjamins.
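The duration analysis summarised above (normalisation against word duration, followed by t-tests between consonant types) can be sketched as follows. The measurements are invented placeholders, and Welch's unequal-variance t-test is assumed here; the abstract does not specify which t-test variant was used.

```python
import numpy as np
from scipy import stats

# Invented measurements in seconds: (consonant duration, word duration)
liaison    = np.array([[0.055, 0.45], [0.060, 0.50], [0.050, 0.40], [0.058, 0.48]])
word_final = np.array([[0.085, 0.46], [0.090, 0.52], [0.080, 0.44], [0.095, 0.50]])

# Normalise each consonant duration against the duration of its carrier word
norm_liaison    = liaison[:, 0] / liaison[:, 1]
norm_word_final = word_final[:, 0] / word_final[:, 1]

# Welch's t-test on the normalised durations (no equal-variance assumption)
t, p = stats.ttest_ind(norm_liaison, norm_word_final, equal_var=False)
print(f"liaison mean = {norm_liaison.mean():.3f}, "
      f"word-final mean = {norm_word_final.mean():.3f}, t = {t:.2f}, p = {p:.4f}")
```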

Investigating the phonetics-phonology interface in real time: gradient and categorical development of a novel L2 contrast

James Turner University of Southampton, UK

When a novel L2 phonemic contrast is developed and lexicalised by language learners, acoustic measurements of the production of one or both segments often appear to shift in the direction of the target L2 forms. However, it is sometimes unclear whether this change is the result of the learners using the intended segment more in the required lexical items (categorical/phonological development) or gradient, surface-level change where both distributions of variation shift towards their respective L2 targets (continuous, phonetic development). While some research has suggested phonological representations are derived and updated by encounters with phonetic information (Pierrehumbert, 2016), others have suggested lexical contrasts can develop before the underlying variation linked to the phonemes is acquired (Darcy et al., 2012). As such, it is unclear whether a shift in phonetic properties activates phonological restructuring or, in contrast, whether acquiring the phonological distinction is necessary for phonetic change to occur. The current study addresses this question by pooling together acoustic measurements of French /y/ and /u/ productions uttered by 42 L1 English learners of French before (Time 1) and after (Time 2) the learners spend a year abroad in a French-speaking country. French minimal pairs containing the contrast are embedded in carrier phrases and read aloud in a randomised order. Participants are generally advanced foreign language learners with on average 7.5 years (range: 0-12) of classroom learning experience before their residence abroad. Statistical models controlled for this effect and that of French proficiency, as indexed by vocabulary knowledge (Brysbaert, 2013)Dutch, and German, but not for French. In the present study we report the construction of a French equivalent. Participants get a random Figure 1: Clustering productions into acoustically front and back tokens then comparing clusters to a) original vowel sequence of 56 French words of varying difficulty and 28 French-looking nonwords. They labels and b) the time point at which they were produced. Respective accuracy scores are indicated in terms of have to indicate which words they know. The results showed a big difference (effect size of percentage correct use. d = 3.6. In the first stage of analyses, the data are temporarily stripped of their ‘Segment’ and ‘Time References Point’ labels and K-medoid clustering of second formant measurements is used to group the data into two acoustically similar pools: a front vowel and a back vowel. Comparing these Brysbaert, M. (2013). LEXTALE-FR a fast, free, and efficient test to measure language proficiency in clusters with the original labels of /y/ and /u/ gives a phonological accuracy score at Time French. Psychologica Belgica, 53(1), 23–37. Darcy, I., Dekydtspotter, L., Sprouse, R. A., Glover, J., Kaden, C., McGuire, M., & Scott, J. H. G. (2012). 1 and Time 2, i.e. what proportion of times learners correctly use an acoustically “front” Direct mapping of acoustics to phonology: On the lexical encoding of front rounded vowels in vowel in contexts that require /y/ and what proportion of times learners correctly use an L1 English- L2 French acquisition. Second Language Research, 28(1), 5–40. acoustically “back” vowel in contexts that require /u/ (Figure 1). Pierrehumbert, J. B. (2016). Phonological Representation: Beyond Abstract Versus Episodic. 
Annual Review of Linguistics, 2(1), 33–52.

Results reveal that producing a front vowel in /y/ contexts is easier at both time points but does not improve from pre- to post-year abroad. In contrast, producing an acoustically back vowel in /u/ contexts does improve over the residence abroad. Crucially, after controlling for this categorical development, there is no significant backing of /u/ in terms of gradient change. This suggests that more progress is made in terms of phonological development than phonetic category formation. These findings appear to suggest that the updating of lexical representations does not entail simultaneous phonetic category formation, potentially indicating a partial dissociation between linguistic levels (Darcy et al., 2012). Further implications for L2 speech learning models are discussed.
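The clustering step described above can be sketched as follows: F2 measurements are pooled without their /y/-/u/ labels, split into two clusters, and the resulting clusters are compared with the intended vowels to yield an accuracy score. The K-medoids routine below is a minimal one-dimensional implementation written for illustration; the study's actual clustering procedure and data structure may differ, and all values are invented.

```python
import numpy as np

def kmedoids_1d(x, k=2, n_iter=50, seed=1):
    """Tiny PAM-style k-medoids for one-dimensional data (here: F2 in Hz)."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(x, size=k, replace=False)
    for _ in range(n_iter):
        # assign each point to its nearest medoid
        labels = np.argmin(np.abs(x[:, None] - medoids[None, :]), axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = x[labels == j]
            if members.size:
                # new medoid = member minimising total within-cluster distance
                cost = np.abs(members[:, None] - members[None, :]).sum(axis=1)
                new_medoids[j] = members[np.argmin(cost)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# Invented F2 values (Hz) and the vowels the speakers were asked to produce
f2 = np.array([1900, 1850, 1700, 1600, 950, 900, 1100, 1250, 1800, 1000], dtype=float)
intended = np.array(["y", "y", "y", "y", "u", "u", "u", "u", "y", "u"])

labels, medoids = kmedoids_1d(f2, k=2)
front_cluster = np.argmax(medoids)            # higher-F2 medoid = acoustically front
predicted = np.where(labels == front_cluster, "y", "u")
accuracy = (predicted == intended).mean()
print(f"medoids (Hz): {np.sort(medoids)}, accuracy vs intended labels: {accuracy:.0%}")
```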

DAY 2 - Poster Session B: Tone, Intonation, Rhythm & Stress

Intonational (in)variance in spontaneously spoken utterances

Nele Ots 1 Goethe University of Frankfurt am Main

The study investigates the relationship between the parameters of F0 declination and the different linguistic domains such as spoken full sentences and smaller prosodic chunks within these sentences. For instance, height of F0 peaks at the very beginning of spoken sentences correlates with sentences’ duration [1-2] or with duration of first prosodic phrases [3]. Domain-initial F0 raising indicates pitch-related pre-planning, which might be necessary because F0 must be prevented to fall too low at the end of sentences or utterances [1- 2,4]. Consequently, domain-final F0 valleys are more or less constant for a given speaker. Some recent studies, however, do not find the constant low F0 [5-6]. The aim here is to seek evidence for a proposal that intonational invariance depends on the linguistic domain of F0 declination [5]. In particular, the content of spoken sentences is typically distributed across multiple prosodic chunks (e.g., a man is [pause] pushing a purple car). The proposal is that F0 valleys are more or less constant at the end of sentences (i.e. utterances) but not the end of intermediate prosodic chunks. For the speech production experiment, native speakers of Estonian (N=45) were asked to describe pictures of simple transitive actions (e.g. a man pushing a car). The pictures depicted two or three actors, of whom always the two carried out some action. When the third actor was present, then it was very similar to the undergoer of the action, except for a single feature (e.g. the car being pushed was purple but the other car in the scene was red). The participants of the experiment were instructed to name this feature every time they Figure 1. F0 peaks and valleys by condition in utterances (N=1680) and by duration in prosodic chunks (N=2678). encountered three actors in a picture. Therefore, the two- and three-actor pictures elicited The first column shows the mean F0 maximum (in semitones; top) and the mean F0 minimum (in semitones; bottom) in short and long sentences. The arrows indicate 95% confidence intervals. The second column shows utterances containing either simple or modified noun phrases (e.g., ‘a man is pushing ’ a car F0 maxima (top) and minima (bottom; in Hertz) as a function of the prosodic chunks’ duration (ms). vs ‘a man is pushing the purple car’) respectively. Thus, the visual stimuli elicited short and long spontaneous utterances (respective mean durations are 1875 ms and 3061 ms) that References constituted a single prosodic chunk or were divided into several prosodic chunks. A number of F0 measures were evaluated in relation to utterance and prosodic chunk duration: i) [1] Liberman, M. & Pierrehumbert, J. 1984. Intonational invariance under changes in pitch range F0 maxima at the beginnings of utterances and chunks, and ii) F0 minima at the ends of and length. In Aronoff, M. and Oehrle, R. T. (Eds.),Language Sound Structure. Studies in Phonology. utterances and chunks. Presented to Morris Halle by his Teacher and Students. Cambridge, MA: The Massachusetts The results show that both utterance-initial and chunk-initial F0 maxima are higher in longer Institute of Technology, 155-233. than shorter utterances and prosodic chunks (see Figure 1). Moreover, while chunk-final F0 [2] Yuan, J. & Liberman, M. 2014. F0 declination in English and Mandarin broadcast news speech. minima lowered with increasing duration of prosodic chunks (consistent with earlier findings Speech Communication 65, 67–74. 
for Estonian in [5]), the utterance-final F0 minima did not depend on utterance length. In other words, the acoustic measurements indicate that F0 valleys are constant at the ends of utterances (i.e. sentences) but not necessarily at the ends of prosodic chunks that form parts of these utterances.
The results of the study support the idea that F0 declination is domain-dependent and that intonational invariance is related to semantic and pragmatic coherence. Relatedly and strikingly, this study also demonstrates that the height of sentence-initial F0 peaks correlates with utterances’ duration despite the increasing number of prosodic chunks within these utterances (in contrast to findings for German in [3]). Our findings thus capture the intricate relationships between the parameters of F0 declination and highlight the interaction between F0 declination and the semantics of spoken sentences. More generally, the results of the experiment can be related to a proposal which states that the semantic and/or syntactic structures underlying spontaneously spoken sentences accommodate the planning of prosodic structure [6,7], of which F0 declination is a part.

[3] Fuchs, S., Petrone, C., Krivokapic, J., & Hoole, P. 2013. Acoustic and respiratory evidence for utterance planning in German. Journal of Phonetics 41, 29–47.
[4] Thorsen, N. G. 1985. Intonation and text in standard Danish. The Journal of the Acoustical Society of America 77(3), 1205–1216.
[5] Asu, E. L., Lippus, P., Salveste, N., & Sahkai, H. 2016. F0 declination in spontaneous Estonian: implications for pitch-related preplanning. In Proceedings of Speech Prosody (Boston).
[6] Chen, A. C.-H. & Tseng, S.-C. 2019. Prosodic encoding in Mandarin spontaneous speech: Evidence for clause-based advanced planning in language production. Journal of Phonetics 76, 100912.
[7] Keating, P. & Shattuck-Hufnagel, S. 2002. A prosodic view of word form encoding for speech production. UCLA Working Papers in Phonetics 101, 112–156.
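A minimal sketch of the kind of measurement behind these results: F0 values are converted to semitones relative to a reference frequency (100 Hz is assumed here; the study's reference is not stated), and a simple linear regression checks whether final F0 minima depend on domain duration. All values are invented.

```python
import numpy as np
from scipy import stats

def hz_to_semitones(f0_hz, ref_hz=100.0):
    """Convert F0 in Hz to semitones relative to a reference frequency."""
    return 12 * np.log2(np.asarray(f0_hz, dtype=float) / ref_hz)

# Invented per-utterance data: duration (ms) and utterance-final F0 minimum (Hz)
duration_ms  = np.array([1500, 1800, 2100, 2500, 2900, 3200, 3600, 4000])
final_min_hz = np.array([165, 166, 163, 167, 164, 165, 166, 164])

st = hz_to_semitones(final_min_hz)
slope, intercept, r, p, se = stats.linregress(duration_ms, st)
print(f"slope = {slope:.5f} st/ms, p = {p:.3f}")
# A flat, non-significant slope is what constant utterance-final F0 valleys would
# look like; chunk-final minima, by contrast, should show a negative slope.
```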

Universal and language specific intonational properties in the perception of interrogativity: Questions vs statements in Markina Basque

Jennifer Zhang, Izaro Bedialauneta, and José Ignacio Hualde
University of Illinois at Urbana-Champaign

The interpretation of intonational contours is not universal, despite strong cross-linguistic tendencies [1]. For instance, a majority of languages have rising interrogative contours, and rising intonation in final statements is exceptional [2]. However, even across varieties of the same language, intonational differences may lead to misinterpretations. Here, we test the hypothesis that intonational contours are less likely to be interpreted correctly if they do not conform either with the listeners’ L1 or with universal tendencies, by examining perception of statements and questions in Markina Basque (MB) by native speakers, speakers of other Basque dialects, and L1 Spanish/L2 (non-Markina) Basque speakers. In MB, both unmarked statements and questions show final falling contours (Most MB words are lexically unaccented, but some bear a lexical accent [3]. All words in this study are lexically unaccented). In neutral statements containing more than one prosodic phrase, an HL contour is aligned with the penultimate syllable of the preverbal phrase (fig. 1). In questions, an additional HL aligns with the final syllable of the utterance (fig. 2). This pattern is not found in other Basque varieties [4]. In single-phrase utterances, on the other hand, there is a single HL aligned with the penultimate syllable in statements (fig. 3) and with the final syllable in questions (fig. 4). Arguably, the crucial difference between statements and questions is the location of the final HL movement. This difference in alignment is a MB- specific feature. Methods. We selected 8 texts, four multi-phrase and four single-phrase, recorded both with statement and question intonation. The pitch contours were manipulated so that questions would have statement contours and vice versa, by adding or deleting a final HL in multi- phrase utterances and shifting the location of the only fall in single-phrase utterances. In addition, 3 verum focus statements, which also have a circumflex contour, were included (see poster). Participants and task. 15 native speakers of Markina Basque, 29 native speakers of other Basque dialects, and 20 Spanish-L1/Basque-L2 speakers were presented with audio stimuli via Qualtrics. For each stimulus, they were asked to respond whether it was a question or not. Reaction times were also collected. Predictions. We predicted that native MB speakers would interpret the original contours as intended and that our manipulation would shift their perception, regardless of other cues to interrogativity. Predictions for other groups differed depending on the number of phrases in the stimulus and the extent towhich contours conform to universal tendencies or to L1 patterns. Multi-phrase statements in MB, which present the universally most common pattern, should be identified with greatest accuracy. Additionally, the multi-phrase question contour resembles the circumflex accent of questions in the Spanish of the Basque Country [5], we therefore expected all participants to accurately distinguish multi-phrase statements and questions, see also [6]. We expected the least native-like results in single-phrase examples, where only the position of the final peak had been manipulated. Results. A binomial logistic regression on yes/no responses as a function of the interaction between L1 and the number of prosodic phrases returned significant differences between MB speakers and both other groups (p<0.0001). 
Speakers of other Basque dialects and L2 speakers were significantly less accurate than MB speakers in interpretations of unmanipulated stimuli (see figs. 5 and 6). They were also less affected by the manipulations than native MB speakers.
Discussion. Where MB differs from both cross-linguistic tendencies and the L1 of the speakers, accuracy in the identification of questions and statements dropped significantly. The results support a non-universalist view of intonational meaning. In our case, we find substantial differences in the interpretation of contours that do not conform with universal tendencies between native speakers and L1 or L2 speakers of other dialects.
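A simplified sketch of the reported regression, with fixed effects only (the study's model presumably also includes random effects for participants and items, which are omitted here). The data frame, its column names and the group labels are invented for illustration.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented trial-level data: 1 = stimulus judged to be a question, 0 = not a question
df = pd.DataFrame({
    "is_question": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "l1_group":    ["MB"] * 4 + ["otherBasque"] * 4 + ["L2"] * 4,
    "n_phrases":   ["single", "single", "multi", "multi"] * 3,
})

# Binomial logistic regression with an L1-by-number-of-phrases interaction
model = smf.glm("is_question ~ l1_group * n_phrases",
                data=df, family=sm.families.Binomial()).fit()
print(model.summary())
```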

References

Intonational variation in the vocatives of Peninsular Spanish

[1] Ladd, D. R. 1980. On intonational universals. Advances in Psychology 7, 389-397. Paolo Roseanoa,b, Lourdes Romeraa, Ana Ma. Fernández Planasa, Francesco Rodriqueza,c, [2] Gussenhoven, C. 2002. Intonation and interpretation: phonetics and phonology. Speech aUniversitat de Barcelona, bUniversity of South Africa, cAlbert-Ludwigs-Univ. Freiburg, Prosody 2020, Aix-en-Provence. [3] Hualde, J.I. 2000. On system-driven sound change: Accent shift in Markina Basque. Lingua 110, In recent decades, the study of intonation in Spanish has expanded, thanks mainly to the 99-129. AMPER [1] and AIEE [2] projects. The types of vocative compiled by the AIEE include the calling [4] Elordieta, G. & Hualde, J.I., 2014. Intonation in Basque. S-A Jun, ed., Prosodic Typology II, 405-463. and insistent calling vocatives [3]. In much of the peninsula, however, the characteristics of Oxford: OUP. vocatives with pragmatic functions other than the two just mentioned remain to be explored. [5] Robles-Puente, Sergio. 2011. Absolute questions do not always have a rising pattern: Evidence from Bilbao Spanish. Laboratory Approaches to Romance Phonology 5, 98-107. The main objective of this study is to describe the intonational characteristics of different [6] Gaminde, I., A. Romero, A. Etxebarria & U. Garay. 2013. Bai/ez galderen pertzepzioaren aldeak types of vocatives in three localities inside the Spanish linguistic domain: Soria, Albacete and informatzaileen ama hizkuntzaren arabera: euskararen prosodia gaitasuna lantzeko zenbait Granada. In each site, two informants (a man and a woman) were recorded. Each informant datu argigarri. Fontes Linguae Vasconum 116: 273-286. uttered 27 vocatives: three repetitions of three types of vocative with names characterized by different accentual positions (the namesBárbara , Manolo, and Magalí). The corpus therefore included 162 vocatives. The data were elicited by means of a Discourse Completion Task (DCT) [4] that contained three conversational situations suitable for obtaining vocatives of greeting, confirmation-seeking and reprimand [5]. The data were prosodically annotated in Praat. The annotation was carried out at phonological (Sp_ToBI, [6]) and phonetic levels (IPrA [7]). Seven nuclear configurations were observed: L+H* !H%, L+H* HL%, L+H* L%, L+H* H%, L* H%, L+H* LH%, whose distribution varied according to locality and pragmatic function (Figure 1). In the greeting vocatives, at all sites the following configurations appear: L+H* !H% (described as the most common in vocatives [8]) and L+H* HL% (for which a meaning of insistence or emphasis has been proposed [3]). Only in Granada does L+H* L% appear with this function. As regards confirmation-seeking vocatives, the presence of L+H* H% stands out at all three sites, appearing almost exclusively with this pragmatic function. This suggests the hypothesis that it has a more specific semantics than other patterns (such as L+H* !H% or L+H* HL%) which appear in a wide range of situations. And as for reprimand vocatives, several patterns appear at each of the sites: some of them do not seem to be usable in other situations of the DCT: L+H* LH% in Soria and L* HL% and Granada. Therefore, these patterns appear to have very specific semantics. In the case of the name with the stress on the last syllable (Magalí), interesting phenomena regarding the phonetic-phonology interface were detected. In particular, when tonal crowding occurs, different types of text-tune alignment were observed. 
Cases of stressed vowel lengthening were detected (Figure 2), like those already described for Spanish and other languages ([9], [10]). In addition, cases were observed in which the lengthening was accompanied by an under-realization of one of the tonal targets. For example, the tonal target L of the border tone /LH%/ was realized as [!H] (Figure 3), since the F0 did not descend to the lower values of the speaker’s range as it does in the canonical realizations. Cases of partial tonal truncation were also detected, characterized by an incomplete realization of the last tonal movement. In Figure 4, for example, it is observed that the tonal target L of the border tone /HL%/ is realized as [!H] because the F0, despite experiencing a fall, does not reach its minimum level at the end of the utterance as it does in canonical realizations. On the whole, this study reports the presence of different intonation patterns in the vocatives of three sites of the Iberian Peninsula (Soria, Albacete, Granada). The distribution of the patterns on the one hand indicates the existence of dialectal differences, and on the other hand depends clearly on the pragmatic context and the communicative function with which

the utterances are made.

Prosodic Constraint Interaction in Sicilian Intonation

Francesco Rodriquez
Universitat de Barcelona

This study deals with the phonetics-phonology interface of Sicilian intonation. While the Italian varieties that are spoken on Sicily have been analyzed prosodically in several studies ([1], [2], [3]), only recently have first attempts been made to describe the intonation of basic sentence-types in Sicilian (broad focus statements and information-seeking yes-no questions) from an acoustic, a phonological and a dialectometric perspective [4]. While Sicilian broad focus statements present an intonational pattern that is constant across speakers and geoprosodic areas (H+L* L-L%), its yes-no questions present a wider range of phonological nuclear configurations (H*+L H-L% in Catania, H*+L L-L% in Ragusa, L*+H L-L% in the other areas). The questions where the penultimate or antepenultimate syllable Figure 1 is stressed present canonical realizations of intonational patterns in all geoprosodic areas. The questions where the last syllable of the utterance is stressed, on the other hand, are of particular interest because they imply tonal crowding on the last syllable (e.g. /L*+H L-L%/ on ‘baccaLÀ?’ = ‘codfish?’). Depending on the item and the geoprosodic area, tonal crowding is avoided by means of different phonetic implementation strategies: boundary tone deletion (also called tonal truncation in the literature), displacement of the configuration (also known as peak retraction), and mora insertion (also described in terms of lengthening). Some of these Figure 2 Figure 3 Figure 4 tune-text accommodation strategies have been described for other Romance languages, too ([5], [6], [7]). References By analyzing a total of 270 questions from 10 speakers from 6 Sicilian provinces (our elicited data from [4]) the variation of phonetic implementation strategies between dialects as well [1] Martínez Celdrán, E., & Fernández Planas, A.M. (Coords.) (2003-2019). Atlas Multimèdia de la as between speakers of the same dialect is explained within the Stochastic Optimality Theory Prosòdia de l’Espai Romànic. http://stel.ub.edu/labfon/amper/ (St-OT) framework by means of a set of prosodic faithfulness, markedness and alignment [2] Prieto, P., & Roseano, P. (Coords.) (2009-2013). Atlas interactivo de la entonación del español. constraints motivated in [8]: (1) Max-IO(T): Assign a violation mark for every autosegment in http://prosodia.upf.edu/atlasentonacion/ the input that does not surface in the output. (2) *Crowding: Starting from the second pair, [3] Estebas-Vilaplana, E., & Prieto, P. (2010). Castilian Spanish intonation. In P. Prieto, & P. Roseano lign assign a violation mark for every pair of tones [Tn, Tn+1], such that Tn Tn+1. (3) A (T*,’): Assign (Eds.), Transcription of the Intonation of the Spanish Language (pp. 17-48). Lincom: München. a violation mark for every prominent autosegment that is not aligned with the stressed [4] Vanrell, M.M., Feldhausen, I., & Astruc, L., (2018). The Discourse Completion Task in Romance syllable in the output. (4) Dep-IO(): Assign a violation mark for every mora in the output that prosody research: Status quo and outlook. In I. Feldhausen, J. Fliessbach, & M. M. Vanrell (Eds.), does not have a corresponding mora in the input. Methods in prosody: A Romance language perspective (pp. 191-227). Berlin: Language Science Given that the classic version of Optimality Theory (OT) does not allow multiple optimal Press. output candidates for a given input, one of its stochastic implementations is used: the [5] Huttenlauch, C., Feldhausen, I., & Braun, B. (2018). 
The purpose shapes the vocative: Prosodic Gradual Learning Algorithm ([9], [10]), that is a noisy and data-driven version of classic OT, realisation of Colombian Spanish vocatives. Journal of the International Phonetic Association, 48(1), 33-56. where constraints are represented as Gaussian distributions on a continuous ranking scale. [6] Hualde, J.I., & Prieto, P. (2015). Intonational variation in Spanish: European and American These distributions are gradually displaced according to empirical input-output ratios that varieties. In S. Frota, & P. Prieto (Eds.), Intonation in Romance (pp. 350-391). Oxford: Oxford tell the algorithm which constraints ought to be stricter or laxer. Every time the grammar University Press. generates an output form, its constraints are randomly assigned a selection point on their [7] Hualde, J.I., & Prieto, P. (2016). Towards an International Prosodic Alphabet (IPrA). Laboratory own probability distributions, the numerical size of which determines their definitive ranking Phonology, 7(1):25, DOI: 10.5334/labphon.11 during this specific evaluation time. [8] Frota, S., & Prieto, P. (2015). Intonation in Romance: Similarities and differences. In S. Frota, & In our case, the trained dialect models (n=100.000 trials, p = 0.1 plasticity) not only generate P. Prieto (Eds.), Intonation in Romance (pp. 392-418). Oxford: Oxford University Press. multiple optimal output candidates of a given input structure at different evaluation times [9] Roseano, P., & Fernández Planas, A.M. (2018). L’amuntegament tonal en castellà, català i friülà but also approximate the frequency distribution observed in the empirical data. In Trapani, en la Teoria de l’Optimitat. Sintagma, 30, 23-37. for example, the L*+H L-L% nuclear configuration of interrogatives where the last syllable is [10] Rodriquez, F., Roseano, P., & Elvira-García, W. (2020). Grundzüge der sizilianischen Prosodie. stressed is either realized as truncated L*+H (L-L%) (~88% of cases) or left-displaced

this empirical input-output ratio (Tab. 1).

[10] Boersma, P. & Hayes, B. 2001. Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry 32, 45-86.

Cross-dialect tonal variation in the Padang dialects of Dinka

Mirella Blum1 and Bert Remijsen1 1University of Edinburgh

Across the world’s languages, time pressure exerts substantial influence on the realization of tone patterns [1, 2]. In this context, the Dinka language (Nilo-Saharan, South Sudan) is of particular interest, because the sonorous duration of the syllable—the domain over which tonemes are realized—varies massively, as a function of utterance-final lengthening and vowel length [3]. The syllable nucleus can be occupied by a short, long, or overlong vowel, and level and contour tonemes alike can be associated with syllables of any length. The tension Figure 1. Constraint Grammar for the intonation of Trapani Sicilian (n = 100.000 trials, plasticity p=0.1), where between sonorous duration and tonal realization represents a major source of cross-dialect the training data was generated based on the empirical input-output ratio (left). During an evaluation time t this tonal variation, as different dialects present different responses to this tension. In this study, grammar can produce the hierarchy shown in the tableau (right) that favours the truncated candidate L*+H (L-L%) we explore the largely uninvestigated Dinka Padang dialects, which have tonal phenomena for the input L*+H L-L%; (ox). “ox” (oxytone) refers to the last syllable being stressed. that have not been previously described in Dinka. This investigation is based on new data collected with several native-speaker consultants for each of the dialects under investigation. Input candidates for Total frequency Relative frequency of Relative frequency of The data are analysed using ear-based analysis complemented with the inspection of F0 L*+H L-L%, (ox) of model output model output empirical data traces. L*+H L-L%, ox (mora insertion) 10 <0.001 0 Take the example of ŋâaap, ‘sycamore tree,’ which is specified for a Fall toneme across L*+H L-(L%), ox (partial truncation) 17 <0.001 0 dialects of Dinka. In the Hol dialect, which belongs to the Bor dialect cluster, this toneme is realized as a Fall consistently across contexts, both utterance-finally (1a) and utterance- L*+H (L-L%), ox (total truncation) 87787 ~0.88 ~0.88 medially (1b)1. However, in the Ngok dialect, which belongs to the Padang dialect cluster, the Fall is realized as a Fall in utterance-final contexts (2a) but as a Low tone in utterance-

238 239 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 POSTER SESSIONS and where the pressure of tonal targets is nonexistent. The phenomena under investigation The Trini Sing-Song: Sociophonetic variation in Trinidadian English prosody and also shed light on the diachronic development of tone in Dinka. Note that, in the context in differences to other varieties (2b), Fall and Low neutralize in the Ngok dialect. Under hypocorrection [4], this could lead to a scenario in which tonemes are no longer contrastive. We hypothesize that a comparable Philipp Meer1 and Robert Fuchs2 process explains the synchronic inventory of the Agar dialect, which has only three tonemes 1University of Münster & University of Campinas, 2University of Hamburg [5]. The question of whether and to what extent individual postcolonial varieties of English develop their own, distinctive linguistic norms (i.e. endonormativity) is a central question References in research on postcolonial Englishes [1]. In the Caribbean island of Trinidad, variation in prosody may be a particularly important dimension at which endonormative tendencies in [1] Xu, Y. and Sun, X. 2002. Maximum speed of pitch change and how it may relate to speech. the local variety of English come to the fore: Trinidadian English (TrinE) prosody is popularly Journal of the Acoustical Society of America 111, 1399-1413. described as ‘sing-song’ and thought to be a characteristic feature that distinguishes it from [2] Zhang, J. 2002. The Effects of Duration and Sonority on Contour Tone Distribution: Typological Survey other Englishes [2]. On the phonological level, previous studies indicate that intonation and Formal Analysis. PhD. dissertation, UCLA, 2001. New York, NY: Routledge. phrase (IP-)final rises and frequent alternation between H tones and L* pitch accents in [3] Remijsen, B. & Gilley, L. 2008. Why are three-level vowel length systems rare? Insights from prosodic units below the IP [2, 3] might contribute to TrinE ‘sing-song’. On the phonetic level, Dinka (Luanyjang dialect). Journal of Phonetics 36(2), 318–344. the degree of variation in pitch may also play a role [3, 4]. However, evidence on global pitch [4] Ohala, J. J. 1989. Sound change is drawn from a pool of synchronic variation. L. E. Breivik & E. H. Jahr (eds.), Language Change: Contributions to the study of its causes. [Series: Trends in parameters such as pitch level, range, and dynamism is currently limited. Specifically, there Linguistics, Studies and Monographs No. 43]. Berlin: Mouton de Gruyter. 173 – 198. is currently no evidence on [5] Andersen, Torben. 1987. The Phonemic System of Dinka. Journal of African Languages and (1) whether a wide pitch range and very dynamic pitch are endonormative features of TrinE, Linguistics 9, 1–27. (2) to what extent TrinE differs from other varieties of English regarding these aspects, and, (3) to what extent pitch variation is generally sociolinguistically conditioned. Based on read and spontaneous data from 111 speakers, we analyze pitch level, range, and

1 dynamism in TrinE (69 speakers) in comparison to Southern Standard British (BrE; 22 speakers) In the examples, slashes indicate surface phonological form. The difference in tonal and Educated Indian English (IndE; 20 speakers), and investigate sociophonetic variation in specification for the infinitive of ‘hate’ is not relevant. TrinE prosody with a view to these global F0 parameters. We follow [5, 6] in measuring the overall distribution of F0 in terms of pitch level (median F0 in Hz), range (80% range in st), and dynamism (coefficient of variability). We test for inter-varietal and sociolinguistic effects by using (mixed-effects) linear regression models that include gender as a fixed effect, thus controlling for the possibly confounding effect of gender. With regard to the cross-varietal comparison (see Fig.1), the results show that pitch level in TrinE is not higher than in BrE and IndE; compared to both varieties, TrinE generally has the lowest level. TrinE has a (non-significantly) wider pitch range than IndE but a somewhat smaller range than BrE in read speech, while in spontaneous speech it has a (non-significantly) wider range than both BrE and IndE. Likewise, TrinE has (non-significantly) more dynamic pitch than IndE in read speech, but not BrE, while TrinE only has marginally (and non-significantly) more dynamic pitch than IndE and BrE in spontaneous speech. In sum, TrinE does not have a wider pitch range (except in spontaneous speech) or more dynamic pitch. However, an analysis of the influence of speaker gender complicates the picture (see Fig. 2). Male speakers across the three varieties generally show few differences in pitch range and dynamism, and these differences are below the level of significance. In contrast, considerably more female TrinE speakers were found to have a wider pitch range than female BrE speakers in spontaneous speech and more pitch dynamism than both female BrE and IndE speakers in this speaking style. It is female Trinidadian speakers, in spontaneous speech, that have particularly wide pitch ranges and dynamic pitch (see also Table 1). An important finding of this study is that pitch variation patterns are not homogenous in TrinE, but systematically sociolinguistically conditioned across gender, age, and ethnic groups, and rural and urban speakers (see Table 1). There is a considerable degree of systematic local differentiation in TrinE prosody. More generally, the findings may be taken to indicate that

endonormative tendencies and sociolinguistic differentiation in TrinE prosody are interlinked. Our findings further indicate that, on a phonetic level (in addition to previously identified patterns on the phonological level), a high degree of variability in pitch could play a role in the popular perception of TrinE being ‘sing-song’, at least for female Trinidadian speakers.

Figure 1. Estimated means and confidence intervals superimposed on observed distributions (smoothed violin plots with individual values) for pitch level (median F0 in Hz), range (80% range in semitones), and dynamism (pitch dynamism quotient) across British, Indian, and Trinidadian English in read and spontaneous speech (effect: variety). Levels of significance: p < .05 (*), p < .01 (**), p < .001 (***).

References

[1] Schneider, E. W. (2007). Postcolonial English: Varieties around the world. Cambridge, UK: CUP. [2] Drayton, K.‑A. (2013). The prosodic structure of Trinidadian English Creole (PhD thesis). University of the West Indies, Trinidad & Tobago. [3] Ferreira, J.‑A., & Drayton, K.‑A. (2017). Trinidadian English. Author manuscript submitted for publication, University of the West Indies, Trinidad & Tobago. [4] Wilson, G. (2007). Utterance-final pitch prominence (MPhil thesis). University of Cambridge, UK. [5] Fuchs, R. Pitch range, dynamism, and level in postcolonial Englishes: A comparison of Educated Indian English and British English. Proceedings of Speech Prosody, 893–897. [6] Maxwell, O., Payne, E., & Billington, R. (2018). Homogeneity vs. heterogeneity in Indian English: Investigating influences of L1 on f0 Range.Proceedings of Interspeech, 2191–2195.
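The three global F0 measures used in this study can be computed from a fundamental-frequency track roughly as follows. The exact operationalisation of the pitch dynamism quotient in [5, 6] may differ from the simple coefficient of variability used in this sketch, and the F0 values below are invented.

```python
import numpy as np

def semitones(f0_hz, ref_hz=1.0):
    return 12 * np.log2(f0_hz / ref_hz)

def global_pitch_measures(f0_hz):
    """Pitch level, 80% range (st), and a simple dynamism quotient for one speaker."""
    f0 = np.asarray(f0_hz, dtype=float)
    f0 = f0[f0 > 0]                              # drop unvoiced frames
    level_hz = np.median(f0)                     # pitch level: median F0 in Hz
    p10, p90 = np.percentile(f0, [10, 90])
    range_st = semitones(p90) - semitones(p10)   # 80% pitch range in semitones
    dynamism = np.std(f0) / np.mean(f0)          # coefficient of variability
    return level_hz, range_st, dynamism

# Invented F0 track (Hz) for one passage of one speaker
track = np.array([210, 220, 0, 245, 260, 230, 0, 190, 180, 205, 250, 270, 0, 200])
print(global_pitch_measures(track))
```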

242 243 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 POSTER SESSIONS No word stress in Conchucos Quechua? Evidence from peak alignment data References Timo Buchholz Freie Universität Berlin [1] Adelaar, Willem F. H. & Pieter Muysken. 2004. The languages of the Andes. Cambridge, New York: Cambridge University Press. Languages of the Quechua family have variously been described as having regular, lexically [2] Parker, Gary. 1976. Gramática quechua: Ancash-Huailas. Lima: Ministerio de Educación. noncontrastive, word stress on the [among others, 1, 2, 3, 4, 5], although mostly [3] Cusihuamán, Antonio. 2001. Gramática Quechua: Cuzco-Collao, 2nd edn. Lima: Centro Bartolomé based on impressionistic data (except [6, 7, 8]) and with lengthy additional descriptions of de las Casas. unexplained variation. This lets us suspect that the phenomenon under discussion is not [4] Stewart, Anne M. 1984. New approaches to coping with stress: A case study in Conchucos Quechua. word stress as defined in [9], but instead perhaps something else (e.g. pitch accents, phrasal Work Papers of the Summer Institute of Linguistics, University of North Dakota Session 28, tones [10, 11]). 193–212. I present an analysis of acoustic data of Conchucos Quechua from fieldwork in the town [5] Wetzels, Leo & Sérgio Meira. 2010. A Survey of South American stress systems. In Rob of Huari, Conchucos, Peru. The data is semi-spontaneous from dialogical communication Goedemans, Harry van der Hulst & Ellen van Zanten (eds.), A survey of word accentual patterns games. All participants are fully bilingual in Spanish and Quechua and were recorded playing in the languages of the world. Berlin: de Gruyter, 313–380. all the games first in one, then the other language. Conchucos Quechua has contrastive [6] O’Rourke, Erin. 2005. Intonation and Language Contact: A Case Study of two Varieties of Peruvian vowel length [2]. We can identify only three tonal contours in the entire pragmatically rather Spanish. Urbana-Champaign: University of Illinois Dissertation. diverse corpus: a rising (LH), and two falling ones (falling HL and rising-falling LHL). The data [7] O’Rourke, Erin. 2009. Phonetics and phonology of Cuzco Quechua declarative intonation: An agrees broadly with the existing descriptions in that a pitch event characterized by a rise or instrumental analysis. Journal of the International Phonetic Association 39(3), 291–312. fall occurs near the end of words, either on the penult or the final syllable. However, this is [8] Hintz, Diane M. 2006. Stress in South Conchucos Quechua: A Phonetic and Phonological Study. only the case in a subset of the words, most likely those rightmost in a phrase. In a corpus International Journal of American Linguistics 72(4), 477–521. of 487 noun phrases from one game, an identifiable tonal contour occurred on every 1.27 [9] Hyman, Larry M. 2014. Do All Languages Have Word Accent? In Harry van der Hulst (ed.), Word content words, slightly less often than in French or Korean [12], suggesting that contours are stress: Theoretical and typological issues. Cambridge: Cambridge University Press, 56–82. assigned at an AP-like level which can map to more than one content word. In those words [10] Gordon, Matthew J. 2014. Disentangling stress and pitch accent: A typology of prominence at where no pitch event occurs, duration also does not cue the penult as being prominent different prosodic levels. In Harry van der Hulst (ed.), Word stress: Theoretical and typological [13]. 
When a pitch event occurs, it often does so on a syllable with a reduced vowel, even issues. Cambridge: Cambridge University Press, 83–118. if that syllable is the putatively stressed penult. Phrase-final vowel devoicing can extend to [11] Hyman, Larry M. 2006. Word-prosodic typology. Phonology 23(2), 225–257. the penult of the final word; in that case, the pitch event moves towards the left, suggesting [12] Jun, Sun-Ah & Cécile Fougeron. 2000. A Phonological Model of French Intonation. In Antonis it seeks to be realized on an available TBU close to the right edge, but not necessarily the Botinis (ed.), Intonation: Analysis, Modelling and Technology. Dordrecht: Springer, 209–242. penult. [12] Buchholz, Timo & Uli Reich. 2018.The realizational coefficient: Devising a method for empirically We hypothesize that phrase-final peaks are not due to association of an H tone with a putatively determining prominent positions in Conchucos Quechua. In Ingo Feldhausen, Jan Fliessbach & stressed penult (against [8]), but due to alignment with a boundary. In order to test this, we Maria d. M. Vanrell (eds.), Methods in prosody: A Romance language perspective. Berlin: Language conducted peak alignment measurements on words with a rise-fall occurring in the final two Science Press, 123–164. syllables from another game (247 words from 16 speakers). For comparison, Spanish data [13] R Core Team. 2015. R: A Language and Environment for Statistical Computing. Vienna: R consisting of paroxytonic words with a rise-fall (138 words) from the same game and the Foundation for Statistical Computing. same speakers was also analyzed. Statistical analysis in R [14] confirmed the impression that peak alignment in the Quechua data is more variable than in the comparable Spanish data (Figure 1), where peaks more strictly occur within the stressed syllable. Regression models showed that the end of the penult was a good predictor for peak alignment in Spanish, as expected (R² = 0.75, F (1, 51) = 158, p < 0.001 for words in phrase-final position, R² = 0.68, F (1, 85) = 189, p < 0.001 for words in non-final position). In Quechua, however, no syllabic landmark in the supposedly stressed penult was a good predictor. In phrase-final Quechua words (operationalized as followed by silence) with a rising-falling contour, the peak position was best (but not well overall) predicted from the end of the final syllable, with the duration of both the penult and the final syllable as predictors (adj. R² = 0.54, F (3, 104) = 42.49, p < 0.001; for non-final words (n=140), no model reached an adj. R² above 0.5). Peak alignment in the Quechua data also showed no significant difference between utterance types. The lack of a determinable syllabic anchor and the far more variable peak placement (compared to Spanish) extending into the final syllable are taken as evidence against association of the high tone responsible for the peak with a prominent position. However, qualitative analysis of alignment patterns from other games suggests that in some Figure 1. Pitch peak position distribution in Quechua (A,B) words and Spanish paroxytone (C, D) words with a word- utterances, reference to the word penult must be made as an anchor point for pitch events. final falling pitch contour, in phrase-final (B, D) and non-final (A, C) position. N(A)= 140; N(B)= 107; N(C)= 86; N(D)= 52. 
I therefore propose that our data from Conchucos Quechua occupies a rather marginal position in a stress typology such as proposed in [9]: it pays very little attention to stress, but not none at all. I suggest that the observed prosodic variability is due to speakers taking recourse to different prosodic configurations available to them in their bilingual competence.
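A sketch of the peak-alignment regressions reported above: the time of the F0 peak within a word is regressed on a candidate syllabic landmark (here, the end of the penult), and R² indicates how well that landmark predicts peak placement. The measurements are invented.

```python
import numpy as np
from scipy import stats

# Invented per-word measurements (seconds from word onset)
penult_end = np.array([0.31, 0.35, 0.28, 0.40, 0.33, 0.37, 0.30, 0.36])
peak_time  = np.array([0.30, 0.36, 0.27, 0.41, 0.35, 0.36, 0.29, 0.38])

slope, intercept, r, p, se = stats.linregress(penult_end, peak_time)
print(f"R² = {r**2:.2f}, p = {p:.4f}")
# A high R² (as in the Spanish data) suggests the peak is anchored to the penult;
# a low R² (as in the Quechua data) suggests no such syllabic anchoring.
```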

Speech rhythm as a sociophonetic marker: A comparison of spontaneous and read speech in multiethnolectal Swiss German

Marie-Anne Morand, Melissa Bruno, Seraina Nadig, Sandra Schwab & Stephan Schmid
University of Zurich

In Western Europe, adolescents growing up in culturally diverse urban settings often develop ways of speaking that differ remarkably from the traditional vernaculars; Clyne [1] labels such ways of speaking ‘multiethnolects’, as several minority groups use them collectively. In multiethnolects of Germanic languages, a so-called ‘staccato’ rhythm has been observed. Such alleged syllable-timing can be investigated by rhythm metrics which calculate the durational variability of vocalic and consonantal intervals ([2], [3], [4]). A previous study on read speech in Swiss German [5] revealed that adolescents who were rated as speaking rather multiethnolectal Swiss German displayed slower syllable rate and less vowel duration variability than adolescents who were rated as speaking rather traditional dialect. The present c) d) contribution aims at verifying to what extent speech rhythm is a stable sociophonetic feature across speaking styles (read vs. spontaneous speech), examining (1) whether the two groups of speakers (multiethnolectal vs. traditional) differ with respect to speech rate (syll/sec) and rhythm metrics (varcoC, nPVI-V, %V), and (2) whether these differences are similar in the two speaking styles. The production data consist of read and spontaneous speech of 24 adolescents, who were recorded at schools in the city of Zurich (13 females; mean age = 14.42 years; SD = 0.83). Spontaneous speech was elicited during a ‘spot the difference’ game, using adapted pictures from the DiapixUK dataset [6]. We selected utterances without speaker overlap that contained at least three syllables. These utterances were automatically segmented in phones and syllables using WebMAUS [7]; the output of the automatic segmentation was checked Figure 1. Speech rate (a), varcoC (b), nPVI-V (c) and %V (d) as a function of group (mono = traditional, multi = manually (and corrected, if necessary). In order to determine to what extent speakers were multiethnolectal) and speaking style (DIA = spontaneous, LES = read). perceived as speaking multiethnolectal or traditional Swiss German, speech samples of the adolescents were rated by 40 adolescents from another school (25 females; mean age = 14.8 References years; SD = 0.56), on a 7-point Likert scale ranging from (1) ‘not multiethnolectal at all’ to (7) ‘completely multiethnolectal’. From the entire set of speakers, we selected 12 speakers who [1] Clyne, M. 2000. Lingua Franca and ethnolects in Europe and beyond. Sociolinguistica 14, 83-89. were rated as speaking most multiethnolectally and 12 speakers who were rated as speaking [2] Ramus, F., Nespor, M., & Mehler, J. 1999. Correlates of linguistic rhythm in the speech signal. most traditionally. Cognition 73, 265-292. As can be seen in Figure 1 (a-d), results show first an effect of group for speech rate (F(1, [3] Grabe, E., & Low, E. L. 2002. Durational variability in speech and the rhythm class hypothesis. 22) = 8.91, p = .007) and vowel variability (nPVI-V; F(1, 22) = 6.72, p = .017), but not for Laboratory Phonology 7, 515-546. consonant variability (varcoC; F(1, 22) = 0.29, p = .596) nor for percentage of vocalic intervals [4] Dellwo, V. 2006. Rhythm and speech rate: a variation coefficient for DeltaC. In Karnowski, (%V; F(1, 22) = 0.38, p = .544). More precisely, speakers of rather multiethnolectal Swiss P., & Szigeti I. (eds.), Language and Language Processing: Proceedings from the 38th Linguistics Colloquium. Frankfurt: Peter Lang, 231-241. 
German show a slower speech rate (4.95 syll/sec) and less vowel variability (nPVI-V = 46.17) than speakers of rather traditional Swiss German (speech rate = 5.54 syll/sec; nPVI-V = 49.15). These findings, especially with regard to vowel variability, confirm that multiethnolectal Swiss German rhythm tends to approximate a more syllable-timed rhythm. Second, results show a significant effect of speech style for all examined features (F(1, 22) = [9.62-70.90], p = [.001-.005]). More precisely, speech rate is faster in spontaneous speech (5.68 syll/sec) than in read speech (4.8 syll/sec) (Fig. 1a); vowel variability (Fig. 1b), consonant variability (Fig. 1c) and percentage of vocalic intervals (Fig. 1d) are higher in spontaneous (nPVI-V = 51.09, varcoC = 0.49, %V = 42.86) than in read speech (nPVI-V = 44.23, varcoC = 0.45, %V = 40.97). These results confirm that spontaneous speech triggers more rhythmic variability than read speech [8]. Third, we do not observe a significant interaction between group and speech style for any of the features under study (see Fig. 1; F(1, 22) = [0.002-1.09], p = [.307-.97]), which indicates that the differences observed between multiethnolectal and traditional speakers in read speech are also observed in spontaneous speech. Taken together, our findings highlight the fact that the rather syllable-timed ‘staccato rhythm’ constitutes a sociophonetic feature of multiethnolectal Zurich German which persists across speaking styles.

[5] Morand, M.-A., Bruno, M., Julmi, N., Schwab, S., & Schmid, S. 2020. Speech rhythm in multiethnolectal Zurich German. Proceedings of the 10th International Conference on Speech Prosody. Tokyo (virtual).
[6] Baker, R., & Hazan, V. 2011. DiapixUK: task materials for the elicitation of multiple spontaneous speech dialogs. Behavior Research Methods 43(3), 761-770.
[7] Kisler, T., Reichel, U., & Schiel, F. 2017. Multilingual processing of speech via web services. Computer Speech & Language 45, 326-347.
[8] Grosjean, F., & Deschamps, A. 1972. Analyse des variables temporelles du français spontané. Phonetica 26(3), 129-157.
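The rhythm metrics used above can be computed from the durations of vocalic and consonantal intervals roughly as follows ([2], [3], [4]). Interval durations are invented, and details of the formulas (e.g. whether varcoC is scaled by 100) vary somewhat between studies.

```python
import numpy as np

def rhythm_metrics(vowel_ints, cons_ints):
    """%V, varcoC and nPVI-V from vocalic and consonantal interval durations (s)."""
    v = np.asarray(vowel_ints, dtype=float)
    c = np.asarray(cons_ints, dtype=float)
    percent_v = 100 * v.sum() / (v.sum() + c.sum())   # %V: share of vocalic material
    varco_c = 100 * c.std() / c.mean()                # varcoC: rate-normalised C variability
    pairs = zip(v[:-1], v[1:])                        # successive vocalic intervals
    npvi_v = 100 * np.mean([abs(a - b) / ((a + b) / 2) for a, b in pairs])
    return percent_v, varco_c, npvi_v

# Invented interval durations for one utterance (in seconds)
vowels     = [0.08, 0.12, 0.07, 0.10, 0.09, 0.11]
consonants = [0.09, 0.14, 0.06, 0.11, 0.08, 0.10, 0.07]
print(rhythm_metrics(vowels, consonants))
```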

The acquisition of English accent retraction by speakers of a non-plastic language

Stephanie Landblom and José Ignacio Hualde
University of Illinois at Urbana-Champaign

Nuclear Accent (NA) placement in English is subject to a number of grammatical constraints and pragmatic conditions [1]. Such distinctions have been shown to be difficult for L2 speakers to learn [2], [3]. We consider four accent-retraction rules: (a) narrow focus on a non-phrase-final word (I wore a red shirt vs I wore a red shirt), (b) unaccented indefinites in phrase-final position (I saw someone vs I saw John), (c) left-stress compounds vs. adjective+noun sequences (the White House vs the white house) and (d) final vs. non-final stressed compounds (Green Street vs Green Avenue). Here we compare the production of native and non-native speakers in order to address the question of whether all four rules are equally difficult to learn for L2 speakers, in particular for native speakers of Spanish, a language generally described as having less prosodic flexibility compared with English [4]. An issue that may affect learning, which we also examine, is whether native speakers are consistent in the implementation of accent retraction across all tested categories.

Methods. 20 L1-English and 23 L1-Spanish/L2-English speakers from various Spanish dialects and of varying proficiency levels (as measured by a cloze test) participated in this study. They were asked to read a series of sentences out loud, which were presented with a contextual question and designed to elicit one of the four conditions. The production data were analyzed in two different ways. First, measurements of F0, duration and intensity were taken from the last vowel in each utterance. Secondly, three trained annotators (not the authors) judged each utterance as having final or nonfinal main prominence.

Results. A Principal Component Analysis was run to reduce the dimensionality of the acoustic measurements. PC1 accounted for the most variance in the data, and maximum and mean pitch measures were identified as the main contributors. A mixed effects linear regression examining relevant factors such as context and proficiency level or speaker group was then run on the output, which showed that L1 speakers had significantly lower values in those contexts where the accent was predicted to be retracted, except that there was not a significant difference in the Phrase vs Compound comparison. L2 speakers showed much smaller differences than L1 speakers across all contexts (Figs. 1 & 2). Further analysis of the L2 data showed that, as proficiency level increases, accent retraction is more consistently implemented in sentences with nonfinal focus and in left-accented compounds. The most difficult category contrast to acquire appears to be that between pragmatically neutral sentences ending in a noun vs an indefinite pronoun, where we actually find a negative correlation between proficiency and accuracy (Fig. 3). The results of the perceptual annotation of the sound files by trained listeners indicate that L1 speakers were perceived to have consistently retracted the accent in the contexts where that was predicted, but also to some extent in contexts that do not require retraction, especially in the two comparisons involving compounds. L2 speakers were perceived to have overall lower levels of retraction. The lowest levels of accent retraction for L2 speakers are found in sentences ending with an indefinite pronoun (Fig. 4). Further analysis shows that this is true across proficiency levels, consistent with what the acoustic analysis shows.

Discussion. Overall, for native speakers, our experimental results agree with standard descriptions regarding NA placement in English. Unexpected perceived retraction might be due to contrastive emphasis on the adjective in phrases like the white house and to individual differences in which particular compounds belong to the right-accent class (e.g., apple cake vs apple pie). Interestingly, these are not the contexts that are most difficult for L1-Spanish learners to acquire. Instead, deaccentuation of final indefinite pronouns is particularly problematic, even at high levels of proficiency.

References

[1] Ladd, D. R. (2008). Intonational phonology. Cambridge University Press.
[2] Maastricht, L., Krahmer, E., & Swerts, M. (2016). Prominence patterns in a second language: Intonational transfer from Dutch to Spanish and vice versa. Language Learning, 66(1), 124-158.
[3] Zubizarreta, M. L., & Nava, E. (2011). Encoding discourse-based meaning: Prosody vs. syntax. Implications for second language acquisition. Lingua, 121(4), 652-669.
[4] Vallduví, E. (1991). The role of plasticity in the association of focus and prominence. In Proceedings of the Eastern States Conference on Linguistics (ESCOL).
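The analysis described above combines dimensionality reduction with a mixed-effects regression. The following sketch shows one generic way such a two-step analysis can be set up; the file name and column names (f0_max, f0_mean, duration, intensity, context, group, speaker) are assumptions for illustration and do not reproduce the authors' dataset or exact model.

```python
# PCA over acoustic measures of the final vowel, then a mixed model of PC1.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.formula.api as smf

df = pd.read_csv("final_vowel_measures.csv")  # hypothetical file

# z-score the acoustic measures, then keep the first principal component
acoustic = ["f0_max", "f0_mean", "duration", "intensity"]
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(df[acoustic]))
df["PC1"] = scores[:, 0]

# Mixed-effects linear regression: does PC1 drop in retraction contexts,
# and does this differ between speaker groups?
model = smf.mixedlm("PC1 ~ context * group", data=df, groups=df["speaker"]).fit()
print(model.summary())
```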

Splitting the rhythmic continuum in rhythmic classes: An HCA approach

Paolo Roseano
Universitat de Barcelona; University of South Africa

Although studies about rhythm have undergone important paradigm shifts in the last decades [1], the majority of authors nowadays would agree that: (1) phonetically speaking, there is a continuum between (at least) two ideal poles (the stress-timed pole and the syllable-timed pole); (2) phonologically speaking, there are (at least) two rhythmic classes (stress-timed languages and syllable-timed languages). In spite of the fact that authors agree that there is a phonetic continuum and that there exist phonological classes, there is no agreement on how the phonetic/acoustic continuum can be split into phonological classes. In other words, when researchers study the rhythmic properties of a specific language and want to classify it based on its acoustic properties, they cannot rely on any method that is at the same time robust and straightforward.

The methods used so far are of two kinds: graphic methods and statistical methods. Graphic methods (already used in classical studies like [2], [3]) consist in plotting the results of the analysis of rhythm metrics on a 2D Cartesian coordinate system (see an example with our data in Figure 1) and establishing, by means of a qualitative/visual interpretation of the graph, whether the language under study is closer to the stress-timed pole or to the syllable-timed pole. The main disadvantage of graphic methods is that the assignment of a language to a rhythmic class is established merely on the basis of a visual inspection of the graph and is not validated quantitatively. The studies that use statistics to classify a language usually carry out metric-by-metric analyses. For example, [4] uses GLMM to test whether Catalan differs from English and Spanish in 7 rhythm metrics taken one by one, while there is no analysis of all metrics jointly. The absence of an overall statistical analysis is a shortcoming of this approach.

In this paper we argue that hierarchical cluster analysis (HCA) can provide a clear-cut answer to questions like 'Is language X stress-timed or syllable-timed?', and such an answer is based on a statistical analysis that takes into account all rhythm metrics jointly. A subsequent discriminant analysis (DA) can determine the most effective rhythm metrics.

To test this method, we used recordings of The North Wind and the Sun read by 48 speakers, i.e. 6 speakers for each of the following 8 languages: 3 syllable-timed languages (Central Peninsular Spanish, Galician, Tuscan Italian), 3 stress-timed languages (Southern British English, German, Ladin), an unclassified language (Emilian) and a language that has been considered problematic or intermediate by previous studies (Catalan) ([1], [3], [4], [5], [6]). Our hypothesis is that Emilian is stress-timed because it displays both complex consonant clusters and a phonological contrast between long and short vowels [7], while Catalan is syllable-timed because it does not show complex consonant clusters, nor quantitative vowel reduction or length contrasts in vowels [8]. As usual in studies on rhythm, vocalic and consonantal intervals were annotated in a Praat textgrid for each recording. Correlatore [9] was used to calculate the mean values of 9 metrics (Table 1) put forward in classical studies about rhythm: V%, ΔC, ΔV, varcoΔC, varcoΔV, CrPVI, VrPVI, CnPVI, VnPVI ([2], [3], [5]). Subsequently, Gabmap [10] was used to carry out a HCA with Ward's method and a DA.

Results show that there are two clearly separated clusters (Figure 2). One of them corresponds to stress-timed languages (English, German, Ladin and Emilian) and is better determined by ΔC. The other one corresponds to syllable-timed languages (Spanish, Galician, Italian and Catalan) and is better determined by VrPVI. On the whole, thus, HCA seems to be a valid method to split the rhythmic phonetic continuum into phonological classes, while DA gives hints about the most effective rhythm metrics.

Table 1. Mean values of 9 rhythm metrics for 8 languages (GAL = Galician, ITA = Italian, CAT = Catalan, EMI = Emilian, SPA = Spanish, LAD = Ladin, ENG = English, GER = German).

Figure 1. ΔV/ΔC plot of 8 languages.

Figure 2. Dendrogram resulting from the cluster analysis with 9 rhythm metrics (V%, ΔC, ΔV, varcoΔC, varcoΔV, CrPVI, VrPVI, CnPVI, VnPVI) of 8 languages (Spanish, Galician, Italian, Catalan, English, German, Ladin, Emilian).

References

[1] Mairano, P. 2010. Rhythm Typology: Acoustic and perceptive studies. Doctoral dissertation, Università degli Studi di Torino, Torino, Italy.
[2] Ramus, F., Nespor, M., & Mehler, J. 1999. Correlates of linguistic rhythm in the speech signal. Cognition 73(3), 265-292.
[3] Grabe, E., & Low, E. L. 2002. Durational variability in speech and the rhythm class hypothesis. In Gussenhoven, C. & Warner, N. (Eds.), Papers in Laboratory Phonology 7. Berlin: de Gruyter, 515-546.
[4] Prieto, P., Vanrell, M. M., Astruc, L., Payne, E., & Post, B. 2012. Phonotactic and phrasal properties of speech rhythm: Evidence from Catalan, English, and Spanish. Speech Communication 54(6), 681-702.
[5] Dellwo, V. 2006. Rhythm and speech rate: A variation coefficient for deltaC. In Karnowski, P. & Szigeti, I. (Eds.), Language and language-processing. Frankfurt: Peter Lang, 231-241.
[6] Roseano, P. 2020. Il ritmo linguistico del ladino dolomitico: Studio acustico del badiotto. Ladinia XLIV, 1-52.
[7] Canepari, L., & Vitali, D. 1995. Pronuncia e grafia del bolognese. Rivista Italiana di Dialettologia 19, 119-164.
[8] Marsà, P., & Roseano, P. 2019. El ritme del català: anàlisi a partir de textos fonèticament equilibrats. Estudios de Fonética Experimental XXVIII, 47-79.
[9] Mairano, P., & Romano, A. 2010. Un confronto tra diverse metriche ritmiche usando Correlatore. In Schmid, S., Schwarzenbach, M. & Studer, D. (Eds.), La dimensione temporale del parlato. Proceedings of the V National AISV Congress (University of Zurich, 4-6 February 2009), 79-100.
[10] Nerbonne, J., Colen, R., Gooskens, C., Kleiweg, P., & Leinonen, T. 2011. Gabmap: A web application for dialectology. Dialectologia, Special Issue II, 65-89.
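The abstract carries out the clustering in Gabmap and the discriminant analysis separately; the sketch below shows the same two steps (Ward's-method HCA over per-language rhythm metrics, then a linear discriminant analysis on the resulting classes) in generic Python. The CSV and its columns are assumptions; the code is illustrative, not a re-implementation of the Gabmap analysis.

```python
# HCA (Ward's method) over standardized rhythm metrics, followed by LDA.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

metrics = ["pV", "deltaC", "deltaV", "varcoC", "varcoV",
           "CrPVI", "VrPVI", "CnPVI", "VnPVI"]
df = pd.read_csv("rhythm_metrics_by_language.csv", index_col="language")  # hypothetical

X = StandardScaler().fit_transform(df[metrics])

# Ward's method on the standardized metrics; cut the tree into two classes
Z = linkage(X, method="ward")
df["cluster"] = fcluster(Z, t=2, criterion="maxclust")

# Discriminant analysis: the coefficients hint at the most effective metrics
lda = LinearDiscriminantAnalysis().fit(X, df["cluster"])
print(df["cluster"])
print(dict(zip(metrics, lda.coef_[0].round(2))))
```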

Pretonic lengthening in Ukrainian as the lexical stress domain extension

Janina Mołczanow, Beata Łukaszewicz, Anna Łukaszewicz
University of Warsaw

Temporal adjustments are common in different loci; the most widely described prosodic positions causing an increase in duration include word boundaries and metrically strong positions (lexical/grammatical stress). Less known is the presence of lengthening in the syllables adjacent to stress. In languages such as Catalan ([1]), Russian ([2]), and in some East Slavic dialects ([3]), lengthening affects the syllable immediately preceding lexical stress. A related phenomenon, called the "stress-adjacency effect", has been observed in English, where a stressed syllable undergoes lengthening before another stressed syllable, e.g. in peach light in comparison to peach delight ([4], [5], [6]). The present paper reports on an acoustic study of pretonic lengthening in Ukrainian. The existence of pretonic lengthening has gone unnoticed in the traditional descriptions, and although recent acoustic research ([7], [8], [9]) has pointed to its presence in Ukrainian, this phenomenon has not been systematically investigated.

The present study is based on data collected from 12 monolingual native speakers of Ukrainian (8F, 4M; aged 24-65, M = 42), living in Western Ukraine. Duration measurements of the vowel /a/ were conducted in three prosodic conditions (initial - second - pretonic). All the words had the structure [σσσˈσ(σ1-2)], with lexical stress always falling on the fourth syllable (see (1) for illustration). 8 words in 3 repetitions were recorded for each prosodic condition. The identity of consonants adjacent to the target vowel was controlled in terms of the place of articulation. Participants were asked to read target words embedded in a frame (Skažete … druhyj raz 'You (pl.) will say … for the second time'). Segmentation was done manually by the authors using Sound Forge and Praat ([10]). In sum, measurements from 852 segments (24 tokens x 3 repetitions x 12 speakers; 12 vowels were excluded due to disfluencies of pronunciation) were entered into statistical analyses, conducted in SPSS, v. 26. Linear mixed effects models were built to test the effect of position on vowel duration. Intercepts and slopes for speaker and item were included in the random structure. The results demonstrate that though pretonic vowels are longer than both the vowels in the initial and the second position (see Fig. 1), significant results were obtained for the pretonic vs. initial positions (β = -11.92, SE = 4.43, t = -2.689, p < 0.05), but not for the pretonic vs. second positions (β = -7.33, SE = 4.43, t = -1.656, p = 0.108).

(1) Prosodic position and example
    initial:  σσσˈσ(σ)  zapakuˈvaty 'to pack'
    second:   σσσˈσ(σ)  nezapeˈrečnyj 'unquestionable'
    pretonic: σσσˈσ(σ)  peresaˈdyty 'to transplant'

The phenomenon of pretonic lengthening is interesting because, by reducing the temporal difference between the pretonic and the tonic syllable, pretonic lengthening weakens the durational cue to lexical stress. The functionality of this prominence-related duration effect remains unclear. The lengthening reported for English has been interpreted as a timing effect of rhythm: the lengthening of the stressed syllable before another stressed syllable leads to the equalisation of feet duration ([4], [5], [6]). However, this explanation cannot be applied to Ukrainian and other Slavic languages, which lengthen an unstressed syllable and, thus, blur the syntagmatic distinction between the metrically prominent syllable and the preceding prosodically weak position. It is argued in [3] that pretonic lengthening in East Slavic dialects is caused by a high tone which is associated with the syllable immediately preceding main stress. However, the lengthening effect detected in Ukrainian is very subtle in comparison with that reported for the East Slavic dialects, and available F0 measurements [9] do not reveal any pitch rise associated with the pretonic syllable in Ukrainian. It has been demonstrated in ([7], [8]) that the effect of pretonic lengthening is bigger than the effect of rhythmic stress in Ukrainian, and that the regular alternation of rhythmically stressed and unstressed syllables is distorted in the vicinity of pretonic syllables. As initially suggested in [7], these facts indicate that the existence of pretonic lengthening in Ukrainian is unconnected to rhythm, but is closely related to the domain of lexical stress, which appears to extend to the pretonic syllable.

Figure 1. Mean vowel duration (ms) depending on position.

References

[1] Chitoran, I., & Hualde, J. I. 2007. From hiatus to diphthong: The evolution of vowel sequences in Romance. Phonology 24, 37–75.
[2] Kasatkina, R. F. 2005. Moskovskoe akan'e v svete nekotoryx dialektnyx dannyx. Voprosy Jazykoznanija 53, 29–45.
[3] Bethin, C. Y. 2006. Stress and tone in East Slavic dialects. Phonology 23, 125–156.
[4] Bolinger, D. 1965. Forms of English: Accent, Morpheme, Order. Cambridge, MA: Harvard University Press.
[5] Van Lancker, D., Kreiman, J., & Bolinger, D. 1987. Anticipatory lengthening. Journal of Phonetics 61, 339-347.
[6] White, L. 2002. English speech timing: a domain and locus approach. Ph.D. dissertation, University of Edinburgh.
[7] Łukaszewicz, B., & Mołczanow, J. 2018. Rhythmic stress in Ukrainian: Acoustic evidence of a bidirectional system. Journal of Linguistics 54, 367-388.
[8] Mołczanow, J., Łukaszewicz, B., & Łukaszewicz, A. 2018. Rhythmic stress or word-boundary effects? Comparison of primary and secondary stress correlates in segmentally identical word pairs. In Proceedings of the 9th International Conference on Speech Prosody, 908-912.
[9] Łukaszewicz, B., & Mołczanow, J. 2018. The role of vowel parameters in defining lexical and subsidiary stress in Ukrainian. Poznań Studies in Contemporary Linguistics 54, 355-375.
[10] Boersma, P., & Weenink, D. 2020. Praat: doing phonetics by computer. Computer program. www.praat.org.
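The statistical design (a linear mixed model of vowel duration by prosodic position, fitted in SPSS) can be illustrated with a generic Python equivalent. The file and column names below are assumptions; the full crossed speaker/item random structure used in the study is simplified to by-speaker intercepts and slopes.

```python
# Mixed-effects model of vowel duration by position (pretonic as reference level).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ukrainian_vowel_durations.csv")  # hypothetical

model = smf.mixedlm(
    "duration_ms ~ C(position, Treatment(reference='pretonic'))",
    data=df,
    groups=df["speaker"],
    re_formula="~C(position, Treatment(reference='pretonic'))",
).fit()
# Fixed-effect coefficients compare the initial and second positions against pretonic.
print(model.summary())
```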

DAY 2 - Poster Session C: Phonological Processes

Durational differences of word-final /s/ emerge from the lexicon: Evidence from pseudowords

Dominic Schmitz1, Ingo Plag1, Dinah Baer-Henney2
1English Language and Linguistics, Heinrich-Heine-Universität Düsseldorf
2Linguistics and Information Science, Heinrich-Heine-Universität Düsseldorf

Recent research has shown that seemingly identical suffixes such as word-final /s/ in English show systematic differences in their phonetic realization (e.g. [1], [2], [3]). Most recently, Schmitz et al. [4] have demonstrated that the durational differences between different types of /s/ also hold for pseudowords: the duration of /s/ is longest in non-morphemic contexts, shorter with suffixes, and shortest in clitics. At the theoretical level such systematic differences are unexpected and unaccounted for in current theories of speech production (e.g. [5], [6]). Recently, Tomaschek et al. [2] applied principles of discriminative learning theory (e.g. [7]) and found that measurements derived from their discriminative network are able to predict the patterning of /s/ durations. Following this approach, we implemented a Linear Discriminative Learning (LDL, e.g. [8]) network trained on real word data in order to predict the durations of pseudowords’ final /s/ using the production data by [4]. Adopting the implementation of pseudowords into LDL networks by Chuang et al. [9], we find that the durations of different types of /s/ are successfully predicted by measures derived from our discriminative network. That is, different types of /s/ emerge as separate inflectional categories with distinct durations. The present study shows that durations of pseudowords’ suffixes assessed in a production study can be predicted by LDL networks trained on real word data. That is, durations of pseudowords’ suffixes can be predicted based on their relations to the lexicon. They emerge through the support for their morphological functions from the pseudowords’ sublexical and collocational properties.

References

[1] Plag, I., Homann, J., & Kunter, G. 2017. Homophony and morphology: The acoustics of word-final S in English. Journal of Linguistics 53(1), 181–216. doi: 10.1017/S0022226715000183
[2] Tomaschek, F., Plag, I., Baayen, R. H., & Ernestus, M. 2019. Phonetic effects of morphology and context: Modeling the duration of word-final S in English with naïve discriminative learning. Journal of Linguistics, 1–39. doi: 10.1017/S0022226719000203
[3] Plag, I., Lohmann, A., Ben Hedia, S., & Zimmermann, J. 2020. An <s> is an <s>, or is it? Plural and genitive-plural are not homophonous. In Körtvélyessy, L. & Stekauer, P. (Eds.), Complex Words, 260-292. Cambridge: Cambridge University Press.
[4] Schmitz, D., Plag, I., & Baer-Henney, D. 2020. The duration of word-final /s/ differs across morphological categories in English: Evidence from pseudowords. Manuscript submitted for publication.
[5] Roelofs, A., & Ferreira, V. S. 2019. The architecture of speaking. In Hagoort, P. (Ed.), Human language: From genes and brains to behavior. MIT Press, 35-50.
[6] Turk, A., & Shattuck-Hufnagel, S. 2020. Speech Timing: Implications for Theories of Phonology, Phonetics, and Speech Motor Control. Oxford: Oxford University Press.
[7] Rescorla, R. A., & Wagner, A. R. 1972. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In Black, A. H. & Prokasy, W. F. (Eds.), Classical Conditioning II: Current research and theory. New York: Appleton-Century-Crofts, 64-99.
[8] Baayen, R. H., Chuang, Y. Y., & Blevins, J. P. 2018. Inflectional morphology with linear mappings. The Mental Lexicon 13(2), 232-270. doi: 10.1075/ml.18010.baa
[9] Chuang, Y.-Y., Vollmer, M. L., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., & Baayen, R. H. 2020. The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using linear discriminative learning. Behavior Research Methods. doi: 10.3758/s13428-020-01356-w
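The core of Linear Discriminative Learning referred to in the abstract is a pair of linear mappings between numeric form and meaning representations, estimated by least squares. The toy example below only illustrates that mapping step; the matrices are invented, and the measures the authors actually derive from their trained network are not reproduced here.

```python
# Toy illustration of the LDL endpoint mappings (comprehension and production).
import numpy as np

# C: one row per word, cue (e.g. triphone) indicators   (4 words x 5 cues)
# S: one row per word, semantic/functional dimensions   (4 words x 3 dims)
C = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1]], dtype=float)
S = np.array([[1.0, 0.2, 0.0],
              [0.9, 0.1, 0.3],
              [0.1, 1.0, 0.5],
              [0.0, 0.8, 1.0]])

F = np.linalg.pinv(C) @ S   # comprehension mapping: form -> meaning
G = np.linalg.pinv(S) @ C   # production mapping: meaning -> form

S_hat = C @ F               # predicted semantics
C_hat = S @ G               # predicted forms; network-derived measures
print(np.round(S_hat, 2))   # (e.g. semantic support) are read off such matrices
print(np.round(C_hat, 2))
```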

Old English in Stratal OT

Hideo Kobayashi1, Peter M. Skaer2
1University of Hyogo, 2Hiroshima University

The deletion of a nasal coda has been perceived as one of the Old English (OE) compensatory lengthening (CL) types, as exemplified in finf /finf/ > fīf /fi:f/ and gans /ɡans/ > gōs /ɡo:s/ among other published data under (1) below (Bloomfield 1933; Crystal 2008; de Chene and Anderson 1979; Lass 1994; Lass & Anderson 1975) (the inputs are Old High Germanic). CL is defined as the conservation of syllable weight in which the deleted mora migrates to an unfaithful segment when the nucleus vowel or moraic coda is either deleted or realigned with a non-moraic segment (Hayes 1989). Though Opalińska (2009) argues that OT (Prince and Smolensky 1993/2004) cannot account for CL as seen in feorh > fēores, we nonetheless suggest that such a phonological event is sufficiently explained by OT. The constraint hierarchy of Opalińska (2009) consists of unsuitable constraints on the target CL in the standard OT, as will be fleshed out during the online poster session.

Central to our account is the contrast between constraint interactions in OT and those in Stratal OT. Let us formulate a viable OT ranking to explain the CL due to the deletion of a nasal. It is posited that MAX-μ (which preserves the mora) (McCarthy 2008) interacts with markedness constraints such as WxP (which grants a coda consonant a mora) (Hayes 1989) and *COMP-CODA (which bans a complex coda) (Opalińska 2009). All these constraints are undominated in such a way as to recover the weight loss of the nasal coda. WxP, by its definition, assigns the mora to the nasal coda while the word-final coda [f] remains non-moraic on the strength of extrametricality. The extrametrical consonant is bracketed in < > and the moraic consonants are in bold. The ranking of *VOICED-CODA above MAX-C bans the voiced coda in the output (McCarthy 2003; 2008). The actual form [fi:] triumphs over the other, wrong forms in the standard model of OT (see Tableau 1).

Let us construct a Stratal OT ranking to account for CL in fīf [fi:]. To be consistent throughout this study, we maintain the same input /finf/ and constraints as used in standard OT. The Richness of the Base only applies to the input on the stem level but not on the word level (Kiparsky 2011). WxP assigns a mora to the nasal consonant in the coda. The deletion of this moraic nasal causes the immediately preceding vowel to increase its length compensatorily. This deletion may result from the phonetic make-up of contiguous segments (Jones 1989). The output [fi:] on the stem level triumphs over the others (see Tableau 2). The stem-level phonology feeds this form to the word-level phonology in Stratal OT, whose advantage is to enable varying strata to analyze different patterns of syllabification (Kiparsky 2011). On the word level, the surface-true [fi:] wins over the others. Both OT and Stratal OT models predict the nasal-deletion driven OE CL.

Part of the constraint interactions in Tableau 2 may well hold to account for other CL events in Proto-Germanic /θαŋx-to:/ ('think', past 1 sg.) → [θα:x-to:] (Lass 1994: 25) and West Saxon OE /reʝn/ ('rain') → [re:n] (Jones 1989: 152) among others, where fricatives elide.

Tableau 1. Standard OT evaluation of the input /finf/ against the constraints WxP, MAX-μ, *COMP-CODA, *VOICED-CODA and MAX-C; candidates include [finf], [fi:], [fi] and [fin], with [fi:] selected as the winner.

Tableau 2. Stratal OT evaluation with the same constraints: the stem-level input /finf/ (candidates including [fin], [fi:], [fi], [finv]; winner [fi:]) feeds the word-level input /fi:/ (candidates including [fi], [fi:], [fi:v]; winner [fi:]).

References

Bloomfield, L. (1933). Language. NY: Holt, Rinehart and Winston.
Crystal, D. (2008). A Dictionary of Linguistics and Phonetics. Malden, MA: Blackwell.
de Chene, B. and Anderson, S. R. (1979). Compensatory lengthening. Language 55(3), 505-35.
Hayes, B. (1989). Compensatory lengthening in moraic phonology. Linguistic Inquiry 20, 253-306.
Jones, C. (1989). A History of English Phonology. London: Longman.
Kiparsky, P. (2011). Compensatory lengthening. In Cairns, C. E. and Raimy, E. (Eds.), Handbook of the Syllable, 31-69. Leiden: Brill.
Lass, R. (1994). Old English: A historical linguistic companion. Cambridge: CUP.
Lass, R. and Anderson, J. M. (1975). Old English Phonology. Cambridge: CUP.
McCarthy, J. J. (2003). Sympathy, cumulativity, and the Duke-of-York gambit. In Féry, C. and van de Vijver, R. (Eds.), The Syllable in Optimality Theory, 23-76. Cambridge: CUP.
McCarthy, J. J. (2008). Doing Optimality Theory: Applying theory to data. Malden, MA: Blackwell.
Opalińska, M. (2009). Compensatory lengthening in Old English. Folia Linguistica Historica, 38, 25(1-2), 235–52.
Prince, A. and Smolensky, P. (1993/2004). Optimality Theory: Constraint Interaction in Generative Grammar. Malden, MA: Blackwell.
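The evaluation logic behind an OT tableau, and the way Stratal OT chains a stem-level evaluation into a word-level one, can be made concrete with a small sketch. Constraint names follow the abstract, but the violation counts assigned below are illustrative only and do not reproduce the authors' tableaux cell by cell.

```python
# Schematic OT evaluation: the winner has the best violation profile under the
# ranking (lexicographic comparison over the ordered constraints).
CONSTRAINTS = ["WxP", "MAX-mora", "*COMP-CODA", "*VOICED-CODA", "MAX-C"]

def evaluate(candidates):
    return min(candidates, key=lambda c: tuple(c["violations"].get(k, 0)
                                               for k in CONSTRAINTS))

# Stem level: /finf/ -> [fi:] (nasal deletion with compensatory lengthening)
stem_level = [
    {"form": "finf", "violations": {"*COMP-CODA": 1}},
    {"form": "fin",  "violations": {"*VOICED-CODA": 1, "MAX-C": 1}},
    {"form": "fi",   "violations": {"MAX-mora": 1, "MAX-C": 2}},
    {"form": "fi:",  "violations": {"MAX-C": 2}},
]
winner = evaluate(stem_level)

# Word level (Stratal OT): the stem-level output becomes the new input and
# here surfaces faithfully.
word_level = [
    {"form": winner["form"], "violations": {}},
    {"form": "fi:v", "violations": {"*VOICED-CODA": 1}},
]
print(evaluate(word_level)["form"])  # -> fi:
```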

Articulatory trade-offs between voicing and vowel height in (post)alveolar fricatives. A multi-level phonetic study

M.P. Bissiri*, C. Celata° *Università degli studi dell’Insubria, Italy °Università degli studi di Urbino ‘Carlo Bo’, Italy

The aim of our study is to investigate the supraglottal correlates of voicing in Italian alveolar and postalveolar fricatives in low and high vowel contexts. We address the general issue of how phonetic granularity enhances paradigmatic contrasts in language-specific ways (e.g. [2], [9]). Voicing and frication have conflicting aerodynamic requirements: a transglottal pressure difference is required for voicing, while a pressure drop across the fricative oral constriction is necessary for frication. Several articulatory strategies can be used to overcome the biomechanical constraints of voicing during frication. Voiced sibilants such as /z/ and /ʒ/ are generally shorter, more fronted, and articulated with posterior cavity expansion and a smaller groove compared to /s/, /ʃ/ in German, English, Croatian, French and Turkish (e.g. [4], [5], [6], [8], [10]). There is cross-subject and cross-language variation in the way the different strategies are implemented, though. Moreover, (post)alveolar fricatives are reported to be longer when the surrounding vowels are high, compared to when the surrounding vowels are low (e.g. [7], [11]), which has been interpreted as anticipatory scanning (a shorter distance to be covered by the articulators from the fricative constriction to the vowel); however, only voiceless fricatives have been investigated. It has also been reported that the cricothyroid muscle is more active in high vowels, favouring the periodicity of vibration (e.g. [12], [3]).

The current study provides a further step of evidence, addressing in particular the question of how vowel height interacts with supralaryngeal voicing adjustments. Alternative hypotheses will be tested: 1) voiced fricatives in a high vowel context are longer; given the difficulty of maintaining voicing for a longer time, articulatory strategies for voicing are more prominent in high vowel than in low vowel environments; 2) voiced fricatives in a high vowel context are longer; since high vowels support voicing, there are smaller articulatory differences between voiced and voiceless fricatives in this context; 3) an earlier consonantal release is found in high vowel contexts as the main strategy to maintain voicing, independently of vowel quality.

Four female speakers of a Tuscan variety of Italian aged 31-39 were recorded via SynchroLing while producing 5 repetitions of nonword stimuli containing /s sː z ʃ ʃː ʒ/. SynchroLing is a platform for multilevel phonetic data acquisition and analysis which allows real-time synchronization of audio, UTI and EPG channels ([1]). Nonwords were trisyllables with stress on the antepenult, and consonants were symmetrically flanked by /a/ or /i/ (e.g. /paˈsapo/, /piˈsipo/). Mixed effects models were used to determine the role of four predictors (i.e. voicing, gemination [for voiceless fricatives only], vowel height and place of articulation) on consonant duration. In order to compare voiceless and voiced fricatives, confidence intervals of contact anteriority (CA), contact posteriority (CP), groove extension or contact centrality (CC) and center of gravity (CoG) were calculated. Tongue splines are analysed by means of generalized additive models (GAM).

The analysis is still in progress, and the preliminary results show that voiced fricatives are significantly longer in the /i/ context; however, the shortening effect of voicing is stronger in the /i/ than in the /a/ environment (Figure 1). Articulatory differences depend on the place of articulation of the fricatives as well as on the vocalic context, although most differences neutralize in the /i/ context (Figure 2). Cross-subject variation is also found. We discuss the findings to show how secondary phonetic cue variation enhances voicing differences in language-specific ways.

Figure 2. Contact anteriority as determined by the Center of Gravity index in voiced and voiceless fricatives (one speaker).

References

[1] Chen, C., Celata, C. & Ricci, I. 2017. An EPG + UTI study of syllable onset and coda coordination and coarticulation in Italian. In Bertini, C., Celata, C., Lenoci, G., Meluzzi, C. & Ricci, I. (Eds.), Social and biological factors in speech variation. Studi AISV 3. Milano: Officinaventuno.
[2] Cho, T. 2015. Language Effects on Timing at the Segmental and Suprasegmental Levels. In Redford, M.A. (Ed.), Handbook of Speech Production. John Wiley, 505-529.
[3] Ishikawa, C.C., Pinheiro, T.G., Hachiya, A., Montagnoli, A.N. & Tsuji, D.H. 2017. Impact of Cricothyroid Muscle Contraction on Vocal Fold Vibration: Experimental Study with High-Speed Videoendoscopy. Journal of Voice 31(3): 300-306.
[4] Jongman, A., Wayland, R. & Wong, S. 2000. Acoustic characteristics of English fricatives. Journal of the Acoustical Society of America 108(3): 1252-1263.
[5] Liker, M. & Gibbon, F. 2013. Differences in EPG contact dynamics between voiced and voiceless lingual fricatives. Journal of the International Phonetic Association 43(1).
[6] Proctor, M.I., Shadle, C. & Iskarous, K. 2010. Pharyngeal articulation in the production of voiced and voiceless fricatives. Journal of the Acoustical Society of America 127(3): 1507-1518.
[7] Schwartz, M.F. 1969. Influence of vowel environment upon the duration of /s/ and /∫/. Journal of the Acoustical Society of America 46(2B): 480-481.
[8] Solé, M.-J. 2018. Articulatory adjustments in initial voiced stops in Spanish, French and English. Journal of Phonetics 66: 217-241.
[9] Solé, M.-J. & Ohala, J.J. 2010. What is and what is not under the control of the speaker. Intrinsic vowel duration. In Fougeron, C., Kühnert, B., D'Imperio, M. & Vallée, N. (Eds.), Papers in Laboratory Phonology 10. Berlin: de Gruyter, 607-655.
[10] Ünal-Logacev, Ö., Fuchs, S. & Lancia, L. 2018. A multimodal approach to the voicing contrast in Turkish: Evidence from simultaneous measures of acoustics, intraoral pressure and tongue palatal contacts. Journal of Phonetics 71: 395-408.
[11] Weglarski, A., Sewall, A., Schiavetti, N. & Metz, D.E. 2000. Effect of vowel environment on consonant duration: an extension of normative data to adult contextual speech. Journal of Communication Disorders 33(1): 1-10.
[12] Whalen, D., Gick, B., Kumada, M. & Honda, K. 1999. Cricothyroid activity in high and low vowels: exploring the automaticity of intrinsic F0. Journal of Phonetics 27: 125-142.

Perception of yeísmo-variation among Spanish natives

Carmen Quijada van den Berghe1, Linda Bäumler2, Verena Weiland2 and Elissa Pustka2
1University of Salamanca, 2University of Vienna

Previous research [1, 2] revealed a large number of pronunciation variants for the Spanish graphemes <ll> and <y> (i.e. in pollo 'chicken'): among others [ʝ], [ʎ], [l], [ʒ], [ʃ], [j], [ɟʝ], [d͡ʒ] and [t͡ʃ], as well as combinations of these variants. Numerous studies investigated their geographical distribution on the one hand, and articulatory as well as acoustic characteristics on the other hand [3, 4, 5]. However, these approaches exhibit major limitations: First, the majority of the studies use data from a very small number of speakers, often produced in phonetic laboratories. Second, the perception of the different variants has only been investigated in a very limited context, i.e. based on recordings made in phonetics laboratories or on natural speech of only two persons, and with test participants exclusively from Spain [6].

In order to study the pronunciation of <ll> and <y> in large corpora of spoken language, a standardized coding system, based on visual and auditory criteria, is necessary. It can only include a limited number of coding categories, which should represent the perception of native Spanish speakers. To generate a code built on native perception of yeísmo among speakers from different varieties, we designed a perception test based on speech samples of the corpus Fonología del español contemporáneo (FEC) [7, 8]. This corpus is comprised of recordings from more than 200 speakers originating from 17 Spanish-speaking regions in Spain and Hispanic America. Due to this supraregional approach, we were able to create an online test including all the above-mentioned variants, spoken by people of different origin [9]. We distributed the online test among natives in Hispanic America and Spain: in total, 235 Spanish natives participated, leading to 9,400 observations of perceptive judgement. Their task was to listen to a word, spoken by two different people, and to indicate whether the two pronunciation variants of <ll> or <y> were different or not. Each participant listened to a total of 30 audio pairs.

The speech samples for the perception test were chosen on the basis of a previous audio-visual phonetic transcription of the data by means of the software Praat [10]: three of the authors transcribed the data individually before the phonetic transcriptions were compared. Subsequently, we categorized them into five groups: We assigned audio pairs which we assume to have clearly different articulatory and acoustic characteristics, i.e. the approximant [j] and the fricative [ʃ], to the main category, assuming that even non-trained listeners would clearly differentiate them. Second, we constructed a subcategory, in which each task includes two variants of one articulation type, i.e. the pronunciations [ʃ] vs. [ʒ]. Thirdly, we grouped all variants consisting of two or more phones, such as [ʎj], in the combinations category. Fourth, we assigned cases that showed ambiguous auditive and visual cues to the group difficulties. To check the validity of the measurement, we furthermore included fillers in the test.

Results show that the participants rated the speech samples of the five categories significantly differently (p<0,001). Whereas 81% of the participants perceived a difference within the main category phones, this only applied for 32% within the subcategory. Combinations were relatively well identified: 67% of the participants marked them as different from their mono-phonic equivalents. Although the perception of speech samples shows similar distributional patterns in all participating countries, Argentinians were significantly more likely to perceive speech samples as equal than Mexicans and Spaniards. The results of the test (Fig. 1) confirm that native speakers have similar ease and difficulty in categorizing the spelling variants of <ll> and <y> as researchers who do the audio-visual corpus coding. Thus, the categorization is confirmed as appropriate and contributes to more comparable and economic future research: in order to analyze the yeísmo phenomena, large corpora could be annotated using a two-stage code system that is first based on the main categories and combinations, enabling a screening, and second, on the subcategories, allowing more detailed analyses.

Figure 1. Perception within the different categories.

References

[1] Rost Bagudanch, A. 2013. La transcripción fonética en estudios dialectales: propuestas en el caso del yeísmo. Revista de Filología Española, 165-192.
[2] Real Academia Española & Asociación de Academias de la Lengua Española. 2011. Nueva gramática de la lengua española. Fonética y fonología. Barcelona: Espasa Libros.
[3] Gómez, R. & Molina Martos, I. 2013. Variación yeísta en el mundo hispánico. Frankfurt a.M./Madrid: Iberoamericana Vervuert.
[4] Fernández Trinidad, M. 2010. Variaciones fonéticas del yeísmo: un estudio acústico en mujeres rioplatenses. Estudios de Fonética Experimental, 263-292.
[5] Martínez Celdrán, E. 2015. Naturaleza fonética de la consonante 'ye' en español. Normas 5, 117-131.
[6] Rost Bagudanch, A. 2016. La percepción de /ʎ/ y /j/ en catalán y en español. Implicaciones en la explicación del yeísmo. Estudios de Fonética Experimental, 41-80.
[7] Pustka, E., Gabriel, Ch. & Meisenburg, T. 2016. Romance Corpus Phonology: from (Inter-)Phonologie du Français Contemporain (I)PFC to (Inter-)Fonología del Español Contemporáneo (I)FEC. In Draxler, Ch. & Kleber, F. (Eds.), Tagungsband der 12. Tagung Phonetik und Phonologie im deutschsprachigen Raum. München: Universitätsbibliothek LMU München, 151-154.
[8] Pustka, E., Gabriel, Ch., Meisenburg, T., Burkard, M. & Dziallas, K. 2018. (Inter-)Fonología del Español Contemporáneo/(I)FEC: metodología de un programa de investigación para la fonología de corpus. Loquens 5.1.
[9] Labov, W. 1972. Sociolinguistic patterns. Oxford: Blackwell.
[10] Boersma, P. & Weenink, D. 2020. Praat: doing phonetics by computer. Computer program. www.praat.org.
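The summary statistics reported above (share of "different" judgements per category and per country) are straightforward to compute once the 9,400 judgements are in tabular form. The sketch below assumes a hypothetical CSV with category, country and response columns; it is an illustration of the tabulation, not the authors' analysis script.

```python
# Proportion of "different" judgements per stimulus category and listener country.
import pandas as pd

df = pd.read_csv("yeismo_perception_responses.csv")  # hypothetical, 9400 rows
df["perceived_different"] = (df["response"] == "different").astype(int)

# Per category (main, subcategory, combinations, difficulties, fillers)
print(df.groupby("category")["perceived_different"].mean().round(2))

# Same split by listener country, e.g. Argentina vs. Mexico vs. Spain
print(df.pivot_table(index="category", columns="country",
                     values="perceived_different", aggfunc="mean").round(2))
```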

Isochrony and gemination in Soikkola Ingrian: challenges for phonological models

Natalia Kuznetsova1,2, Irina Brodskaya2, Elena Markus3,4
1Università Cattolica del Sacro Cuore; 2Institute for Linguistic Studies, Russian Academy of Sciences; 3University of Tartu; 4Institute of Linguistics, Russian Academy of Sciences

The Soikkola dialect of Ingrian, a vanishing Finnic variety near St. Petersburg (historical Ingria, Russia) with few speakers left, manifests a rare ternary quantity contrast of consonants, attested only in closely related Estonian, Livonian, and Saami. The Ingrian ternary contrast relies less on secondary durational cues in other foot segments than in the latter languages, at least in the disyllabic foot [1]. The trisyllabic foot, little studied before, shows prosodic differences from both the disyllabic foot and a combination of the disyllabic and the monosyllabic foot. It manifests two idiosyncratic phenomena: specific types of prosodically motivated gemination and ongoing reduction of second-syllable long vowels (Table 1), noted already in [2, pp. 182–184]. Our acoustic study aimed at a comprehensive analysis of the interaction of phonological length and foot structure in segmental durations within 22 quantitative patterns of trisyllables and 4 disyllables (Table 2). The dataset (4259 tokens) was based on elicitation of phrase-final words1 collected from 5 speakers in 2014-2016 and explored by mixed linear regression modelling.

We claim that the observed effects can be explained through an interaction of two main phonetic tendencies: isochrony and "anti-isochrony". Isochrony is cross-linguistically known as a trend for poly-subconstituent shortening [4, pp. 135–143], but in Finnic languages it also includes lengthening of a short vowel after a light (C)V stressed syllable: tapa [ˈtabḁː] 'kill:imp' [5], [6]. "Anti-isochrony", less known outside Finnic languages, is an effect of lengthening of segments before longer sounds, most notably of consonants before long vowels [5], [7], [8, p. 15], [9, p. 90], [10].

One of the most evident effects of isochrony in our data is the ongoing phonological shortening of long second-syllable vowels (V2) in most trisyllables, in contrast to disyllables, which, being shorter feet, still maintain long V2 (Figure 1). Trisyllables reached a stage of prosodic evolution present e.g. in Estonian, where the V2 length contrast gave place to a strictly inverse ratio of V2 duration to the length of the first syllable [6]. In turn, "anti-isochrony" is best represented by historical prosodic gemination of singleton consonants before long vowels, widespread also in other Finnic varieties, e.g. [11], [7]. This gemination is present both in di- and trisyllabic feet, e.g. *kanā > kan̆nā 'hen:part', and gave rise to the middle length class of consonants (short geminates). Soikkola Ingrian trisyllabic feet additionally manifest gemination before a short vowel starting the sequence of two light syllables *CVCV(C) (Table 1).

We will also report subtler phonetic manifestations of both isochrony and "anti-isochrony" in other foot positions, which confirm that the two tendencies are still active in the language. These effects include, for example, phonetic vowel shortening before the two longer length classes of consonants and consonant shortening after long vowels (isochronic effects), or long geminate lengthening before long V2 in disyllables and word-initial consonant lengthening before long vowels ("anti-isochronic" effects). A comparison of effect sizes across positions indicates a growth of isochronic effects from the beginning towards the end of the foot.

Our findings present challenges for current phonological models, both physiologically-based and formal ones. The trisyllabic foot is often contested in formal phonological accounts [12], [13], while our case shows two phonological processes clearly pertinent to the trisyllabic foot only. Our results might also add to the challenges presented by quantity languages to Articulatory Phonology [4], as they show that the degree of poly-subconstituent shortening is regulated by the phonological length of segments and that this shortening is not equal in all foot positions. Local "anti-isochronic" lengthening is also difficult to account for in this framework.

1 This position is prosodically marked [3], but length contrasts are articulated phrase-finally very clearly, which allowed us to avoid a risk of occasional vowel contrast reduction or merger caused just by an unaccented phrasal position. Besides, quantity languages strongly regulate such effects as phrase-final lengthening [4].

Table 1. Idiosyncratic prosodic features of the Soikkola Ingrian trisyllabic foot: a specific type of gemination before a sequence of two light syllables *-CVCV(C) (historical process) and long V2 reduction (ongoing process).
    Type of structure: gemination before a short vowel / long V2 reduction
    2-syllabic foot: kurki [ˈkurg̊i] 'crane' / kurk̆kī [ˈkurkˑiː] 'crane:ill'
    2-syl. + 1-syl. bifoot: murkinā [ˈmurg̊iˌnaː] 'breakfast:prt' / kerk̆kīmǟ [ˈkerkˑiːˌmäː] 'be_in_time:sup'
    3-syllabic foot: *murkina > murk̆kina [ˈmurkˑina] 'breakfast' / kerk̆kīmä > [ˈkerkˑimäː] 'be_in_time:1pl'

Table 2. Original quantity patterns in Soikkola Ingrian in di- and trisyllabic feet (foot types VCV, VːCV, V1V2CV, VRCV, VːRCV crossed with C2+V2 patterns); studied structures are in yellow. Dark yellow highlights the four shortest trisyllabic feet, which were additionally compared to four disyllables with the same structure of the first two syllables (marked as VCV, VːCV etc.). Duration of V2 in these 8 structures is given in Figure 1.

Figure 1. Mean duration (in ms) of V2 in 8 dark yellow structures from Table 2. Means are given in boxes and numbers of tokens below the boxes. Note the preservation vs. shortening of long V2 in the di- vs. trisyllables of VCːVː structure (cf. a shift in Vː vs. V under boxes).

References

[1] Markus, E. 2011. The phonetics and phonology of a disyllabic foot in Soikkola Ingrian. Linguistica Uralica 47(2), 103–119. doi: 10.3176/lu.2011.2.03.
[2] Sovijärvi, A. 1944. Foneettis-äännehistoriallinen tutkimus Soikkolan inkeroismurteesta. Helsinki: Suomalaisen Kirjallisuuden Seura.
[3] White, L. 2014. Communicative function and prosodic form in speech timing. Speech Communication 63–64, 38–54. doi: 10.1016/j.specom.2014.04.003.
[4] Turk, A. & Shattuck-Hufnagel, S. 2020. Speech timing: implications for theories of phonology, speech production, and speech motor control. New York: Oxford University Press.
[5] Lehtonen, J. 1970. Aspects of quantity in standard Finnish. Jyväskylä: University of Jyväskylä.
[6] Eek, A. 1990. Units of temporal organisation and word accents in Estonian. Linguistica Uralica 26(4), 251–263.
[7] Nahkola, K. 1987. Yleisgeminaatio: äänteenmuutoksen synty ja vaiheet kielisysteemissä erityisesti Tampereen seudun hämäläismurteiden kannalta. Helsinki: Suomalaisen Kirjallisuuden Seura.
[8] O'Dell, M. 2003. Intrinsic timing and quantity in Finnish. Tampere: Tampere University.
[9] Suomi, K., Toivanen, J. & Ylitalo, R. 2008. Finnish sound structure: Phonetics, phonology, phonotactics and prosody. Oulu: University of Oulu.
[10] Kuznetsova, N. 2013. Finnic foot nucleus lengthening: from phonetics to phonology. In Asu-Garcia, E. L. & Lippus, P. (Eds.), Nordic Prosody: Proceedings of the XIth Conference, Tartu 2012. Frankfurt am Main: Peter Lang Edition, 205–214.
[11] Paunonen, H. 1973. On the primary gemination of Finnish dialects. Finnisch-Ugrische Forschungen 40, 146–164.
[12] Martínez-Paricio, V. & Kager, R. 2015. The binary-to-ternary rhythmic continuum in stress typology: Layered feet and non-intervention constraints. Phonology 32(3), 459–504. doi: 10.1017/S0952675715000287.
[13] Torres-Tamarit, F. & Jurgec, P. 2015. Lapsed derivations: Ternary stress in harmonic serialism. Linguistic Inquiry 46(2), 376–387. doi: 10.1162/LING_a_00186.

The pronunciation of Spanish fricative /x/: a corpus-phonological study

Verena Weiland
University of Vienna

The analysis of the linguistic characteristics of Spanish in the world faces particular challenges due to its pluricentricity: the geographical dimension interacts in a complex way with the social, medial and stylistic dimensions, as well as with aspects of linguistic contact. These connections have so far been disregarded in previous research on the realisation of the Spanish phoneme /x/, which corresponds to the graphemes <j>, <g> and <x> (i.e. in reloj, gente, México) [1]. Based on the corpus Fonología del Español Contemporáneo FEC, this study examines the influence of different factors on the pronunciation of /x/. It considers three parameters: first, the phonological context; second, the diatopic dimension; and third, the diastratic factors of age, gender and education.

The corpus under investigation is part of the Fonología del Español Contemporáneo FEC project [2, 3], which intends to document and analyse Spanish pronunciation in the world. It includes data from seventeen regions in Spain and Hispanic America. Its research protocol consists of a reading task and a semi-structured interview [4]. According to the FEC guidelines, in each location at least 12 speakers were recorded who must not have lived in another location for more than one year and who are evenly distributed in terms of gender, age and education.

So far, traditional approaches have pointed out the following geographical distribution in the pronunciation of /x/: In northern and central Spain, /x/ is usually realised as a strident postvelar or uvular fricative, while in Andalusia, the Canary Islands, the Caribbean, Central America and Colombia, the glottal fricative [h] is most common [5]. In Argentina, Peru and Mexico, velar [x] seems to be the standard, whereas in Chile we can observe an assimilation process to [ç] before vowels [6, 7, 8]. The data, drawn from the FEC corpus and consisting of audio-visual phonetic transcriptions using the software Praat [9], confirm that geography is a decisive factor for these pronunciation variants (p<0,001). Figure 1 shows these four pronunciation variants of /x/ in an association plot [10] for speakers from Madrid, Bogotá, Mexico City, Santiago de Chile and Punta Arenas. The corpus data confirm the uvular pronunciation in Madrid as well as the glottal variant in Bogotá. However, the palatal pronunciation is not a unique characteristic of Chilean speakers, but is also frequent in Mexico City. Contrary to expectations, velar pronunciation is not the standard in Mexico City. Moreover, further variation has to be considered: [k], as well as the combinations [kx] and [kç], also occur in the corpus.

Apart from this, the data support the idea that phonological studies of spoken Spanish cannot be reduced to geographical factors. Besides diatopic distribution, the phonological context is the second decisive factor (p<0,001). First, velar pronunciation occurs mainly after the back vowels /o/ and /u/, regardless of whether it is in the onset or the coda of a syllable (i.e. in esdrújulo or reloj). In the onset of the initial syllable of a word, the palatal variant is preferred if the right segmental context is a front vowel (i.e. in jinete), but the glottal variant if /x/ is followed by a back vowel (i.e. in juzgar). The uvular pronunciation is most frequent when /x/ is between central or front vowels (i.e. in queja). The sociolinguistic factors age, gender and education have no significant influence on the examples studied in the FEC corpus. However, there is a tendency, which is to be discussed at the conference: the velar pronunciation [x] is most frequent in the group of people over 60, whereas the glottal variant [h] is for the most part used by people under the age of 30.

Figure 1. Pronunciation variants of /x/ in Madrid (Spain), Bogotá (Colombia), Mexico City (Mexico), Punta Arenas and Santiago de Chile (both in Chile).

References

[1] Real Academia Española & Asociación de Academias de la Lengua Española. 2011. Nueva gramática de la lengua española. Fonética y fonología. Barcelona: Espasa Libros.
[2] Pustka, E., Gabriel, Ch. & Meisenburg, T. 2016. Romance Corpus Phonology: from (Inter-)Phonologie du Français Contemporain (I)PFC to (Inter-)Fonología del Español Contemporáneo (I)FEC. In Draxler, Ch. & Kleber, F. (Eds.), Tagungsband der 12. Tagung Phonetik und Phonologie im deutschsprachigen Raum. München: Universitätsbibliothek LMU München, 151-154.
[3] Pustka, E., Gabriel, Ch., Meisenburg, T., Burkard, M. & Dziallas, K. 2018. (Inter-)Fonología del Español Contemporáneo/(I)FEC: metodología de un programa de investigación para la fonología de corpus. Loquens 5.1.
[4] Labov, W. 1972. Sociolinguistic patterns. Oxford: Blackwell.
[5] Delgado-Díaz, G. & Galarza, I. 2015. ¿Que comiste [x]amón? A Closer Look at the Neutralisation of /h/ and Posterior /r/ in Puerto Rican Spanish. In Willis, M. B. et al. (Eds.), Selected proceedings of the 6th Conference on Laboratory Approaches to Romance Phonology, 70-82.
[6] Lipski, J. 1996. El Español De América. Madrid: Ed. Cátedra.
[7] Hualde, J. I. 2005. The sounds of Spanish. Cambridge: Cambridge University Press.
[8] Zampaulo, A. 2019. Palatal sound change in the Romance languages. Diachronic and synchronic perspectives. Oxford & New York: Oxford University Press.
[9] Boersma, P. & Weenink, D. 2016. Praat: Doing Phonetics by Computer. URL: www.fon.hum.uva.nl/praat/ [version 6.0.10].
[10] Flach, S. In press. From movement into action to manner of causation: Changes in argument mapping in the into-causative. Linguistics. URL: https://sfla.ch/wp-content/uploads/2020/06/Flach_From_movement_into_action_to_manner_of_causation.pdf [11/30/2020].
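An association plot of variant by city rests on a contingency table, a test of independence and the cell-wise Pearson residuals. The sketch below shows that generic computation; the file and column names are assumptions, and it is not the authors' own analysis code.

```python
# Contingency analysis of pronunciation variant by city, with Pearson residuals
# (the quantities an association plot visualises).
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("fec_x_variants.csv")  # hypothetical: one row per /x/ token
table = pd.crosstab(df["city"], df["variant"])

chi2, p, dof, expected = chi2_contingency(table)
residuals = (table - expected) / np.sqrt(expected)  # > 0: variant over-represented

print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
print(residuals.round(2))
```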

The impact of non-contrastive voicing assimilation in speech perception

Rebeka Campos-Astorkiza
Ohio State University

The role of assimilation in speech production has been explored especially for languages (1) Test items where the assimilated feature is distinctive for the target sound [1, 2, 3]. In these cases, /las batas/ ~ /las patas/ ‘the robes, the legs’ perceptual compensation for assimilation allows listeners to retrieve the underlying form /las baɾkas/ ~ /las paɾkas/ ‘the boats, the parkas’ of the assimilated sound based on the phonetic context [2]. Based on these patterns, two /las bekas/ ~ /las pekas/ ‘the scholarships, the freckles’ approaches to speech variation in perception have been proposed: a language-specific /los boÆos/ ~ /los poÆos/ ‘the pastries, the chickens’ mechanism that explains the perception of assimilation via language learning ([4]); and a /las dunas/ ~ /las tunas/ ‘the dunes, the music groups’ language-independent approach that argues that general phonetic perception mechanisms /las domas/ ~ /las tomas/ ‘you tame them, you take them’ account for the observed patterns ([5]). However, cases where the assimilated feature is not /las komas/ ~ /las gomas/ ‘you eat them, the erasers’ distinctive for the target sound have not been brought to bear on this discussion as much /las kotas/ ~ /las gotas/ ‘the levels, the drops’ (e.g. [6]). One such case is Spanish voicing assimilation of /s/ in pre-consonantal position, /las kalas/ ~ /las galas/ ‘the coves, the galas’ an allophonic process by which /s/, not contrastive for voicing in the language, gradiently /las kasas/ ~ /las gasas/ ‘the houses, the gauzes’ assimilates in voicing to a following consonant [7]. While this phenomenon has received recent attention from a production perspective [8], its role in perception has not been Table 1 Accuracy rates by stimuli structure explored. Understanding this would bring new data on the debate mentioned earlier. Thus, Stimuli structure Incorrect Correct we explore whether the allophonic voicing of /s/ in Spanish influences the perception of [s]+voiceless obstruent 7.17% 92.83% the voicing contrast of a following obstruent, focusing on whether the voiced vs. unvoiced Matching for voicing allophones display different perceptual effects. [z]+voiced obstruent 2.17% 97.83% In order to answer this research question, we conducted an perceptual identification [s]+voiced obstruent 9.67% 90.33% task where participants listened to a combination of two Spanish words, with the assimilation Mis-matching for voicing context (/s/+obstruent) at the word boundary, and they had to decide what words they had [z]+voiceless obstruent 31.00% 69.00% heard. The stimuli consisted of 4 versions of a word sequence that constitutes a minimal pair for voicing (/las batas/ vs. /las patas/ ‘the robes, the legs’). There were two voicing matching versions: [s]+voiceless obstruent and [z]+voiced obstruent; and two voicing mis- References matching versions: [s]+voiced obstruent and [z]+voiceless obstruent. To create the stimuli, a speaker of Castilian Spanish was recorded producing the word sequences which were then [1] Gow, D. W. and A.M. Im (2004). A cross-linguistic examination of assimilation context effects. manipulated: voiced and voiceless productions of /s/ where spliced to create the 4 versions Journal of Memory and Language 51, 279–296. mentioned above. There were 40 test items (10 word sequences x 4 versions; (1)). 40 fillers [2] Mitterer, H., S. Kim and T. Cho. (2013) Compensation for complete assimilation in speech perception: The case of Korean labial-to-velar assimilation. 
References

[1] Gow, D. W. and A. M. Im (2004). A cross-linguistic examination of assimilation context effects. Journal of Memory and Language 51, 279–296.
[2] Mitterer, H., S. Kim and T. Cho (2013). Compensation for complete assimilation in speech perception: The case of Korean labial-to-velar assimilation. Journal of Memory and Language 69, 59–83.
[3] Snoeren, N. D., P. A. Hallé and J. Segui (2006). A voice for the voiceless: Production and perception of assimilated stops in French. Journal of Phonetics 34, 241–268.
[4] Gaskell, M. G. (2003). Modeling regressive and progressive effects of assimilation in speech perception. Journal of Phonetics 31, 447–463.
[5] Gow, D. W. (2003). Feature parsing: Feature cue mapping in spoken word recognition. Perception & Psychophysics 65, 575–590.
[6] Meunier, C. (1999). Recovering missing phonetic information from allophonic variation. In ICPhS 1999.
[7] Campos-Astorkiza, R. (2019). Modelling assimilation: The case of sibilant voicing in Spanish. In J. Gil & M. Gibson (eds.), Contemporary Studies in Romance Phonetics and Phonology. Oxford University Press, 241–275.
[8] Sedó, B., L. Schmidt and E. Willis (2020). Rethinking the phonological process of /s/ voicing assimilation in Spanish: An acoustic comparison of three regional varieties. Studies in Hispanic and Lusophone Linguistics 13(1), 167–217.

Analyzing Phonological Neutralization across Different Experimental Tasks

Oihane Muxika Loitzate

Basque has three voiceless fricatives with different places of articulation: the lamino-alveolar /s̻/, the apico-alveolar /s̺/, and the prepalatal fricative /ʃ/, which are represented by the graphemes <z>, <s>, and <x>, respectively (Iglesias et al. 2016:6). However, speakers tend to merge them to some extent (Hualde 2010, Urrutia et al. 1988, Gaminde et al. 2013, Muxika-Loitzate 2017, Beristain 2018). This study analyzes the pronunciation of Basque sibilants in the merging variety of Basque spoken in Amorebieta-Etxano, Bizkaia.

The production data of 28 bilingual speakers of Basque and Spanish is analyzed. Informants are between 20 and 66 years old and they have different levels of linguistic dominance, which is measured by the Bilingual Language Profile (BLP) Questionnaire (Birdsong et al. 2012, Gertken et al. 2014). The BLP Questionnaire provides a numeric score based on informants' language history, use, proficiency, and attitudes. After filling out the questionnaire, informants complete an interview and a read-aloud task. The interview consists of several questions in Basque that vary depending on informants' age, and they are created to elicit the vernacular variety of Basque spoken in Amorebieta-Etxano. For the read-aloud task, informants read sentences that contain 83 words with the three Basque fricatives inserted in the middle of carrier sentences. The objective of the read-aloud task is to elicit a more standard variety of Basque. All the tasks are recorded in a quiet room using an external microphone connected to a laptop through USB. The recordings are made using GarageBand at a sampling rate of 44.1 kHz. The sounds of interest are segmented manually in Praat following the criteria established by Turk et al. (2006). The total token count is 4,412. A script is used to automatically extract the acoustic values from the audio files. The acoustic measure is the centre of gravity (CoG) of the fricatives, as it has successfully differentiated the three places of articulation of Basque fricatives in previous studies (Iglesias et al. 2016). The CoG measures "how high the frequencies in a spectrum are on average" (Boersma & Weenink 2017) and it provides information about the place of articulation of fricatives. More specifically, the CoG is higher for sounds that are produced at the front of the oral cavity and it is lower for sounds that are produced more at the back of the oral cavity. The script extracts the CoG values from the middle of the fricatives.
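As a rough illustration of the acoustic measure, the spectral centre of gravity of a fricative interval is an amplitude-weighted mean frequency. The sketch below is not the script used in the study; it is a minimal R re-implementation under the assumption that the power spectrum is used (Praat's default power setting), with the object names samples and sr invented for the example:

    # Spectral centre of gravity (CoG) of one fricative window.
    # 'samples': numeric vector with the waveform around the fricative midpoint;
    # 'sr': sampling rate in Hz (44100 in this study).
    cog <- function(samples, sr) {
      spec  <- abs(fft(samples))[1:(length(samples) %/% 2)]^2  # power spectrum
      freqs <- (seq_along(spec) - 1) * sr / length(samples)    # bin frequencies (Hz)
      sum(freqs * spec) / sum(spec)                            # weighted mean = CoG
    }

Higher CoG values correspond to energy concentrated in higher frequencies, i.e. to more fronted fricatives.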

A mixed-effects linear regression modeling is carried out using the R Project for Statistical Computing. Firstly, informants' gender has an effect on the CoG. As for the effect of place of articulation, the mean CoG results show that informants pronounce the grapheme <z> in the most fronted area of the oral cavity compared to the other two graphemes, whereas the place of articulation of <x> is the least fronted of the three fricatives. This CoG pattern resembles the one in Hualde (2010:94) and Iglesias et al. (2016:13), who described non-merging varieties of Basque, but the contrasts among fricatives in this study are less robust than in theirs. With regards to the factor of BLP score, speakers who are more strongly dominant in Basque have a more robust contrast between <s> and <z> than speakers who are more strongly dominant in Spanish. As for the factor of task, informants have higher CoG values in the reading task than in the interview. The interaction between task and grapheme shows that informants keep a more robust contrast between <s> and <z> in the reading task than in the interview. A closer analysis of the tasks shows that older informants use a prepalatal epenthetic fricative in their interviews that younger speakers do not use. More specifically, 8 speakers who are older than 43 years old insert an epenthetic fricative (as in herrixen 'in the town') in contexts where younger informants do not. This study shows the importance of using different tasks in order to understand how bilingual speakers' pronunciation differs across generations.

References

[1] Iglesias, A., Gaminde, I., Gandarias, L. & Unamuno, L. 2016. Euskararen txistukariak aztertzeko indize akustikoez. Euskalingua 28, 6-18.
[2] Hualde, J. I. 2010. Neutralización de sibilantes vascas y seseo en castellano. Oihenart: Cuadernos de Lengua y Literatura 25, 89-116.
[3] Urrutia, H., Etxebarria, M., Turrez, I. & Duque, J. C. 1988. Fonética vasca I: Las sibilantes en el vizcaíno, 33-160. Bilbo: University of Deusto.
[4] Gaminde, I., Unamuno, L., Iglesias, A. & Gandarias, L. 2013. Bizkaiko neska gazteen kontsonante afrikatuen izari akustikoez. Euskalingua 23, 6-13.
[5] Muxika-Loitzate, O. 2017. Sibilant merger in the variety of Basque spoken in Amorebieta-Etxano. Proceedings of Bilingualism in the Hispanic and Lusophone World. Online: https://www.mdpi.com/2226-471X/2/4/25.
[6] Beristain, A. 2018. The acoustic realization of /s̺/ and /ts̻/ by L1 and L2 Basque speakers. In Unamuno, Lorea, Asier Romero, Aintzane Etxebarria & Aitor Iglesias (eds.), Euskararen bariazioa eta bariazioaren irakaskuntza III, 60-72. Bilbo: UPV/EHUko EUDIA ikerketa taldea.
[7] Birdsong, D., Gertken, L. M. & Amengual, M. 2012. Bilingual Language Profile: An easy-to-use instrument to assess bilingualism. Center for Open Educational Resources and Language Learning (COERLL), University of Texas at Austin. Online: https://sites.la.utexas.edu/bilingual/ (15 May, 2015).
[8] Gertken, L. M., Amengual, M. & Birdsong, D. 2014. Assessing language dominance with the Bilingual Language Profile. In Leclercq, Pascale, Amanda Edmonds & Heather Hilton (eds.), Measuring L2 proficiency: Perspectives from SLA, 208-225. Bristol, Buffalo & Toronto: Multilingual Matters.
[9] Turk, A., Nakai, S. & Sugahara, M. 2006. Acoustic segment durations in prosodic research: A practical guide. Methods in Empirical Prosody Research 3, 1-28.
[10] Boersma, P. & Weenink, D. 2017. Praat: Doing phonetics by computer. Version 6.0.35. University of Amsterdam: Phonetic Sciences. Online: http://www.praat.org/.

The role of the foot in European Portuguese and Brazilian Portuguese: implications for prosodic phonology and the definition of language phonological profiles

Marina Vigário¹ and Violeta Martínez-Paricio²
¹University of Lisbon, ²University of Valencia

Two fundamental goals in comparative phonology are (i) the identification of universal tendencies, and (ii) the explanation of specific patterns of variation. For instance, it is widely accepted that all languages organize speech material into prosodic domains, which are organized hierarchically, forming a structure that obeys a number of constraints. However, which domains of that structure are universal, if any, and which constraints cross-linguistically limit the organization of prosodic domains are controversial matters ([1], [2]). Our work contributes to both (i) and (ii) by re-examining the role of the foot in the phonology of two closely related varieties of Portuguese: Brazilian and European (BP and EP, respectively). BP and EP have by and large the same underlying segmental inventory, and they share a large portion of their vocabulary, as well as the same underlying word representations (e.g., [3]). By contrast, the two varieties differ significantly in the role of lower and higher prosodic domains: in BP productive phonology predominantly refers to lower domains, such as the rhythmic categories syllable and foot, while in EP there is richer phonology at higher levels, the so-called interface categories (i.e., the prosodic word and above) ([4]). While the role of the foot in BP has been extensively demonstrated in previous work ([5], and references therein), there is still debate on the status of the foot domain in EP ([5, 6]). This work revisits, extends and discusses the evidence for the foot domain in both varieties, with a special focus on EP.

We show that there are a number of processes that refer to a particular foot type in EP, i.e. an internally layered ternary foot – see (1). The mid vowel neutralization processes in (1a-b) have previously been described with reference to some sort of marked feet in BP ([7]), though not an internally layered ternary foot; and they also apply in EP. The same is true of the gliding process in (1c). The processes in (1d-e) are specific to EP ([6]). Importantly, the phenomena in (1a-b) may have lexical exceptions in both varieties – see (2) ([7], [6]); however, those in (1c-c') are productive, exceptionless processes; the fact, previously unnoticed, that postlexically attached enclitics trigger Obligatory Dactylic Gliding (1c') also indicates that gliding is motivated by an active phonological constraint. Significantly, other marked foot types do not trigger lowering (e.g. iambic feet, cf. você [e]/*[ɛ] 'you'; avô [o]/*[ɔ] 'grandpa'), and elsewhere gliding of the first of two vowels is optional (cf. criança [i]/[j] 'child'; e.g. [6]). While these phenomena can be described with reference to PW stress and the right edge of the PW ([6]), or to marked feet patterns ([7]), such non-metrical analyses fail to capture their unified motivation. Our claim here is that there is a feature common to all four processes: they either target a specific marked foot type, an internally layered ternary foot ([8]), or they are activated precisely to avoid the emergence of a ternary foot. Besides that, the four groups of phonological processes cue the foot domain in EP.

According to this view, EP and BP are similar in having phonology referring to a foot that is the head of the PW. The same is not true of PW non-head feet. BP has pretonic binary trochees, with perceived rhythmic alternations and tonal marking, and has foot-based clippings (see (3), taken from [5] and [10]). In EP, none of these are found; rather, there is rich phonology cuing the left edge of a higher prosodic domain, the PW ([6]). In this presentation, different possible representations for the syllables that precede main stress in EP are explored (e.g. [9]).

Based on our analysis, we conclude that not only in BP, but also in EP the foot exists and is the domain of active phonological processes, strengthening the hypothesis that the foot is a universal prosodic domain. Nevertheless, our study also corroborates [4]'s observation that this domain is clearly less active in EP than in BP, with implications for the general phonological profile of each variety.

(1) Processes that refer to internally layered ternary feet in EP
a. Dactylic Lowering (head): esque((l[ɛ́]ti)Ft co)Ft 'skeletal' (cf. esquel[é]to 'skeleton')
b. Spondaic Lowering (head): ((d[ɔ́]ci)Ft l)Ft 'gentle' (cf. d[ó]ce 'sweet')
b'. Spondaic Lowering (head) – the unique source of [ɔ̃] in EP: (([ɔ̃́]te)Ft m)Ft 'yesterday'
c. Obligatory Dactylic Gliding (non-head; avoids dactyls): fa(míl[j]a)Ft vs. *fa((míl[i])Ft a)Ft 'family'; (táb[w]a)Ft vs. *((táb[u])Ft a)Ft 'board'
c'. Obligatory Dactylic Gliding (non-head) formed with enclitics: bebe-o (beb[j]u)Ft 'drink it'; pede-a (ped[j]a)Ft 'ask it'
d. Spondaic Lowering (heavy non-head): ((líd[ɛ])Ft r)Ft 'leader'; ((Vít[ɔ])Ft r)Ft 'id.'
e. Dactylic Lowering (heavy non-head): ((Lúci)Ft f[ɛ]r)Ft 'id.'; ((Júpi)Ft t[ɛ]r)Ft 'id.'

(2) a. Exceptions to Spondaic Lowering (head) in EP: pu((l[ó]ve)Ft r)Ft 'pullover'; ((R[ó]ve)Ft r)Ft 'id.' (nativized loans)
b. Exceptions to Dactylic Lowering (head) in EP: ((p[é]sse)Ft go)Ft 'peach'; ((s[ó]fre)Ft go)Ft 'eager'

(3) Some phenomena cuing non-head feet in BP
Secondary stress: almofada (àl.mo)(fá.da) 'pillow', probabilidade (prò.ba)(bì.li)(dá.de) 'probability' ([5])
Clipping: pròfessór > prófi 'teacher'; refrígeránte > refrí ([10])

Selected references

[1] Labrune, Laurence. 2012. Questioning the universality of the syllable: evidence from Japanese. Phonology 29, 113–152.
[2] Frota, S. & M. Vigário. 2018. Syntax-phonology interface. In M. Aronoff (ed.), Oxford Research Encyclopedia in Linguistics. Oxford: Oxford University Press. https://doi.org/10.1093/acrefore/9780199384655.013.111
[3] Massini-Cagliari, Gladis, L. C. Cagliari & R. Y. Redenbarger. 2016. A comparative study of the sounds of European and Brazilian Portuguese: Phonemes and allophones. In L. Wetzels, S. Menuzzi & J. Costa (eds.), The Handbook of Portuguese Linguistics. Malden, MA: Wiley-Blackwell, 56–68.
[4] Vigário, M., M. Cruz & S. Frota. 2019. Why tune or text? The role of language phonological profile in the choice of strategies for tune-text adjustment. In S. Calhoun et al. (eds.), Proceedings of the 19th ICPhS, Melbourne, 2019, 567-571. Canberra: ASSTA Inc. https://icphs2019.org/icphs2019-fullpapers/pdf/full-paper_443.pdf
[5] Magalhães, J. 2016. Main stress and secondary stress in Brazilian and European Portuguese. In L. Wetzels, S. Menuzzi & J. Costa (eds.), The Handbook of Portuguese Linguistics. Malden, MA: Wiley-Blackwell, 107–124.
[6] Vigário, M. 2003. The prosodic word in European Portuguese. Berlin: De Gruyter.
[7] Wetzels, L. 1992. Mid vowel neutralization in Brazilian Portuguese. Cadernos de Estudos Linguísticos 23, 19–55.
[8] Martínez-Paricio, Violeta & René Kager. 2015. The binary-to-ternary rhythmic continuum in stress typology: layered feet and non-intervention constraints. Phonology 32(3).
[9] Hyman, L. 2011. Does Gokana really have no syllables? Or: What's so great about being universal? Phonology 28(1), 55–85.
[10] Araújo, G. 2002. Truncamento e reduplicação no português brasileiro. Revista de Estudos da Linguagem, Belo Horizonte, 10(1), 61-90.

Perception of word-final aspirated /s/ - a cross-dialectal and cross-linguistic study

Jan Wołłejko & Karolina Broś
University of Warsaw

The weakening of syllable-final /s/ in Spanish is a phonetic phenomenon that presents a wide geographical distribution [1, 2]. Its weakened variant can appear as voiceless [h] or voiced [ɦ], or /s/ can even be totally elided [∅], depending on the context. Even without deletion, however, the acoustic weakness of the debuccalised variants of /s/ [3] can affect the understanding of the message by interlocutors, especially when the grammatical context does not indicate the plurality of a noun (estoy con mi[h] padre[h] 'I am with my parents' vs. estoy con mi padre 'I am with my father') or when a verb appears without pronouns indicating the second or third grammatical person (¿cómo está[h]? 'How are you?' vs. ¿cómo está? 'How is he/she/it?'). In this preliminary study of Spanish /s/ perception, we investigate to what extent speakers are able to perceive the product of aspiration (glottal [h]) and whether there are differences in perception depending on [h] length, the native dialect/language of the listener, the preceding vowel or the gender of the speaker. Knowing that [h]'s perceptibility depends, among others, on its duration [4] and the interlocutor's language-specific experience [5, 6], we decided to identify the perception threshold for word-final aspirated /s/, i.e., the minimal duration necessary to perceive it as a marker of plurality in Spanish.

To this end, we conducted a speech perception experiment with audio stimuli presented to 54 participants divided into three groups: Poles (4 males, 18 females) who declared to be proficient in Spanish, Peninsular Spaniards (11 males, 11 females) and Canary Islands Spaniards (6 males, 4 females) representing the dialect from which our stimuli were sampled. Our hypothesis was that the more exposure to the investigated sound, the better its perception; therefore, we expected Canary Islands Spaniards to perform best in our experiment, followed by Peninsular Spaniards and then Poles. The stimuli were presented in the form of a continuum, i.e., each word appeared repeatedly in 8 different versions which differed in the duration of aspiration (0, 15, 30, 45, 60, 75, 90 and 105 ms). We used a total of 8 naturally pronounced words extracted from sentences produced by two native speakers of Canary Islands Spanish, one male and one female. The vowels preceding word-final /s/ were /a e o/. The aspiration present at the end of the word was cut out incrementally to obtain the 8 aspiration lengths, until the whole portion of [h] was deleted (0 ms version). This gave us a total of 64 stimuli presented 5 times in two blocks. The female voice was heard separately from the male voice. The participants were asked to decide whether the noun they had just heard was singular or plural.
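The abstract does not spell out the statistical analysis, but the z- and t-values reported below are the kind of output produced by mixed-effects regression on listeners' responses. Purely as an illustration of such an analysis, with every object and column name assumed rather than taken from the study:

    # Hypothetical specification (not the authors' analysis): the probability of
    # a "plural" response as a function of aspiration duration and listener
    # group, with random intercepts for listeners and for the 8 carrier words.
    library(lme4)
    m <- glmer(plural ~ aspiration_ms * group + speaker_sex + vowel +
                 (1 | listener) + (1 | word),
               family = binomial, data = responses)
    summary(m)  # fixed effects are reported with z-values on the log-odds scale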
The results of the experiment showed that the word-final aspirated /s/ was best perceived by the native listeners of the dialect from which the stimuli were drawn, who reached up to ca. 90% accuracy, whereas the rest of the Spaniards perceived the sound similarly to the Poles, with lower scores (see Fig. 1). Peninsular Spanish speakers (z = -4.706, p < 0.001) and Polish speakers (z = -5.545, p < 0.001) are significantly different from Canary Islands listeners in their plurality decisions. The perception threshold was between 30 and 45 ms for all three groups. The results of the study also indicated that the aspirated /s/ was better perceived as a plural marker in the case of stimuli pronounced by the female speaker (see Fig. 2, z = -13.298, p < 0.001 for male vs. female). Aspiration length was significant both individually and when divided into pre-threshold and post-threshold values (0-30 vs. 45-105 ms, z = 54.095, p < 0.001). There was also a significant effect of the vowel ([o] as opposed to [a], z = -5.698, p < 0.001, no effect of [e]). As for reaction times, there were no statistical differences between groups. There was, however, an effect of aspiration length, with the longest time to decision in stimuli with 0-30 ms of aspiration, and the shortest between 90 and 105 ms, the rest in the mid-range. Plural decisions required less time (t = -1.986, p < 0.05), and there was a tendency effect pointing to the male speaker as the one whose pronunciations require more time to decide whether a [h] was heard (t = 1.882, p = 0.063).

Fig. 1. Frequency of plural vs. singular answers by aspiration length and listener group.

Fig. 2. Frequency of plural vs. singular answers by aspiration length and speaker.

References

[1] Lipski, John (1984). On the Weakening of /s/ in Latin American Spanish. Zeitschrift für Dialektologie und Linguistik, 51, 31-43.
[2] Hualde, José Ignacio (2008). Sound change. The Blackwell Companion to Phonology.
[3] Quilis, Antonio (1981). Fonética acústica de la lengua española. Gredos, Madrid.
[4] Marrero, Victoria (1990). Estudio acústico de la aspiración en español. Revista de Filología Española, 70, 345-397.
[5] Boomershine, Amanda (2006). Perceiving and Processing Dialectal Variation in Spanish: An Exemplar Theory Approach. Selected Proceedings of the 8th Hispanic Linguistics Symposium.
[6] Schmidt, Lauren (2013). Regional Variation in the Perception of Sociophonetic Variants of Spanish /s/. Selected Proceedings of the 6th Workshop on Spanish Sociolinguistics.

DAY 2 - Poster Session D
Prosodic Cues and Questions

Variation in pitch scaling in English of young simultaneous bilinguals in Singapore

Jasper SIM and Brechtje POST
University of Cambridge

Language acquisition by bilingual children in multilingual, multidialectal contexts like Singapore is exceptionally complex. Variable outcomes can be attributed to cross-linguistic influence (CLI), but are also influenced by differences in their language experience. Children may be dominant in different languages, and while parents may all be native speakers of their local variety, accent features may vary due to differences in their language backgrounds and cultural affiliations[1]. Past studies in bilingual language acquisition have found quantity of input to account for variation in language outcomes[2,3], but only a few have examined the effects of differences in the phonological and phonetic quality of parental input on acquisition[4,5]. Further, little attention has been given to such variation in prosodic development, despite considerable variation in tonal alignment and scaling between dialects in adult speech[1]. This study examined phonetic variation in the pitch scaling in the English of simultaneous child bilinguals in Singapore and explored how differences in their ethnic mother tongue (eMT), language dominance, and maternal input contributed to the variation. The participants were nine simultaneous bilingual children (mean age = 58 months, SD = 8.8) and their mothers. To test the effects of their eMT and language dominance on their English production, three children were English-dominant English-Chinese bilinguals (EC, average English use: 85%), three English-dominant English-Malay bilinguals (EM, average English use: 82%), and three English-Malay bilinguals who were more Malay-dominant than EM (MM, average Malay use: 38%). Based on existing literature on the effects of CLI and language dominance, we may expect the production of EC and EM, who would be typically regarded as monolinguals, to be more similar to each other than to MM. The dataset consisted of 172 semi-spontaneous declarative sentences from an information gap activity, which involved the child describing what he/she saw in picture cards so that their mother could match the correct character with the right item. Each target sentence contained a mono- or disyllabic subject, monosyllabic verb in the present continuous tense, and mono- or disyllabic object with a determiner (e.g. ‘Mary is eating an orange.’). All disyllabic words were stress-initial. The confirmational replies of mothers in the same form were taken to be the mothers’ production. The recordings were analysed according to the autosegmental-metrical intonation model of Singapore English[6,7]. Time-normalised F0 (ten points per syllable) was then extracted, and converted into semitones to normalise across participants. A smoothing spline ANOVA analysis with 95% CI (Figure 1) revealed that, compared to Malay children, Chinese children had: (A) larger LH rises, (B) larger HL falls, and (C) larger L-downstepping. There were only small group differences between MM and EM. These ethnic group differences in the rise/fall/downstep ratios were further examined using static measurements, and tested using linear mixed-effects modelling, taking into account various linguistic (e.g. no. of syllables, duration) and non-linguistic variables (e.g. language dominance, eMT). Language dominance was not a significant predictor of variation in scaling for any of the ratios. By contrast, there were significant effects of eMT in each ratio, although some differences were significant only when linguistic factors were considered. 
Similar patterns were found in mothers' production. The results suggest that the children's production patterns were influenced more by quality of input (eMT, patterns in maternal input) than by quantity (language dominance).

The similarities between EM and MM show that CLI and quantity of input cannot fully explain variable outcomes in bilinguals. The findings also contrast with other heritage learners, who tend to assimilate to the major accent, highlighting the differences in the relative influence of the input.

Figure 1: SSANOVAs of normalised pitch contours of each syllable for each group of child participants, with 95% confidence interval (ribbon around spline).

References

[1] Sim, J. H. (2019). "But you don't sound Malay!": Language dominance and variation in the accents of English-Malay bilinguals in Singapore. English World-Wide, 40(1), 79–108. https://doi.org/10.1075/eww.00023.sim
[2] En, L. G. W., Brebner, C., & McCormack, P. (2014). A preliminary report on the English phonology of typically developing English-Mandarin bilingual preschool Singaporean children. International Journal of Language & Communication Disorders, 49(3), 317–332. https://doi.org/10.1111/1460-6984.12075
[3] Law, N. C. W., & So, L. K. H. (2006). The relationship of phonological development and language dominance in bilingual Cantonese-Putonghua children. International Journal of Bilingualism, 10(4), 405–427. https://doi.org/10.1177/13670069060100040201
[4] Mayr, R., & Montanari, S. (2015). Cross-linguistic interaction in trilingual phonological development: The role of the input in the acquisition of the voicing contrast. Journal of Child Language, 42(5), 1006–1035. https://doi.org/10.1017/S0305000914000592
[5] Stoehr, A., Benders, T., van Hell, J. G., & Fikkert, P. (2019). Bilingual Preschoolers' Speech is Associated with Non-Native Maternal Language Input. Language Learning and Development, 15(1), 75–100. https://doi.org/10.1080/15475441.2018.1533473
[6] Chong, A. J. (2013). Towards a model of Singaporean English intonational phonology. The Journal of the Acoustical Society of America, 133(5), 3570. https://doi.org/10.1121/1.4806536
[7] German, J. S., & Chong, A. J. (2018). Stress, tonal alignment, and phrasal position in Singapore English. TAL2018, Sixth International Symposium on Tonal Aspects of Languages, 150–154. ISCA. https://doi.org/10.21437/TAL.2018-30

Reopening the Debate about Tonal Alignment

Louis Hendrix¹
¹University of Edinburgh

Tonal alignment is the timing of tonal events relative to speech segments. The exact implementation of tonal alignment was the subject of much debate in the 90s. Tonal alignment could be defined as the alignment of f0 peaks relative to specific acoustic landmarks, such as the beginning of the nucleus [1] or the syllable [2]. Distance between the landmark and the f0 peak was expressed as a duration in milliseconds by most researchers, while a few preferred to express such time intervals as a proportion of the syllable rhyme [1]. Ultimately, the field was inconclusive about a precise definition [3].

In recent years, the phonetic parameter of tonal alignment has been identified as phonemically contrastive in several typologically unrelated languages [4, 5, 6]. While the previous debate focused on phonetics and intonation, this more recent discussion revisits the unanswered questions about tonal alignment with new information regarding phonology, lexical tone, perception, and innovations in the field. For example, the concept of Tonal Centre of Gravity [7] casts doubt on whether using turning points as indications for tone targets is even necessary at all. Further complicating the debate is the question of how alignment contrasts are perceived. House [8] suggests that spectral changes at segment boundaries impede listeners' sensitivity to changes in f0. DiCanio et al. [5] explain contrastive tonal alignment as systematically contrasting the alignment of tone targets with mora boundaries. A pattern of classical categorical perception was found for contrastively aligned contours in Zhumadian Henan Mandarin [9].

We investigated the perceptibility of tonal alignment differences in speakers of Shilluk, a Nilotic language of South Sudan which uses tonal alignment contrastively on falling contours [4]. A forced-choice discrimination experiment contained pairs of synthesized stimuli with falling contours on the same CVC nonsense word, in both short and long vowel conditions (see Figures 1 & 2). We expected to find evidence for categorical perception, but no participants reliably perceived the nonsense stimulus pairs categorically, despite pairs in certain conditions straddling the estimated phonemic boundary between the contrastively aligned falling tonemes in Shilluk. Moreover, considerable differences in perception results between the short and long vowel conditions in the present study clearly demonstrate that the presence of the nucleus-coda boundary changes the perceptibility of the stimulus pairs. These findings suggest that defining tonal alignment simply relative to the onset of the vowel does not accurately reflect the way it is perceived. As noted above, it remains unclear precisely how tonal alignment should be defined phonetically: relative to specific segment boundaries? Proportional to specific segment durations? In some other way entirely? The current experiment may thus not have resembled the phonetic difference between the two contrastively aligned tonemes in Shilluk closely enough. This calls for a revision of assumptions about tonal alignment and a resolution of the unanswered questions about how alignment should be defined. What may seem like fairly neutral, objective phonetic descriptions of f0 movement fail to accurately capture the phonological essence of the distinctions that people have discussed as being based on "tonal alignment".

References

1. Silverman, K., & Pierrehumbert, J. (1990). The timing of prenuclear high accents in English. In J. Kingston & M. Beckman (eds.).
2. Prieto, P., van Santen, J., & Hirschberg, J. (1995). Tonal alignment patterns in Spanish. Journal of Phonetics, 23, 429–451.
3. Schepman, A., Lickley, R., & Ladd, D. R. (2006). Effects of vowel length and "right context" on the alignment of Dutch nuclear accents. Journal of Phonetics, 34(1), 1–28. https://doi.org/10.1016/j.wocn.2005.01.004
4. Remijsen, B., & Ayoker, O. G. (2014). Contrastive tonal alignment in falling contours in Shilluk. Phonology, 31(3), 435–462. https://doi.org/10.1017/S0952675714000219
5. DiCanio, C. T., Amith, J. D., & García, R. C. (2014). The phonetics of moraic alignment in Yoloxóchitl Mixtec. 4th International Symposium on Tonal Aspects of Languages (TAL-2014), Nijmegen, The Netherlands, 203–210. https://doi.org/10.13140/2.1.1254.0807
6. Gussenhoven, C., & Li, N. (2016). Phonological tone and phonetic intonation in Henan Mandarin: Perceptual evidence. In TIE2016, University of Kent, Canterbury, 3–4.
7. Barnes, J., Veilleux, N., Brugos, A., & Shattuck-Hufnagel, S. (2012). Tonal Center of Gravity: A global approach to tonal implementation in a level-based intonational phonology. Journal of the Association for Laboratory Phonology, 3(2), 337–383. https://doi.org/10.1515/lp-2012-0017
8. House, D. (1990). Tonal Perception in Speech (Vol. 24). Lund: Lund University Press.
9. Gussenhoven, C., & Van De Ven, M. (2020). Categorical perception of lexical tone contrasts and gradient perception of the statement-question intonation contrast in Zhumadian Mandarin. Language and Cognition, 1–35. https://doi.org/10.1017/langcog.2020.14

Pitch and Functional Characterization of Hesitation Phenomena in Italian Discourse

Loredana Schettino¹, Simon Betz², Franco Cutugno³, Petra Wagner²
¹University of Salerno, Italy; ²Bielefeld University, Germany; ³University of Naples "Federico II", Italy

Previous studies on different languages and speech styles have shown that hesitation markers, like lengthenings and fillers, are characterized by specific pitch features. Typically, a flat pitch contour distinguishes hesitant prolongations from non-disfluent ones, the latter being realized with a higher pitch range and rising contour [1, 2]. Similarly, filled pauses were reported as occurring with a low and flat or falling contour relative to the adjacent prosodic context [3, 4, 5]. However, while [1] found the plateau realization to be associated with filled pauses perceived as "felicitous", and the falling contour with "infelicitous" instances, [6] showed that falling or rising contours systematically serve as a turn-holding device in dialogues. Also, in recent work on Italian discourse, pitch shape was observed as a relevant cue for discriminating lexical fillers (flat contour) from other discourse markers acting as rhetoric devices (peak contour) [7]. Our study further investigates the pitch characteristics of different hesitation types in Italian discourse, to find out whether phonetic differentiation reflects different communicative functions.

Our analysis is based on a dataset from the C.H.R.O.M.E. corpus [8], consisting of 80 minutes of semi-spontaneous speech by three female expert tourist guides. Hesitations were defined as "content-free" material conveying procedural meaning in context; specifically, three types were considered: Lengthenings (as in "nell Certosa"), Filled Pauses ("ehm", "eeh"), and Lexicalized Filled Pauses ("let's say", …). They were annotated along with their possible function/s according to their co-text: Hesitative, Word Searching, Structuring, Focusing (annotation scheme developed after [5]). Inter-annotator agreement yielded a Cohen's kappa score of 0.73 [9]. Three phonetic parameters were considered: pitch contour shape (Flat, Falling, Rising, or Peak); pitch range (ST); distance (ST) between the pitch means of the hesitation and its corresponding intonational unit. For statistical analysis, Generalized Linear and Linear Mixed Models were fitted, each with one of the phonetic parameters as the dependent variable, hesitations' "type" and "function" as interacting independent variables, and "speaker" and "item" as random intercepts, yielding the following significant results.

Among the 975 instances, flat and peak pitch contours are the most frequent realizations (Fig. 1a, flat = 65%; peak = 30%). The flat contour is predominant in lengthenings and filled pauses, whereas peak contours only occur with lexical fillers. A visual inspection shows that pitch contour distinguishes lexical fillers that also have a Structuring or Focusing function (Fig. 1b). Especially fillers with a Focusing function occur more frequently with a peak profile (69%) than Word Searching (10%) and Hesitative (12%) ones, the latter being characterized by a flat profile. Though varying, pitch range was found to be affected by hesitation types and functions (Tab. 1). Lengthenings mostly have a narrower range than fillers, while filled pauses involved in lexical retrieval generally occur with a wider range than the ones revealing general hesitation. Among lexical fillers, a wider range characterizes Focusing or Structuring. Also, including pitch contour as a fixed effect in the model shows that those with a peak shape (9.31 ST) present a significantly wider range than flat ones (5.42 ST). As for the distance between hesitations and their surroundings, lengthenings, filled pauses, and lexicalized filled pauses all typically occur with lower pitch and are differentiated further by their functions: hesitations revealing a Word Searching function tend to be more phonetically distant from the context than others, thereby "standing out", whereas hesitations with a Focusing function do not. These results contribute to unveiling the role of pitch in discriminating further communicative functions that hesitations may convey in Italian discourse. They may also provide interesting findings for technological applications striving towards better communicative efficiency and naturalness via integration of hesitations.
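The model specification described above translates almost directly into lme4 syntax. The sketch below shows the pitch-range model only; it is illustrative rather than the authors' code, and the data frame and column names are assumptions:

    # One of the Linear Mixed Models described above: pitch range (ST) as the
    # dependent variable, hesitation type and function as interacting fixed
    # effects, speaker and item as random intercepts.
    library(lme4)
    m_range <- lmer(pitch_range_st ~ type * func + (1 | speaker) + (1 | item),
                    data = hesitations)
    summary(m_range)

A categorical dependent variable such as pitch contour shape would instead call for one of the Generalized Linear (Mixed) Models mentioned above, e.g. a binomial or multinomial variant.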

Figure 1. Pitch profile distribution per hesitation type (1.a, left) and per LFP function (1.b, right).

Table 1. Mean and Std. Dev. values of pitch range and of the distance between item and intonation unit. Significant results are in bold.

Pitch range (ST)
Hes_type   Function   mean    Std.Dev.
LEN        /          3.86    5.33
FP         /          7.66    7.53
LFP        /          7.80    6.09
LFP        HES        4.4     3.15
LFP        WS         5.88    5.54
LFP        STR        7.98    6.74
LFP        FOC        8.47    6.18
FP         HES        5.3     6.2
FP         WS         8.79    7.98

Hes-IP distance (ST)
Hes_type   Function   mean    Std.Dev.
LEN        /          -2.39   3.09
FP         /          -2.33   3.36
LFP        /          -0.34   2.04
/          HES        -1.89   3.5
/          WS         -2.28   2.86
/          STR        -1.02   2.78
/          FOC        -0.46   2.19

References

[1] Moniz, H., Trancoso, I. & Mata, A. I. 2010. Disfluencies and the perspective of prosodic fluency. In A. Esposito, N. Campbell, C. Vogel, A. Hussain, and A. Nijholt (Eds.), Development of Multimodal Interfaces: Active Listening and Synchrony. Springer, 5967, 382–396.
[2] Betz, S., Eklund, R. & Wagner, P. 2017. Prolongation in German. Proceedings of the 8th Workshop on Disfluency in Spontaneous Speech (Stockholm, Sweden), 13-16.
[3] O'Shaughnessy, D. 1992. Recognition of hesitations in spontaneous speech. Proceedings of the 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing (San Francisco, USA), 521-524.
[4] Shriberg, E. 2001. To "err" is human: Ecology and acoustics of speech disfluencies. Journal of the International Phonetic Association, 31, 153-169.
[5] Cataldo, V., Schettino, L., Savy, R., Poggi, I., Origlia, A., Ansani, A., Sessa, I., & Chiera, A. 2019. Phonetic and functional features of pauses, and concurrent gestures, in tourist guides' speech. In Piccardi, D., Ardolino, F., Calamai, S. (Eds.), Gli archivi sonori al crocevia tra scienze fonetiche, informatica umanistica e patrimonio digitale. Officinaventuno, 205-231.
[6] Belz, M. & Reichel, U. D. 2015. Pitch characteristics of filled pauses. Proceedings of the 7th Workshop on Disfluency in Spontaneous Speech (Edinburgh, Great Britain), 1-3.
[7] Schettino, L., & Cataldo, V. 2019. Lexicalized pauses in Italian. Proceedings of the 10th International Conference of Experimental Linguistics (Lisbon, Portugal), 189-192.
[8] Origlia, A., Savy, R., Poggi, I., Cutugno, F., Alfano, I., D'Errico, F., Vincze, L. & Cataldo, V. 2018. An Audiovisual Corpus of Guided Tours in Cultural Sites: Data Collection Protocols in the CHROME Project. Proceedings of the 2018 AVI-CH Workshop on Advanced Visual Interfaces for Cultural Heritage (Castiglione della Pescaia, Italy), 1-4.
[9] Landis, J., & Koch, G. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174.

Another voice for another language? – Language's impact on vocal register

Lucie Judkins, Corine Astésano
Octogone-Lordat, Université Toulouse Jean Jaurès, France

Speakers of multiple languages tend to notice changes in their voices according to the language they speak. These observations suggest that one's vocal register (pitch level and range [1]) varies across the languages spoken by multilingual individuals. The present study compares productions from speakers of French and English and seeks to analyse this phenomenon in order to better understand its origin(s). Vocal register varies depending on a diversity of factors: age [2], emotional state [3], context of communication [4, 5], socio-linguistic group [6], and status of the language spoken (L1 vs L2) [7].

We built a data collection protocol in order to observe the effect of the language spoken on vocal register. We hypothesised that vocal register differences between the two languages are mainly due to each language's prosodic structure, rather than to the effects of the speakers' language status (L1 vs L2). French has a trailer-timed accentuation pattern (w-S) involving a progressive tension of the unstressed syllables placed before the stressed syllable, resulting in open, anterior and full unstressed vowels [8]. As tension in the vocal tract leads to higher frequency in the voice, it seems logical that speaking French involves the adoption of a relatively high register. Conversely, English has a leader-timed structure (S-w). Stress is mostly located at the beginning of words and the following unstressed syllables are produced on a decreasing tension of the vocal tract, leading to strongly centralised vowels and a lower register. Thus, we propose that French induces the use of a higher vocal register compared to English.

We created the corpus B-FREN3 (Bilingual French/English speech across three tasks, 18 speakers, 108 recordings, 216 minutes of speech [9]) in order to compare language productions in both languages, within speakers, across three distinctive tasks (reading aloud, narration of a silent video clip, conversation). We recorded 9 Anglophones (5 females) and 9 Francophones (6 females) with variable bilingual proficiency, in either French or English, ranging from A2/B1 [10] to bilingual from childhood. We manually segmented all recordings into Inter-Pausal speech Units (IPU [11]) and removed non-relevant f0 detection errors. We automatically extracted f0 values using the ProsodyPro [12] plugin in Praat [13] and calculated the median pitch level (MPL) using a Python algorithm [14]. Note that MPL is expressed in Hz, which is relevant for our within-subject statistical analysis. The interval between the highest and lowest corrected f0 values (pitch range, PR) was calculated as the difference between an average of the f0 max and min values [15] extracted on 1 ms frames on each IPU. Those were converted into semitones (ST; ref. 100 Hz) in order to account for the perceptual relevance of the measure [16]. We ran a mixed linear regression model (R [17]) on four independent variables (language, bilingual proficiency, L1, task). The sex variable was controlled but not tested in this study.

Our results show that the language spoken significantly affected the MPL (p < .001), with higher MPL in French than in English. Bilingual proficiency does not have a significant effect on either the MPL or the PR. The task had a strong and significant effect (p < .001) on both the MPL and PR, with the reading task showing the highest and widest registers. The L1 variable showed no significant direct effect. However, interaction effects were found: the L1 had a significant interaction effect with the language spoken (p < .001), as Anglophones showed more dramatic differences between their MPL in French and English. Lastly, note that between-speaker analyses indicate that Francophones have a higher register than Anglophones (p < .001). Altogether, our results confirm our hypothesis according to which prosodic structure is accountable for within-speaker register variations: French is mostly spoken on a higher register than English, regardless of bilingual proficiency. An unexpected outcome of our study concerns the L1 variable: independently of the language spoken, Anglophones present a wider range than Francophones, whose registers sit higher and vary less dramatically.
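For concreteness, the pitch measures described above can be written out as follows. This is an illustrative R sketch, not the Python algorithm used in the study, and the function and argument names are assumptions:

    # Semitone conversion relative to the 100 Hz reference used in the study.
    to_st <- function(f0_hz, ref = 100) 12 * log2(f0_hz / ref)
    # Median pitch level (MPL), kept in Hz for the within-subject comparisons.
    mpl_hz <- function(f0_hz) median(f0_hz, na.rm = TRUE)
    # Pitch range (PR) of one IPU: difference between the averaged f0 maxima
    # and minima, expressed in semitones.
    pr_st <- function(f0_max_hz, f0_min_hz) {
      to_st(mean(f0_max_hz)) - to_st(mean(f0_min_hz))
    }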
Table 1 – Median Pitch Level (MPL) and Pitch Range (PR) in French and English in each task. One representative subject from each level group has been selected and displayed in this table. Main effect: MPL is significantly higher in French. No significant effect of the language spoken on the PR. Significant effect of the task: MPL higher and PR wider in the reading task.

             Reading aloud***                Narration                       Conversation
             MPL (Hz)***      PR (ST)        MPL (Hz)***      PR (ST)        MPL (Hz)***      PR (ST)
             FR      ENG      FR     ENG     FR      ENG      FR     ENG     FR      ENG      FR     ENG
Anglo. A2    117.94  114.53   10.78  10.57   109.99  107.68   9.28   7.839   112.12  107.18   9.04   8.48
Anglo. C1    204.3   193.46   13.44  13.8    182.97  178.28   12.68  12.8    184.03  182.74   10     11.24
Anglo. BI    198.2   184.64   11.21  11.32   171.61  164.55   11.44  11.37   182.59  175.44   5.56   4.68
Franco. A2   208.56  204.24   15.59  12.56   207.67  198.13   10.711 9.42    208.18  188.53   11.91  9.24
Franco. C1   196.74  192.68   8.17   10.01   196.25  179.75   8.57   9.81    186.38  196.58   8.24   7.92
Franco. BI   222.52  208.98   11.13  10.46   196.1   203.32   9.37   9.61    204.92  203.63   8.69   10.41

References

[1] Ladd, R. 1996. Intonational Phonology. Cambridge Studies in Linguistics 79. Cambridge; New York: Cambridge University Press.
[2] Chevrie-Muller, C., Salomon, D., Ferrey, G. (1971). Contribution à l'établissement de quelques constantes physiologiques de la voix parlée de la femme adolescente, adulte et âgée. J. Franç. d'Oto-Rhino-Laryng. XVI, 433–455.
[3] Revis, J. (2013). La voix et soi: Ce que notre voix dit de nous. De Boeck Supérieur.
[4] Simon, A.-C., Auchlin, A., Avanzi, M., Goldman, J.-P. (2010). Les phonostyles: une description prosodique des styles de parole en français. Les voix des Français: en parlant, en écrivant. Bern: Lang, 71–88.
[5] Guaitella, I. (1990). Propositions pour une méthode d'analyse de l'intonation en parole spontanée. J. de Phys. Coll., 51 (C2), C2-515–C2-518.
[6] Demers, M. (2003). La voix du plus fort: étude acoustique sur le registre vocal en tant qu'indicateur sociolectal et dialectal en français spontané. Dans Demers, M. (dir.), Registre et voix sociale (p. 79-121). Québec: Nota Bene.
[7] Gregersen, T., MacIntyre, P. D., & Meza, M. D. (2014). The motion of emotion: Idiodynamic case studies of learners' foreign language anxiety. The Mod. Lang. J., 98, 574–588.
[8] Wenk, B. J. (1983). Effets de Rythme dans le Français Parlé. Recherches sur le français parlé, Vol. 5, Publications de l'Université de Provence, Aix-en-Provence, 147-162.
[9] Judkins, L. (2019). Incidence de la langue parlée sur le registre vocal, étude comparative du français et de l'anglais. (Mémoire de Master 2, UT2J, France).
[10] https://www.coe.int/fr/web/common-european-framework-reference-languages/table-1-cefr-3.3-common-reference-levels-global-scale
[11] Bertrand, R., Blache, P., Espesser, R., Ferré, C., Meunier, B., Priego-Valverde, B., Rauzy, S. (2008). Le CID - Corpus of Interactional Data - Annotation et Exploitation Multimodale de Parole Conversationnelle. Traitement Automatique des Langues, ATALA, 49(3), 1-30.
[12] Xu, Y. (2013). ProsodyPro — A Tool for Large-scale Systematic Prosody Analysis. In Proceedings of TRASP'2013, Aix-en-Provence, France, 7-10.
[13] Boersma, P., & Weenink, D. (2009). Praat: doing phonetics by computer [computer program]. Retrieved from http://www.praat.org.
[14] Python Software Foundation. Python Language Reference, version 2.7. Available at http://www.python.org
[15] De Looze, C. & Hirst, D. (2010). L'échelle OME (Octave-MEdiane): une échelle naturelle pour la mélodie de la parole. Proceedings of the XXVIIIème Jour. d'Ét. sur la Par. (JEP 2010), Mons, Belgium.
[16] Hart, J. 't, Collier, R., Cohen, A. (1990). A perceptual study of intonation: an experimental-phonetic approach to speech melody. Cambridge, U.K.: Cambridge University Press.
[17] R Core Team (2017). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

Self-confidence in perception and production of non-native sounds

Natalia Kartushina¹, David Soto²,³ and Clara Martin²,³
¹University of Oslo, ²BCBL, ³Ikerbasque Basque foundation for science

The aim of the present study was to investigate and compare, for the first time, metacognitive processing in two language domains by focusing on self-confidence (i.e. how a person rates the quality of one's own answer) in the perception and production of non-native speech sounds. Although the dominant theoretical perspectives predict a tight relationship between non-native/second-language (L2) speech sound perception and production [1–3], research provides no evidence for a consistent relationship between them [3–12]. The role of metacognition in phonetic processing, on the other hand, remains poorly understood (with some studies reporting weak capacity to monitor sound mispronunciations [13]), and no study, to date, has assessed metacognition in both sound perception and production within the same experimental procedure.

In the current study, we examined novice Spanish speakers' perception and production of the French vowel contrast /ø/-/œ/ and assessed, on a trial-by-trial basis, their self-confidence in performance. Previous research [8] has shown that Spanish speakers experience difficulties in discriminating these two vowels in both perception and production. Thirty-six participants were first familiarized with the target vowels /ø/-/œ/ and their respective labels through a semi-natural auditory exposure to consonant-vowel (CV) syllables containing the target vowels. Then they performed, in a counterbalanced order, a 2-Alternative-Forced-Choice vowel identification task and a vowel production (via syllable reading) task (200 trials in each). This experimental design addressed some limitations of previous research and allowed us to study the relationship between L2 speech sound perception and production by controlling, systematically, for L2 experience (all novice learners), for the level of processing assessed in both modalities (prelexical), for the type of input during the exposure (identical familiarization for all participants) and for the type of L2 sounds (non-native French vowels not present in the Spanish vowel inventory).
In the perception task, we used a logistic regression to predict accuracy (0 vs 1) across trials, based on trial-by-trial confidence ratings. Our dependent measure was the area under the receiver operating characteristic (ROC) curve: a sensitive, nonparametric, bias-free measure of predictive performance in binary classification. The results revealed that participants' confidence judgments aligned with their accuracy in non-native speech sound perception (t(35) = 5.79, p < 0.001). To analyse metacognitive ability in production, we used a linear regression model to predict participants' production accuracy (indexed by the Mahalanobis [acoustic] distance to the target-vowel space, derived from native French speakers' productions) based on confidence ratings. Self-confidence did not align with this fine-grained measure of non-native production (t(35) = -0.115, p = 0.908). Yet, separate ROC analyses on a binary measure of non-native production, which situated participants' productions within/outside the 'native' zone, were significant (t(35) = 4.91, p < 0.001), suggesting that metacognitive monitoring in non-native production operates on a relatively coarse, yet meaningful, sound representation level. While confidence ratings were similar and highly correlated between the perception and production tasks (rs = 0.65), there were no associations between the two language domains regarding the primary task performance (rs = -0.17) or metacognitive ability across language domains (rs = 0.07). We discuss the ramifications of these findings for second language processing and for the unsettled debate on the relationship between sound perception and production.
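The trial-by-trial metacognition measure used in the perception task, the area under the ROC curve relating confidence to accuracy, can be computed as in the sketch below. This is an illustrative implementation using the pROC package, not the authors' code, and the data frame trials with columns correct and confidence is an assumption:

    # Metacognitive sensitivity for one participant: how well do confidence
    # ratings discriminate correct from incorrect identification trials?
    # AUC = 0.5 means chance-level monitoring; 1 means perfect alignment.
    library(pROC)
    r <- roc(response = trials$correct, predictor = trials$confidence)
    auc(r)
    # With one AUC per participant, the group can be tested against chance,
    # e.g. t.test(auc_by_subject, mu = 0.5)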

References

1. Best, C. T. A Direct Realist View of Cross-Language Speech Perception. In Speech perception and linguistic experience: Theoretical and methodological issues (ed. Strange, W.), 171–204 (York Press, 1995).
2. Best, C. T. & Tyler, M. D. Non-native and second-language speech perception: Commonalities and complementarities. In Second language speech learning: The role of language experience in speech perception and production (eds. Munro, M. J. & Bohn, O.-S.), 13–34 (John Benjamins, 2007).
3. Flege, J. E. Second language speech learning: Theory, findings, and problems. In Speech Perception and Linguistic Experience: Issues in Cross-Language Research (ed. Strange, W.), 233–277 (York Press, 1995).
4. Bradlow, A. R., Pisoni, D. B., Akahane-Yamada, R. & Tohkura, Y. Training Japanese listeners to identify English /r/ and /l/: IV. Some effects of perceptual learning on speech production. J. Acoust. Soc. Am. 101, 2299–2310 (1997).
5. Flege, J. E. & Eefting, W. Production and perception of English stops by native Spanish speakers. J. Phon. 15, 67–83 (1987).
6. Hanulíková, A., Dediu, D., Fang, Z., Bašnaková, J. & Huettig, F. Individual Differences in the Acquisition of a Complex L2 Phonology: A Training Study. Lang. Learn. 62, 79–109 (2012).
7. Hattori, K. & Iverson, P. Examination of the relationship between L2 perception and production: an investigation of English /r/-/l/ perception and production by adult Japanese speakers. In Interspeech Workshop on Second Language Studies: Acquisition, Learning, Education and Technology (ed. Nacano, M.) (Waseda University, 2010).
8. Kartushina, N. & Frauenfelder, U. H. On the effects of L2 perception and of individual differences in L1 production on L2 pronunciation. Front. Psychol. 5 (2014).
9. Nagle, C. L. Examining the Temporal Structure of the Perception–Production Link in Second Language Acquisition: A Longitudinal Study. Lang. Learn. 68, 234–270 (2018).
10. Okuno, T. & Hardison, D. M. Perception-production link in L2 Japanese vowel duration: Training with technology. Lang. Learn. Technol. 30, 61–80 (2016).
11. Peperkamp, S. & Bouchon, C. The relation between perception and production in L2 phonological processing. In Proceedings of Interspeech, 161–164 (2011).
12. Sheldon, A. & Strange, W. The acquisition of /r/ and /l/ by Japanese learners of English: Evidence that speech production can precede speech perception. Appl. Psycholinguist. 3, 243–261 (1982).
13. Dlaska, A. & Krekeler, C. Self-assessment of pronunciation. System 36, 506–516 (2008).

Types of hate speech: How speakers of Danish rate spoken vs. written hate speech

Jana Neitsch¹, Oliver Niebuhr²
¹University of Stuttgart, ²University of Southern Denmark (SDU)

Hate speech mainly occurs in written form on social-media platforms and discriminates societal minorities for their national origin, sexual orientation, gender or disabilities. However, hate speech can also occur in spoken form in and beyond such platforms [1, 2], making its increasing occurrence still more harmful. Even though hate speech has a destructive power and is a growing source of concern around the globe, there is hardly any research on the investigation of hate speech in both the spoken and the written mode. The present study belongs to a series of linguistic experiments that aim at filling this gap.

We present a web-based pilot study investigating the most frequent types of hate speech in both the spoken and the written mode in Danish. The study compares the perceived severity of different types of hate speech and, moreover, analyses whether the two modes intensify and/or mitigate Danish hate speech. The six investigated types of hate speech (i.e., feature conditions) were: figurative language (FGL), irony (IRO), Holocaust reference (HOL), indirectness (IND), rhetorical question (RQ), imperative (IMP), all similarly short (i.e., utterances with 20-30 syllables) and derived from original hate speech items (ORIG) taken from our Twitter-Facebook corpus [4]. Overall, we designed 12 items per hate-speech type: 6 targeting foreigners, a generic group, and 6 targeting Muslims, a more specific group.

The written hate-speech items were then realized by a phonetically trained Danish native speaker who met the profile of a prototypical hate speaker [5]. The participants' task (conducted alone and at home) was to rate the items they had read/heard on two dimensions with a single mouse click: personal unacceptability on the x-axis and necessity of legal/societal consequences for the originator on the y-axis (Fig. 1). So far, 8 native speakers of Danish (∅ = 25.6 years, SD = 6.6 years, 1 male) have evaluated the stimuli in a within-subjects design (order: written - spoken), with three days separating participants' ratings of written and spoken stimuli. The items of both questionnaires were presented in individually randomized orders. Before the actual web study began, a written instruction illustrated and explained the two-dimensional rating space, allowing participants to familiarize themselves with the procedure.

In analyzing the data, absolute values (coordinates) were converted into relative x- and y-coordinates. Descriptive results showed the same feature-condition ranking for both dimensions: FGL, HOL, and IMP were rated most unacceptable and caused the strongest call for consequences (esp. FGL). In contrast, on the lower end of both dimensions we found IND, RQ, and IRO. For the x-coordinates, a mixed-effects regression model was calculated, based on target (Muslims vs. foreigners), feature condition, and mode (spoken vs. written) as independent variables (Fig. 2). All conditions – except for HOL (p > 0.08) – differed from FGL by triggering significantly lower ratings (all p < 0.002). Additionally, FGL and IMP items did not differ from HOL items (both p = 0.08). Furthermore, spoken stimuli and stimuli targeting Muslims resulted in higher ratings than written stimuli or stimuli targeting foreigners (both p < 0.0002). The analysis of the y-coordinates (Fig. 3) showed that FGL items called for stronger consequences than all other feature conditions (all p < 0.0002). Similar to the x-coordinates, spoken stimuli as well as stimuli targeting Muslims resulted in higher ratings than written stimuli or stimuli targeting foreigners (both p < 0.0002).

Taken together, FGL items were rated most unacceptable and triggered the strongest call for consequences among Danish participants. Additionally, spoken hate speech as well as hate speech targeting Muslims as the more specific target group intensified participants' evaluation with respect to both unacceptability and the need for consequences for the originator.

Additionally, spoken hate speech as well as hate speech targeting Muslims as the more specific target group intensified participants' evaluation with respect to both unacceptability and the need for consequences for the originator. Currently, the order of the questionnaires (spoken – written) is being swapped to generate more data and to exclude a potential order effect. In a next step, we plan a cross-linguistic comparison with an already existing parallel set of German hate-speech stimuli and their ratings.

References
[1] Neitsch, J., Niebuhr, O. & Kleene, A. In press. What if hate speech really was speech? Towards explaining hate speech in a cross-modal approach. In: Wachs, S., Koch-Priewe, B., Zick, A. (eds.), Hate Speech – Theoretische, empirische und anwendungsorientierte Annäherungen an eine gesellschaftliche Herausforderung. Springer.
[2] Neitsch, J. & Niebuhr, O. 2020. Are Germans better haters than Danes? Language-specific implicit prosodies of types of hate speech and how they relate to perceived severity and societal rules. Proceedings of the 16th Annual Conference of the International Speech Communication Association, Shanghai, China.
[3] Neitsch, J. & Niebuhr, O. 2020. On the role of prosody in the production and evaluation of German hate speech. Proceedings of the 10th International Conference on Speech Prosody, Tokyo, Japan, 710-714.
[4] Bick, E. 2020. An Annotated Social Media Corpus for German. Proceedings of the 12th International Conference on Language Resources and Evaluation, Marseille, France, 6129-6137.
[5] Hrdina, M. 2015. Identity, activism and hatred: Hate speech against migrants on Facebook in the Czech Republic. Naše společnost 14(1), 38-47.

Fig. 1: Two-dimensional rating space for participants’ evaluation.

Fig. 2: Unacceptability dimension (x) across target, mode, and all feature conditions.

Fig. 3: Need for consequences dimension (y) across target, mode, and all feature conditions.

The processing of emotional prosody and semantics in French

Francesca Carbone1 and Caterina Petrone1
1Aix-Marseille Université, CNRS, Laboratoire Parole et Langage, Aix-en-Provence, France

Listeners use both semantic and prosodic information to infer speakers' emotional states. Previous studies have widely confirmed that the co-occurrence of these two "channels" speeds up the processing of emotional information [7, 8]. However, the question remains whether prosody or semantics predominates during emotion processing. Here, we report the first study aiming at testing the effects of emotional prosody, emotional semantics and their interaction on skin conductance responses (SCRs) in French. SCRs are abrupt changes in skin conductance that depend on the activity of sweat glands innervated by the orthosympathetic branch of the autonomic nervous system [4]. SCRs are a reliable and non-invasive measure to evaluate listeners' spontaneous reactions to emotional stimuli varying along the arousal and valence dimensions [4, 6, 9].

Our corpus included 28 utterances produced by a professional actress who rendered three emotional states (i.e., angry, happy, neutral) through prosody only (e.g., elle a lu l'article 'she read the article', spoken with angry, happy or neutral prosody), through semantics only (e.g., elle a joué avec ses nerfs 'she played with his nerves', which contains semantic cues to anger and was spoken with neutral prosody) and through the combination of prosody and semantics (e.g., elle a joué avec ses nerfs spoken with angry prosody). Acoustically, utterances spoken with angry and happy prosody presented higher mean fundamental frequency, higher intensity and a faster speech rate than utterances with neutral prosody (p < 0.05), independent of their semantics. Prior to the experiment, materials were validated in their written and oral forms to ensure that their semantics and prosody were representative of each target emotion. Seventy-seven native French listeners (41 females and 36 males) rated the arousal and valence of each selected stimulus on a 5-point Likert scale. Simultaneously, their SCRs were continuously recorded on the volar surface of the medial and distal phalanges of the fingers of their non-dominant hand [4].

Behavioural results reflected the general assessment of emotions along the valence and arousal dimensions [2]. Valence ratings did not vary across channel conditions, while arousal ratings were higher for utterances conveying emotions through prosody only and through the combination of prosody and semantics than for utterances conveying emotions by semantics only. Concerning SCRs, angry prosody alone [β=0.020, SE=0.046, t=6.07, p<0.001] and the combination of angry prosody and semantics [β=0.025, SE=0.041, t=4.29, p<0.001] triggered the highest SCRs, with no difference between the two conditions (p > 0.05). The effects of prosody on SCRs were limited to female listeners, however, while men were insensitive to the experimental manipulations (Figure 1). Our results support the hypothesis that angry prosody attracts the highest degree of attention, as it requires listeners to physiologically adapt to a potentially dangerous situation [1]. The lack of effects of emotional semantics supports the claim that prosody is ontogenetically older and more relevant for adaptive functions than semantics [5]. Finally, differences between listeners' sexes indicate that women have a higher sensitivity to threatening stimuli, possibly as a consequence of sex-specific evolutionary pressures (e.g., the need to ensure offspring survival) [10]. We aim to expand the study by investigating the correlations between the phonetic/phonological characteristics of the stimuli and the behavioral and SCR results. While intonation contours have often been neglected in emotional speech research, they might be relevant to convey basic emotions [3]. We expect that the three emotional states will have a different characterization in terms of pitch accent/edge boundary tone distribution, which might in turn modulate listeners' responses.

Figure 1. Means and standard errors of the SCRs split by emotion (i.e., anger, happy, neutral), channel (semantics only, prosody only, semantics+prosody) and the sex of the listeners (i.e., females and males). Judgments for semantically and prosodically neutral stimuli (our baseline) are illustrated by the grey bar.

References
[1] Aue, T., Cuny, C., Sander, D. & Grandjean, D. 2011. Peripheral responses to attended and unattended angry prosody: A dichotic listening paradigm. Psychophysiology 48, 385-392.
[2] Bradley, M. M. & Lang, P. J. 1994. Measuring emotion: The Self-Assessment Manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry 25, 49-59.
[3] Cao, H., Beňuš, Š., Gur, R. C., Verma, R. & Nenkova, A. 2014. Prosodic cues for emotion: analysis with discrete characterization of intonation. In Campbell, N., Gibbon, D. & Hirst, D. (Eds.), Proceedings of the 7th International Conference on Speech Prosody, Dublin, 130-134.
[4] Dawson, M. E., Schell, A. M. & Filion, D. L. 2000. The electrodermal system. In Cacioppo, J. T., Tassinary, L. G. & Berntson, G. G. (Eds.), Handbook of Psychophysiology. Cambridge: Cambridge University Press, 200-223.
[5] Fitch, W. T. 2010. The Evolution of Language. Cambridge: Cambridge University Press.
[6] Khalfa, S., Roy, M., Rainville, P. & Peretz, I. 2008. Role of tempo entrainment in psychophysiological differentiation of happy and sad music? International Journal of Psychophysiology 68(1), 17-26.
[7] Kotz, S. A. & Paulmann, S. 2007. When emotional prosody and semantics dance cheek to cheek: ERP evidence. Brain Research 1151, 107-118.
[8] Paulmann, S. & Pell, M. D. 2011. Is there an advantage for recognizing multi-modal emotional stimuli? Motivation and Emotion 35(2), 192-201.
[9] Petrone, C., Carbone, F. & Lavau-Champagne, M. 2016. Effects of emotional prosody on skin conductance responses in French. Proceedings of Speech Prosody (Boston, MA, USA), 425-429.
[10] Williams, L. M., Barton, M. J., Kemp, A. H., Liddell, B. J., Peduto, A., Gordon, E. & Bryant, R. A. 2005. Distinct amygdala-autonomic arousal profiles in response to fear signals in healthy males and females. Neuroimage 28(3), 618-626.

The Role of Global and Local Pitch Levels in the Perception of Questions in Ede Chaabe

Tajudeen Mamadou Y, Huteng Dai, Yair Haendler and Mariapaola D'Imperio
Rutgers University

Low-pitched 'lax prosody' [5,6] is an areal feature found along the Sudanic Belt, presenting some or all of the following intonational cues for polar questions: a final L% tone or falling intonation, a final polar or mid tone, final lengthening, breathy termination and/or cancellation of penultimate lengthening [6, 5, 2, 1]. In particular, [1] added a raised register level (i.e. 'expanded pitch range') to these characteristics, which would counteract the effect of the presence of a final L% in questions, in line with Frequency Code predictions [4]. In Ede Chaabe (Niger-Congo), however, yes/no questions are systematically realized at a lower global pitch level than statements, as well as with a local final F0 fall (L%), as illustrated in Figure 1. This goes counter to Ede Yoruba, with which it shares the same sub-family and which is an established non-lax-prosody language [11]. These production findings put into question the predictions of the Frequency Code, given that neither global pitch differences nor boundary tone marking would go towards a higher pitch level (register) specification in questions. The aim of this study was hence to perceptually test the combined role of the terminal contour (presence vs absence of a fall) and global pitch level manipulation on the perception of polar questions in Ede Chaabe. Based on the results of a production study [3], we hypothesized that the global pitch level (Register) and a local final fall (L%) would affect question identification, independent of lexical tone specification. Our prediction was that both a lower global pitch level and the presence of a final falling edge tone would induce question perception, and that the effect would be additive.

Figure 1. Utterance pair (ʔó lówó/? "He has money / Does he have money?"), including a question (red, low) and a statement (blue, high) pitch track in Ede Chaabe.

Two all-H utterance bases (bijɔ́ nɛ́ wó./? 'Biyo spent money' / 'Did Biyo spend money?') were resynthesized with PSOLA [8] in Praat. After stylizing the pitch contours of each base, the global utterance register level was modified in five steps of 10 Hz each, while four IP-final F0 fall steps of 17 Hz were superimposed on the last syllable (L% fall). For the final fall, the magnitude of the difference between steps was fixed, but the final L target values in each step varied as a function of the global register level. The manipulation yielded a total of 120 stimuli, which were presented in 3 blocks in subject-randomized order to native speakers of Ede Chaabe. 24 participants (11 females and 13 males) were instructed to perform a two-alternative forced-choice labelling task (question vs. statement) via PsychoPy3 [9]. A short training was provided to each participant at the beginning of each session. A generalized mixed-effects model was fitted to the question response data. We estimated all fixed main effects and interactions of the independent variables Register, Edge Tone Level and (stimulus) Base. Random effects for participants of the variables Register and Edge Tone Level were estimated as well. We controlled for the effect of Age and Gender (male/female participants) by including these two variables in the model.
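The size of the resulting stimulus set follows directly from the design; the short sketch below simply enumerates the factorial grid (2 bases x 5 register levels x 4 fall steps) using the register and fall values reported for the continua. The labels and the assumption that the 3 blocks repeat the 40 unique resyntheses are illustrative and not taken from the authors' Praat/PSOLA script.

from itertools import product

bases = ["question_base", "statement_base"]      # the two all-H utterance bases
register_hz = [130, 140, 150, 160, 170]          # global register levels, 10 Hz steps
fall_hz = [0, 17, 34, 51]                        # IP-final F0 fall on the last syllable, 17 Hz steps

# Every combination of base, register level and final-fall step is one stimulus.
grid = list(product(bases, register_hz, fall_hz))
print(len(grid))        # 2 * 5 * 4 = 40 unique resynthesised stimuli
print(len(grid) * 3)    # presented in 3 blocks -> 120 trials in total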
Figure 3. Proportion of question responses (with standard errors) to stimuli with different steps of edge tone level final F0 targets (steps on the x-axis, where 1 = no fall, 2 = 17 Hz fall, 3 = 34 Hz fall and 4 = 51 Hz fall) and different global pitch levels in colored lines, with steps of 130 Hz, 140 Hz, 150 Hz, 160 Hz and 170 Hz. The black line shows the average of all pitch levels collapsed together. Question base stimuli are plotted in the left panel and statement base stimuli in the right panel.

There was a significant effect of Final Fall (coef = 2.97, z = 8.43, p < .001), in that stimuli were more readily identified as questions with lower steps of this variable (see Figure 3). That is, stimuli with a final fall of at least 40 Hz were perceived as questions, as opposed to stimuli with no (i.e. 0 Hz) or a smaller final F0 fall. There was also a significant effect of global register level (coef = -.29, z = -2.53, p = .01), indicating that higher levels of global pitch led to a smaller proportion of question responses. A significant Base effect was also observed (coef = -.35, z = -2.93, p = .003), with more question responses for question base stimuli than for statement base ones.

Our findings support our hypothesis, since both the final fall and the overall register significantly cued questionhood, as predicted. These results add to the body of evidence showing that the Frequency Code ([4]) is not a universal characteristic of questions and, more importantly, they offer new data on the interaction of lexical tone register and intonation in perception for an understudied language.

References
[1] Cahill, M. 2013. Polar question intonation in five Ghanaian languages. In LSA Annual Meeting Extended Abstracts (Vol. 4, pp. 10-1).
[2] Kügler, F. 2016. Tone and intonation in Akan. Intonation in African tone languages, 24, pp. 89-129.
[3] Mamadou, T. & D'Imperio, M. 2020. Global and Local Pitch Levels in Ede Chaabe are not Predicted by the Frequency Code. 179th Meeting of the Acoustical Society of America, 7-11 December 2020.
[4] Gussenhoven, C. 2002. Intonation and interpretation: phonetics and phonology. In Speech Prosody 2002, International Conference.
[5] Rialland, A. 2007. Question prosody: an African perspective. Tones and Tunes, 1, pp. 35-64.
[6] Rialland, A. 2009. The African lax question prosody: Its realization and geographical distribution. Lingua, 119(6), pp. 928-949.
[7] Chung, Y. & Rose, S. 2012. Duration and Pitch in Moro Polar Questions. In 43rd Annual Conference on African Linguistics, Tulane University.
[8] Boersma, P. & Van Heuven, V. 2001. Speak and unSpeak with PRAAT. Glot International, 5(9/10), 341-347.
[9] Peirce, J., Gray, J., Halchenko, Y., Britton, D., Rokem, A. & Strangman, G. 2011. PsychoPy – A Psychology Software in Python.
[10] Gandour, Jackson T. 1978. The Perception of Tone. In Victoria Fromkin (ed.), Tone: A Linguistic Survey, pp. 41-76. New York: Academic Press.
[11] Connell, B. & Ladd, D. R. 1990. Aspects of pitch realisation in Yoruba. Phonology, 7(1), 1-29.

Prosodic marking in Russian multiple wh-questions: A sentence production study

Pavel Duryagin1 and Arthur Stepanov2
1Ca' Foscari University of Venice, Italy, 2University of Nova Gorica, Slovenia

Although the prosody of regular constituent/wh-questions is increasingly often discussed in the context of theories of prominence, focus and prosodic constituency (e.g. [1]), the prosodic contours of multiple wh-questions (cf. English Who bought what?) have received surprisingly little attention in the literature so far. Slavic languages offer an additional dimension of interest to the problem, as all wh-phrases are typically fronted in the clausal left periphery, forming a wh-cluster, cf. Russian (1). Investigation of the prosody of multiple wh-questions in these languages is likely to bring new theoretical insights, in particular regarding the realization of units with very similar prosodic properties, the limits of prosodic autonomy of wh-phrases and the degree of mapping between prosodic and syntactic boundaries (cf. [2]).

Here we report the results of a sentence production study investigating the prosody of Russian multiple wh-questions. The target stimuli included 12 multiple wh-questions with 4 Russian wh-pronouns (kogo 'who.Acc', komu 'who.Dat', kogda 'when', kuda 'where to') in all possible word orders, as well as 8 single wh-questions containing the non-interrogative pronouns jego 'he.Acc', jemu 'he.Dat', togda 'then', tuda 'there' in preverbal position instead of the second wh-pronoun; see (1)-(2). 20 questions and statements of comparable length were included as fillers, for a total of 40 stimuli. Because producing multiple wh-questions is known to be associated with increased contextual demands, each sentence was preceded by a short preamble story setting up an appropriate contextual situation. Participants were asked to read the preamble to themselves and then naturally pronounce the sentence that followed the story. The stimuli were presented in a pseudo-randomized order. The data from 20 native speakers of Standard Russian (14F, mean age 27.8, σ = 3.5) were subjected to analysis. A total of 327 target stimulus tokens were manually labelled in Praat [3]. Within the initial two-word cluster, the presence/absence of local pitch maxima and the alignment of peaks were measured, as well as mean f0 in semitones for each of the four vowels in the initial four-syllable cluster. The mean pitch and peak alignment were measured within the vowel boundaries (not the entire syllables) to minimize microprosodic effects of non-identical consonants.
The analysis of pitch contours showed that, while the first wh-word is obligatorily assigned a pitch accent (cf. [4, 5]), in multiple questions the second wh-word can optionally be assigned a pitch accent, phonetically realized as a downstepped second f0 peak (Fig. 1). In single wh-questions, however, the second word (always a non-interrogative pronoun in our data) is normally not associated with a visible f0 peak (Fig. 2). The presence of the second peak is confirmed by mean pitch measurements. Mixed-effects regression analysis showed a robust main effect of question type (single vs. multiple, t(301) = -6.505, p < 0.001) on absolute values of mean f0 measured on the stressed vowel of the second constituent. Moreover, as the stylized contours in Fig. 3 indicate (see especially the values for vowel 4), even when no visible f0 peak is associated with the second wh-word, wh-pronouns are produced at a higher pitch than their non-interrogative counterparts. The effect of question type remains strong (t(239.5) = -4.294, p < 0.001) even after 63 contours with double peaks, mostly multiple wh-questions, are excluded from the regression model. In addition, both low and high boundary tones are licensed at the end of the intonational phrase, independently of the presence of a high target on the second word. The alignment of peaks varies to a high extent, from early to late and delayed, and correlates with the choice of the edge tone. Our results suggest that in Russian multiple wh-questions the non-initial interrogative pronouns are regularly produced at a higher pitch than their non-interrogative counterparts. This is generally in line with the class of prosodic theories that treat wh-phrases on a par with contrastively focused phrases (cf. [6]): in both cases, the increased prosodic prominence is manifested in heightened pitch. The downstep within a wh-cluster suggests that both wh-phrases belong in a single prosodic domain.
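The vowel-by-vowel f0 measurements are expressed in semitones; a minimal sketch of the standard Hz-to-semitone conversion is given below. The abstract does not state the reference frequency, so the 100 Hz reference here is an assumption (a speaker-specific reference would work the same way).

import math

def hz_to_semitones(f0_hz, ref_hz=100.0):
    # 12 * log2(f0 / reference) converts a frequency ratio into semitones.
    return 12.0 * math.log2(f0_hz / ref_hz)

# Example: a vowel with a mean f0 of 220 Hz lies about 13.7 st above a 100 Hz reference.
print(round(hz_to_semitones(220.0), 1))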

It is yet unclear what factor(s) determine an optional visible second peak in a wh-cluster. One possibility has to do with the distribution of prosodic weight across the two wh-items, in line with individual preferences in assessing the relative contextual importance of the alternatives introduced by each wh-item.

(1) Kogda kogo razbudili?
    when who-acc. woke
    'Who did they wake up when?'

(2) Kogda jego razbudili?
    when he-acc. woke
    'When did they wake him up?'

Figure 1. A multiple question with a visible second f0 peak pronounced by a male speaker (left) and without a visible second f0 peak pronounced by a female speaker (right).

References
[1] Ozerov, P. 2019. This Is Not an Interrogative: The Prosody of "Wh-Questions" in Hebrew and the Sources of Their Questioning and Rhetorical Interpretations. Language Sciences 72, 13-35.
[2] Selkirk, E. 2011. The syntax-phonology interface. In Goldsmith, J., Riggle, J. & Yu, A. (Eds.), The Handbook of Phonological Theory, 2nd edition. Oxford: Blackwell Publishing, 435-485.
[3] Boersma, P. & Weenink, D. 2020. Praat: doing phonetics by computer. Computer program.
[4] Bryzgunova, E. 1980. Intonation. In Shvedova, N. Y. (Ed.), Russian Grammar I. Moscow: Nauka, 96-122.
[5] Duryagin, P. 2020. On some factors affecting the choice of tune in Russian wh-questions. In Proceedings of the 10th International Conference on Speech Prosody 2020, 235-239.
[6] Büring, D. 2016. Intonation and Meaning. Oxford Surveys in Semantics and Pragmatics 3. Oxford: Oxford University Press.

The prosody of information-seeking and echo wh-questions in English

María Biezma1, Bettina Braun2, Angela James2
1Linguistics Department and Spanish and Portuguese Studies, UMass Amherst, USA; 2Department of Linguistics, University of Konstanz, Germany

Inquisitive utterances with wh-words may be used to plainly seek information (InfoS) or to ask about what has just been said, i.e. echo questions (EcQs), be it because the questioner did not perceive the previous utterance properly (EcQPer) or because the speaker cannot believe it (epistemic: EcQEp). This holds for utterances with a fronted wh-word (and when is your brother coming?) or a final wh-word (and your brother is coming when?) ([1]). It is often claimed that the semantics of EcQs is different from that of InfoSs. The different semantics is justified by assumed (categorical) prosodic differences ([1-4]). E.g., [1,2] argue that EcQs show a complex pitch accent, L+H*, and a final rising contour H-H%. All parts of the utterance are given (and presumably deaccented) while only the wh-word is focused ([1-3]). There is, however, little empirical evidence supporting these claims (but see [5] for German wh-final questions). [6], on the other hand, argue that EcQ interpretations arise mostly from discourse considerations. We present data from a study on English wh-questions, investigating prosodic differences across conditions (InfoS/EcQPer/EcQEp) in two wh-word orders. This is a first step towards deciding between semantic-formal analyses (which rely on cues that justify a different semantics) and discourse-based proposals (in which echoicity is derived discursively).

We manipulated the position of the wh-word (fronted vs. final, between-subjects in two experiments) and illocution type (EcQPer, EcQEp, InfoS, within-subjects) on 24 wh-utterances and three contexts (one for each condition). For each word order, the sentences had the same sentence structure, number of syllables and stress patterns. We tested 12 native English participants for each experiment (between 18 and 35 years, 12 female, 12 male). In each trial the context and target interrogative were shown on screen for as long as the participants needed. Participants then said the target sentence out loud and proceeded to the next trial. They were asked to utter the sentence in the most natural way given the context. The productions were annotated at the word level by one of the authors. F0-tracking errors were manually corrected, and the f0 values were processed using ProsodyPro ([7]). The continuous analysis of f0 contours revealed differences across conditions (more in fronted wh-words, Fig. 1), but these cannot be linked to pitch accents on content words. Half of the utterances were annotated using MAE-ToBI ([8]). To assess the reliability of the prosodic annotation, another author annotated 20% of the productions (accuracy 76%, Cohen's kappa 72%). Furthermore, word durations and the f0-range of accented words were extracted. ToBI labels were analyzed using a logistic hierarchical regression model, durations and f0 excursions with linear mixed-effects regression models, both with participants and items as crossed random intercepts ([9]); p-values were adjusted using the Benjamini-Hochberg correction ([10]).

Results showed that wh-words were mostly rising (L* H-H% in final position: 88%, 90%, and 65% in EcQEp, EcQPer, and InfoS, respectively; L*+H in fronted position: 90%, 88%, and 55%), significantly more so in the two echo conditions than in the InfoS condition, see Fig. 2a. The noun was typically accented in both word orders, with no differences across conditions (mostly L*+H when the wh-word was in final position and H* when the wh-word was fronted). The f0-range in the wh-word differed between EcQEp and EcQPer and between EcQEp and InfoS, but not between EcQPer and InfoS (p > 0.5), see Fig. 2b. The same pattern held for the duration of the wh-word.
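The Benjamini-Hochberg step can be illustrated with a small sketch; the p-values below are placeholders rather than the study's results, and the regression models themselves (logistic hierarchical regression and linear mixed-effects models, fitted with lme4 in R following [9]) are not reproduced here.

from statsmodels.stats.multitest import multipletests

raw_p = [0.004, 0.021, 0.048, 0.130, 0.510]  # illustrative per-comparison p-values
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for p, pa, r in zip(raw_p, p_adj, reject):
    print(f"raw p = {p:.3f}, BH-adjusted p = {pa:.3f}, significant: {r}")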

In sum, there are a number of phonological and phonetic differences across conditions: the wh-word was more often rising in echo questions than in information-seeking questions, but there do not seem to be prosodic differences that allow us to make categorical distinctions between EcQs and InfoSs. Phonetically, EcQEp, which have a hint of surprisal, differed from both EcQPer and InfoS. The results do not show support for a different semantics for EcQs and InfoSs.

Figure 1. Averaged f0 contours for the two word orders, as derived from ProsodyPro ([7]): (a) fronted wh-word, (b) final wh-word. Grey lines indicate the 95% confidence interval of the mean.

Figure 2. Frequency of occurrence of the different ToBI labels for the wh-word (a) and average f0-range in semitones (b).

References
[1] Artstein, Ron. 2002. A focus semantics for echo questions. In Workshop on Information Structure in Context, ed. Agnes Bende-Farkas and Arndt Riester, 98-107. IMS Stuttgart.
[2] Bartels, Christine. 1999. The intonation of English statements and questions. Garland Publishing.
[3] Beck, Sigrid, and Marga Reis. 2018. On the form and interpretation of echo wh-questions. Journal of Semantics, 1-40.
[4] Krifka, Manfred. 2011. Questions. In von Heusinger, Maienborn & Portner (eds.), Semantics: An international handbook of natural language, Vol. 2, 1742-1748.
[5] Repp, Sophie, and Lena Rosin. 2015. The intonation of echo wh-questions. In Proceedings of the 16th Annual Conference of the International Speech Communication Association. Dresden, Germany: Interspeech.
[6] Ginzburg, Jonathan, and Ivan Sag. 2001. Interrogative investigations. Stanford, CA: CSLI Publications.
[7] Xu, Y. 2013. A tool for Large-Scale Systematic Prosody Analysis. In Proceedings of Tools and Resources for the Analysis of Speech Prosody (TRASP 2013), 7-10. Aix-en-Provence, France.
[8] Beckman, Mary E., Julia Hirschberg & Stefanie Shattuck-Hufnagel. 2005. The original ToBI system and the evolution of the ToBI framework. In Sun-Ah Jun (ed.), Prosodic typology, 9-54. Oxford: Oxford University Press.
[9] Bates, D., Maechler, M., Bolker, B., & Walker, S. 2014. lme4: Linear mixed-effects models using Eigen and S4. R package version 1.0-6. http://CRAN.R-project.org/package=lme4.
[10] Benjamini, Y., & Hochberg, Y. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B 57, 289-300.

Visual channel influences the accuracy of listeners' comprehension of Brazilian Portuguese intonation of wh-questions and wh-exclamations

Luma da Silva Miranda1, João Antônio de Moraes2,3, Albert Rilliard2,4

1Eötvös Loránd University, Hungary; 2Federal University of Rio de Janeiro, Brazil; 3CNPq, Brazil; 4Université Paris-Saclay, CNRS, LISN, France

The multimodality of speech helps listeners to perceive prosodic functions such as sentence mode [1,2,3,4,5,6]. One of the protocols applied to evaluate the particular influence of the visual channel is based on McGurk & MacDonald's work [7], which uses the audio and visual modalities to convey a set of categories. The particular category presented through each modality may be the same (congruent presentations) or not (incongruent presentations). This approach aims at measuring the relative influence of each modality on a given function [8]. In a previous study [6] on the multimodal production and perception of Brazilian Portuguese (BP) wh-questions and wh-exclamations, it was shown that, acoustically, there is a steeper F0 fall in the nucleus for the former and a slighter movement for the latter, plus a higher initial F0 in wh-questions than in wh-exclamations. The intensity patterns of the two contours also differ. Visually, specific facial gestures for each speech act were described: a lowered eyebrow plus the head turned right for wh-questions; a lip-corner raiser and an up-and-down head movement for wh-exclamations. A perception test showed that BP listeners can identify each contour based on the auditory and visual cues, either separately or combined.

In the present study, the audio and the visual performances of BP wh-questions and wh-exclamations were mixed to create congruent (same speech act) and incongruent (different speech acts in the two modalities) stimuli, so as to evaluate the relative strength of these modalities in the identification of these speech acts. The stimuli were recorded by ten BP speakers (five females and five males) from Rio de Janeiro who repeated both speech acts ten times (two hundred utterances). The eighth and ninth repetitions of each act produced by each speaker were chosen so as to avoid a qualitative selection of the performances. The production of the stimuli was done with the Vegas Pro software [9] following these steps: for the congruent stimuli, two repetitions of the same speech act were used and the audio of one repetition was dubbed onto the video of the other; for the incongruent stimuli, one repetition of each speech act was used, dubbing the audio of one speech act onto the video from the other speech act. The outcome of this procedure was a set of forty stimuli: ten congruent and ten incongruent stimuli for each of the wh-questions and wh-exclamations. Using the PsyToolkit website [10,11], a perceptual identification protocol was designed and applied over the internet. Thirty-six BP listeners participated; they had to identify, for each stimulus, the mode of the perceived sentence (wh-question or wh-exclamation).

The answers to each stimulus were then coded as a "hit" (1) or a "miss" (0) depending on whether the judge correctly identified the mode of the audio modality. This was done because the audio modality is supposed to dominate; we evaluate here the effect on the audio identification ratio introduced by the visual modality. It is hypothesized that an incongruent presentation will decrease identification, while a congruent one will increase it. The proportion of "hits", or "ratio of correct answers", was used as the dependent variable of a logistic regression, which used as independent variables the "speech act" (two levels: wh-question, PQ, and wh-exclamation, EX), the "congruence" between modalities (two levels: congruent, C, and incongruent, I), and the "speaker group" that produced the stimuli (five levels of speaker groups showing coherent answers were obtained at the end of a simplification procedure; see [12]).
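A rough sketch of this analysis step is given below. The original analysis was run in R (cf. the model simplification procedure of [12]); the Python version here, with assumed column names and a plain logistic regression over trial-level hit/miss responses, is only meant to illustrate the structure of the model, not to reproduce it.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: hit (1/0), speech_act (PQ/EX),
# congruence (C/I) and speaker_group (1-5).
df = pd.read_csv("av_identification.csv")

model = smf.logit(
    "hit ~ C(congruence) * C(speaker_group) + C(speech_act) * C(speaker_group)",
    data=df).fit()
print(model.summary())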

The model showed a main effect of the factors "congruence" and "speaker groups" and significant interactions between "speaker groups" and "congruence", as well as between "speaker groups" and "speech act". The main effect of the "congruence" factor indicated that the incongruent stimuli significantly lowered the ratio of correct identification. Fig. 1 presents the mean identification ratio for the significant interactions. One can observe that the identification level for each speech act is mostly above 60%. This illustrates not only the primary role of the audio modality for the expression of sentence mode, in line with previous studies [3,4,5,8], but also the speaker-dependent use of modalities for the expression of these speech acts in Brazilian Portuguese: some speakers rely more than others on visual performances, but the audio modality is always dominant. It also shows that the PQ expression has a higher identification ratio than EX when produced by some speakers: this underlines the importance of individual performances.

Figure 1: Means and confidence intervals estimated by the regression model for the interactions between factors “Congruence” (separate lines) and “Speaker groups” (x-axis) (left), and between “Speech act” (separate lines) and “Speaker groups” (x-axis) (right).

References

[1] House, D. 2002. Intonational and visual cues in the perception of interrogative mode in Swedish. Proceedings of the 7th International Conference on Spoken Language Processing, Denver, Colorado, pp. 1957-1960.
[2] Srinivasan, R. J., Massaro, D. W. 2003. Perceiving prosody from the face and voice: Distinguishing statements from echoic questions in English. Language and Speech, 46(1), pp. 1-22.
[3] Borràs-Comes, J., Prieto, P. 2011. 'Seeing tunes.' The role of visual gestures in tune interpretation. Laboratory Phonology, 2(2), pp. 355-380.
[4] Cruz, M., Swerts, M., Frota, S. 2017. The role of intonation and visual cues in the perception of sentence types: Evidence from European Portuguese varieties. Laboratory Phonology, 8(1), 23, pp. 1-24.
[5] Miranda, L., Swerts, M., Moraes, J., Rilliard, A. 2021. The role of the auditory and visual modalities in the perceptual identification of Brazilian Portuguese statements and echo questions. Language and Speech, 64(1), pp. 3-23. DOI: https://doi.org/10.1177/0023830919898886
[6] Miranda, L. S., Moraes, J., Rilliard, A. 2019. Audiovisual perception of wh-questions and wh-exclamations in Brazilian Portuguese. Proceedings of the 19th International Congress of Phonetic Sciences, August 5-9, Melbourne, Australia, pp. 2941-2945.
[7] McGurk, H., MacDonald, J. 1976. Hearing lips and seeing voices. Nature, 264, pp. 746-748.
[8] Swerts, M., Krahmer, E. 2004. Congruent and incongruent audiovisual cues to prominence. In: Bel, B., Marlin, I. (eds.), Proceedings of the 2nd International Conference on Speech Prosody, Nara, Japan, pp. 69-72.
[9] Vegas Pro software, Version 14 of Vegas Pro. 2016. Copyright © [2016] MAGIX. Software available at: https://www.vegascreativesoftware.com/.
[10] Stoet, G. 2010. PsyToolkit - A software package for programming psychological experiments using Linux. Behavior Research Methods, 42(4), pp. 1096-1104.
[11] Stoet, G. 2017. PsyToolkit: A novel web-based method for running online questionnaires and reaction-time experiments. Teaching of Psychology, 44(1), pp. 24-31.
[12] Crawley, M. J. 2013. The R book (Second edition). United Kingdom: Wiley.

Voice quality and speaking rate in Icelandic rhetorical questions

Nicole Dehé & Daniela Wochner
University of Konstanz

Breathy or softer voice quality and longer durations/slower speaking rates have turned out to be salient features that discriminate rhetorical questions (RQs) from information-seeking questions (ISQs) across languages (e.g., Braun et al. 2018; Dehé & Braun 2020a; Asu et al. 2020; Zahner et al. 2020). For Icelandic, Braun and Dehé (2020b) show that polar and wh-RQs and ISQs differ with regard to nuclear pitch accent type, peak timing and scaling of the nuclear pitch accent, height of initial pitch, as well as the number of prenuclear accents. The following post hoc study analyzes the data from Braun and Dehé (2020b) in terms of voice quality and speaking rate. Of main interest are the questions whether Icelandic RQs are produced with a higher occurrence of breathy voice quality and slower speaking rates than ISQs, and whether Icelandic thus adds to the cross-linguistic picture of the phonetics of RQs.

The data was collected in a production experiment mimicking dialogue situations. 17 native speakers of Icelandic (avg. age 26.9; 11 female, 6 male) produced polar and wh-questions that, depending on the given contextual information, functioned as either RQ or ISQ. Eventually, 645 target utterances (see a. and b. for examples) entered the analyses.

a. Hver borðar rækjur? 'Who eats shrimp?'
b. Þarf einhver stensil? 'Does anyone need stencils?'

Voice quality (VQ: modal, breathy and glottalized) was perceptually annotated at four positions of the target utterances, following the procedure of Braun et al. (2019): at the wh-word/the finite verb in V1, the finite verb in V2/the stressed syllable of einhver ('anybody'), the sentence-final objects, and at the offset of the utterance. Speaking rate was operationalized as the number of syllables per second. For the statistical analysis, series of linear mixed models and general linear mixed models were run with illocution type (ISQ, RQ) and question type (wh, polar) as fixed factors and participants and items as crossed random factors. The following reports on statistically significant results only (p < 0.05).
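Since speaking rate is operationalized as syllables per second, the measure reduces to a single division; the numbers in the sketch below are invented for illustration.

def speaking_rate(n_syllables, duration_s):
    # Number of syllables divided by the utterance duration in seconds.
    return n_syllables / duration_s

print(speaking_rate(12, 2.0))   # e.g. 12 syllables in 2.0 s -> 6.0 syllables per second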
Regarding VQ (Figure 1), polar questions were more often realized with initial glottalized VQ than wh-questions. RQs were less often realized with initial modal VQ than ISQs, and polar questions were less often realized with modal VQ than wh-questions. In second position (verb in V2/einhver), RQs were realized with glottalized VQ more often than ISQs, whereas ISQs were produced with modal VQ more often than RQs. At the offset of the utterance, RQs showed more occurrences of breathiness than ISQs. Also, there was an interaction between illocution type and question type for glottalized VQ: in polar questions, ISQs were more often realized with final glottalized VQ than RQs; conversely, glottalized VQ was more frequent in wh-RQs than in wh-ISQs. An interaction between illocution type and question type was also observed for modal VQ, suggesting a stronger difference for modal VQ in wh-questions than in polar questions: RQs were less often realized with modal VQ than ISQs.

Figure 1. Voice quality in wh- (left) and polar (right) questions.

Regarding speaking rate (Figure 2), RQs had a slower speaking rate than ISQs (polar: 5.2 (RQs) vs. 6.4 (ISQs) syllables per second; wh: 4.3 (RQs) vs. 5.7 (ISQs)). Wh-questions were produced with slower speaking rates than polar questions.

Figure 2. Average speaking rate of polar and wh-RQs and ISQs (+/- 1 SE).

The results for speaking rate are in line with results for other languages, supporting the suggestion that temporal cues are used cross-linguistically to distinguish between RQs and ISQs in prosody. Breathy VQ was mainly realized in utterance-final position in our data. We interpret this as an interaction between VQ and intonation. In Icelandic, all utterance types, irrespective of grammatical sentence type and pragmatic function, have L% (Árnason 1998, 2005, 2011, Dehé & Braun 2020b). It is therefore conceivable that Icelandic speakers exploit the manipulation of VQ as a compensation strategy for the use of boundary tones in final position. As an apparent general boundary marker, used in both ISQs and RQs to a considerable extent, glottalized voice is not the preferred option to signal rhetorical meaning. Instead, breathy voice marks the offset of RQs.

References
Árnason, K. 2005. Hljóð. Handbók um hljóðfræði og hljóðkerfisfræði. Reykjavík: Almenna bókafélagið.
Árnason, K. 2011. The Phonology of Icelandic and Faroese. Oxford: Oxford University Press.
Asu, E. L., Sahkai, H. & Lippus, P. 2020. The prosody of rhetorical and information-seeking questions in Estonian: preliminary results. In Proceedings of the 10th International Conference on Speech Prosody (Speech Prosody 2020), 381-384. Tokyo, Japan.
Braun, B., Dehé, N., Neitsch, J., Wochner, D. & Zahner, K. 2019. The prosody of rhetorical and information-seeking questions in German. Language and Speech 62, 779-807.
Dehé, N. 2014. Final devoicing of /l/ in Reykjavík Icelandic. In Proceedings of the 7th International Conference on Speech Prosody (Speech Prosody 2014), Dublin, Ireland.
Dehé, N. & Braun, B. 2019. The prosody of rhetorical questions in English. English Language and Linguistics. Online first view, DOI: https://doi.org/10.1017/S1360674319000157.
Dehé, N. & Braun, B. 2020b. The intonation of information-seeking and rhetorical questions in Icelandic. Journal of Germanic Linguistics 32, 1-42.
Zahner, K., Xu, M., Chen, Y., Dehé, N. & Braun, B. 2020. The prosodic marking of rhetorical questions in Standard Chinese. In Proceedings of the 10th International Conference on Speech Prosody (Speech Prosody 2020), 389-393. Tokyo, Japan.


WORKSHOPS


Intonational phonetics and register variation in two Oceanic languages
Janet Fletcher
University of Melbourne

For many years, Ladd's seminal text "Intonational Phonology" (Ladd 1996/2008) has been the go-to reference for anyone aspiring to document intonational variation in languages and language varieties. Among the many themes explored by Ladd and his many collaborators are the ways in which variation in tonal alignment and scaling can be meaningful in a range of different languages. In treatments of under-described languages, determining whether tone scaling, for example, is contrastive or merely reflective of paralinguistic gradience is a critical question to consider when attempting to document intonational categories. In this talk I will present some recent work on intonational variation in two very different languages of the Pacific: Nafsan (Vanuatu) and Tahitian (French Polynesia). The latter has been relatively well documented phonologically, the other less so. Neither has had a full intonational description beyond impressionistic observations until relatively recently. As in many languages of the world, tonal scaling, rather than intricate tonal alignment differences, for example, appears to play a critical meaningful role.

Ladd, D. R. (2008). Intonational Phonology. Cambridge: CUP.

WORKSHOP
Forms and Representations in Phonology: In Honour of Bob Ladd

Defining prosody in relation to plurality of structuring
Mary Beckman
The Ohio State University

In this talk, I pay tribute to Bob Ladd (teacher, mentor, colleague, friend) by trying to connect the dots among the many profound insights he offers in the chapters of his book Simultaneous structure in phonology. One of those chapters is in itself a tribute to a man who was our mutual teacher, Charles F. Hockett. In the chapter "On duality of patterning" Bob brings out a long-standing problem in our understanding of this "design feature" of human language. All too often, Hockett's idea is conflated with something very different, the idea of "la double articulation" that André Martinet expounded around the same time that Hockett was formulating his own very different views of language structure. Martinet's notion is that utterances can be segmented at two levels of analysis to form a hierarchy of units. To use Martinet's own example, an utterance of the sentence J'ai mal à la tête. (as produced in my dialect of English) can be segmented at the first level into four morphemes:

{my} + {head} + {hurt} + {s}

and at the second level, into nine phonemes:

/m/ + /aɪ/ + /h/ + /ɛ/ + /d/ + /h/ + /ɚ/ + /t/ + /s/

with the hierarchical relationship of "composition" holding between the two strings. That is, the morpheme {my} "is composed of" the phonemes /m/ + /aɪ/, the morpheme {head} "is composed of" the phonemes /h/ + /ɛ/ + /d/, and so on. Bob points out the difficulty that this understanding of the relationship between morphemes and phonemes poses for the analysis of Mandarin Chinese morphemes, and suggests that the difficulty is circumvented if we instead apply a more accurate understanding of Hockett's notion.

To fully appreciate the insight offered here, we can go back to Hockett's own writings in which he explicitly rejected Martinet's idea (e.g., Hockett 1955 pp 14ff, Hockett 1961). These writings make clear that Hockett's idea is more properly called "plurality of structuring" and can be connected to insights in other chapters of Bob's book. For example, by calling it "structuring" rather than "patterning" we invoke the idea (expressed so eloquently in Intonational phonology as well as in several of the other chapters in this book) that the phonology of my utterance of J'ai mal à la tête. is a hierarchical structure in its own right. This structure is independent of the sentence's morphosyntactic structure and governed by prosodic rules of segmentation and composition that are independent of morphology and syntax. These rules are language-specific, but related to the physics and physiology of speech in ways that allow listeners to impute a phonological structure to another's utterance from the phonetic correspondences even when they have no clue to the morphosyntactic structure. This view of prosody as "raw organizational structure" allows for a richer typology of potential leaf nodes in the prosodic hierarchies of signed languages as well as of spoken languages.
It also could lead to more explanatory models of how segments might emerge in the vocalizations of babies as they interact with their caregivers.

Tune and Text - chat or chatter?
Martine Grice
University of Cologne

An important part of Ladd's work on autosegmental-metrical phonology deals with how tune and text are associated, both in speech and song, and how adjustments have to be made in cases where the tune is complex but there is not enough segmental material to bear the tones (inter alia Ladd, 2008). These adjustments can be made to the tune, involving truncation of a melody or the choice of an alternative, simpler melody, or to the text, involving the prolongation of available syllables, which in some cases involves phonological lengthening. In this talk I focus on a further adjustment to the text that takes place in cases where vowel lengthening is not possible, either because the phonology does not permit this or because there is no vowel to prolong. In these cases, vowels are inserted. Two languages serve as examples, Italian and Tashlhiyt Berber.

In Italian, loan words with a final consonant are often pronounced with an additional word-final schwa. The insertion of this schwa is highly variable and dependent on a number of factors (e.g. speaker-specific preferences, metrical structure and the identity of the word-final consonant). Crucially, a considerable amount of variation is conditioned by intonation: a vowel is more likely to occur – and is acoustically more prominent – if the intonation is complex or rising than if it is falling (Grice, Savino & Roettger, 2018).

Tashlhiyt Berber, known for its long consonantal sequences, inserts vowels word-finally and word-medially (Ridouane & Fougeron, 2011; Grice, Ridouane & Roettger, 2015). Here too, this insertion is not only affected by speaker-specific patterns and properties of adjacent consonants, but also by intonation, with a vowel being more likely to surface in positions in which communicatively relevant tonal movements are realised.

In both of these languages, the insertion of a vowel creates a segmental environment with high periodic energy and a rich harmonic structure, which is optimal for realising intonation contours. The need to realise intonational tones thus leads to the enhancement of the segmental makeup of the material bearing these tones. Inserting vocalic material serves to increase the likelihood of transmitting and retrieving intonational meaning (Roettger & Grice, 2019), in line with findings that suggest that the interpretation of intonational contours is strongly impaired if conflicting requirements of tune and text lead not to enhancement of the text but to a truncation of the tune (Odé 2005; Rathcke 2013). Insofar as vowel insertion facilitates the production and perception of intonation, it is relevant for autosegmental-metrical phonology and indicates that tune and text are to some extent interdependent.

References
Grice, Martine, Rachid Ridouane & Timo Roettger. (2015). Tonal association in Tashlhiyt Berber: Evidence from polar questions and contrastive statements. Phonology 32(2): 241-266.
Grice, Martine, Michelina Savino & Timo Roettger. (2018). Word final schwa is driven by intonation – The case of Bari Italian. Journal of the Acoustical Society of America, 143(4): 2474-2486.
Ladd, D. Robert. (2008). Intonational Phonology. Cambridge: Cambridge University Press.
Odé, Cecilia (2005). Neutralisation or Truncation? The perception of two Russian pitch accents on utterance-final syllables. Speech Communication, 47(1-2), 71-79.
Rathcke, Tamara (2013). On the neutralizing status of truncation in intonation: A perception study of boundary tones in German and Russian. Journal of Phonetics, 41(3-4), 172-185.
Ridouane, Rachid & Cecile Fougeron (2011). Schwa elements in Tashlhiyt word-initial clusters. Journal of the Association of Laboratory Phonology, 2(2), 275-300.
Roettger, Timo & Martine Grice (2019). The tune drives the text - Competing information channels of speech shape phonological systems. Language Dynamics and Change. DOI:10.1163/22105832-00902006.

The tone-syllable relationship: A moraic account
Haruo Kubozono
NINJAL

This talk discusses the relationship between the tone and the syllable based on Ladd's (1996: 132) observation that "In some languages… there seems to be a limit to the number of tones that can be realised on a single syllable, the most common limit being two". It provides empirical evidence for this observation through an analysis of vocative (calling) intonation in several dialects of Japanese. Specifically, a close examination of tonal crowding reveals that Japanese dialects attempt to avoid syllables with multiple tones in several ways: (i) vowel lengthening (mora augmentation), (ii) H tone retraction, (iii) tone coalescence, and (iv) tone deletion. These seemingly independent phenomena conspire to create a situation where maximally two tones can be realised on a single syllable in Japanese. I argue that there exist two underlying principles behind this situation, one by which one mora can bear up to one tone, and the other by which syllables can be maximally two moras long.

Me, You and Language too - why a language has multiple and multiply-overlapping phonetics and phonologies
James M. Scobbie
Queen Margaret University, Edinburgh

It is interesting to explore the tensions between "sub-parts" of phonology with different properties (e.g. prosody and segmental phenomena) and between competing functional pressures arising in different aspects of phonetics (acoustic, perceptual, articulatory), as well as tensions between cognitive-physiological, social and system-internal pressures.
Discussion of these tensions tends towards the dualistic: substance vs. system, etc. Laboratory phonology, I have previously claimed, has attempted to apportion difficult interface phenomena to either phonetics or phonology (using both formal and experimental argumentation), and I have argued that we need to be comfortable with a degree of nondeterministic overlap: a speech community needn't agree completely on its phonologisation, and individuals don't need to systematise absolutely everything with the same degree of clarity.

This talk presents a tripartite model of these domains and interfaces within a broad exemplar worldview. The model cross-cuts phonetics/phonology, neither of which is viewed as a fundamental module or level. Instead, each of these "phons" is multiple. This model still emphasises flexibility and non-determinism, and claims that, because we should not seek one correct system in monolingual phonology, we can better understand how (some) experimental results can be expected to be (sometimes) as ambiguous as the phenomena they investigate. Such ambiguity is a necessary part of flexibility, and flexibility is necessary for development and change. But, rather than seeing phonetics and phonology as fundamental, each sound system is seen as a fluctuating compromise between three forces: tendencies towards systematisation promoted by the autonomous and academic meta-analysis of sub-lexical and sub-sentential elements and their possible systematisation (Language); internal demands arising from properties of the mental and embodied self (Me); and external demands arising from communication with others (You). No two individuals are identical, nor do they have the same experience with others; and neither the self nor the community has priority. Thus there can be no unitary phonology, just as we already accept there is no unitary phonetics.

Underpinnings of sentential pitch accents
Carlos Gussenhoven
National Yang Ming Chiao Tung University, Taiwan

As amply attested in Ladd's Patterns of sentence stress (2008: ch 6), cross-linguistic generalizations about the locations of sentential pitch accents in languages present the usual typological picture of similarities and differences. Generally, we are able to say that, across languages, a given structural feature, say the segmental composition of syllable rimes or the aspectual system of verbs, shows variations on a theme. In the case of pitch accent locations, there are a number of themes, or 'underpinnings'. Among these are the phonological weakness of verbs, the importance of focused information, rhythm, and constituency. The first two of these were extensively discussed in Ladd (1980) and Ladd (1996, 2008), respectively. I will give examples from different languages of grammatical instantiations of the underpinnings, stressing that although these remain readily discernible, the forms these generalizations take can be very different. As a taster, I note here that rhythm shows up as a phonological rule in the English Rhythm Rule, but as a morphosyntactic rule in Nubi Adjective Accent Shift, while Dutch Non-Nuclear Retraction is best taken to be a case of pre-compiled phonology.

WORKSHOP
Acquisition at the interface between prosody, gesture, and pragmatics

Pointing and deixis: quantitative and longitudinal survey on a multimodal corpus
Luca Lo Re, Alessandro Panunzi and Valentina Spagnoletti
University of Florence

Several studies have shown a relationship between an increase in gestures and the onset of linguistic phenomena such as the vocabulary spurt, the development of combinatory capacity and speech abilities in early language acquisition. We aim to present a multimodal resource, based on audio-video recordings of three infants aged between 14 and 20 months at the start of data collection, who were monitored in a spontaneous context once a month for one year. The participants are three typically-developing children (2 first-born boys and 1 first-born girl), videotaped monthly in their homes during spontaneous play situations and interaction with a caregiver (Sandra, the nanny) when they were respectively between 14 and 28, 18 and 30, and 20 and 33 months of age. Audio and video were collected unobtrusively using two GoPro Hero cameras (for two different points of view) and a Zoom H6 microphone over 13 sessions of about one hour and a half each. Once we had the material, we proceeded with a first aligned transcription of the first eight recordings to identify the main phenomena, such as relevant linguistic and pre-linguistic forms (including gestures and cross-modal combinations); this phase was preparatory to drawing up the tagset. The tagset was based on the literature and verified on the data, and consists of twenty-eight tags: twenty-three for spoken phenomena (reduplicated babbling, variegated babbling, phonologically consistent forms, onomatopoeia, proto-words, phono-articulation exercises, first words, horizontal repetition, linked words, vertical repetition, vertical construction, formulaic expression, 'placeholder' word, false combination, words combination, echolalia, interjection, neologism, singing, lacking nuclear phrase, complete nuclear phrase, extended nuclear phrase, binuclear phrase) and five for gesture (performative gesture, performative gesture-pointing, referential gesture, transmodal forms and transmodal forms-pointing). Each tag corresponds to at least one phenomenon in the data. The data are annotated with the software ELAN. The annotation template in ELAN includes three tiers for each child: the first tier is for the events (such as a particular action or the vowel that the child uses during babbling), the second tier is for the phenomenon (tagset) and the third tier is for the interlocutor. In fact, interaction is the main feature of our resource. The tags used to annotate the interaction are 'caregiver', 'child' and 'per se', the latter indicating when the child does something for himself.

In a first analysis of seven sessions, we grouped the tags of our tagset into four macro-categories: 1) gestures; 2) non-compositional linguistic forms; 3) compositional linguistic forms; 4) transmodal phenomena. The resulting diagrams (following figures), which display the percentages of the four categories relative to the total number of produced forms in each session, show similar results for the three children (despite their different ages). Non-compositional linguistic forms decrease over time while compositional linguistic forms increase. Gestures alone remain scarce, whereas the number of transmodal forms is higher and fairly constant or increasing. We aim to investigate this latter aspect more deeply, inspecting transmodal forms, particularly those involving a pointing gesture, in order to find a correlation between this phenomenon and the onset of compositionality in the language of our subjects. As for compositionality, it is worth noting that it seems to arise at the very same point for the three children, independently of their age: this particular aspect could be the signal of a reciprocal influence of the subjects during language development.
(1) Diagrams of results for each child
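The session-by-session percentages described above can be computed straight from the ELAN files. The following is only an illustrative sketch, assuming the Python library pympi for reading .eaf files; the tier name, the tag-to-category mapping and the file name shown here are placeholders, not the authors' actual setup.

```python
from collections import Counter
from pympi import Eaf  # pympi-ling reads/writes ELAN .eaf files

# Hypothetical (partial) mapping from tags to the four macro-categories above.
MACRO = {
    "performative gesture": "Gesture",
    "referential gesture": "Gesture",
    "reduplicated babbling": "Non-compositional",
    "first words": "Non-compositional",
    "words combination": "Compositional",
    "binuclear phrase": "Compositional",
    "transmodal forms": "Transmodal",
    "transmodal forms-pointing": "Transmodal",
    # ...the remaining tags of the 28-tag set would be mapped here
}

def category_percentages(eaf_path, tier="phenomenon"):
    """Percentage of each macro-category among all annotated forms in one session."""
    eaf = Eaf(eaf_path)
    # Annotations on a tier come back as (start_ms, end_ms, value) tuples.
    tags = [ann[2] for ann in eaf.get_annotation_data_for_tier(tier)]
    counts = Counter(MACRO.get(tag, "Other") for tag in tags)
    total = sum(counts.values()) or 1
    return {category: 100 * n / total for category, n in counts.items()}

if __name__ == "__main__":
    print(category_percentages("session_01.eaf"))
```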

[1] Capirci, O., Contaldo, A., Caselli, M. C., & Volterra, V. (2005). From action to language through gesture. Gesture, 5(1–2), 155–177. https://doi.org/10.1075/gest.5.1.12cap
[2] Özçalişkan Ş., Goldin-Meadow S. (2005). Gesture is at the cutting edge of early language development. Cognition, 96, B101-B113.
[3] Morgenstern A. (2014). The blossoming of children's multimodal skills from 1 to 4 years old. In Müller C., Cienki A., Ladewig S. H., McNeil D., Bressem J. (eds), Body - Language - Communication, de Gruyter, 1848-1857.
[4] Tomasello M., Carpenter M., Liszkowski U. (2007). A new look at infant pointing. Child Development, 78 (3), 705-722.
[5] Fontana S. (2009). Linguaggio e multimodalità: gestualità e oralità nelle lingue vocali e nelle lingue dei segni. Edizioni ETS.
[6] Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., Sloetjes, H. (2006). ELAN: a Professional Framework for Multimodality Research. In: Proceedings of LREC 2006, Fifth International Conference on Language Resources and Evaluation.
[7] ELAN (Version 6.0) [Computer software]. (2020). Nijmegen: Max Planck Institute for Psycholinguistics, The Language Archive. Retrieved from https://archive.mpi.nl/tla/elan.

The important discourse-pragmatic and prosodic properties of children's early spontaneous non-referential beat gestures in predicting later narrative structure
Ingrid Vilà-Giménez1,2, Natalie Dowling3, Ö. Ece Demir-Lira4,5,6, Pilar Prieto7,1, & Susan Goldin-Meadow8,3
1Dept. of Translation and Language Sciences, Universitat Pompeu Fabra, Barcelona, Catalonia; 2Dept. of Subject-Specific Education, Universitat de Girona, Girona, Catalonia; 3Dept. of Comparative Human Development, University of Chicago, Illinois, USA; 4Dept. of Psychological and Brain Sciences, University of Iowa, USA; 5DeLTA Center; 6Iowa Neuroscience Institute; 7Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Catalonia; 8Dept. of Psychology, University of Chicago, Illinois, USA

Oral language skills play an important role in subsequent successful school literacy. Given the fact that early gestures not only precede, but also predict simple linguistic milestones (e.g., [1]), it is important to ask whether the early production of gestures could also signal oncoming changes in more elaborated discourses including more complex language skills. While recent studies have shown the predictive role of referential iconic gestures produced in narratives in children's ability to perform well-structured stories [2, 3], to our knowledge, no previous study has examined the role of non-referential gestures (i.e., beat and flip gestures), in comparison to other gesture types, produced in children's early spontaneous speech in predicting later narrative abilities. Our hypothesis on the predictive value of non-referential beat gestures is based on evidence that suggests that (1) these gestures associate with key positions in discourse which are typically associated with speech prominence, and that (2) they have a range of pragmatic and structuring properties in discourse (e.g., [4]). The current study investigates the predictive role of early spontaneous non-referential beat and flip gestures in children's later narrative structure, in comparison to referential iconic gestures. To examine whether the early production of non-referential beat and flip gestures at 14 to 58 months of age (vs. referential iconics) predicts later narrative productions at 60 months (5 years old), a GLMM analysis was undertaken with a longitudinal database including 45 child-parent dyads who were visited every four months between 14 and 58 months of age. Each in-home visit consisted of 90 minutes of parent-child naturalistic interactions. Recordings included mealtimes, book readings, as well as play sessions. This longitudinal database is part of a larger longitudinal study of language development at the University of Chicago. A narrative production task was administered at 60 months (data from [5]).
Results revealed that early spontaneous non-referential beat gestures were significant predictors (β = 0.299, SE = 0.111, z = 2.689, p < .01) of later narrative productions (measured in terms of structural wellformedness), but not non-referential flips (β = -0.163, SE = 0.109, z = -1.489, p = .137) nor referential iconic gestures (β = 0.029, SE = 0.077, z = 0.381, p = .703). This study thus shows for the first time that children's early frequency of use of non-referential beat gestures in parent-child naturalistic spontaneous interactions between 14 and 58 months of age forecasts their narrative abilities at a later stage in development. Importantly, these findings are in line with previous research that has demonstrated the pragmatic, discursive and prosodic properties of non-referential beat gestures. Non-referential beat gestures not only act as meaningful prosodic pragmatic cues that mark certain aspects of the structure of the discourse (i.e., information structure, discourse structure and rhythm) (e.g., [6–8]), but can also show the illocutionary act that a speaker is engaged in, and indicate how a specific part of the spoken discourse should be interpreted [9].
In conclusion, although our findings did not tell us whether producing non-referential beat gestures simply reflects a child's skill in framing discourse or highlighting aspects of prosodic focus (e.g., emphasis), we argue that the information and discursive structure as well as the prosodic properties found in children's early non-referential beat gestures may be key for children's early discourse and later narrative development. These findings constitute a novel contribution to the literature which highlights the relevance of non-referential beat gesture production (and their early pragmatic role) for children's narrative development.

References
[1] Iverson, J. M., & Goldin-Meadow, S. (2005). Gesture paves the way for language development. Psychological Science, 16(5), 368–371.
[2] Demir, Ö. E., Levine, S. C., & Goldin-Meadow, S. (2015). A tale of two hands: children's early gesture use in narrative production predicts later narrative structure in speech. Journal of Child Language, 42(3), 662–681.
[3] Vilà-Giménez, I., Demir-Lira, Ö. E., & Prieto, P. (2020). The role of referential iconic and non-referential beat gestures in children's narrative production: Iconics signal oncoming changes in speech. Proceedings of the 7th Gesture and Speech in Interaction (GESPIN). KTH Speech, Music & Hearing and Språkbanken Tal. Stockholm, Sweden.
[4] Shattuck-Hufnagel, S., Ren, A., Mathew, M., Yuen, I., & Demuth, K. (2016). Non-referential gestures in adult and child speech: Are they prosodic? In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Speech Prosody 2016 (pp. 836–839). Boston, United States of America. International Speech Communication Association.
[5] Demir, Ö. E., Fisher, J. A., Goldin-Meadow, S., & Levine, S. C. (2014). Narrative processing in typically developing children and children with early unilateral brain injury: Seeing gesture matters. Developmental Psychology, 50(3), 815–828.
[6] Im, S., & Baumann, S. (2020). Probabilistic relation between cospeech gestures, pitch accents and information status. Proceedings of the Linguistic Society of America, 5(1), 685–697.
[7] Florit-Pons, J., Rohrer, P. L., Vilà-Giménez, I., & Prieto, P. (under review). Non-referential gestures mark new referents in children's narrative discourse. Frontiers in Psychology.
[8] Rohrer, P. L., Vilà-Giménez, I., Florit-Pons, J., Esteve-Gibert, N., Ren, A., Shattuck-Hufnagel, S., & Prieto, P. (2020). The MultiModal MultiDimensional (M3D) labelling scheme for the annotation of audiovisual corpora. Proceedings of the 7th Gesture and Speech in Interaction (GESPIN). KTH Speech, Music & Hearing and Språkbanken Tal. Stockholm, Sweden.
[9] Kendon, A. (2017). Pragmatic functions of gestures. Some observations on the history of their study and their nature. Gesture, 16(2), 157–175.
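A small clarification of the statistics reported in the abstract above, added for readability: the z values are Wald statistics (each estimate divided by its standard error) and the p values follow from the standard normal distribution. A minimal sketch, using the rounded coefficients as published (the computed values therefore differ very slightly from the reported ones):

```python
from math import erf, sqrt

def wald_z_and_p(beta, se):
    """Two-sided Wald test: z = beta / SE, p = 2 * (1 - Phi(|z|))."""
    z = beta / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi)

# Coefficients as reported for the three gesture predictors of narrative structure
for name, beta, se in [("non-referential beats", 0.299, 0.111),
                       ("non-referential flips", -0.163, 0.109),
                       ("referential iconics", 0.029, 0.077)]:
    z, p = wald_z_and_p(beta, se)
    print(f"{name}: z = {z:.3f}, p = {p:.3f}")
```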

Children's comprehension of evidentiality through intonation
Maria del Mar Vanrell1, Meghan Armstrong-Abrami2, Elena Castroviejo3 and Laia Mayol4
1U. de les Illes Balears, 2UMass-Amherst, 3UPV-EHU, 4U. Pompeu Fabra

In many languages, intonational meaning goes well beyond marking distinctions in sentence type. For instance, languages like Catalan or English often use intonation to mark a speaker’s degree of certainty ([1], [2], [3], i.a.). Prosody can also be used as an evidential strategy for marking indirect evidence in reported speech or discourse fragments reported directly. For instance, [4] show that que + L+H* L% questions in Majorcan Catalan convey that the speaker has directly perceived p through one of the five senses. While many studies explore children’s acquisition of evidentiality marked lexically or morphologically, we are not aware of any studies exploring the acquisition of evidentiality marked through intonation. Children are developing the ability to detect epistemic meanings related to speaker beliefs (i.e. disbelief or uncertainty) through prosody between the ages of 3 and 6 ([5], [6], i.a.) with some differences depending on specific meanings. [7] showed the importance of cognitive aspects, specifically belief reasoning, in predicting children’s ability to detect disbelief through intonation (as well as gesture), suggesting that such reasoning facilitates intonational development within the realm of speaker beliefs. This same developmental window appears to be of importance for evidential development too, for non-intonational marking of evidentiality. Although children begin using evidential morphemes from age 2, adult-like comprehension does not occur until age 4, or even as late as age 6 ([8], i.a.). In light of the studies mentioned above, we aimed to explore children’s developmental paths for the comprehension of evidential marking of directly perceived evidence through intonation. We hypothesized that the comprehension of evidential meaning through References intonation would coincide chronologically with comprehension of previously-studied non- intonational evidential morphemes. In addition, we explored the relationship between [1] Gravano, A., Benus, S., Hirschberg, J., German, E. S., & Ward, G. 2008. The effect of prosody and children’s general source-marking ability, as well as their ability to make inferences based semàntic modality on the assessment of speaker certainty. In P. Barbosa, S. Madureira, & C. on directly perceived information. We hypothesized that such abilities are a prerequisite for Reiss (Eds.), Proceedings of the fourth Speech Prosody 2008 Conference (pp. 401–404). the comprehension of intonationally marked evidentiality. To this end, four different tasks [2] Armstrong, M. E. & Prieto, P. 2015. The contribution of context and contour to perceived belief were designed: two non-linguistic tasks and two linguistic tasks. In the two non-linguistic in polar questions. Journal of Pragmatics, 81, 77–92. tasks, we examined the ability to infer information based on directly-perceived visual and [3] Vanrell, M. M., Mascaro, I., Torres-Tamarit, F., & Prieto, P. 2013. Intonation as an encoder of audio evidence, as well as the ability to monitor information source ([8]). Our linguistic tasks speaker certainty: Information and confirmation yes–no questions in Catalan. Language and Speech, 56 (3), 163–190. explored the comprehension of declarative vs. polar question intonation, as well as the [4] Vanrell, M.M., Armstrong, M.E. & Prieto, P. 2017. Experimental evidence for the role of intonation detection of evidentiality in que + L+H* L% questions in Majorcan Catalan ([4]). Seventy-three in evidential marking. In: Prieto, P. & Escandell-Vidal, V. 
(Eds.), Intonational constraints on children (ages 3-6) participated in two experimental sessions. Results show that children pragmatic inferences: Theoretical and experimental investigations. Language and Speech, make gains in their ability to do source monitoring and make evidence-based inferences 60(2), 242-259. during this developmental window. They also show that general intonational comprehension [5] Armstrong, M.E. & Hübscher, I. 2018. Children’s development of internal state prosody. In P. is improving: 4-year-olds outperformed 3-year-olds in discriminating declarative vs. Prieto & N. Esteve-Gibert (Eds.), Prosodic Development in First Language Acquisition. Amsterdam: polar question intonation (Fig 1, green box plots). While no consistent comprehension of John Benjamins, pp. 271-293. evidentiality marked through intonation was demonstrated for these age groups, there are [6] Moore, C., Pure, K., & Furrow, D. 1990. Children’s understanding of the modal expression of signs of its development, with some outliers performing very well after age 5 (Fig. 1, purple). speaker certainty and uncertainty and its relation to the development of a representational While source monitoring (beige) and the ability to make inferences based on direct evidence Theory of Mind. Child Development, 61(3), 722–730. (blue) are clearly being developed during this age window, these abilities do not seem to be [7] Armstrong, M. E., Esteve-Gibert, N., Hübscher, I., Igualada, A. & Prieto, P. 2018. Developmental enough for children to successfully detect evidentiality in que + L+H* L% questions before and cognitive aspects of children’s disbelief comprehension throught intonation and facial the age of 7. However, the acquisition literature for development of evidential marking tends gesture. First Language 38(6), 596-616. [8] Papafragou, A., Li, P., Choi, Y. & Han, Ch. (2007). Evidentiality in language and cognition. Cognition to look at evidential marking that is obligatory. que + L+H* L% questions are optional in 103 (2), 253-299. Majorcan Catalan, which may make them a more elusive form to acquire. Our work adds to the recent body of work confirming that intonational development is highly dependent on the types of meaning associated with intonational forms.

316 317 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 WORKSHOPS Gesture production in typical and atypical development: A compositional perspective References Sara Ramos Cabo1, Valentin Vulchanov1 and Mila Vulchanova1 Language Acquisition and Language Processing Lab, Norwegian University of Science and Charman, T., Drew, A., Baird, C., and Baird, G. (2003). Measuring early language development Technology (NTNU)1 in preschool children with autism spectrum disorder using the MacArthur communicative development inventory (infant form). J. Child Lang. 30, 213–236. doi: 10.1017/S0305000902005482 Children with Autism Spectrum Disorder (ASD) engage less in communicative interactions Kendon, A. (1980). Gesticulation and speech: Two aspects of the process of utterance. The compared to typically developing (TD) children. Evidence shows that they gesture less and talk relationship of verbal and nonverbal communication, 25(1980), 207-227. LeBarton, E. S., and Iverson, J. M. (2016). Gesture development in toddlers with an older sibling less (Charman et al., 2003; Mastrogiuseppe et al., 2015; Lebarton & Iverson, 2016), but little with autism. Int. J. Lang. Commun. Disord. 51, 18–30. doi: 10.1111/1460-6984.12180 is known about the specific ways in which the communicative interactions of ASD children Mastrogiuseppe, M., Capirci, O., Cuva, S., and Venuti, P. (2015). Gestural communication in children differ from their TD peers. Previous findings suggest less sophisticated non-verbal patterns with autism spectrum disorders during mother-child interaction. Autism 19, 469–481. doi: of communication with ASD children producing fewer index finger and no contact pointing 10.1177/1362361314528390 than TD children (Ramos-Cabo et al. 2020). The current study aimed at investigating the Ramos‐Cabo, S., Vulchanov, V., & Vulchanova, M. (2020). Different Ways of Making a Point: A Study complexity of non-verbal communicative interactions in ASD and TD from a compositional of Gestural Communication in Typical and Atypical Early Development. Autism Research. perspective. Gestures, like speech, can be linked one another (Kendon, 1980). To do this, the arm/hand holds in the air after the stroke and prepares for a new stroke instead of retracting and going back to a resting position. In a novel approach, we analysed these instances where Gestural bootstrapping in the L2: the hand/arm is lifted to concatenate two or more gesture strokes (i.e., Gesture Chains). Are head gestures precursors for prosodic focus marking in Catalan learners of English? We argue that Gesture Chains are the non-verbal counterparts of coordinating conjunctions Lieke van Maastricht1 and Núria Esteve-Gibert2 in speech (e.g., and, but, or) that can be used as an index of the complexity of non-verbal 1Centre for Language Studies, Radboud University, 2Universitat Oberta de Catalunya communicative interactions. To investigate whether there are group differences in the production of Gesture Chains, Speakers use prosody to express linguistic meaning, e.g., to mark new and given information 16 ASD children, 13 children at high risk for autism (HR), and 18 TD children between 1 in discourse, e.g., [1]. Studies have shown that languages use distinct prosodic strategies and 6 years of age participated in a semi-structured gesture-elicitation task. Communicative for focus marking and these cross-linguistic differences complicate prosodic focus marking caregiver-child interactions were videotaped, and gestures were coded off-line using ELAN. for L2 speakers. 
Hence, Spanish learners of Dutch and Dutch learners of Spanish generally A non-parametric analysis of variance (Kruskal-Wallis H test) revealed group differences in produce atypical pitch accent distributions to mark focus, even when advanced L2 speakers the production of Gesture Chains, χ2 = 6.39, df = 2, P = 0.041. Post hoc pairwise comparisons overall, and this atypicality is perceivable by L1 listeners who consider their speech more with the Dwass-Steel-Critchlow-Fligner test identified group differences between the ASD foreign-accented and more difficult to understand [2][3]. and the TD groups, with the ASD children producing fewer Gesture Chains than their TD Studies on multimodality have shown that body gestures are generally temporally aligned with peers (W = 3.511, P = 0.035). No group differences were found between the HR and the other tonal movements in adult speech (e.g., [4]) causing an increase in the amplitude of speech, groups (ASD × HR: W = 0.659, P = 0.887; HR × TD: W = 2.305, P = 0.233). in combination with acoustic peaks in the F0 [5]. In L1 acquisition, this temporal connection These results show that children with autism do not only gesture less, but that they concatenate develops very early in infancy [6][7] but it seems that the development of prosody-gesture gestures less as well, revealing shorter and more fragmented non-verbal communicative alignment for functional purposes (i.e., focus marking) is not so clear: correct prosodic focus interactions. This adds to previous evidence showing that gesture production in ASD is not marking does not usually occur until children are 7 or 8 years old [8]. However, [9] showed only quantitatively different from TD, but qualitatively different as well. that 4 or 5-year-olds express information status nonverbally, even if they have not yet learned prosodic cues for that purpose. Their results showed a higher head gesture rate in narrow focus items than in the broad focus items, but no effect of focus condition on word duration nor F0 range. 73% of the head gestures were aligned with the focused item. Hence, they conclude that head gestures are a bootstrapping mechanism for L1 prosody acquisition. In adult L1 speech, producing head gestures leads to a more pronounced prosodic focus marking in the form of higher F0 and duration excursions, which is also noticeable for L1 listeners [10]. In L2 development, results are less straightforward: Some have shown that producing body movements and gestures facilitate L2 prosody production [11], while others found the opposite [12]. To our knowledge, our study is the first to investigate head gestures as precursors and possible reinforcers of L2 prosodic focus marking. We replicate [9] with 40 Catalan learners of English who play a game eliciting spontaneous sentences in a no-focus (Fig. 1), a contrastive focus, and a corrective focus condition (Fig. 2). We will analyze (alignment between) prosodic (F0, syllable duration) and gestural (gesture type, i.e., none/nod/tilt/chin pointing/eyebrow raising, and phases) correlates of focus. Based on [2], we expect L2 learners to transfer their L1 prosody to their L2 but to our knowledge

318 319 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 WORKSHOPS there are no L2 studies on nonverbal focus marking using head gestures to base gesture pitch gestures facilitates the learning of mandarin Chinese tones and words. Studies in Second transfer predictions on. If gestures serve as a precursor for both L1 [9] and L2 prosody Language Acquisition, 41(1), 33-58. acquisition in focus marking, learners will use L2 gesture strategies. Alternatively, gesture [12] Zheng, A., Hirata, Y., & Kelly, S. D. (2018). Exploring the Effects of Imitating Hand Gestures and and prosody may be so tightly coupled [5] that learners place prominence on the same, Head Nods on L1 and L2 Mandarin Tone Production. Journal of Speech, Language, and Hearing possibly inaccurate, part of the utterance in both modalities. Figure 2. Example of items eliciting Research, 61(9), 2179-2195. narrow (left, middle) and corrective (right) focus.
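For the planned analysis of alignment between gestural and prosodic correlates of focus, one simple operationalisation is to check, for each head-gesture apex, whether it falls inside the focused word and how far it lies from the utterance's F0 peak. The sketch below is purely illustrative and not the authors' procedure; apex times, word boundaries and the F0 peak time are assumed to come from prior annotation and pitch extraction.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    label: str
    start: float  # seconds
    end: float    # seconds

def gesture_alignment(apexes, focused, f0_peak_time):
    """For each gesture apex (s), report whether it falls inside the focused
    word and its lag (ms) relative to the F0 peak of the utterance."""
    report = []
    for apex in apexes:
        report.append({
            "apex_s": apex,
            "within_focus": focused.start <= apex <= focused.end,
            "lag_to_f0_peak_ms": (apex - f0_peak_time) * 1000,
        })
    return report

# Hypothetical values for one elicited utterance
focus_word = Interval("BLUE", start=1.42, end=1.78)
print(gesture_alignment(apexes=[1.50, 2.10], focused=focus_word, f0_peak_time=1.55))
```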

References

[1] Ladd, D.R. (1996). Intonational Phonology. Cambridge University Press. [2] Van Maastricht, L., Krahmer, E., & Swerts, M. (2016a). Prominence patterns in a second language: Intonational transfer from Dutch to Spanish and vice versa. Language Learning, 66(1), 124-158. [3] Van Maastricht, L., Krahmer, E., & Swerts, M. (2016). Native speaker perceptions of (non-) native prominence patterns: Effects of deviance in pitch accent distributions on accentedness, comprehensibility, intelligibility, and nativeness. Speech Communication, 83, 21-33. [4] Esteve-Gibert, N., Borràs-Comes, J., Asor, E., Swerts, M., & Prieto, P. (2017). The timing of head movements: The role of prosodic heads and edges. The Journal of the Acoustical Society of America, 141(6), 4727-4739. [5] Pouw, W., Harrison, S. A., Gibert, N. E., & Dixon, J. A. (2019, November 27). Energy flows in gesture-speech physics: The respiratory-vocal system and its coupling with hand gestures. https://doi.org/10.1121/10.0001730 [6] Esteve-Gibert, N., & Prieto, P. (2014). Infants temporally coordinate gesture-speech combinations before they produce their first words. Speech Communication, 57, 301-316. [7] Esteve-Gibert, N., Prieto, P., & Pons, F. (2015). Nine-month-old infants are sensitive to the temporal alignment of prosodic and gesture prominences. Infant Behavior and Development, 38, 126-129. [8] Chen, A. (2011). Tuning information packaging: intonational realization of topic and focus in child Dutch. Journal of Child Language, 38, 1055–1083. [9] Esteve-Gibert, N., Lœvenbruck, H., Dohen, M., & D’Imperio, M. (2019). Pre-schoolers use head gestures rather than duration or pitch range to signal narrow focus in French. In S. Calhoun, P. Escudero, M. Tabain & P. Warren (eds.) Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia 2019 (pp. 1495-1499). [10] Krahmer, E., & Swerts, M. (2007). The effects of visual beats on prosodic prominence: Acoustic analyses, auditory perception and visual perception. Journal of Memory and Language, 57(3), 396-414. [11] Baills, F., Suárez-González, N., González-Fuente, S., & Prieto, P. (2019). Observing and producing

320 321 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 WORKSHOPS WORKSHOP for the nuclear accent. Only contrastive focus was consistently ‘strong’ enough to attract Creativity and Variability: Prosody and Information the nuclear accent to the object for most participants. Questions lack such a competitor. Givenness marking appeared to be entirely optional in questions in this experimental design Variability in information structure marking in German questions and exclamatives (but was present in the fillers, which were rejections). Contrastive focus was consistently Heiko Seeliger1 and Sophie Repp1 marked, presumably in order to direct the addressee’s attention to unexpected information. 1University of Cologne

Information structure (IS) can be marked prosodically in several ways. In the Germanic languages, words expressing new information are more likely to carry an accent than words expressing given information, these accents are more likely to be more prominent, and on continuous scales of pitch and duration, new information tends to be associated with the higher and longer ends of the scales [1, 2, 3, 4]. Contrastive focus tends to be marked even more strongly than new information along these lines [5, 6, 7, 8]. Crucially, these findings are largely based on studies investigating assertions. Assertions arguably represent the most ‘canonical’ speech act: they are most common and least marked. Previous research indicates that the prosodic marking of IS is sensitive to the speech act carried out by the host utterance [9, 10]. We explore the idea that some of the canonical strategies of IS marking, viz. deaccentuation of given information, occur most readily in more canonical speech acts, and that other, less canonical speech acts offer speakers different strategies for IS marking. These strategies may resolve the tension that may be created by different requirements imposed on the prosody by the speech act vs. IS in different ways. By hypothesis, different strategies lead to considerable inter-individual variation in IS marking since any one of them is unlikely to represent an unmarked default. ♦ We present data from a production experiment on two German non-assertive speech acts (2×3 design, 27 ppl., 9×6 items): polar questions and polar exclamatives, see (1). Polar questions elicit a yes/no answer from the addressee, while exclamatives express the speaker’s amazement about the underlying proposition. We manipulated the IS status of the object of the target sentences: it was either given information, new information, or contrastively focused. The data were analysed with mixed effects models, and individual variability was inspected by descriptive analysis. We found that in both speech acts, contrastive focus is consistently marked relative to new information, through a higher rate of more prominent nuclear accents on the object (i.e. L+H* instead of H*), increases in the continuous measures for the object, and a reduction of accents on other words. Given information is marked much less clearly in both speech acts than we expected based on the work on assertions: given objects were not deaccented more often than new objects, and differences in continuous acoustic measures were insignificant. There was inter-individual variability in the accentuation patterns (Figures 1 & 2). In the new and given conditions many participants placed the nuclear accent in exclamatives on the d-pronoun subject but how consistently they did so varied: the object was a competitor. Some participants even showed a preference for the object. One person preferred the clause-initial verb. In questions, most participants placed the nuclear accent on the object regardless of condition. Only a few participants sometimes chose the quantifier/adjective. The adjective overall appears to more readily carry an accent in questions than in exclamatives, while the reverse is true for the verb. The rate of object accents in exclamatives does not seem to predict the rate in questions per participant. ♦ We propose that there is not necessarily a 1:1 mapping between IS categories and prosody in a language: IS marking and speech act specific prosodic constructions may [1] Baumann, S. & Riester, A. 2013. 
Coreference, lexical givenness and prosody in German. Lingua 136, 16-37. [2] Grice, M., Ritter, S., Niemann, H. & Roettger, T. 2017. Integrating the discreteness come into conflict with one another, with individual speakers resorting to different strategies and continuity of intonational categories. Journal of Phonetics 64, 90-107. [3] Baumann, S., Röhr, for resolving them. For exclamatives, we assume with [10] that they come with a prosodic C. & Grice, M. 2015. Prosodische (De-)Kodierung des Informationsstatus im Deutschen. Zeitschrift default: one element in the clause must carry a prominent accent (the exclamative accent), für Sprachwissenschaft 131, 1-42. [4] Baumann, S. 2006. The intonation of givenness – evidence from the most common carrier of which tended to be the d-pronoun. It is a strong competitor

322 323 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 WORKSHOPS German. Niemeyer. [5] Sudhoff, S. 2010. Focus particles and contrast in German. Lingua 120, 1458- morpho-syntactic means that do not call for prosodic marking in the source language, like 1475. [6] Baumann, S., M. Grice, and S. Steindamm, S. (2006). Prosodic marking of focus domains - the preverbal pronominalization: categorical or gradient? Proceedings of Speech Prosody. 301-304. [7] Baumann, S., J. Becker, M. Grice, and D. Mücke (2007). Tonal and articulatory marking of focus in German, Proceedings of 16th ICPhS. A) (La dame (GI1) plus âgée avec le chapeau (GI2) l’a dénoncée/ (GI3) d’abord au boulanger) (French 1029-1032. [8] Kügler, F., & Gollrad, A. (2015). Production and perception of contrast: The case of bilingual) the rise-fall contour in German. Frontiers in Psychology, 6, 1254. [9] Repp, S. 2019. The prosody of wh- The older lady (IG1) with the hat (IG2) denounced him (IGI3) first to the baker). exclamatives and wh-questions in German: speech act differences, information structure, and sex of speaker. Language and Speech 63 (2), 306-361. [10] Repp, S. & Seeliger, H. 2020. Prosodic prominence Bilinguals exaggeratedly associate strong referential accessibility to light means, including in polar questions and exclamatives. Frontiers in Communication 5 (53). those prosodic. The overinterpretation of the pragmatic order does not allow them to mark the referent in the context of referential accessibility. The sensitivity to the pragmatic order of Polish in contact with French, where there is a parallelism of prosodic and syntactic structures (rightmost focal position) seems to generate the operationalization of a new rule according Prosody and information structure: the case of bilingualism to which the expression of the information with high referential accessibility passes through Anna Kaglik light encoding means and prosodic 0 marking (congruency of means). UMR 7023 Structures Formelles du Langage, Université Paris VIII The contact of languages ​​generates a dynamic of creation. It is about the creation of new representations and the readjustment of treatment strategies. The creative dynamic is In second languages ​​(SLA), the notions of creativity and variation are approached with a motivated by the search for the convergences between the two systems. It seems to meet particular connotation, which refers to an imperfection, to a failure or to a transfer of the the constraint of efficiency at the lowest cost. source language. However, SLA is a building process responding to the adaptive dynamics.

In particular, the active bilingualism with long exposure and practice produces the creation of new forms but also the maintenance of some of them or even their abandonment. The mechanization of which could reveal similarities with the phenomenon of variability within the system of the same language. In this proposal we expose the results of the analysis of prosody in the materialization of the information structure (IS), in the referential movement of animate entities of the frame, in a narrative task. Two groups of bilingual Franco-Polish speakers with and without a noticeable accent and from the two control groups of the source- language Polish and of the target-language French recounted the extract from the silent film Modern Times, viewed in the absence of the experimenter. The “Quaestio” model [2], [3] widely used in cross-linguistic approaches to information structure was chosen for the analysis, as well as the cognitive model of prosody [4], [5] inspired by gestalt theory. Perception occupies a central place in this model, the acoustic parameters are not direct correlates of the functional categories, because there is no direct link between the parameters and Figure 1. Representation of the levels of analysis of the informational structure inspired by the Quaestio model. their value outside of the context. On the segmentation of speech on the basis of syntactic criteria, proposed by the “Quaestio” model (at the microstructural level the reference unit is [1] Baumann, S. & Grice, M. 2006. The intonation of accessibility. Journal of Pragmatics, 38 (10), an utterance), we have overlaid a segmentation based on psychoacoustic criteria: intonative 1636-1657. group (IG) at the microstructural level, and intonative period (IP) at the macrostructural level [2] Klein, W. & von Stuttrheim, C. 1989. Referential Movement in Descriptive and Narrative [4], [5]. The feature expression that coincides with the intonation group (IG) boundary is Discourse. In R. Dietrich & C.F. Graumann (Ed.), Language Processing in Social Context. North considered prosodically marked and if it does not coincide, prosodically unmarked. The force Holland: Elsevier Science Publishers B.V. of prominence is estimated according to the amplitude criterion of the intonation gesture: [3] Klein, W. & von Stuttrheim, C. 2006. How to solve a complex verbal task: Text structure, referential strong (between 5 and 9 semitones (dt)); weak (less than 5 dt). The particularity of our analysis movement and the quaestio. In M. R. Brum de Paula & G. Sanz Espinar (Ed.), Aquisição de consists in taking into account different encoding levels at the same time (see Figure 1), in Línguas Estrangeiras. (Programa de Pós-Graduação em Letras: Universidade Federal de Santa order to account for their possible interactions. Our result corroborates the hypothesis of Maria, Brasil), 29–67. Lambrecht [6], according to which the role of prosody is to encode the supposed degree of [4] Lacheret -Dujour, A. & Victorri, B. 2002. La période intonative comme unité d’analyse pour activation in the consciousness of the interlocutors. It also suggests the absence of a direct l’étude du français parlé : modélisation prosodique et enjeux linguistiques. Verbum, XXIV, (1-2), phonological link between the degree of referential accessibility and prosody, according to 55-72. Bauman & Grice [1]. There is no uniform referential category at the interface of prosody. In [5] Lacheret -Dujour, A. 2003. La prosodie des circonstants. Louvain : Peeters. [6] Lambrecht, K. 1994. 
Cambridge studies in linguistics. Information structure and sentence form: contrast, those languages ​​(Polish and French) deploy very specific forms to express salience, Topics, focus, and the mental representations of discourse referents. New York, NY, US: Cambridge which in French is based on specific intonation contour and amplitude while in Polish on University Press. http://dx.doi.org/10.1017/CBO9780511620607 amplitude only, regardless of the type of intonation contour. Unlike monolingual natives, bilinguals tend not to prosodically mark information maintained with focus status. Analysis of combinations of encoding means suggests that their strategy is justified by the use of

324 325 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 WORKSHOPS Syntactic and prosodic variation in human-computer interaction Optionality in a competence model: on prosodic phrasing of clitic left-dislocations in Anna Konstantinova Spanish WWU Münster Ingo Feldhausen1 1ATILF-CNRS, Université de Lorraine Situational context often results in syntactic and prosodic variability in human speech production. The situation of human-computer interaction (HCI) usually puts certain constraints In order to address the question of how optionality / variation occurring within an individual on dialogue flow. The most relevant constraints are the lack of common knowledge between and across individuals can be modelled in a competence model we take a closer look at the interlocutors, technical limitations of the system’s ability to process and understand prosodic phrasing in Spanish. We also discuss possible limits of variation. natural language, and the system’s inability to rapidly adapt to the interlocutors’ immediate Background: Following [1], we take the terms “optionality” and “variation” as describing the communication needs, as it happens in human-human communication (HHC). situation where one phonological input has more than one output (p. 519). Furthermore, Previous research looked at priming mechanisms and alignment in HCI, but mainly in Wizard- variation within an individual refers to the same individual using different forms at different of-Oz studies (cf. [1]) where the users could engage in coherent dialogues with the systems times (intra-individual variation), and variation across individuals refers to (a) the use of tailored for the experiment or in controlled speech production experiments ([2]). Very little different forms by different individuals or (b) to the use of the same different forms with is known about the users’ choice of referring expressions, the syntactic structure of the different frequencies (inter-individual variation). Even though, optionality has largely been questions, and prosodic marking of referents in interaction with the existing systems where ignored or eliminated in classical generative models ([1], [2]), it has been playing an important various constraints are at play and communication breakdowns are likely to occur. role in phonological approaches (e.g. [3], [4]). However, they typically account for the mean I investigate how information structure (IS) in HCI differs from IS in HHC. The analysis is based on of a group but not for individual variation within that group. Here, we take this step further. a corpus of spoken questions directed to Amazon Alexa in the HCI condition and a confederate Experiment: Based on data from a production experiment (scripted speech) in which a speaker in the HHC condition. 20 speakers of German from the Münster area participated in homogenous group of eight native speakers of Peninsular Spanish (region Murcia) uttered the study. The results of a pilot study showed that in HCI, the speakers produce full names for 288 sentences with clitic left-dislocations (CLLD), (1), it is shown that the prosodic boundary the referents rather than pronouns or topic-drop, even for highly accessible discourse topics at the right edge of the CLLD constituent is almost always realized (~95%). In addition, the and referents. This tendency can be explained by the speakers’ effort to avoid ambiguities and embedded clause is also almost always separated from the matrix clause (~90%). 
Individual the influence from Amazon Alexa’s replies which primarily include full names. variation occurs in the phrasing of the matrix clause: the boundary separating the matrix Based on the findings from the pilot study, my hypothesis regarding syntactic and prosodic subject (here: Bárbara) from the matrix verb (here: supone) is optional, and clear frequency- features in HCI is that the speakers are likely to use more varied syntactic constructions of dependent variation exists across the speakers (Fig. 1). This comes as a surprise, because different complexity when requesting information from Alexa compared to the HHC-condition. the prosodic boundary at the right edge of the subject is not considered to be optional in The participants need to find ways to overcome communication breakdown, and they have to Spanish SVO utterances (e.g. [5]). Following [6], a comprehensive approach must account for mention all the referents in each question explicitly. This has consequences for the prosodic (a) the different output forms and (b) the different frequency of the output forms within and marking in that the need for explicitness in HCI-condition might result in a higher pitch accent across individuals. For (a), we apply Stochastic Optimality Theory (SOT, [3]), which allows for on inferable referents, which are usually reported to carry a lower pitch accent in German (cf. different winners due to overlapping constraints (Fig. 2). Our main contribution, however, lies [3]). in the modification of SOT, which allows us to account for (b): By assuming that the degree Preliminary results revealed syntactic variation in HCI mainly in attempts to repair an earlier of constraint overlap can vary between individual speakers, while the underlying hierarchy dialogue breakdown: the use of wh-in-situ questions or keyword-based requests. Explicitness remains invariant, the modified approach is not limited to variation in the output structure of regarding the referents resulted in more complex syntactic structures with prepositional a given population (Fig. 3, [6]). For the present analysis, we assume four violable constraints: phrases in HCI compared to HHC. The ongoing analysis of prosodic marking of different IS- (1) Align-Top,R demands that the right edge of a syntactic topic phrase (here CLLD) aligns components through F0 mean, min, and max measurements suggests that in HCI, pitch accent with the right edge of a prosodic phrase ([6], [7]); (2) Align-CP,L demands that the left edge on inferable referents and wh-region in questions directed to Alexa varies from HHC. These of a syntactic CP aligns with the left edge of a prosodic phrase ([7]); (3) Align-XP,R demands findings might be linked to the discontinuity of discourse in the HCI condition due to the lack that the right edge of a syntactic XP aligns with the right edge of a prosodic phrase (Selkirk of accumulated shared knowledge. I will discuss the interrelation between the relevant factors 1995); and (4) Minimize-Number-Phrases demands that the number of prosodic phrases should for predicting the referent’s prosodic marking regarding the referents’ syntactic position in the be minimal (Truckenbrodt 1999, Prieto 2006). With [6], we propose the underlying constraint question, their IS-status, and speakers’ individual preference. hierarchy Align-Top >> Align-CP,L >> Min-N-Phrases >> Align-XP,R, in which the two constraints Min-N-Phrases and Align-XP,R overlap (Fig. 2, Fig. 3). [1] Dubuisson Duplessis, G., Langlet, C., Clavel, C. & Landragin, F. 2021. 
Towards alignment Limits of variation and creativity: Optionality occurs only after the matrix subject, but not strategies in human-agent interactions based on measures of lexical repetitions. Lang Resources at the edges of the CLLD element. Therefore, the hypothesis is pit forth that information & Evaluation. https://doi.org/10.1007/s10579-021-09532-w [2] Zellou, G., & Cohn, M. 2020. Social and functional pressures in vocal alignment: Differences structurally motivated prosodic boundaries are not subject to individual variation, because for human and voice-AI interlocutors. Proceedings of Interspeech 2020 (Shanghai, China), 1634- they are communicatively more necessary than boundaries satisfying simple alignment 1638. constraints [6]. This suggests that new prosodic patterns for the information structural [3] Baumann, S., Röhr, C. T., & Grice, M. 2015. Prosodische (De-)Kodierung des Informationsstatus category of CLLDs are not very probable in languages such as Murcia Spanish. However, im Deutschen. Zeitschrift für Sprachwissenschaft, 34, 1–42.

326 327 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 WORKSHOPS what is considered to be “communicatively necessary” might differ between varieties and Barcelona. languages ([7], [8], Downing 2011). [6] Feldhausen, I. 2016. Interspeaker variation, Optimality Theory and the prosody of clitic left- dislocations in Spanish. Probus 28(2): 293–333. [7] Feldhausen, I. 2010. Sentential form and prosodic structure of Catalan. Amsterdam: J. Benjamins. 31% ~90% ~95% [8] Feldhausen, I. & Lausecker, A. 2018. Diatopic Variation in Prosody: Left- and Right-Dislocations ( )( )( )( ) in Spanish. Proceedings of Phonetics and Phonology in the German-Speaking Countries (P&P 13), (1) [Bárbara supone [que el águila de Málaga la vendió su hermano]CP2]CP1 Berlin (Germany), 57-60. S V q CLLD cl V S ‘Barbara assumes that the eagle of Málaga, her brother sold it.’ Prosodic coding of focus in children’s narratives in Chilean Spanish Valeria Cofré Vergara1, Fernando Álvarez1 1Pontificia Universidad Católica de Chile

Regarding the prosodic coding of the information structure in Spanish, there are few studies in the area, most of them focused on adult speech and methodologically very diverse. Nevertheless, there is agreement thus far in that the prosodic coding of focus in Spanish is based on the following parameters: pitch height, alignment of the tonal peak within the tonic syllable, duration of the tonic syllable and junction tones, with variable relevance in the realization of wide or narrow focus (Martinez Celdrán & Fernández Planas, 2007; Hülsmann, 2019; Villalobos Pedroza, 2019). Considering these parameters, we analyze the prosodic realization of focus within a functional approach. This perspective understands language Figure 1. Individual results and mean for the prosodic boundary between matrix S and V. acquisition in terms of learning linguistic forms as tools to achieve communicative objectives, which links the evolutionary changes in children and adolescents mainly with the semantic and pragmatic properties of the linguistic units at the discursive level (Hickmann, 2003). To do so, we use decontextualized discourses (in contrast with more experimental instruments), specifically semi-spontaneous narratives, of three children from Santiago de Chile of 4, 6 and 10 years old. The task was to re-tell a story that the children had just heard. The narratives were recorded in wav format and then manually segmented in clauses, words and segments, and tagged with the software Praat. Measurements of pitch height and duration of all the vocalic segments were taken. For the analysis, we compared the values obtained in the target syllables (focus and final segment) with the rest of the speech to conclude if the vowel was longer, if the highest pitch value was aligned with the tonic syllable (neither the post-tonic nor the pre-tonic, as usually happens in Spanish), and if the pitch variation was significant from a perceptive point of view (measured in semitones). Because of the size of the sample, no phonological annotation was made. Existing research, based on spontaneous or semi-spontaneous speaking performance as well as on reading-aloud tasks, suggests, in general, the late development of prosody in children (after 10 years of age). This late development concerns mainly the planning and Selected references: adequate use of pauses and pitch movements in the segmentation of intonation groups, as well as the use of tonal accents and intonation patterns (Aravena, 2010; González-Trujillo et [1] Anttila, A. 2007. Variation and optionality. In: Paul de Lacy (ed.). The Cambridge Handbook of al., 2014; Ito, 2018; Riffo et al., 2018). This study confirms that Spanish-speaking children of Phonology. CUP, Cambridge, 519-536. 4 and 6 years old struggle to mark off information units prosodically (use of tonal inflection [2] Pierrehumbert, J. 2001. Stochastic phonology. Glot International 5(6): 195–207. and pause). [3] Boersma, P. & Hayes, B. 2001. Empirical tests of the gradual learning algorithm. Linguistic Inquiry As for the prosodic coding of focus, specifically regarding pitch and duration, this study 32: 45–86. shows emerging features of a phonological adult model of accent assignment and alignment [4] Smolensky, P. & Goldrick, M. 2016. Gradient Symbolic Representations in Grammar: The Case of the tonal peak (Zubizarreta, 1998; Hülsmann, 2019; Villalobos Pedroza, 2019), but with of French Liaison. ROA 1286. 
phonetic variation among children of different ages, specifically in relation to pitch height [5] Elordieta, G., Frota, S., Prieto, P. & Vigário, M. 2003. Effects of constituent weight and syntactic and duration of the tonic syllable. These findings are new within the framework of the study branching on intonational phrasing in Ibero-Romance. Proceedings of ICPhS 15, 487–490. of language development in Spanish, but consistent with the study carried out by De Ruiter

328 329 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021 WORKSHOPS (2010) for German-speaking children. Acoustic Variability in Lexical Acquisition and Processing Our study sheds light on the phonetic development of prosody, within a functional Joe Barcroft Washington approach and, more generally, it contributes to a better understanding of information- University in St. Louis structure coding in Spanish. Findings in both areas provide data about the specific prosody variation in Chilean Spanish, as well as empirical data on its development. Acoustic variability refers to variations in the speech signal that do not alter linguistic content, such as when a word is produced by multiple talkers instead of a single recording of a single talker(talker variability) or at multiple rates instead of at a single rate (rate variability). The goal of this presentation is to review theoretical advances and research findings on how acoustic variability affects both lexical acquisition and lexical processing in order to interpret and connect them with the larger goals of the workshop, in particular with regard to the goals of (a) clarifying “canonical” versus “noncanonical” forms when interpreting data and (b) modeling variation in ways that are statistically and computationally viable. The presentation is divided into three main parts. In Part 1, we review research on the extended phonetic relevance hypothesis (Sommers & Barcroft, 2007), which posits that input with acoustic variability based on linguistically relevant sources affect vocabulary learning and speech processing whereas variability based on sources thatare not linguistically relevant do not. Consistent with this hypothesis, previous studies (Barcroft & Sommers, 2005; Sommers & Barcroft, 2007) have found that increases Fig 1. Differences between three groups of children in some types of acoustic variability—talker, speaking-style, and speaking-rate variability— improves second language (L2) vocabulary learning (based on more accurate and quicker and fundamental-frequency (F0) variability have References recall) whereas increases in amplitude yielded null effects, at least for L2 learners whodo not speak a tone language. Interestingly, [1] Aravena, S. 2011. El desarrollo narrativo a través de la adolescencia: Estructura global de other studies (Mullennix, Pisoni & Martin, 1989; Sommers, Nygaard & Pisoni, 1994; contenido y referencia personal, Revista Signos, 44 (77), 215-232. Sommers & Barcroft, 2006) have demonstrated that the same sources of acoustic variability [2] de Ruiter, L. E. 2014. How German children use intonation to signal information status in that produce positive effects for L2 word learning (talker, speaking- style, and speaking-rate) narrative discourse. Journal of Child Language 41(4), 1015–1061. pose costs to speech processing, based on slower word identificationscores whereas, again, [3] ---. 2015. Information status marking in spontaneous vs. Read speech in story- amplitude and F0variability again yielded null effects. These findings suggest that relevant telling tasks – Evidence from intonation analysis using GToBI. Journal of Phonetics, 48, 29-44. sources of acoustic variability required additional processing in order for target word forms [4] González-Trujillo, M., Calet, N., Defior, S. & Gutiérrez-Palma, N. 2014. Escala de fluidez lectora to be mapped onto their stored canonical representations, which develop throughprocessing en español: midiendo los componentes de la fluidez,Estudios de Psicología, 35 (1), 104-136. 
multiple sources of acoustic variability over time. [5] Hickmann, M. 2003. Children´s Discourse. Person, space and time across languages, Cambridge: In Part 2, we review a model of how phonetically relevant acoustic variability affects Cambridge Studies in Linguistics. developing mental representations of word forms and lexical processing across the lifespan [6] Hülsmann, C. 2019. Tópicos y focos iniciales en el español hablado: funciones pragmáticas y (Figure 1). It depicts how acoustic variability (a) improves vocabulary learning by yielding correlatos formales, in Valeria Belloro (Ed.), La interfaz sintaxis-pragmática: estudios teóricos, more distributed lexical representations and (b) poses costs during lexical processing long descriptivos y experimentales, Berlín/Munich/Boston: De Gruyter, 121-142. [7] Ito, K. 2018. Gradual development of focus prosody and affect prosody comprehension: A after words have been acquired. The latter of these two observations is due to the need proposal for a holistic approach, in Pilar Prieto & Núria Esteve-Gibert (Eds.), The Development to map variant word forms onto one constantly evolving canonical form that becomes of Prosody in First Language Acquisition, 247-270. modified, however slightly, by both canonical and variant forms to which the language user [8] Lambrecht, K. 2000. Information structure and sentence form. Cambridge: Cambridge University is exposed over time. Press. Lastly, in Part 3 of the presentation, we address a series of questions designed to relate the [9] Martínez Celdrán, E. & Fernández Planas, A. 2007. Manual de fonética española. Barcelona: Ariel. immediate discussion of the role of acoustic variability in lexical acquisition and processing [10] Riffo, B., N. Caro & Saez, K. 2018. Conciencia lingüstica, lectura en voz alta y comprensión to others related to the workshop in general, including with regard to variability in prosody lectora, Revista de Lingüstica Teorica y Aplicada, 56(2), 175-198. on lexical acquisition, lexical processing, and beyond. This final part of the presentation will [11] Villalobos Pedroza, L. 2019. La marcación de foco en el habla dirigida a niños: marcos léxicos y be designed to be the most interactive of the three parts, encouraging an active exchange of estrategias prosódicas, in Valeria Belloro (Ed), La interfaz sintaxis-pragmática: estudios teóricos, ideas about theoretical and practical issues that cross the boundaries between research on descriptivos y experimentales, Berlín/Munich/Boston: De Gruyter, 283-310. the effects of the different sources of variability related to lexis reviewed in this presentation [12] Zubizarreta, M.L. 1998. Las funciones informativas: tema y foco, in Ignacio Bosque & Violeta and research on other topics addressed in the workshop. Demonte (eds.): Gramática descriptiva de la lengua española, vol. 4, Madrid: Espasa Calpe, 4215– 4244.

Figure 1. Model of the effects of phonetically relevant acoustic variability across the lifespan (Barcroft & Sommers, 2014).

References
Barcroft, J., & Sommers, M. S. (2005). Effects of acoustic variability on second language vocabulary learning. Studies in Second Language Acquisition, 27(3), 387-414.
Barcroft, J., & Sommers, M. (2014). A theoretical account of the effects of acoustic variability on word learning and speech processing. In Torrens, V., & Escobar, L. (Eds.), The processing of lexicon and morphosyntax (pp. 7-24). Newcastle: Cambridge Scholars Publishing.
Mullennix, J. W., Pisoni, D. B., & Martin, C. S. (1989). Some effects of talker variability on spoken word recognition. Journal of the Acoustical Society of America, 85(1), 365-378.
Nygaard, L. C., Sommers, M. S., & Pisoni, D. B. (1994). Speech perception as a talker-contingent process. Psychological Science, 5(1), 42-46.
Sommers, M. S., & Barcroft, J. (2006). Stimulus variability and the phonetic relevance hypothesis: Effects of variability in speaking style, fundamental frequency, and speaking rate on spoken word identification. Journal of the Acoustical Society of America, 119(4), 2406-2416.
Sommers, M. S., & Barcroft, J. (2007). An integrated account of the effects of acoustic variability in first language and second language: Evidence from amplitude, fundamental frequency, and speaking rate variability. Applied Psycholinguistics, 28(2), 231-249.
Sommers, M. S., Nygaard, L. C., & Pisoni, D. B. (1994). Stimulus variability and spoken word recognition. I. Effects of variability in speaking rate and overall amplitude. Journal of the Acoustical Society of America, 96(3), 1314-1324.
Sommers, M., Barcroft, J., & Mulqueeny, K. (2008). Further studies of acoustic variability and vocabulary. Presentation on November 14, 2008 at the 49th Annual Meeting of the Psychonomic Society, Chicago.

WORKSHOP
From speech technology to big data phonetics and phonology: a win-win paradigm

Constructing an open-source speech corpus of contemporary standard Romanian: outline and preliminary remarks
Oana Niculescu
Romanian Academy Institute of Linguistics “Iorgu Iordan – Al. Rosetti”

In the present context of transdisciplinary research, with Digital Humanities resources rapidly evolving, there is an increasing number of studies based on natural language processing and automatic alignment [1]. Consequently, research carried out at the interface between phonetics and laboratory phonology has drawn close to speech technologies, relying more and more on large-scale corpora transcribed and processed at different linguistic levels [2]. Annotated speech corpora serve as a basis for insights into language evolution, allowing for in-depth analyses of linguistic variation and sound change [3]. Compared to well-documented Romance languages such as French [4] or Spanish [5], Romanian remains under-represented, without a well-established presence in fields like speech processing or large corpus-based analyses [6]. In this presentation we therefore address the pending need to extend such technologies and linguistic approaches to less represented European languages, in this case to contemporary standard Romanian. More precisely, through our postdoctoral research project (Acquiring and Exploring an Oral Contemporary Spoken Romanian Corpus for Linguistic Purpose), currently implemented at the Romanian Academy Institute of Linguistics “Iorgu Iordan – Al. Rosetti” (September 2020 – August 2022), we aim to fill this gap by providing an open-access, high-quality speech corpus of contemporary standard Romanian.
Current national oral corpora often prove difficult to access digitally, being stored either on cassette tapes, such as CORV [8], later extended to ROVA [9], or on the reel-to-reel tapes comprising Romania’s National Phonogramic Archive (AFLR) [10]. Newer corpora like CoRoLa focus for the most part on written text resources [11]. The existing speech corpora are mainly used in pragmatic studies, while lexical and morpho-syntactic analyses rely mostly on written texts (SOR [12]). In terms of acoustic analyses, there are drawbacks which prove harder to overcome, such as poor audio quality, an ambiguous policy of data collection and speaker consent, and vague or absent metadata. The novelty of the ROC-lingv project (https://cutt.ly/skoTP5z) thus consists in providing a transcribed and force-aligned corpus (resulting from a speech-to-text alignment with the system described in [7]), in setting up a roadmap of methods and digital tools, and in showcasing a typology of relevant phonetic variation in Romanian based on the force-aligned speech corpus. Furthermore, it can complement existing aligned Romanian oral corpora, such as the BN corpus (built around prepared speech gathered from broadcast news and debates [7]) developed at LIMSI, France (https://www.limsi.fr/fr/). Nevertheless, we must also acknowledge the limitations of our proposal, both in terms of time management and personnel resources. The ROC-lingv project is meant to be a stepping stone for modern acoustic research in Romania, encouraging large corpus-based phonetics and phonology research by showcasing the advantages of working at the interface between acoustics and speech technologies. The project is organised in two stages: the first stage (Corpus design) is defined by data recording, transcription and processing, while the second stage (Studying linguistic variation) is centred around data exploration and dissemination.
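As a pointer to the kind of query such a force-aligned corpus is meant to support, the sketch below computes mean vowel durations from phone-level alignment intervals. It is a minimal illustration under assumed data structures (a simplified TextGrid-style phone tier), not the project's actual tooling.

```python
# Minimal sketch (not the ROC-lingv pipeline): query phone-level intervals from a
# forced alignment for duration statistics. The (label, start, end) tuples below
# are a hypothetical simplification of a TextGrid phone tier.
from collections import defaultdict

ROMANIAN_MONOPHTHONGS = {"i", "e", "a", "o", "u", "ə", "ɨ"}  # seven vowel qualities

def mean_vowel_durations(phone_intervals):
    """phone_intervals: iterable of (label, start_s, end_s) tuples."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for label, start, end in phone_intervals:
        if label in ROMANIAN_MONOPHTHONGS:
            totals[label] += end - start
            counts[label] += 1
    return {vowel: totals[vowel] / counts[vowel] for vowel in totals}

# Toy example: three aligned phones from one utterance
print(mean_vowel_durations([("a", 0.10, 0.21), ("r", 0.21, 0.26), ("a", 0.26, 0.35)]))
```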

Acquiring, processing and exploring a speech corpus from a linguistic perspective are time-consuming steps within the ROC-lingv project. For this reason, we limit our analysis to 12 subjects (6 males, 6 females) sharing the same geographic and educational background (we are not documenting different accents across the country). The recordings are carried out in the Phonetics Laboratory of the Linguistics Institute and are organised in two sessions, corresponding to the elicitation of both controlled (experiment 1 – word list) and spontaneous speech (experiment 2 – monologue). In line with the application proposal, the first experiment is centred around a reading task aimed at gathering data regarding, among others, the seven monophthongs in CVC contexts (useful for future comparison with the vowel spaces of other Romance languages), [wa] vs. [oa] diphthongs, apical [r] and lateral [l] in word-initial, medial and final position, and consonant clusters. Participants read the randomized stimuli at a normal speech rate, going through the entire set three times. The second experiment is designed to elicit spontaneous speech through a monologue task (which allows for full data recovery and is also faster to transcribe and annotate). The monologues share the same conversation topics, making the data comparable. By recording casual, everyday topics, the material gathered is comparable, up to a point, to other Romance speech corpora such as [4] and [5]. The recordings go through audio processing, name anonymization and individual file storage. As for the manual transcriptions, due to time constraints we will not be delivering a verbatim transcript (a detailed account of (non)linguistic data); as an alternative, we propose to pair the orthographic transcription with a broad phonological transcript. The challenges we face range from experiment design, optimal linguistic coverage and an adequate transcription system, to cleaning, structuring annotation, processing and forced alignment. The ways in which we address these issues form the topic of the present talk. Through this postdoctoral research project, we intend to develop modern resources for the study of Romanian phonetics and phonology as well as to offer new insight into a cross-Romance typology of connected speech processes.

ACKNOWLEDGMENTS
This work is supported by a grant of the Romanian Ministry of Education and Research, CNCS-UEFISCDI, project number PN-III-P1-1.1-PD-2019-1029, within PNCDI III.

References
[1] Vasilescu, I., Dutrey, C. & Lamel, L. 2015. Large scale data based investigations using speech technologies: the case of Romanian. Proceedings of the 8th Conference on Speech Technology and Human-Computer Dialogue “SpeD 2015”, Bucharest, October 14-17, 6 p.
[2] Adda-Decker, M. 2006. De la reconnaissance automatique de la parole à l’analyse linguistique des corpus oraux. Paper presented at Journées d’Étude sur la Parole (JEP 2006), Dinard, France, 12–16 June.
[3] Ohala, J.J. 1996. The connection between sound change and connected speech processes. Arbeitsberichte (AIPUK 31), Universität Kiel, 201-206.
[4] Torreira, F., Adda-Decker, M. & Ernestus, M. 2010. The Nijmegen corpus of casual French. Speech Communication, 52.3, 201-212.
[5] Torreira, F. & Ernestus, M. 2010. The Nijmegen corpus of casual Spanish. Seventh Conference on International Language Resources and Evaluation (LREC’10). European Language Resources Association (ELRA), 2981-2985.
[6] Trandabat, D., Irimia, E., Mititelu, V., Cristea, D. & Tufis, D. 2012. The Romanian Language in the Digital Age. META-NET White Paper Studies: Springer.
[7] Vasilescu, I., Vieru, B. & Lamel, L. 2014. Exploring pronunciation variants for Romanian speech-to-text transcription. Proceedings of SLTU-2014, 161–168.
[8] Dascălu Jinga, L. 2002. Corpus de română vorbită (CORV). Eșantioane. București: Oscar Print.
[9] Dascălu Jinga, L. (coord.). 2011. Româna vorbită actuală (ROVA). Corpus și studii. Academia Română, Institutul de Lingvistică “Iorgu Iordan – Al. Rosetti”.
[10] Şuteu, V. 1958. Arhiva fonogramică a limbii române. Fonetică şi dialectologie, 1, 211-219.
[11] Barbu Mititelu, V., Tufiş, D. & Irimia, E. 2018. The Reference Corpus of the Contemporary Romanian Language (CoRoLa). Proceedings of LREC 2018, Japan, 1178-1185.
[12] SOR = Pană Dindelegan, Gabriela (ed.). 2016. The Syntax of Old Romanian. Oxford / New York: Oxford University Press.

On the Need of Standard Assessment Metrics for Automatic Speech Rate Computation Tools
Mireia Farrús1, Wendy Elvira-García2 and Juan María Garrido-Almiñana2
1Universitat de Barcelona, 2Universidad Nacional de Educación a Distancia, Spain

Fluency is a relevant feature in the assessment of speech, covering a wide range of linguistic abilities [1]. Among other elements, speech rate – measured as a specific length unit (usually syllables, phonemes or syllable nuclei) per unit of time – has been shown to be one of the most prominent elements for measuring fluency in speech pathologies, language acquisition, or bilingualism, among others [2-4]. However, manual annotation of speech rate is costly and very time consuming, which makes annotating large corpora for variational studies highly difficult. To overcome this, several automatic tools for speech rate measurement have been proposed [5-8]. Nevertheless, these tools usually differ in the way they are evaluated against human annotations: whereas most of them rely on the correlation between human and automatic annotations [5,6,8], others use mean squared error, error rates, or insertion/deletion errors in the detection of specific units [7,8]. Moreover, these evaluations also differ in the elements being assessed: the evaluation of some tools is based directly on the speech rate measured in syllables, nuclei or phonemes per second [5,6], while others are based on the count of these units in a specific speech segment [7,8]; and whereas some of them compute the speech rate over whole speech segments, others exclude intermediate pauses. Besides, as far as we know, only [7] distinguishes between read and spontaneous speech.
In the current work, we present a preliminary study on the assessment of a Praat-based tool that detects syllable nuclei and provides an automatic measure of speech rate [5], initially created and tested for Dutch. We used the off-the-shelf Praat script to evaluate its accuracy over a Spanish corpus from the VILE project [9,10], consisting of 30 male speakers and 3.5 hours of speech recorded in three different sessions and two different conditions: read (22904 vowels) and spontaneous speech (32853 vowels). The corpus was manually annotated at the phoneme, syllable, and word levels. We then computed three different assessment metrics: (a) precision and recall of detected vowel nuclei, (b) interannotator agreement using Cohen’s kappa (κ) coefficient (unweighted) for manual vs. automatic vowel nucleus detection, and (c) Pearson’s correlation between manual and automatic measurements of the number of syllables, speech rate, and articulation rate (defined as speech rate excluding internal pauses).
Our results, shown in Table 1, are promising in terms of precision and recall (see also Figure 1) for both read and spontaneous speech, with a rather good kappa coefficient. However, the tool largely fails in detecting the number of syllables in read speech, and the correlations for the number of detected syllables are not consistent with the correlation coefficients for speech and articulation rates. Such results clearly show that the assessment of automatic tools depends on the evaluation metrics: while the precision, recall and kappa metrics indicate rather good accuracy, the correlation coefficients do not provide promising results.

The number of speech samples, the length of the speech segments and the segment units used might be some of the factors behind such discrepancies.
The tool submitted to evaluation is a relevant example of the use of phonetic knowledge to foster speech technologies and vice versa, which makes variational studies over large corpora possible. However, the evaluation of this kind of tool must be carried out in a standard way to allow fair comparison; robust, standard assessment metrics are therefore needed. With this work we raise this necessity for the automatic calculation of speech rate and some of its related elements, although the problem can be extended to other phonetic and prosodic measurements for large corpora.

Table 1. Evaluation metrics for the read and spontaneous corpus conditions.

              precision   recall   κ       # syllables (r)   speech rate (r)   articulation rate (r)
read          0.979       0.732    0.770   0.497             0.669             0.557
spontaneous   0.944       0.717    0.667   0.808             0.595             0.454

Figure 1. Correctly and incorrectly detected vowel nuclei in both absolute (left) and relative measurements (right).

References
[1] Fillmore, C. J. 1979. On fluency. In Individual differences in language ability and language behavior, 85-101. Academic Press.
[2] Trofimovich, P., & Baker, W. 2006. Learning second language suprasegmentals: Effect of L2 experience on prosody and fluency characteristics of L2 speech. Studies in Second Language Acquisition, 1-30.
[3] Kalinowski, J., Armson, J., Stuart, A., & Gracco, V. L. 1993. Effects of alterations in auditory feedback and speech rate on stuttering frequency. Language and Speech, 36(1), 1-16.
[4] Gordon, J. K., & Clough, S. 2020. How fluent? Part B. Underlying contributors to continuous measures of fluency in aphasia. Aphasiology, 34(5), 643-663.
[5] De Jong, N. H., & Wempe, T. 2009. Praat script to detect syllable nuclei and measure speech rate automatically. Behavior Research Methods, 41(2), 385-390.
[6] Pellegrino, F., Farinas, J., & Rouas, J. L. 2004. Automatic estimation of speaking rate in multilingual spontaneous speech. In Proceedings of the Speech Prosody Conference.
[7] Pfitzinger, H. R., Burger, S., & Heid, S. 1996. Syllable detection in read and spontaneous speech. In Proceedings of the 4th International Conference on Spoken Language Processing (ICSLP), vol. 2, 1261-1264.
[8] Wang, D., & Narayanan, S. S. 2007. Robust speech rate estimation for spontaneous speech. IEEE Transactions on Audio, Speech, and Language Processing, 15(8), 2190-2201.
[9] Battaner, E., Gil, J., Marrero, V., Carbó, C., Llisterri, J., Machuca, M. J., ... & Ríos, A. 2005. VILE: Estudio acústico de la variación inter e intralocutor en español. Procesamiento del Lenguaje Natural, (35), 435-436.
[10] Albalá, M. J., Battaner, E., Carranza, M., Mota Gorriz, C. D. L., Gil, J., Llisterri, J., ... & Ríos Mestre, A. 2008. VILE: Análisis estadístico de los parámetros relacionados con el grupo de entonación. Language Design: Journal of Theoretical and Experimental Linguistics, (special issue), 15-21.
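To make the compared quantities concrete, the sketch below shows how speech rate and articulation rate can be derived from syllable-nucleus annotations and how manual/automatic agreement can be summarised with Pearson's correlation. The data layout is an assumption for illustration; this is not the Praat script under evaluation.

```python
# Illustrative sketch (assumed data layout, not the evaluated Praat script): each
# utterance is summarised by its nucleus count, total duration, and summed pause
# duration, once for the manual and once for the automatic annotation.
import numpy as np

def speech_rate(n_nuclei, total_dur_s):
    """Syllable nuclei per second over the whole segment, pauses included."""
    return n_nuclei / total_dur_s

def articulation_rate(n_nuclei, total_dur_s, pause_dur_s):
    """Syllable nuclei per second of phonation, internal pauses excluded."""
    return n_nuclei / (total_dur_s - pause_dur_s)

# Toy data: (nuclei, total duration s, pause duration s) per utterance
manual    = [(12, 4.0, 0.6), (8, 2.5, 0.3), (20, 6.2, 1.0)]
automatic = [(11, 4.0, 0.6), (9, 2.5, 0.3), (18, 6.2, 1.0)]

man_rates  = np.array([speech_rate(n, d) for n, d, _ in manual])
auto_rates = np.array([speech_rate(n, d) for n, d, _ in automatic])

# Pearson correlation between manual and automatic speech-rate estimates
r = np.corrcoef(man_rates, auto_rates)[0, 1]
print(f"Pearson r (speech rate): {r:.3f}")
```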

A Neural-Network-Based Formant Tracker Trained on Unlabeled Data
Jason Lilley and H. Timothy Bunnell
Nemours Biomedical Research, Wilmington DE, USA

The phonetic analysis of a large corpus of speech often requires reliable estimation of formant frequencies (and sometimes bandwidths or amplitudes), but estimates provided by popular off-the-shelf programs such as Praat [1] and WaveSurfer [2] can be unreliable [3]. Estimates can be improved with the manual adjustment of tracker settings for individual speakers, but this may be infeasible for large corpora with dozens or hundreds of speakers. Recently, Dissen et al. [4] trained Deep Neural Network (DNN) models using the VTR-TIMIT corpus [5], which consists of manually corrected formant frequency estimates of a subset of the TIMIT corpus [6]. Their “DeepFormants” model produced more accurate frequency estimates than several other trackers [3, 4], but training their model requires the manually corrected formant data. Furthermore, their model shows poorer estimates on other corpora. The model can be adapted to other corpora for better performance [4], but the adaptation process requires further annotated data from each new corpus.
In this paper, we describe methods for training DNN models to estimate formant parameters without the need for annotated training data. First, the speech signal is analyzed as a series of fixed-time-step frames, from which smoothed spectral envelopes are estimated. These spectral envelopes serve as the input to a neural network. The output of the network is defined to be the estimates of the frequencies, bandwidths, and amplitude correction factors of all formants within the frequency range of the input envelope. However, instead of directly measuring the difference between these estimates and manually produced reference values, we generate a new spectral envelope from the estimated formant parameters, and calculate a custom loss measure from the difference between the new envelope and the input envelope. It is this loss that is back-propagated through the network to establish the gradients with respect to the formant parameters.
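The following sketch illustrates the analysis-by-synthesis idea behind such a loss with a deliberately simplified envelope model (Gaussian-shaped resonance peaks on a dB scale). The envelope parameterisation and the loss details are assumptions for illustration, not the authors' implementation; in a real system the loss would be computed inside an autodiff framework so that gradients flow back to the network producing the formant estimates.

```python
# Simplified illustration of an "unsupervised" formant loss: rebuild a spectral
# envelope from estimated formant parameters and compare it with the observed
# envelope. The Gaussian-bump resonance model is an assumed stand-in.
import numpy as np

def envelope_from_formants(freqs_hz, bws_hz, amps_db, freq_axis_hz):
    env = np.zeros_like(freq_axis_hz, dtype=float)
    for f, bw, a in zip(freqs_hz, bws_hz, amps_db):
        env += a * np.exp(-0.5 * ((freq_axis_hz - f) / bw) ** 2)  # one peak per formant
    return env

def envelope_loss(est_params, observed_env_db, freq_axis_hz):
    freqs, bws, amps = est_params
    predicted = envelope_from_formants(freqs, bws, amps, freq_axis_hz)
    return float(np.mean((predicted - observed_env_db) ** 2))  # mean squared error

freq_axis = np.linspace(0, 5000, 512)
observed = envelope_from_formants([500, 1500, 2500], [80, 120, 160], [30, 25, 20], freq_axis)
estimate = ([520, 1450, 2550], [80, 120, 160], [30, 25, 20])
print(envelope_loss(estimate, observed, freq_axis))
```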

We will present results based on models trained on the training subset of the TIMIT corpus – without the use of the hand-corrected formant frequencies of VTR-TIMIT – and tested on the test subset. Experiments are still ongoing, but results based on preliminary models show performance on par with “DeepFormants” as reported in [3, 4], with an overall root mean square error of 173 Hz (versus 163 Hz for DeepFormants) for F1-F3 for all speech (both voiced and voiceless), and a mean absolute error of 76 Hz for vowels (versus 82 Hz for DeepFormants, and 78 Hz for Dissen et al.'s CNN model [4]). Performance also exceeds that of other formant trackers using default settings, including Praat and WaveSurfer, as reported in [3, 4], on individual formants, on males and females, and on various phonetic subcategories. Error estimates for our model are not much higher than the inter-labeler differences on this corpus reported in [5]. We will also present the results of experiments including the VTR-TIMIT test set in the training data, experiments with different training set sizes, and experiments using an entirely different corpus as the training data.
Our results demonstrate that a reliable formant tracker can be constructed from a speech corpus without the need for hand-corrected formant tracks. Thus, the methodology should be easily and straightforwardly applicable to corpora in other domains and other languages. There is also no need to split the available data into training and test sets – the model can be trained on all available data. Furthermore, the model produces estimates of not just the target formant frequencies, but the frequencies, bandwidths, and amplitudes of all formants in the input frequency range – a model of the entire vocal-tract signal.

References
[1] Boersma, P., & Weenink, D. 2008. Praat: doing phonetics by computer. Computer program.
[2] Sjölander, K., & Beskow, J. 2000. Wavesurfer – an open source speech tool. Proceedings of Interspeech 2000 (Beijing, China), 464–467.
[3] Schiel, F. & Zitzelsberger, T. 2018. Evaluation of automatic formant trackers. Proceedings of LREC 2018 (Miyazaki, Japan), 2843-48.
[4] Dissen, Y., Goldberger, J., & Keshet, J. 2019. Formant estimation and tracking: A deep learning approach. The Journal of the Acoustical Society of America 145(2), 642-53.
[5] Deng, L., Cui, X., Pruvenok, R., Chen, Y., Momen, S., & Alwan, A. 2006. A database of vocal tract resonance trajectories for research in speech processing. Proceedings of ICASSP 2006 (Toulouse, France), pp. I–I.
[6] Garofolo, J.S., Lamel, L., Fisher, M.W., Fiscus, J., Pallett, D.S., Dahlgren, N.L., & Zue, V. 1992. TIMIT Acoustic-Phonetic Continuous Speech Corpus. Philadelphia, PA: Linguistic Data Consortium.

Modeling the acoustic duration of English derivatives with linear discriminative learning
Simon David Stein1 and Ingo Plag1
1Department of English and American Studies, Heinrich Heine University Düsseldorf, Germany

Recent findings in morpho-phonetic and psycholinguistic research have indicated that phonetic detail can vary by morphological structure. For example, acoustic duration can depend on a segment’s morphological status and inflectional function [1][2][3][4], on informativity [5][6], or on how easily speakers can decompose complex words into their constituents [7][8][9][10]. However, the mechanisms behind these effects are still unclear. The fact that phonetic detail varies systematically with morphological properties is unaccounted for by traditional and current models of the morphology-phonology interaction and of speech production [11][12][13][14][15]. These models are either underspecified regarding the processing of complex words, or do not allow for post-lexical access to morphological information.
One alternative approach is to model phonetic detail based on the principles of discriminative learning. Such an approach sees form-meaning relations as discriminatory rather than compositional, and differences between morphological functions are expected to emerge naturally from sublexical and contextual cues. Recently, [3] successfully employed a discriminative approach to model the duration of English inflectional [s] and [z]. The present paper applies the discriminative approach to derivational morphology.
Our study investigates the durational properties of 4530 English derivative tokens with the derivational functions ness, less, ize, ation and dis from the AudioBNC [16], using multiple linear regression models. The crucial predictors in our models are derived from linear discriminative learning networks (LDL, [17]). These networks were trained on semantic vectors from the TASA corpus [18], which had been incrementally learned through an established learning algorithm [19].
We find, first, that our LDL networks achieve high learning accuracy and that the measures derived from them are as successful in predicting duration as traditional variables, such as frequency measures or biphone probability. Second, we find that even when we do not provide the network with any information about the morphological category a word belongs to, these categories still emerge from the network. For instance, the different morphological categories are reflected in the distributions of the correlation strength of a word’s predicted semantics with the semantics of its neighbors. Third, the effects of our LDL measures on duration can give us new insights into mechanisms of speech production. For example, we find that words are lengthened when the semantic support of the word’s predicted articulatory path is stronger.
Our results have implications for morphological theory and speech production models. They show that derivational functions can emerge as a by-product of a morpheme-free learning process: morphology is possible without morphemes. The acoustic properties of complex words, too, can be modeled successfully in a discriminative approach. Contrary to traditional models, which do not allow for a direct morphology-phonetics interaction, such interaction is expected in LDL and can be explained by its theoretical principles of learning and experience. The present study thus contributes to the growing literature demonstrating that LDL is a promising alternative approach to speech production which can explain phonetic variation in simplex, complex, or non-words (cf. [17][20]).

References
[1] Plag, Ingo, Julia Homann & Gero Kunter. 2017. Homophony and morphology: The acoustics of word-final S in English. Journal of Linguistics 53.1: 181–216.
[2] Seyfarth, Scott, Marc Garellek, Gwendolyn Gillingham, Farrell Ackerman & Robert Malouf. 2017. Acoustic differences in morphologically-distinct homophones. Language, Cognition and Neuroscience 33.1: 32–49.
[3] Tomaschek, Fabian, Ingo Plag, Mirjam Ernestus & R. Harald Baayen. 2019. Phonetic effects of morphology and context: Modeling the duration of word-final S in English with naïve discriminative learning. Journal of Linguistics: 1–39.
[4] Plag, Ingo, Arne Lohmann, Sonia Ben Hedia & Julia Zimmermann. 2020. An ⟨s⟩ is an ⟨s⟩, or is it? Plural and genitive-plural are not homophonous. In Livia Körtvélyessy & Pavel Stekauer (eds.), Complex words. Cambridge: Cambridge University Press.
[5] Ben Hedia, Sonia & Ingo Plag. 2017. Gemination and degemination in English prefixation: Phonetic evidence for morphological organization. Journal of Phonetics 62: 34–49.
[6] Ben Hedia, Sonia. 2019. Gemination and degemination in English affixation: Investigating the interplay between morphology, phonology and phonetics (Studies in Laboratory Phonology 8). Berlin: Language Science Press.
[7] Hay, Jennifer. 2003. Causes and consequences of word structure. New York, London: Routledge.
[8] Hay, Jennifer. 2007. The phonetics of un. In Judith Munat (ed.), Lexical creativity, texts and contexts, 39–57. Amsterdam, Philadelphia: John Benjamins.
[9] Plag, Ingo & Sonia Ben Hedia. 2018. The phonetics of newly derived words: Testing the effect of morphological segmentability on affix duration. In Sabine Arndt-Lappe, Angelika Braun, Claudine Moulin & Esme Winter-Froemel (eds.), Expanding the lexicon: Linguistic innovation, morphological productivity, and ludicity, 93–116. Berlin, New York: Mouton de Gruyter.
[10] Zuraw, Kie, Isabelle Lin, Meng Yang & Sharon Peperkamp. 2020. Competition between whole-word and decomposed representations of English prefixed words. Morphology: 10.1007/s11525-020-09354-6.
[11] Kiparsky, Paul. 1982. Lexical morphology and phonology. In In-Seok Yang (ed.), Linguistics in the morning calm: Selected papers from SICOL, 3–91. Seoul: Hanshin.
[12] Dell, Gary S. 1986. A spreading-activation theory of retrieval in sentence production. Psychological Review 93.3: 283–321.
[13] Levelt, William J. M., Ardi Roelofs & Antje S. Meyer. 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences 22.1: 1–38.
[14] Roelofs, Ardi & Victor S. Ferreira. 2019. The architecture of speaking. In Peter Hagoort (ed.), Human language: From genes and brains to behavior, 35–50. Cambridge, Massachusetts: MIT Press.
[15] Turk, Alice & Stefanie Shattuck-Hufnagel. 2020. Speech timing: Implications for theories of phonology, speech production, and speech motor control. New York: Oxford University Press.
[16] Coleman, John, Ladan Baghai-Ravary, John Pybus & Sergio Grau. 2012. Audio BNC: The audio edition of the Spoken British National Corpus. University of Oxford. http://www.phon.ox.ac.uk/AudioBNC.
[17] Baayen, R. Harald, Yu-Ying Chuang, Elnaz Shafaei-Bajestan & James P. Blevins. 2019. The discriminative lexicon: A unified computational model for the lexicon and lexical processing in comprehension and production grounded not in (de)composition but in linear discriminative learning. Complexity 2019: 1–39.
[18] Ivens, Stephen H. & Bertram L. Koslin. 1991. Demands for reading literacy require new accountability measures. Brewster: Touchstone Applied Science Associates.
[19] Rescorla, Robert A. & Allan Wagner. 1972. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In Abraham H. Black & William F. Prokasy (eds.), Classical conditioning II: Current research and theory, 64–99. New York: Appleton-Century-Crofts.
[20] Chuang, Yu-Ying, Marie L. Vollmer, Elnaz Shafaei-Bajestan, Susanne Gahl, Peter Hendrix & R. Harald Baayen. 2020. The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using linear discriminative learning. Behavior Research Methods.
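As a rough illustration of the linear mapping at the heart of LDL, the sketch below estimates a form-to-meaning transformation in closed form and derives a correlation-based "semantic support" measure. The cue and semantic matrices are invented toy values, and this is not the networks or pipeline trained in the study above.

```python
# Minimal LDL-style comprehension mapping (illustrative only): rows of C are binary
# sublexical cue vectors (e.g. triphones present in a word), rows of S are the
# corresponding semantic vectors. F maps cues to meanings and is estimated by least
# squares; the correlation between a word's predicted and target semantic vectors
# can then serve as a predictor of acoustic duration.
import numpy as np

C = np.array([[1, 0, 1],            # cue matrix: 4 words x 3 cues (toy values)
              [0, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
S = np.array([[0.2, 0.9, 0.1],      # semantic matrix: 4 words x 3 dimensions (toy values)
              [0.8, 0.1, 0.3],
              [0.5, 0.5, 0.6],
              [0.3, 0.2, 0.7]])

F = np.linalg.pinv(C) @ S           # least-squares estimate of the form-to-meaning mapping
S_hat = C @ F                       # predicted semantic vectors

for i, (pred, gold) in enumerate(zip(S_hat, S)):
    r = np.corrcoef(pred, gold)[0, 1]
    print(f"word {i}: correlation(predicted, target) = {r:.2f}")
```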

English Accent Accuracy Analysis in a State-of-the-Art Automatic Speech Recognition System
Guillermo Cámbara1, Alex Peiró-Lilja1, Mireia Farrús2 and Jordi Luque3
1Universitat Pompeu Fabra, 2Universitat de Barcelona, 3Telefónica Research, Spain

Nowadays, research in speech technologies has benefited greatly from recently created public-domain corpora containing thousands of recording hours. These large amounts of data are very helpful for training the new complex models based on deep learning technologies. However, a lack of dialectal diversity in a corpus is known to cause performance biases in speech systems [5], mainly for under-represented dialects, for example [6] or [7]. In this work, we propose to evaluate a state-of-the-art automatic speech recognition (ASR) deep-learning-based model, using unseen data from a corpus with a wide variety of labeled English accents from different countries around the world. The model has been trained with 44.5K hours of English speech from an open-access corpus called Multilingual LibriSpeech [4] (MLS), showing remarkable results in popular benchmarks. We test the accuracy of this ASR against samples extracted from another public corpus that is continuously growing, the Common Voice [2] dataset (CV). We then present graphically the accuracy in terms of Word Error Rate (WER) for each of the English accents included, showing that there is indeed an accuracy bias in terms of accentual variety, favoring the accents most prevalent in the training corpus.
The MLS corpus [4] is a larger version of the LibriSpeech dataset [1] derived from the LibriVox audio books, this time with transcribed recordings of 8 languages. As suggested in the LibriVox wiki1, there might be a higher representation of accents from America (48 speakers between US and Canada) and the UK (26 speakers) than from other countries like India or the Philippines (1-2 speakers). As opposed to the MLS dataset, the CV corpus used for evaluation does have accent labels, at least for 321K samples out of its total. The data consist of around 900 hours of recorded speech from volunteers reading scripted texts.
The ASR model pretrained with the MLS dataset (see architecture details in [4]) is used for transcribing the audio utterances in the CV corpus. The WER is then used to assess the accuracy of the neural ASR. This metric is the sum of word substitutions (S), deletions (D) and insertions (I) in the ASR transcript, divided by the total number of words in the ground-truth transcript (N):

WER = (S + D + I) / N

WER is computed on every test utterance, and the mean for every accent group is shown in Figure 1. As hypothesized, the best WER scores are achieved by the accents most likely to be represented in the training corpora, namely the American accents (US and Canada), plus accents from Ireland, England and New Zealand. Moreover, we see that the WER can degrade by up to an absolute 10% for accents with phonetic and prosodic characteristics further from American and UK English, such as Asian accents. This highlights that even newer models present accent bias, a problem that could be better tackled using large corpora with rich metadata like the CV dataset. This inspires further research into which phonetic and prosodic traits are most impactful for speech recognition performance, and into which methods could be most efficient for addressing such biases, besides straightforward data collection.

Figure 1. Word Error Rate (%) obtained from the tested ASR for every English accent found in the Common Voice corpus. Standard deviation of the mean is represented as error bars for every accent class.

Table 1. Number of sampled utterances and speakers from the Common Voice corpus, for each accent group (utterances counted in thousands).

Accent   US    CA   IR   EN    NZ   AF   WA    SC   BE    PH   AU   HK   IN    SG   ML
# utt    171K  21K  4K   47K   3K   3K   0.3K  2K   0.2K  2K   24K  1K   34K   2K   0.6K
# spkr   3747  445  92   1079  75   109  33    75   22    59   305  72   1068  33   46
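A minimal sketch of this evaluation loop is given below: WER is computed per utterance with a dynamic-programming edit distance and then averaged per accent group. It is self-contained toy code, not the authors' pipeline; accent labels and transcripts are invented placeholders.

```python
# Illustrative WER computation and per-accent aggregation (toy data; the study uses
# MLS-pretrained ASR output and Common Voice accent labels).
from collections import defaultdict
import math

def word_errors(reference, hypothesis):
    """Levenshtein distance over word sequences = S + D + I."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)], len(ref)

def wer(reference, hypothesis):
    errors, n = word_errors(reference, hypothesis)
    return errors / n

# Toy per-utterance results: (accent label, reference transcript, ASR hypothesis)
utterances = [
    ("US", "the cat sat on the mat", "the cat sat on the mat"),
    ("IN", "the cat sat on the mat", "the cat sat in the hat"),
    ("IN", "please call stella", "please call stela"),
]

per_accent = defaultdict(list)
for accent, ref, hyp in utterances:
    per_accent[accent].append(wer(ref, hyp))

for accent, scores in per_accent.items():
    mean = sum(scores) / len(scores)
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / len(scores))
    print(f"{accent}: mean WER = {100 * mean:.1f}%  (sd = {100 * sd:.1f}%)")
```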

References
[1] Panayotov, V., Chen, G., & Povey, D. 2015. LibriSpeech: an ASR corpus based on public domain audio books. ICASSP.
[2] Ardila, R., Branson, M., Davis, K., Henretty, M., Kohler, M., Meyer, J., Morais, R., Saunders, L., Tyers, F.M., & Weber, G. 2020. Common Voice: A Massively-Multilingual Speech Corpus. ArXiv.
[3] Vergyri, D., Lamel, L., & Gauvain, J.L. 2010. Automatic Speech Recognition of Multiple Accented English Data. INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan.
[4] Pratap, V., Xu, Q., Sriram, A., Synnaeve, G., & Collobert, R. 2020. MLS: A Large-Scale Multilingual Dataset for Speech Research. arXiv preprint arXiv:2012.03411.
[5] Tatman, R. 2017. Gender and Dialect Bias in YouTube’s Automatic Captions. Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, 53-59. Spain.
[6] Kachru, B. B. 1976. Indian English: A Sociolinguistic Profile of a Transplanted Language. Institute of Education Sciences, 1-52. Available from: https://files.eric.ed.gov/fulltext/ED132854.pdf
[7] Rahim, H.A. 2016. Malaysian English: Language Contact and Change (Book review). Kajian Malaysia 34(2): 149-152. http://dx.doi.org/10.21315/km2016.34.2.8

1 https://wiki.librivox.org/index.php/Accents_Table

Detection of phone-level pronunciation errors of L2-learners: Automatic speech recognition
Nina Hosseini Kivanani1,2, Roberto Gretter3, Marco Mattassoni3, & Giuseppe Daniele Falavigna
1University of Trento, Trento, Italy
2University of Groningen, Groningen, Netherlands
3Fondazione Bruno Kessler (FBK), Trento, Italy

Speech assessment is one of the key tasks in Computer Assisted Pronunciation Training (CAPT) [1], [2]. In this study, we develop a mispronunciation assessment system that checks the pronunciation of non-native English speakers, identifies the common mispronounced phonemes of Italian learners of English, and presents an evaluation of the non-native pronunciation observed in phonetically annotated speech corpora. We design an Automatic Speech Recognition (ASR) system that evaluates learners’ speech at the phoneme level to detect pronunciation errors. Using ASR-based systems allows a more learner-centered environment with less anxiety.
We used two non-native English labeled corpora for developing algorithms capable of detecting mispronunciations: (i) a corpus of Italian children (ChildIt), consisting of 5,268 utterances from 78 children [3], and (ii) a corpus of Italian adults (Interactive Spoken Language Education, ISLE), containing 5,867 utterances from 46 speakers [4]. Audio is passed through a phone-based ASR implemented using Kaldi, the open-source speech recognition toolkit; the time-aligned recognized phones are used with the audio to extract features. The language model and the acoustic model are used to identify a given audio segment’s word or phone sequences. Accordingly, IDE stands for an ideal reference (canonical phones for the pronounced words), MAN stands for the gold standard (really pronounced phones, manually annotated), and ASR stands for the automatically recognized phones, obtained using either a phone n-gram model or a manually derived phone error model.
Furthermore, we considered the following procedures in the acoustic model: (i) speech recognition based on the phone-level n-gram model, (ii) forced alignment based on the word transcriptions, and (iii) a Gaussian Mixture Modelling (GMM) classifier [5]; a classifier based on cepstral coefficients in combination with LDA, and one based on confidence measures (the so-called Goodness of Pronunciation score) using the acoustic model, were also used. To improve our detection system, we worked on the language model – at the phone level – to extract the mispronounced phones, and trained an acoustic model on speech training examples for the detection. The n-gram model output helped us obtain insights into the phone errors in the audio files. Applying the error language model is the novel approach to speech assessment here, combining the true transcriptions of a word with adapted transcriptions of the same word.
The most common evaluation metric for ASR systems is the Phone Error Rate (PER), which calculates the number of phone errors over the number of reference phones [5], [6] (cf. Eq. 1). The comparison of our models’ performance (n-gram models and error model) was done by calculating PER, and we chose the best model based on this metric. The PER output shows that our system is sufficiently robust, which was one of the goals of our study. Speech recognition phone error rates are typically greater for adults than for children. Overall, the highest average accuracy (cf. Eq. 1) to date was obtained using the error model. The accuracy results range from 72%-76% for correct error detection of phones in ChildIt (Table 1 & Figure 1). Furthermore, the accuracy when comparing ASR & Manual (MAN) output (cf. Figure 1) for ChildIt is 72%, and ASR & Ideal (IDE) is 76%. Our results show that using an error model is effective and can improve the performance of an ASR system in the detection of mispronounced phonemes with high accuracy (low PER, particularly for ChildIt). In fact, in our system, the recognition model that produced the best PER is the error language model, with a PER of 23%, which is better than the rates in previous studies. From the output of our ASR system, it is clear that the selected architecture is capable of detecting mispronounced phones. Our results show that the selected error model can discriminate correct sounds from incorrect sounds in native and non-native speech. The findings of this study will help language teachers and improve autonomous learning for language learners in their pronunciation progress.

PER = (S + D + I) / P

Equation 1. The PER takes into account the errors related to phoneme substitutions (S), phoneme deletions (D) and phoneme insertions (I); P stands for the number of reference phones. PER sums the three types of errors.

Corpora   Comparison   N-gram   Error model
ISLE      ASR & MAN    42%      43%
          ASR & IDE    36%      38%
          IDE & MAN    12%      –
ChildIt   ASR & MAN    38%      28%
          ASR & IDE    33%      23%
          IDE & MAN    15%      –

Table 1. Output of PER: the table shows the statistics of error detection results comparing the errors detected by our ASR system and the errors reported by annotators for each corpus. The comparison of the performance of our models (n-gram model and error model) was done by calculating PER; we chose the best model based on the PER metric. ASR vs. MAN output tells us how well an automatic system detects errors; ASR vs. IDE output gives us a feeling for what an automatic system can detect for a new utterance, where no manual annotation is available; IDE vs. MAN output gives us a true map of the errors made by Italian speakers when trying to speak English.

Figure 1. Overall detection accuracy for ISLE (left graph) & ChildIt (right graph).

References
[1] N. F. Chen and H. Li, “Computer-assisted pronunciation training: From pronunciation scoring towards spoken language learning,” in 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2016, pp. 1–7, doi: 10.1109/APSIPA.2016.7820782.
[2] C. Tejedor-Garcia, D. Escudero-Mancebo, E. Camara-Arenas, C. Gonzalez-Ferreras, and V. Cardenoso-Payo, “Assessing Pronunciation Improvement in Students of English Using a Controlled Computer-Assisted Pronunciation Tool,” IEEE Trans. Learn. Technol., vol. 13, no. 2, pp. 269–282, Apr. 2020, doi: 10.1109/TLT.2020.2980261.
[3] A. Batliner et al., “The PF_STAR children’s speech corpus,” 2005.
[4] W. Menzel et al., “The ISLE corpus of non-native spoken English,” in 2nd International Conference on Language Resources and Evaluation, LREC 2000, 2000, vol. 2, pp. 957–964.
[5] H. Strik, K. Truong, F. de Wet, and C. Cucchiarini, “Comparing different approaches for automatic pronunciation error detection,” Speech Commun., vol. 51, no. 10, pp. 845–852, 2009, doi: 10.1016/j.specom.2009.05.007.
[6] Y. Kim, H. Franco, and L. Neumeyer, “Automatic pronunciation scoring of specific phone segments for language instruction,” 1997.
[7] G. Kawai and K. Hirose, “A CALL system using speech recognition to teach the pronunciation of Japanese tokushuhaku,” 1998.
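For concreteness, the sketch below shows one standard way of deriving phone-level error labels and PER from such comparisons: align the recognized phone string against the canonical (IDE) string with dynamic programming and read substitutions, deletions and insertions off the alignment. The alignment routine and phone strings are illustrative assumptions, not the Kaldi-based system described above.

```python
# Illustrative phone-string alignment for mispronunciation detection (toy data).
# PER = (S + D + I) / P, where P is the number of canonical (reference) phones.
def align_phones(ref, hyp):
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    # Backtrace to label each edit operation
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            ops.append(("match" if ref[i - 1] == hyp[j - 1] else "sub", ref[i - 1], hyp[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("del", ref[i - 1], None)); i -= 1
        else:
            ops.append(("ins", None, hyp[j - 1])); j -= 1
    return list(reversed(ops))

ide = ["dh", "ih", "s", "ih", "z"]   # canonical phones for "this is"
asr = ["d", "ih", "s", "ih", "s"]    # recognized phones (toy output)
ops = align_phones(ide, asr)
errors = sum(op[0] != "match" for op in ops)
print(ops)
print(f"PER = {errors}/{len(ide)} = {errors / len(ide):.2f}")
```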

Phonetic and phonological analysis of prosodic events in French with deep learning networks
Philippe Hulsbergen
University of Toronto

Various deep learning models such as the Variational Autoencoder (VAE) [1] and Long Short-Term Memory (LSTM) [2] appear well suited to classify, identify and generate time series such as prosodic events in speech. They may at first appear very attractive for obtaining pertinent phonetic and phonological information in this field. However, the large number of parameters (typically over 1,000,000) involved in these processes makes direct extraction of meaningful phonetic and phonological information almost impossible. Therefore, instead of using raw acoustic data for training (e.g., WaveNet), feature engineering – features derived from specific knowledge in the field – can be used not only to speed up the learning iteration process, but also to evaluate the pertinence of the descriptive features extracted from raw acoustic data through the performance of the selected deep learning model, as both VAE and LSTM can identify and generate new well-formed sequences of prosodic events not found in the training corpus.
The VAE and LSTM implementations are applied to the SIWIS corpus [3], of which 4477 relatively short sentences in French, with various syntactic structures, among about 600 different ones, were retained, read by 31 different speakers. In order to obtain phonologically meaningful results, sentences from the training corpus were annotated with a modified ToBI system, pertaining to prosodic events located on stressed syllables (vowels), i.e. pitch accents, and on the final syllables (vowels) of syntactic groups, i.e. boundary tones, which in French occur on the same accent-phrase (i.e. stress-group) final syllables. Prosodic events are transcribed as HL* and LH* for a melodic movement above the glissando threshold, i.e. perceived as a melodic change rather than a static tone, and as H* for a melodic target perceived as a static tone [4]. The glissando gives an indication of the perception of a melodic change and is defined as (st2 - st1) / (t2 - t1), with st1 and st2 the starting and ending F0 values of the stressed vowel at times t1 and t2, evaluated in semitones (the logarithm of the F0 value in Hz), whereas the glissando threshold lies in the range between 0.16 / (t2 - t1)² and 0.32 / (t2 - t1)² [4].
Tokenization of accent-phrase entries implicitly includes the number of syllables and the syntactic category of each unit. As the number of different sentences is relatively limited, the positions of stressed syllables are verified manually.
The processing steps, implemented in C++ and Python (using the Keras and TensorFlow libraries), are as follows:
- Resampling of the recordings from 44100 Hz to 22050 Hz to better accommodate the proprietary automatic segmentation algorithm.
- Automatic segmentation into words and phones with IPA annotation, using a forced-alignment algorithm comparing corpus sentences with TTS-generated speech.
- Generation of the fundamental frequency curve for each sentence (spectral and/or autocorrelation-based algorithm).
- Data encoding with a word-lexicon tokenizer including punctuation marks (purposely not directly including grammatical class), and fundamental frequency values at the beginning and end of annotated stressed vowels. These data implicitly include vowel and word duration, syllabic rate, intensity variation, etc.
- The initial model operates on batches of 32 units and an embedding of 100.
Preliminary results show the pertinence of stressed vowels as locators of prosodic events in French, as well as the extent of their acceptable phonetic variation.

References
[1] Kingma, D. P. and Welling, M. 2019. An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning, Vol. xx, No. xx, 1–18.
[2] Hochreiter, S. and Schmidhuber, J. 1997. Long short-term memory. Neural Computation, 9(8), 1735–1780.
[3] Yamagishi, J. et al. 2016. The SIWIS French Speech Synthesis Database. University of Edinburgh, School of Informatics, The Centre for Speech Technology Research. https://doi.org/10.7488/ds/1705.
[4] Rossi, M. 1971. Le seuil de glissando ou seuil de perception des variations tonales pour la parole. Phonetica, 23, 1-33.
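The glissando criterion can be made concrete in a few lines of code. The sketch below applies the formula as given in the abstract to invented F0 values; the choice of 0.32 for the threshold constant is one end of the reported 0.16-0.32 band.

```python
# Glissando-based classification of a stressed-vowel F0 movement (illustrative).
# The rate of change in semitones/second is compared with a duration-dependent threshold.
import math

def semitones(f0_hz, ref_hz=1.0):
    return 12.0 * math.log2(f0_hz / ref_hz)

def is_melodic_change(f0_start, f0_end, t_start, t_end, k=0.32):
    """True if the F0 movement exceeds the glissando threshold k / (t2 - t1)^2."""
    dt = t_end - t_start
    rate = (semitones(f0_end) - semitones(f0_start)) / dt   # st/s
    threshold = k / (dt ** 2)                                # k in [0.16, 0.32]
    return abs(rate) > threshold

# A 120 Hz -> 150 Hz rise over 100 ms: transcribed LH* if above threshold, else H*
print(is_melodic_change(120.0, 150.0, 0.00, 0.10))
```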

WORKSHOP
Prosodic variation: the role of past and present contact in multilingual societies

Intonation in Maltese and Maltese English - the form and function of early H peaks
Alexandra Vella1 and Martine Grice2
1University of Malta, 2University of Cologne

Maltese English (henceforth MaltE) is the variety of English spoken in Malta. It has been described as a “continuum of continua” due to variability, not only across speakers, but also across different speaking contexts. It does not fit easily into the Kachruvian three-circles model ([1]; [2]). According to Schneider’s Dynamic Model ([3]; [4]; [5]), which has been applied to MaltE by [6], the variety is transitioning from phase 3, nativisation, to phase 4, endonormative stabilization. Entry into this phase comes with increasing recognition, in this case of MaltE, as a variety in its own right. The co-existence of MaltE and Maltese in the everyday usage of Maltese bilinguals results in an interesting contact situation in which the two languages influence each other mutually at all levels of linguistic structure ([7], [8]). Here we focus on intonation, and in particular on a salient feature of both languages: the prevalence of F0 peaks at the beginning of words and phrases. Following the analysis of this peak as a H tone, we refer to these as early H peaks.
Early H peaks were first analysed in Maltese as an initial phrasal edge tone, drawing on examples of wh-questions with wh-words in initial position ([9]). By placing wh-words in different positions in the phrase, [10], following on from Vella ([11], [12]), provided experimental evidence for an analysis of these H peaks as H tones associated with the wh-words themselves, rather than with the phrase. Evidence from tonal alignment indicates that the H tone is not associated with the lexical stress of these wh-words, as is the case for the same words when used in indirect questions, but rather with their left edge. Thus, although Maltese has regular pitch accents associated with lexical stress, the language also has word-initial tones. What is still unclear is the contribution of these tones to the marking of prominence in the language, i.e., whether a word-initial tone (often high in the speaker’s range) has a prominence-cueing function similar to that of a pitch accent on that word. Importantly, word-initial tones are not confined to wh-questions; they have also been attested in imperatives, exclamatives and vocatives ([9], [10]).
In this paper, we investigate early H peaks in both languages, Maltese and MaltE, drawing on a corpus of television interviews in which, unlike in elicited speech, the speakers are engaged in real communication. This corpus allows us to shed light not only on the formal properties of early H peaks (occurrence at the word edge indicating edge association) but also on the question as to whether they are prominence-cueing, insofar as it is possible to assess information structural content from the discourse context.
First results from the interviews indicate that these peaks are highly frequent in wh-questions in both languages. For examples of wh-questions in Maltese and MaltE, see Figures 1 and 2 respectively. Importantly, there is no contrastive focus on the wh-word in either case, and the question in Figure 2 cannot be interpreted as an echo question, although its superficial form might resemble such a question in other varieties of English. Crucially, in neither language are early H peaks restricted to wh-questions. In fact, they are found in a wide range of sentence modalities/speech act types and contexts, occurring on pronouns, adverbials and conjunctions, and rarely coinciding with a constituent that appears to be newsworthy in the given context. Moreover, in words with more than one syllable or with non-initial stress, the peak occurs on the initial syllable (e.g. in the phrase “unless we have that” the peak is on the first syllable of “unless”).
We conclude that these peaks have a similar form and function in both Maltese and MaltE. They are distinct from pitch accents in their placement and in terms of their participation in cueing prominence, and thus information structural content.

References
[1] Kachru, B. B. 1990. The alchemy of English: The spread, functions and models of non-native Englishes. University of Illinois Press.
[2] Grech, S. 2015. Variation in English: Perception and patterns in the identification of Maltese English. Unpublished PhD thesis, University of Malta.
[3] Schneider, E. W. 2003. The dynamics of new Englishes: From identity construction to dialect birth. Language 79(2), 233-281.
[4] Schneider, E. W. 2007. Postcolonial English: Varieties around the world. Cambridge: Cambridge University Press.
[5] Schneider, E. W. 2011. English around the world. Cambridge: Cambridge University Press.
[6] Thusat, J., Anderson, E., Shante, D., Ferris, M., Javed, A., Laughlin, A., McFaland, C., Sangsiri, R., Sinclair, J., Vastalo, V., Whelan, W. & Wrubel, J. 2009. Maltese English and the nativization phase of the dynamic model. English Today 25(2), 25-32.
[7] Vella, A. 2012. Languages and language varieties in Malta. International Journal of Bilingual Education and Bilingualism, 532-552. DOI: 10.1080/13670050.2012.716812.
[8] Grech, S. & Vella, A. 2018. Rhythm in Maltese English. In Paggio, P. and Gatt, A. (Eds.), The languages of Malta, 203-223. Berlin: Language Science Press. DOI: 10.5281/zenodo.1181797.
[9] Vella, A. 1995. Prosodic structure and intonation in Maltese and its influence on Maltese English. Unpublished PhD thesis, University of Edinburgh.
[10] Grice, M., Vella, A. & Bruggeman, A. 2019. Stress, pitch accent, and beyond: Intonation in Maltese questions. Journal of Phonetics, 76, 100913. https://doi.org/10.1016/j.wocn.2019.100913.
[11] Vella, A. 2007. The phonetics and phonology of wh-question intonation in Maltese. Proceedings of ICPhS XVI, Saarbrücken, 1285-1288.
[12] Vella, A. 2011. Alignment of the “early” HL sequence in Maltese falling tune wh-questions. Proceedings of ICPhS XVII, Hong Kong, 2062-2065.

Prosodic transfer in long-standing Bulgarian-Turkish bilingualism: evidence from unstressed vowel reduction
Mitko Sabev1 and Elinor Payne2
1University of Cambridge, 2University of Oxford

This paper reports the results of an investigation into prosodic transfer in the speech of a long-standing bilingual Bulgarian–Turkish community in North-eastern Bulgaria. Bulgarian is characterised by a prosodically conditioned process of vowel reduction, which affects both duration and vowel quality: a six-vowel system in stressed syllables is reduced to a 4- or 3-vowel system in unstressed syllables. Turkish, on the other hand, is characterised by a vowel-affecting process of a fundamentally different nature, namely vowel harmony, and word-level prominence is typically described as pitch-accent based rather than as dynamic stress. Given that unstressed vowel reduction is a pervasive prosodic process in Bulgarian, and theoretically at odds with vowel harmony in Turkish, the question arises as to whether long-standing contact between the two languages can result in transfer effects in either direction.
We report the results of an acoustic study of stressed and unstressed vowels in these two understudied varieties of bilingual Bulgarian and Turkish, comparing them to each other and to monolingual Eastern Bulgarian and Istanbul Turkish. The monolingual Bulgarian spoken in the same geographic area exhibits neutralising categorical reduction, whereby non-high vowels raise and merge with their high counterparts ([1], [2], [3]). Standard (monolingual) Turkish non-high vowels, however, undergo only gradient undershoot in unstressed syllables, with no evidence of categorical shift and merger ([1], [2]). The data were obtained from 14 bilingual speakers in experiments designed to elicit penultimately stressed 4-syllable nonsense words (and lexical items to verify linguistic representativeness). Measurements for duration, F1 and F2 are reported and used to evaluate distinct aspects of vowel stresslessness (undershoot, categoricality and neutralisation). To quantify overall reduction and contrast, we compare pairs of token sets (stressed vs unstressed, high vs non-high) using the Pillai statistic of significant MANOVAs with F1, F2 and duration as response variables. The three acoustic parameters are also separately assessed across different syllables, using a variety of metrics, including ANOVA, dependent t-tests and the overlap of probability density functions.
Our results reveal intricate transfer patterns between the contact varieties. Bilingual Turkish stressed vowel quality bears striking resemblance to that of Eastern Bulgarian vowels: spectral properties are practically identical, indicative of transfer from the ambient monolingual variety of Bulgarian. Bilingual Bulgarian vowel qualities are close enough to those in Bilingual Turkish to assume identical stressed targets in the bilinguals’ phonologies. Bilingual Bulgarian reduction, however, is very weak (suggesting Turkish influence), whereas in Bilingual Turkish the level of reduction is intriguingly higher than in not just Istanbul Turkish (as expected) but even Bilingual Bulgarian. This suggests that present-day Bilingual Bulgarian rests upon a powerful Turkish substratum – a vestige of a once Turkish-dominated phonology, without unstressed vowel reduction – while present-day Bilingual Turkish phonology shows clear evidence of cross-linguistic transfer and approximation to ambient Eastern Bulgarian. At the same time, clear differentiation between Bilingual Bulgarian and Eastern Bulgarian reduction patterns points to a degree of resistance to cross-dialectal transfer. We are thus not dealing with wholesale or unidirectional borrowing, but with two layers of cross-linguistic transfer. In the first, segmental properties (vowel quality) have been borrowed from ambient Eastern Bulgarian into the bilingual varieties; Eastern Bulgarian also affects Bilingual Turkish prosodically, as evidenced in intensified vowel reduction. In the second, a Turkish prosodic substrate exerts influence on Bilingual Bulgarian. We consider the implications for understanding mechanisms of prosodic transfer in contact situations, and the interaction of prosodic transfer with segmental phonology.

References
[1] Sabev, M. 2020. Spectral and Durational Unstressed Vowel Reduction: An acoustic study of monolingual and bilingual speakers of Bulgarian and Turkish. DPhil thesis, University of Oxford.
[2] Sabev, M. & Payne, E. 2019. A cross-varietal continuum of unstressed vowel reduction: evidence from Bulgarian and Turkish. In Calhoun, S., Escudero, P., Tabain, M., & Warren, P. (eds.), Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia 2019, 1164–1168. Canberra, Australia: Australasian Speech Science and Technology Association Inc.
[3] Stojkov, S. 1962. Bǎlgarska dialektologija (Bulgarian dialectology). Sofia: Prof. Marin Drinov Publishing at the Bulgarian Academy of Sciences.
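As a pointer to how the overlap measure works, the Pillai trace used above can be computed directly from the between-group and within-group scatter matrices of a one-way MANOVA. The sketch below uses invented F1/F2/duration values and is not the authors' analysis script (which would more typically be run in R).

```python
# Pillai trace for a stressed vs. unstressed comparison on (F1, F2, duration).
# V = trace(H @ inv(H + E)), with H the between-group and E the within-group scatter
# matrices; values near 0 indicate heavy overlap (strong reduction/merger), values
# near 1 indicate well-separated categories. Toy data, illustrative only.
import numpy as np

def pillai_trace(group_a, group_b):
    X = np.vstack([group_a, group_b])
    grand_mean = X.mean(axis=0)
    H = np.zeros((X.shape[1], X.shape[1]))
    E = np.zeros_like(H)
    for g in (group_a, group_b):
        gm = g.mean(axis=0)
        diff = (gm - grand_mean)[:, None]
        H += len(g) * diff @ diff.T        # between-group scatter
        centred = g - gm
        E += centred.T @ centred           # within-group scatter
    return float(np.trace(H @ np.linalg.inv(H + E)))

rng = np.random.default_rng(1)
stressed   = rng.normal([650, 1400, 0.09], [40, 80, 0.01], size=(50, 3))  # F1, F2, duration
unstressed = rng.normal([520, 1500, 0.06], [40, 80, 0.01], size=(50, 3))
print(f"Pillai = {pillai_trace(stressed, unstressed):.2f}")
```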

A typological assessment of word prosody in contact
Ricardo Napoleão de Souza and Kaius Sinnemäki
University of Helsinki

The language contact literature often suggests that prosody is particularly prone to contact-induced change (e.g. [1], [2], [3], [4], among many others). Yet, a broad picture of the types and prevalence of changes to prosody systems from a typological perspective is still lacking. This paper aims to fill that gap by providing an assessment of contact-induced changes to word prosody systems in languages from all around the globe. Specifically, it focuses on changes to lexical stress and lexical tone systems in which contact alters the complexity of previously existing systems (e.g. development of unpredictable/predictable stress patterns, additions/losses of tone contours, etc.). Using reports from the areal linguistics literature, it further aims to examine the extent to which prosodic phenomena are actually reported in studies of contact, which have been ostensibly driven by morphosyntactic variables.
Method. Languages from all areas of the world were represented in our convenience sample, constructed on the basis of geographical location. Our areal groupings derive from the technique described in [5], which employs geographical location as well as genealogical affiliation criteria. Languages were sampled from the following groupings: (a) Africa; (b) Central & South America; (c) North America; (d) Oceania; (e) Europe, South & West Asia; and (f) Southeast Asia. For each grouping, we searched areal surveys (e.g. [6], [7], [2], [8], [9], [10], [1]) for data on contact-induced changes in the variables of interest. Reference grammars and articles were at times consulted for clarification. The final sample contained 38 languages from five of the six pre-determined areas. No reports on changes to word prosody in languages of Oceania, as classified by [5]'s method, were found. Note that in this classification, Austronesian languages belong with Southeast Asia. Thirteen language families are represented in the current sample.
Results. Changes to lexical tone systems comprise almost half of the sample (18/38, or 47 percent). Of these, changes reported to deal with convergence in the number, contour pattern, or laryngeal features accompanying the tones themselves comprised 72 percent of the reports (13/18). Loss of tone and tone mergers represent only 27 percent of cases (5/18) of changes to tone systems. Changes to lexical stress systems (20/38, or 53 percent) most often resulted from increases in the complexity of the system (13/20), in which contact either reduced the predictability of stress placement or gave rise to so-called pitch-accent systems. Fixation of stress placement was reported for 35 percent (7/20) of the languages in the current sample.

Discussion. Contact-induced changes to word prosody are reported in all areas of the world, with the exception of Oceania. Changes to lexical tone and lexical stress systems are almost equally represented in the contact literature. Of the changes observed, we interpret the majority as increases in the complexity of word prosody systems of languages under contact. These results seem to contradict findings from the loan phonology literature that contact often leads to simplification of suprasegmental structure ([11]).

Despite the claim that prosodic change is ubiquitous in contact situations, the current study finds that comparatively few studies in the areal linguistics literature actually describe it, and most of those are very short and lacking in detail. These findings may reflect a bias towards morphosyntactic and segmental phonological variables in the literature rather than the absence of an effect of contact on prosody. Likewise, the lack of reports on languages from Oceania may result from a bibliographical bias. Taken as a whole, the current findings suggest the need for more exchange between researchers concerned with prosody and those who work on contact. These methodological issues present a special challenge to prosody research given the close relation between word prosody and intonation (e.g. [12]).

References
[1] Epps, P. 2009. Language classification, language contact, and Amazonian prehistory. Language and Linguistics Compass 3(2), 259-288.
[2] Matras, Y. 2009. Language Contact. Cambridge: Cambridge University Press.
[3] Rice, K. 2010. Accent in the native languages of North America. In H. van der Hulst, R. Goedemans, & E. van Zanten (eds.), A survey of word accentual patterns in the languages of the world, 155-248. Berlin: De Gruyter Mouton.
[4] van der Hulst, H., Goedemans, R., & Rice, K. 2015. Word Prominence and Areal Linguistics. In Raymond Hickey (ed.), The Cambridge Handbook of Areal Linguistics, 161-203. Cambridge: Cambridge University Press.
[5] Maddieson, I., Flavier, S., Marsico, E., Coupé, C. & Pellegrino, F. 2013. LAPSyD: Lyon-Albuquerque Phonological Systems Database. Proceedings of the 14th Interspeech Conference, Lyon, France, 25-29.
[6] Salmons, J. 1992. Accentual Change and Language Contact: A Comparative Survey and a Case Study of Northern Europe. London: Routledge.
[7] Matisoff, J.A. 2001. Genetic versus contact relationship: prosodic diffusibility in South-East Asian languages. In Alexandra I.U. Aikhenvald and R.M.W. Dixon (eds.), Areal diffusion and genetic inheritance: problems in comparative linguistics, 291-327. Oxford: Oxford University Press.
[8] Foley, W. A. 2010. Language contact in the New Guinea region. In Raymond Hickey (ed.), Handbook of Language Contact, 27-50. Oxford: Blackwell.
[9] Mithun, M. 2010. Contact in North America. In R. Hickey (ed.), Handbook of Language Contact, 673-694. Oxford: Blackwell.
[10] Hieda, O., König, C. & Nakagawa, H. 2011. Geographical Typology and Linguistic Areas: With Special Reference to Africa. Amsterdam: John Benjamins.
[11] Kang, Y. 2011. Loan Phonology. In Marc van Oostendorp, Colin J. Ewen, Elizabeth V. Hume, & K. Rice (eds.), The Blackwell Companion to Phonology, Vol. 4: Phonological Interfaces, 2258-2283. Malden: John Wiley & Sons.
[12] Jun, S.-A. & Fletcher, J. 2014. Methodology of studying intonation: from data collection to data analysis. In S.-A. Jun (ed.), Prosodic Typology II: The Phonology of Intonation and Phrasing, 493-519. Oxford: Oxford University Press.


Prosodic structures in New Englishes: Evidence from Nigerian English
Carlos Gussenhoven
Radboud University, Nijmegen, and National Chiao Tung University, Hsinchu, Taiwan

Language contact ranges from dialect continua via multi-dialectal or multi-lingual communities to dominant languages in substrate or adstrate language environments. After briefly illustrating prosodic contact effects in the first two situations, I will concentrate on the third situation in the context of New Englishes, with Nigerian English as the main case.

The phonology of New Englishes will result from matching the phonetic forms of British English with native phonetic forms that resemble them. Their phonological representations are the building blocks of the new phonological grammar. Even though this grammar may be rather different from that of BrE, the overall phonetic forms it produces should be expected to resemble the phonetic forms of BrE reasonably well. NigE illustrates this imbalance between the phonological and the phonetic similarities with superstrate English. In this talk I will summarize my slowly developing understanding of the NigE grammar, which was helped both by my collaboration with Inyang Udofot and by a number of fact-finding experiments.

Taking the typological features of BrE as a starting point, I will establish how these have been interpreted in NigE. Intonational structures arise from sequences of word prosodic structures, so that both word prosody and sentence prosody need to be addressed. As for word prosody, English has trochaic feet from the right word edge, creating stressed and segmentally reduced unstressed syllables, has at most one unstressed syllable at the word beginning, has footless function words, and distinguishes between primary and secondary stress.

NigE has left-edge stress, either initial or peninitial. It has unstressed function words, without the distributional restriction on unreduced vowels in unstressed syllables.

As for the post-lexical prosody of BrE, I assume maximum accentuation of stressed syllables in non-compound words, causing e.g. ˌAdiˈronˌdacks to be ADiRONdacks. Morphological derivations and post-lexical rules produce a slew of accent deletions, producing minimal pairs like the WHITE House/HOUSE, an old/OLD ENGlish teacher, DOGS must be carried/CARRied, and LONdon? I've never BEEN in the city/been in the CIty. In addition, it has tonally distinct pitch accents, like H*L, L*, etc., whose starred tone associates with the accented syllables. All other tones remain floating. Optional downstep creates sequences of descending targets of H*-tones. Interpolations between tonal targets ignore boundaries within the intonational phrase. NigE invariably inserts a LH* pitch accent on every non-function word, with H* going to the stressed syllable and spreading right to the word end. Function words have L. LH* is an obligatory trigger of downstep, whereby L uniquely remains floating in words with initial stress. Final L% and H% are used in statements and questions, respectively. After the formation of non-function words, there are no morphological or postlexical tone or stress rules.

The pitch contours of BrE downstepping utterances are very much like NigE contours. Still, the structural differences will reveal themselves. From the point of view of NigE, the floating status of the non-starred intonational tones in BrE leaves many syllables tonally unspecified. Quite unlike the continuing interpolations of BrE, every NigE non-function word is marked by a pitch drop and every function word is either followed by level pitch, as before a word with peninitial stress, or a pitch upstep. However, there is no research indicating that this affects word recognition for NigE listeners to BrE speech. From the point of view of BrE, the absence of any post-lexical tonal deletions gives NigE a rhythmic character, with too many 'stresses'.
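To make the tone-assignment rules above concrete, here is a very rough toy sketch in Python of how they might be applied to a string of words, under one reading of the description; the word list, syllabifications and label format are invented, and this is not the author's implementation (downstep scaling and phrasing details are ignored beyond a single final boundary tone).

    def nige_tones(words, utterance_type="statement"):
        """Toy rendering of the NigE tonal rules sketched above.
        words: list of (syllables, is_function_word, stress_index) triples.
        Returns per-syllable tone labels plus a final boundary tone."""
        labels = []
        first_accent = True
        for syllables, is_function, stress in words:
            if is_function:
                labels += [(syl, "L") for syl in syllables]   # function words carry L
                continue
            # LH* on every non-function word: H* goes to the stressed syllable and
            # spreads right to the word end; accents after the first are downstepped.
            h = "H*" if first_accent else "!H*"
            first_accent = False
            for i, syl in enumerate(syllables):
                if i < stress:
                    # leading L docks on a pre-stress syllable; with word-initial
                    # stress this branch never runs, so the L remains floating
                    labels.append((syl, "L"))
                else:
                    labels.append((syl, h if i == stress else h.replace("*", "")))
        return labels, ("H%" if utterance_type == "question" else "L%")

    # 'the' as a function word; toy syllabifications of 'teacher' and 'arrived'.
    print(nige_tones([(["the"], True, 0),
                      (["tea", "cher"], False, 0),
                      (["a", "rrived"], False, 1)]))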

Language contact and the intonation of yes/no-questions in Catalan and Spanish: The effects of gender and language dominance
Jonas Grünke
JGU Mainz

This paper examines the intonation contours of pragmatically neutral, i.e. information-seeking, yes/no-questions used in Catalan and Spanish by Catalan-Spanish bilinguals. Although some north-western varieties may also show falling patterns (e.g., [1], [2], [3]), this question type clearly tends to have a final rising intonation in non-contact varieties of Peninsular Spanish (cf. the overview in [4]). Catalan dialects, on the other hand, display a rich variety of intonation contours, often allowing for both final rising and falling patterns [5]. In Central Catalan, yes/no-questions headed by the sentence-initial interrogative particle que generally show a final falling intonation pattern, whereas questions without this particle resemble their Spanish counterparts in presenting a final rise. It has been suggested that the use of the particle depends mostly on proximity relations in discourse ([6], [7]), but in Northern Central Catalan (as spoken e.g. in Girona), que-questions are said to be interpreted as anti-expectational ([6]), while neutral questions necessarily display a rising contour [5]. These differences between Spanish and Catalan can potentially lead to cross-linguistic influence (CLI). Many previous studies have demonstrated that language contact can have an effect on intonation (e.g., [8], [9], [10], [11]). However, the precise outcome and the direction of CLI may differ from one bilingual community to another, as well as between the members of a single community, i.e. depending on a variety of social factors.

This study thus addresses the following research questions: (1) What intonation contours are used by bilinguals in Catalan and Spanish neutral yes-no questions with and without the interrogative particle que? (2) Do bilinguals use similar or different intonation contours in their two languages? and (3) What are the effects of gender and language dominance on the intonation of Spanish and Catalan yes/no-questions? The data come from both a Catalan and a Spanish discourse completion task (DCT; adapted from [12], [13]) as well as from a short reading task containing a series of Spanish questions headed by que. The participants were 31 Catalan-Spanish bilinguals (18 females, 13 males) from Girona (Catalonia), aged 18–23 (mean: 19.6). Based on an adapted version of the BLP ([14]), 20 were classified Catalan-dominant (CD) and 11 Spanish-dominant (SD). The data were annotated in Praat ([15]) using the respective Spanish and Catalan ToBI conventions ([4], [5], [16]). A total of 367 yes/no-questions were analysed (Spanish DCT: 154; Catalan DCT: 151; reading task: 62). The analysis focused on the nuclear pitch accent and the final boundary tone.
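As an illustration only (the study itself relied on manual Sp_ToBI/Cat_ToBI labelling in Praat), the sketch below shows one crude automatic way to call a final boundary "rising" or "falling" from the f0 slope over the last 300 ms of voicing; the file name, time step and window size are placeholder assumptions, and the praat-parselmouth package is assumed to be installed.

    import numpy as np
    import parselmouth

    snd = parselmouth.Sound("question.wav")        # placeholder file name
    pitch = snd.to_pitch(time_step=0.01)
    f0 = pitch.selected_array['frequency']         # Hz; 0 where unvoiced
    times = pitch.xs()

    voiced = f0 > 0
    f0, times = f0[voiced], times[voiced]
    window = times >= times[-1] - 0.3              # last 300 ms of voicing

    # Least-squares f0 slope in semitones per second, to normalise across speakers.
    st = 12 * np.log2(f0[window] / f0[window][0])
    slope = np.polyfit(times[window], st, 1)[0]
    print("rising boundary (H%)" if slope > 0 else "falling boundary (L%)")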
The DCT results revealed that both rising and falling nuclear contours can be found in neutral yes/no-questions produced by bilinguals from Girona, although the former are much more frequent (Sp. 95%, Cat. 77%). Falling patterns usually co-occur with the particle que (90%), which heads 5% of the Spanish (cf. Fig. 1) and 30% of the Catalan neutral yes-no questions (counter [4, 5]). In Spanish, all but one que-question stemmed from CD bilinguals; in Catalan, both dominance groups produced similar amounts of que-questions. Interestingly, some Catalan que-questions were pronounced with a final rising intonation contour (to an extent of 53% among SDs and 14% among CDs). In the reading task, too, SD speakers produced que-questions more frequently with final rising contours than the CD group. Gender, finally, does not seem to play a role. In sum, the findings show a fair deal of CLI in both directions, which can be explained at least partly by the language dominance of the bilinguals. The study thus contributes to the limited research on the intonation of yes/no-questions in bilingual Catalan and Spanish and the effects of social factors on intonation in situations of language contact.

Figure 1. Neutral yes-no question headed by que with falling intonation (Girona Spanish, left panel) and rising intonation (Girona Catalan, right panel).

References
[1] Fernández Rei, E. 2019. Galician and Spanish in Galicia: Prosodies in contact. Spanish in Context 16, 438–461.
[2] López-Bobo, M. J. & Cuevas-Alonso, M. 2010. Cantabrian Spanish intonation. In [12], 49–85.
[3] Elordieta, G. & Romera, M. 2020. The influence of social factors on the prosody of Spanish in contact with Basque. International Journal of Bilingualism. Online first, 1–32.
[4] Hualde, J. I. & Prieto, P. 2015. Intonational variation in Spanish: European and American varieties. In Frota, S. & Prieto, P. (Eds), Intonational variation in Romance. Oxford: OUP, 350–391.
[5] Prieto, P., Borràs-Comes, J., Cabré, T., Crespo-Sendra, V., Mascaró, I., Roseano, P., Sichel-Bazin, R., & Vanrell, M. 2015. Intonational phonology of Catalan and its dialectal variation. In Frota, S. & Prieto, P. (Eds), Intonational variation in Romance. Oxford: OUP, 9–62.
[6] Prieto, P. & Rigau, G. 2007. The Syntax-Prosody Interface: Catalan interrogative sentences headed by que. Journal of Portuguese Linguistics 6 (2), 29–59.
[7] Astruc, Ll., Vanrell, M., & Prieto, P. 2016. Cost of the action and social distance affect the selection of question intonation in Catalan. In Armstrong, M.E., Henriksen, N., & Vanrell, M. (Eds), Intonational Grammar in Ibero-Romance: Approaches across linguistic subfields. Amsterdam: John Benjamins, 91–114.
[8] Colantoni, L. & Gurlekian, J. 2004. Convergence and intonation: Historical evidence from Buenos Aires Spanish. Bilingualism: Language and Cognition 7, 107–119.
[9] Elordieta, G. 2003. The Spanish intonation of speakers of a Basque pitch-accent dialect. Catalan Journal of Linguistics 2, 67–95.
[10] Simonet, M. 2011. Intonational convergence in language contact: Utterance-final f0 contours in Catalan–Spanish early bilinguals. Journal of the IPA 41 (2), 157–184.
[11] Kireva, E. 2016. Prosody in Spanish-Portuguese contact. Ph.D. dissertation, University of Hamburg.
[12] Prieto, P. & Roseano, P. (Eds) 2010. Transcription of intonation of the Spanish language. Munich: Lincom.
[13] Prieto, P. & Cabré, T. (Coords) 2007–2012. Atles interactiu de l'entonació del català.
[14] Birdsong, D., Gertken, L.M., & Amengual, M. 2012. Bilingual language profile: An easy-to-use instrument to assess bilingualism. The Center for Open Educational Resources and Language Learning, University of Texas at Austin.
[15] Boersma, P. & Weenink, D. 2008. Praat. Computer program.
[16] Estebas-Vilaplana, E. & Prieto, P. 2010. Castilian Spanish Intonation. In [12], 17–48.

Basque prosodic features in Spanish: language contact and social factors
Gorka Elordieta1 and Magdalena Romera2
1University of the Basque Country, 2Public University of Navarre

Within the growing field of research on phonetic and phonological issues of language contact and bilingualism, aspects of suprasegmental phonology have started receiving more attention, especially in prosody and intonation. A particularly interesting issue is that the presence of features of a language variety (LV-A) in another language variety (LV-B) is variable within the contact population. The main goal of this paper is to show that individual social factors of speakers of LV-A may help explain the differences among speakers in the degree of presence of LV-B features. Such factors can be the degree of contact of LV-A speakers with LV-B speakers and the attitudes of LV-A speakers towards LV-B and the LV-B ethnolinguistic group. [1], [2], [3] and [4] constitute work along these lines, on intonational features of Catalan and Basque in the variety of Spanish spoken in Majorca and the Basque Country, and on the presence of English intonational features in the variety of Polish spoken by Polish immigrants.

The present study analyzes the prosodic characteristics of the variety of Spanish in contact with Basque (in the Basque Country and in Navarre, Spain). We focus on information-seeking yes/no questions, which present different intonation contours in Spanish and Basque. In Castilian Spanish these sentences end in a rising contour ([5], [6], [7], [8], among others), whereas in Basque they end in a falling or rising-falling circumflex contour (cf. [9], [10], [11], [12]). For Madrid Spanish, [13] report a majority of falling final contours in yes/no questions in spontaneous speech, but according to the authors' descriptions, the questions displaying falling contours are topic-continuators or confirmation-seeking rather than neutral or information-seeking, like in our study. In fact, [13] report final rising contours for absolute interrogatives that initiate a topic, like the interrogatives in our study. Our interrogatives were obtained from semi-directed interviews, where the interviewer (the subjects of our study) asked questions on topics related to the interlocutor's life and his/her views on Basque, Spanish and the Basque and Spanish ethnolinguistic groups. We present data from semi-directed interviews in Spanish with 36 speakers (monolingual Spanish, L1 Spanish-L2 Basque and L1 Basque-L2 Spanish) from urban and rural areas in the Basque Country and Navarre. This study continues our previous work on this topic ([2], [4]) by including speakers from Navarre.
All the speakers presented a substantial presence of Basque-like intonational contours at the end of information-seeking interrogatives regardless of their linguistic profile, but speaker differences in the degree of presence of such features were also observed. Greater variability was found in urban settings, where Spanish is pervasive, and a positive correlation was observed between attitudes towards Basque and the degree of prosodic convergence with this language: the more positive the attitudes, the greater the degree of convergence. Our interpretation of this correlation is that the adoption of the characteristic Basque prosody […]. In rural settings the prosodic realization of the absolute interrogatives was falling in almost 100% of cases, and no correlation was found with attitudes towards Basque. Our interpretation is that in rural areas the presence of Basque in daily life is stronger, and that there is a consolidated variety of Spanish used by all speakers regardless of their attitudes. Thus, the adoption of intonational features of this language is not the only indicator of belonging to the Basque ethnolinguistic group. Our study reveals the great relevance of subjective social factors, such as language attitudes, in the degree of convergence between two languages.

References
[1] Romera, M. & Elordieta, G. 2013. «Prosodic accommodation in language contact: Spanish intonation in Majorca». International Journal of the Sociology of Language 221: 127-151. https://doi.org/10.1515/ijsl-2013-0026.
[2] Romera, M. & Elordieta, G. 2020. «Information-seeking question intonation in Basque Spanish and its correlation with degree of contact and language attitudes». Languages 5(4), 70.
[3] Kozminska, K. 2019. «Intonation, identity and contact-induced change among Polish-speaking migrants in the U.K.». Journal of Sociolinguistics 23 (1): 29-53. https://doi.org/10.1111/josl.12318.
[4] Elordieta, G. & Romera, M. 2020a. «The Influence of Social Factors on the Prosody of Spanish in Contact with Basque». International Journal of Bilingualism. https://doi.org/10.1177/1367006920952867.
[5] Face, T.L. 2008. The Intonation of Castilian Spanish Declaratives and Absolute Interrogatives. Munich: Lincom Europa.
[6] Estebas-Vilaplana, E. & Prieto, P. 2010. «Castilian Spanish intonation». In Transcription of intonation of the Spanish language, edited by Pilar Prieto and Paolo Roseano, 17–48. Munich: Lincom Europa.
[7] Hualde, J.I. & Prieto, P. 2015. «Intonational Variation in Spanish, European and American Varieties». In Intonational Variation in Romance, edited by Sónia Frota and Pilar Prieto, 350-391. Oxford: Oxford University Press.
[8] Elordieta, G. & Romera, M. 2020b. «The Intonation of Information-Seeking Absolute Interrogatives in Madrid Spanish». Estudios de Fonética Experimental 19: 195-213.
[9] Elordieta, G. 2003. «Intonation». In A grammar of Basque, edited by José Ignacio Hualde and Jon Ortiz de Urbina, 72-112. Berlin: Mouton de Gruyter.
[10] Robles-Puente, S. 2012. «Two languages, two intonations? Statements and yes/no questions in Spanish and Basque». Anuario del Seminario de Filología Vasca Julio de Urquijo: International Journal of Basque Linguistics and Philology 46 (1): 251-62.
[11] Elordieta, G. & Hualde, J.I. 2014. «Intonation in Basque». In Prosodic Typology II: The Phonology of Intonation and Phrasing, edited by Sun-Ah Jun, 405-63. Oxford, UK; New York, N.Y.: Oxford University Press.
[12] González, C. & Reglero, L. 2021. «The Intonation of Yes-No Questions in Basque Spanish». In Language Patterns in Spanish and Beyond, edited by Juan Colomina-Almiñana and Sandro Sessarego. London: Routledge. https://doi.org/10.4324/9781003091790-14.
[13] Torreira, F. & Floyd, S. 2012. «Intonational meaning: The case of Spanish yes–no questions». Paper presented at Tone and intonation in Europe, University of Oxford, September 6th, 2012.
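The attitude–convergence correlation reported in the abstract above could be quantified along the lines of the toy sketch below; the per-speaker attitude scores and falling-contour rates are invented, and the study's actual coding of prosodic convergence is richer than a single proportion.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    attitude = rng.uniform(1, 5, size=20)                        # hypothetical 1-5 attitude scale
    falling_rate = np.clip(0.2 * attitude - 0.2
                           + rng.normal(0, 0.1, size=20), 0, 1)  # share of falling (Basque-like) contours
    rho, p = spearmanr(attitude, falling_rate)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")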

Hosted by

Sponsors

Technical Secretariat

Crea Congresos
+34 933 623 377
[email protected]
www.creacongresos.com

https://pape2021.upf.edu/ 4th PHONETICS AND PHONOLOGY IN EUROPE, 2021