Unsupervised Syntactic Chunking with Acoustic Cues: Computational Models for Prosodic Bootstrapping
John K Pate ([email protected])    Sharon Goldwater ([email protected])
School of Informatics, University of Edinburgh
10 Crichton St., Edinburgh EH8 9AB, UK

Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, pages 20–29, Portland, Oregon, June 2011. © 2011 Association for Computational Linguistics

Abstract

Learning to group words into phrases without supervision is a hard task for NLP systems, but infants routinely accomplish it. We hypothesize that infants use acoustic cues to prosody, which NLP systems typically ignore. To evaluate the utility of prosodic information for phrase discovery, we present an HMM-based unsupervised chunker that learns from only transcribed words and raw acoustic correlates of prosody. Unlike previous work on unsupervised parsing and chunking, we use neither gold-standard part-of-speech tags nor punctuation in the input. Evaluated on the Switchboard corpus, our model outperforms several baselines that exploit either lexical or prosodic information alone, and, despite producing a flat structure, performs competitively with a state-of-the-art unsupervised lexicalized parser, with a substantial advantage in precision. Our results support the hypothesis that acoustic-prosodic cues provide useful evidence about syntactic phrases for language-learning infants.

1 Introduction

Young children routinely learn to group words into phrases, yet computational methods have so far struggled to accomplish this task without supervision. Previous work on unsupervised grammar induction has made progress by exploiting information such as gold-standard part-of-speech tags (e.g. Klein and Manning (2004)) or punctuation (e.g. Seginer (2007)). While this information may be available in some NLP contexts, our focus here is on the computational problem facing language-learning infants, who do not have access to either part-of-speech tags or punctuation. However, infants do have access to certain cues that have not been well explored by NLP researchers focused on grammar induction from text. In particular, we consider the cues to syntactic structure that might be available from prosody (roughly, the structure of speech conveyed through rhythm and intonation) and its acoustic realization.

The idea that prosody provides important initial cues for grammar acquisition is known as the prosodic bootstrapping hypothesis, and is well established in the field of language acquisition (Gleitman and Wanner, 1982). Experimental work has provided strong support for this hypothesis, for example by showing that infants begin learning basic rhythmic properties of their language prenatally (Mehler et al., 1988) and that 9-month-olds use prosodic cues to distinguish verb phrases from non-constituents (Soderstrom et al., 2003). However, as far as we know, there has so far been no direct computational evaluation of the prosodic bootstrapping hypothesis. In this paper, we provide the first such evaluation by exploring the utility of acoustic cues for unsupervised syntactic chunking, i.e., grouping words into non-hierarchical syntactic phrases.

Nearly all previous work on unsupervised grammar induction has focused on learning hierarchical phrase structure (Lari and Young, 1990; Liang et al., 2007) or dependency structure (Klein and Manning, 2004); we are aware of only one previous paper on unsupervised syntactic chunking (Ponvert et al., 2010). Ponvert et al. describe a simple method for chunking that uses only bigram counts and punctuation; when the chunks are combined using a right-branching structure, the resulting trees achieve unlabeled bracketing precision and recall that is competitive with other unsupervised parsers. The system's dependence on punctuation renders it inappropriate for addressing the questions we are interested in here, but its good performance recommends syntactic chunking as a profitable approach to the problem of grammar induction, especially since chunks can be learned using much simpler models than are needed for hierarchical structure.

The models used in this paper are all variants of HMMs. Our baseline models are standard HMMs that learn from either lexical or prosodic observations only; we also consider three types of models (including a coupled HMM) that incorporate both lexical and prosodic observations, but vary the degree to which syntactic and prosodic variables are tied together in the latent structure of the models. In addition, we compare the use of hand-annotated prosodic information (ToBI annotations) to the use of direct acoustic measures (specifically, duration measures) as the prosodic observations. All of our models are unsupervised, receiving no bracketing information during training.

The results of our experiments strongly support the prosodic bootstrapping hypothesis: we find that using either ToBI annotations or acoustic measures in addition to lexical observations (i.e., word sequences) vastly improves chunking performance over any source of information alone. Interestingly, our best results are achieved using a combination of words and acoustic information as input, rather than words and ToBI annotations. Our best combined model achieves an F-score of 41% when evaluated on the lowest level of syntactic structure in the Switchboard corpus[1], as compared to 25% for a words-only model and only 3% for an acoustics-only model. Although the combined model's score is still fairly low, additional results suggest that our corpus of transcribed naturalistic speech is significantly more difficult for unsupervised parsing than the written text that is typically used for training. Specifically, we find that a state-of-the-art unsupervised lexicalized parser, the Common Cover Link (CCL) parser (Seginer, 2007), achieves only 38% unlabeled bracketing F-score on our corpus, as compared to published results of 76% on WSJ10 (English) and 59% on Negra10 (German). Interestingly, we find that when evaluated against full parse trees, our best chunker achieves an F-score comparable to that of CCL despite positing only flat structure.

Before describing our models and experiments in more detail, we first present a brief review of relevant information about prosody and its relationship to syntax, including previous work combining prosody and syntax in supervised parsing systems.

[1] Since our interest is in child language acquisition, we would prefer to evaluate our system on data from the CHILDES database of child-directed speech (MacWhinney, 2000). Unfortunately, there are no corpora in the database that include phrase structure annotations. We are in the process of annotating a small evaluation corpus with phrase structure trees, and hope to use this for evaluation in future work.

2 Prosody and syntax

Prosody is a theoretical linguistic concept positing an abstract organizational structure for speech.[2] While it is often closely associated with such measurable phenomena as movement in fundamental frequency or variation in spectral tilt, these are merely observable acoustic correlates that provide evidence of varying quality about the hidden prosodic structure, which specifies such hidden variables as contrastive stress or question intonation.

[2] Signed languages also exhibit prosodic phenomena, but they are not addressed here.

Prosody has been hypothesized to be useful for learning syntax because it imposes a grouping structure on word sequences that sometimes coincides with traditional constituency analyses (Ladd, 1996; Shattuck-Hufnagel and Turk, 1996). Moreover, laboratory experiments have shown that adults use prosody both for syntactic disambiguation (Millotte et al., 2007; Price et al., 1991) and, crucially, in learning the syntax of an artificial language (Morgan et al., 1987). Accordingly, if prosodic structure is sufficiently prominent in the acoustic signal, and coincides often enough with syntactic structure, then it may provide children with useful information about how to combine words into phrases.

Although there are several theories of how to represent and annotate prosodic structure, one of the most influential is the ToBI (Tones and Break Indices) theory (Beckman et al., 2005), which we will use in some of our experiments. ToBI proposes, among other things, that the prosodic phrasing of languages can be represented in terms of sequences of break indices indicating the strength of word boundaries. In Mainstream American English ToBI, for example, the boundary between a clitic and its base word (e.g. "do" and "n't" of "don't") is 0, representing a very weak boundary, while the boundary following a word at the end of an intonational phrase is 4, indicating a very strong boundary. Below we examine how useful these break indices are for identifying syntactic boundaries.

Finally, we note that our work is not the first computational approach to using prosody for identifying syntactic

First, the OBIE set forces all chunks to be at least two words long (the shortest chunk allowed is BE). Imposing this requirement allows us to characterize the task in concrete terms as "learning when to group words together." Second, as we seek to incorporate acoustic correlates of prosody into chunking, we expect edge behavior to merit explicit modeling.[3]

In the following subsections, we describe the various models we use. Note that input to all models is discrete, consisting of words, ToBI annota-
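The ToBI break indices described above can be read as graded evidence for phrase boundaries: the stronger the break after a word, the more likely a syntactic boundary falls there. The sketch below illustrates that idea by thresholding break indices; the threshold of 3 and the example annotation are our own illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's code): treat ToBI break indices as
# evidence for phrase boundaries by thresholding their strength. Indices run
# from 0 (weakest, e.g. a clitic boundary) to 4 (end of an intonational
# phrase); the threshold of 3 here is an assumption chosen for illustration.

def boundaries_from_breaks(break_indices, threshold=3):
    """Return positions i such that the break after word i meets the threshold."""
    return [i for i, b in enumerate(break_indices) if b >= threshold]

# One break index follows each word; strong breaks suggest phrase edges.
words = ["we", "do", "n't", "know", "yet"]
breaks = [1, 0, 3, 1, 4]  # hypothetical annotation

print(boundaries_from_breaks(breaks))  # [2, 4]: breaks after "n't" and "yet"
```

A real model would of course learn how break strength relates to syntactic boundaries rather than fix a threshold, which is exactly what the HMM variants in the paper are designed to do.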
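The OBIE tag set discussed above (O = outside, B = begin, I = inside, E = end) encodes a flat chunking as a tag per word, with BE as the shortest legal chunk. A minimal sketch, with the transition table reflecting our reading of the tag semantics rather than code from the paper:

```python
# A minimal sketch of the OBIE chunk-tagging scheme: B begins a chunk,
# I continues it, E ends it, O marks words outside any chunk. The shortest
# legal chunk is the two-word sequence BE, so every chunk spans >= 2 words.
# The legal-transition table is our reading of the tag semantics.

LEGAL_NEXT = {
    "O": {"O", "B"},  # outside a chunk: stay outside or begin one
    "B": {"I", "E"},  # after begin: continue, or end immediately (BE)
    "I": {"I", "E"},  # inside: continue or end
    "E": {"O", "B"},  # after a chunk ends: outside, or a new chunk begins
}

def chunks_from_tags(tags):
    """Extract (start, end) word spans for each chunk in an OBIE tag sequence."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            start = i
        elif tag == "E":
            spans.append((start, i))
            start = None
    return spans

tags = ["O", "B", "I", "E", "B", "E"]
print(chunks_from_tags(tags))  # [(1, 3), (4, 5)]
```

Note that because I never follows O or E directly, any path through `LEGAL_NEXT` that enters a chunk must pass through both B and E, which is what enforces the two-word minimum.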
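Since the paper's models are all HMM variants over discrete observations (words, ToBI labels, or quantized acoustic measures), the core computation is the standard forward recursion. The sketch below is an illustration under our own assumptions, not the paper's implementation; a full unsupervised system would re-estimate the parameters with EM (forward-backward), whereas this only scores a sequence.

```python
# A minimal sketch of the forward pass for a discrete-observation HMM whose
# hidden states are the four chunk tags O, B, I, E. Parameter values and the
# toy vocabulary are illustrative assumptions; unsupervised training would
# fit init/trans/emit with EM on unlabeled sequences.
import numpy as np

STATES = ["O", "B", "I", "E"]

def forward_logprob(obs, init, trans, emit):
    """Return log P(obs) under the HMM, with rescaling for numerical stability.

    obs:   list of observation-symbol indices (e.g. word ids)
    init:  (S,) initial state probabilities
    trans: (S, S) state transition probabilities
    emit:  (S, V) emission probabilities over the discrete vocabulary
    """
    alpha = init * emit[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        logp += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return logp

# Toy usage: uniform parameters over 4 states and a 3-symbol vocabulary,
# so every length-T sequence has probability (1/3)^T.
init = np.full(4, 0.25)
trans = np.full((4, 4), 0.25)
emit = np.full((4, 3), 1.0 / 3)
print(forward_logprob([0, 1, 2, 0], init, trans, emit))  # 4 * log(1/3) ≈ -4.394
```

In a chunking HMM, illegal tag transitions (such as O followed by I) would simply be given zero probability in `trans`, so that every state path decodes to a valid OBIE sequence.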