Parsing Transcripts of Speech

Andrew Caines, ALTA Institute, University of Cambridge
Michael McCarthy, School of English, University of Nottingham
Paula Buttery, Computer Laboratory, University of Cambridge

Proceedings of the First Workshop on Speech-Centric Natural Language Processing, pages 27–36, Copenhagen, Denmark, September 7–11, 2017. © 2017 Association for Computational Linguistics.

Abstract

We present an analysis of parser performance on speech data, comparing word type and token frequency distributions with written data, and evaluating parse accuracy by length of input string. We find that parser performance tends to deteriorate with increasing length of string, more so for spoken than for written texts. We train an alternative parsing model with added speech data and demonstrate improvements in accuracy on speech-units, with no deterioration in performance on written text.

1 Introduction

Relatively little attention has been paid to parsing spoken language compared to parsing written language. The majority of parsers are built using newswire training data, and the Wall Street Journal section 21 of the Penn Treebank is a ubiquitous test set. However, the parsing of speech is of no little importance, since speech is the primary mode of communication worldwide and human-computer interaction through the spoken modality is increasingly common.

In this paper we first describe the morpho-syntactic characteristics of spoken language, point out some key distributional differences with written language, and discuss the implications for parsing. We then investigate how well a commonly-used open-source parser performs on a corpus of spoken language and corpora of written language, showing that performance deteriorates sooner for speech as the length of input string increases. We demonstrate that a new parsing model trained on both written and spoken data brings improved performance, and we make this model freely available [1]. Finally, we consider a modification to deal with long input strings in spoken language, a preprocessing step which we plan to implement in future work.

[1] https://goo.gl/iQMu9w

2 Spoken language

As has been well described, speech is very different in nature to written language (Brazil, 1995; Biber et al., 1999; Leech, 2000; Carter and McCarthy, 2017). Putting aside the mode of transmission for now – the phonetics and prosody of producing speech versus the graphemics and orthography of writing systems – we focus on morphology, syntax and vocabulary: that is, the components of speech we can straightforwardly analyse in transcriptions. We therefore also put aside pragmatics and discourse analysis, even though there is much that is distinctive in speech, including intonation and co-speech gestures to convey meaning, and turn-taking, overlap and co-construction in dialogic interaction.

A fundamental morpho-syntactic characteristic of speech is the lack of the sentence unit used by convention in writing, delimited by a capital letter and a full stop (period). Indeed, it has been said that "such a unit does not realistically exist in conversation" (Biber et al., 1999). Instead, in spoken language we refer to 'speech-units' (SUs): token sequences which are usually coherent units from the point of view of syntax, semantics, prosody, or some combination of the three (Strassel, 2003). Thus we are able to model SU boundaries probabilistically, and find that, in dialogue at least, they often coincide with turn-taking boundaries (Shriberg et al., 2000; Lee and Glass, 2012; Moore et al., 2016).

Other well-known characteristics of speech are disfluencies such as hesitations, repetitions and false starts, as in examples (1)–(3):

(1) um he's a closet yuppie is what he is (Leech, 2000)

(2) I played, I played against um (Leech, 2000)

(3) You're happy to – welcome to include it (Levelt, 1989)

Disfluencies are pervasive in speech: in an annotated 767k-token subset of the Switchboard Corpus of telephone conversations (SWB), 17% of tokens are disfluent in some way (Meteer et al., 1995). Furthermore, they are known to cause problems in natural language processing, as they must either be incorporated in the parse tree or somehow removed (Nasr et al., 2014). Indeed, an 'edit' transition has been proposed specifically to deal with automatically identified disfluencies, by removing them from the parse tree constructed up to that point, along with any associated grammatical relations (Honnibal and Johnson, 2014; Moore et al., 2015).
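The "somehow removed" option can be approximated, very crudely, with string preprocessing. The sketch below (Python; not from the paper, and far simpler than the annotation-based and transition-based approaches just cited) drops a small, assumed inventory of filled pauses and collapses immediate single-token repetitions before an SU is handed to a parser; false starts such as example (3) pass through untouched.

```python
# Hypothetical, minimal filler inventory; annotated disfluency mark-up
# (Meteer et al., 1995) distinguishes far more phenomena than this.
FILLERS = {"um", "uh", "er", "uh-huh"}

def naive_disfluency_strip(tokens):
    """Drop filled pauses and collapse immediate self-repetitions.

    A crude stand-in for proper disfluency handling such as the 'edit'
    transition of Honnibal and Johnson (2014): false starts and
    multi-word restarts are left untouched.
    """
    cleaned = []
    for tok in tokens:
        low = tok.lower()
        if low in FILLERS:
            continue                              # hesitation, cf. example (1)
        if cleaned and low == cleaned[-1].lower():
            continue                              # single-token repetition, e.g. "I I"
        cleaned.append(tok)
    return cleaned

if __name__ == "__main__":
    su = "I played , I played against um".split()   # cf. example (2)
    print(naive_disfluency_strip(su))
    # -> ['I', 'played', ',', 'I', 'played', 'against']
    # the filled pause is removed, but the restarted "I played" survives
```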
We compared the SWB portion of Penn Treebank 3 (Marcus et al., 1999) with the three English corpora contained in Universal Dependencies 2.0 (Nivre et al., 2017) as a representation of the written language. These are, namely:

• the 'Universal Dependencies English Web Treebank' (EWT), the English Web Treebank in dependency format (Bies et al., 2012; Silveira et al., 2014);

• 'English LinES' (LinES), the English section of a parallel corpus of English novels and Swedish translations (Ahrenberg, 2015);

• the 'Treebank of Learner English' (TLE), a manually annotated subset of the Cambridge Learner Corpus First Certificate in English dataset (Yannakoudakis et al., 2011; Berzak et al., 2016).

We found several differences between our spoken and written datasets in terms of morphological, syntactic and lexical features.

Firstly, the most frequent tokens in writing (ignoring punctuation marks) are, unsurprisingly, function words – determiners, prepositions, conjunctions, pronouns, auxiliary and copula verbs, and the like (Table 1). These are normally considered 'stop-words' in large-scale linguistic analyses, but even if they are semantically uninteresting, their ranking is indicative of differences between speech and writing.

  Speech    Freq.     Rank    Writing    Freq.
  I         46,382      1     the        41,423
  and       33,080      2     to         26,459
  the       29,870      3     and        22,977
  you       27,142      4     I          20,048
  that      27,038      5     a          18,289
  it        26,600      6     of         18,112
  to        22,666      7     in         14,490
  a         22,513      8     is         10,020
  uh        20,695      9     you        10,002
  's        20,494     10     that        9,952
  of        17,112     11     for         8,578
  yeah      14,805     12     it          8,238
  know      14,723     13     was         8,195
  they      13,147     14     have        6,604
  in        12,548     15     on          5,821
  do        12,507     16     with        5,621
  n't       11,100     17     be          5,514
  we        10,308     18     are         4,815
  have       9,970     19     not         4,716
  uh-huh     9,325     20     my          4,478

Table 1: The most frequently occurring tokens in selected corpora of English speech (the Switchboard Corpus in Penn Treebank 3) and writing (EWT, LinES, TLE), normalised to counts per million.

In SWB the most frequent token is I, followed by and, then the (albeit much less frequently than in writing), then you, that and it at much higher relative frequencies (per million tokens) than in writing. This ranking reflects the way that (telephone) conversations revolve around the first and second person (I and you), and the way that speech makes use of coordination, and hence the conjunction and, much more than writing.

Furthermore, clitics indicative of possession, copula or auxiliary be, or negation ('s, n't) and the discourse markers uh, yeah and uh-huh are all among the twenty-five most frequent terms in SWB. The single content word in these top-ranked tokens (assuming have occurs mainly as an auxiliary) is know, 13th most frequent in SWB, but as will become clear in Table 3, it is hugely boosted by its use in the fixed phrase you know.

Finally, we note that the normalised frequencies for these most frequent tokens are higher in speech than in writing, suggesting that there is greater distributional mass in fewer token types in SWB, a suggestion borne out by sampling 394,611 tokens (the sum total of the three written corpora) from SWB 100 times and finding that not once does the vocabulary size exceed even half that of the written corpora (Table 2).

  Medium     Tokens       Types
  speech     394,611*     11,326**
  writing    394,611      27,126

Table 2: Vocabulary sizes in selected corpora of English speech and writing (* sampled from 766,560 tokens in SWB corpus; ** mean of 100 samples, st.dev=45.5).
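Both the per-million normalisation used in the tables and the resampling comparison behind Table 2 are straightforward to reproduce. The sketch below (Python, not part of the paper) shows the general procedure under stated assumptions: speech_tokens and written_tokens are flat token lists loaded elsewhere, and sampling is without replacement.

```python
import random
import statistics

def per_million(count, corpus_size):
    """Normalise a raw frequency to counts per million tokens (cf. Tables 1-4)."""
    return count / corpus_size * 1_000_000

def sampled_type_counts(speech_tokens, sample_size, n_runs=100, seed=0):
    """Draw n_runs samples of sample_size tokens (without replacement) from the
    speech corpus and record the vocabulary size (distinct types) of each sample,
    mirroring the comparison reported in Table 2."""
    rng = random.Random(seed)
    return [len(set(rng.sample(speech_tokens, sample_size))) for _ in range(n_runs)]

# Hypothetical usage: speech_tokens would hold the 766,560 SWB tokens and
# written_tokens the 394,611 tokens of EWT + LinES + TLE combined.
# counts = sampled_type_counts(speech_tokens, len(written_tokens))
# print(statistics.mean(counts), statistics.stdev(counts))  # paper reports 11,326 (sd 45.5)
# print(len(set(written_tokens)))                           # paper reports 27,126 types
```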
With the most frequent bigrams we note further differences between speech and writing (Table 3). The most frequent bigrams in writing tend to be combinations of preposition and determiner, or pronoun and auxiliary verb. In speech, on the other hand, the very frequent bigrams include the discourse markers you know, I mean and kind of, pronoun plus auxiliary or copula (it's, that's, I'm, they're, I've), disfluent repetition (I I), and hesitation (and uh). Again frequency counts are lower for the written corpus, symptomatic of a smaller set of bigrams in speech: there are 163,690 unique bigrams in the written data, and a mean of 89,787 (st.dev=151) unique bigrams in the speech samples.

  Speech      Freq.     Rank    Writing    Freq.
  VBP PRP     51,845      1     NN DT      48,846
  NN DT       47,469      2     NN IN      36,274
  ROOT UH     39,067      3     NN NN      27,490
  IN NN       26,868      4     NN JJ      21,566
  VB PRP      24,321      5     VB NN      19,584
  ROOT VBP    24,156      6     VB PRP     16,320

Table 4: The most frequently occurring part-of-speech tag dependency pairs in selected corpora of English speech (the Switchboard Corpus in Penn Treebank 3) and writing (EWT, LinES, TLE), normalised to counts per million. The first tag in the pair is the head of the relation; the second is the dependent (Penn Treebank tagset).

There are further insights to be gleaned from comparisons of speech and writing at higher-order n-grams and in terms of dependency relations between tokens (Table 4). These may in turn have implications for parsing algorithms, or at least may suggest some solutions for more accurate parsing of speech.
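Counts like those in Table 4 can be derived from any dependency treebank. The sketch below (Python; not part of the paper and not the exact pipeline behind the reported figures) reads a treebank in CoNLL-U format, the format of the UD corpora listed above, pairs each token's XPOS tag with the tag of its head (using the pseudo-tag 'ROOT' for tokens attached to the artificial root), and normalises the counts per million tokens. It assumes the XPOS column holds Penn Treebank-style tags, which is true of EWT but not necessarily of every UD treebank.

```python
from collections import Counter

def head_dep_tag_pairs(conllu_path):
    """Count (head XPOS, dependent XPOS) pairs in a CoNLL-U treebank,
    treating the artificial root node as the pseudo-tag 'ROOT'."""
    pair_counts = Counter()
    n_tokens = 0
    sent = []

    def flush(sent):
        nonlocal n_tokens
        xpos = {tid: tag for tid, tag, _ in sent}
        for tid, tag, head in sent:
            n_tokens += 1
            head_tag = "ROOT" if head == "0" else xpos.get(head, "ROOT")
            pair_counts[(head_tag, tag)] += 1

    with open(conllu_path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line:                 # blank line ends a sentence
                flush(sent)
                sent = []
                continue
            if line.startswith("#"):     # sentence-level comments
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue                 # skip multiword-token and empty-node lines
            # CoNLL-U columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
            sent.append((cols[0], cols[4], cols[6]))
        flush(sent)                      # final sentence if no trailing blank line

    if n_tokens == 0:
        return {}
    # normalise to counts per million tokens, as in Tables 1 and 4
    return {pair: count / n_tokens * 1_000_000
            for pair, count in pair_counts.most_common()}

# Hypothetical usage (file name is illustrative):
# freqs = head_dep_tag_pairs("en_ewt-ud-train.conllu")
# for (head, dep), rate in list(freqs.items())[:6]:
#     print(f"{head:8s} {dep:8s} {rate:10.0f}")
```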
