
Sentence segmentation of aphasic speech

Kathleen C. Fraser1,3, Naama Ben-David1, Graeme Hirst1, Naida L. Graham2,3, Elizabeth Rochon2,3
1Department of Computer Science, University of Toronto, Toronto, Canada
2Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
3Toronto Rehabilitation Institute, Toronto, Canada
{kfraser,naama,gh}@cs.toronto.edu, {naida.graham,elizabeth.rochon}@utoronto.ca

Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 862–871, Denver, Colorado, May 31 – June 5, 2015. © 2015 Association for Computational Linguistics

Abstract

Automatic analysis of impaired speech for screening or diagnosis is a growing research field; however, there are still many barriers to a fully automated approach. When automatic speech recognition is used to obtain the speech transcripts, sentence boundaries must be inserted before most measures of syntactic complexity can be computed. In this paper, we consider how language impairments can affect segmentation methods, and compare the results of computing syntactic complexity metrics on automatically and manually segmented transcripts. We find that the important boundary indicators and the resulting segmentation accuracy can vary depending on the type of impairment observed, but that results on patient data are generally similar to control data. We also find that a number of syntactic complexity metrics are robust to the types of segmentation errors that are typically made.

turning to politics for al gore and george w bush another day of rehearsal in just over forty eight hours the two men will face off in their first of three debates for the first time voters will get a live unfiltered view of them together

Turning to politics, for Al Gore and George W Bush another day of rehearsal. In just over forty-eight hours the two men will face off in their first of three debates. For the first time, voters will get a live, unfiltered view of them together.

Figure 1: ASR text before and after processing.

1 Introduction

The automatic analysis of speech samples is a promising direction for the screening and diagnosis of cognitive impairments. For example, recent studies have shown that machine learning classifiers trained on speech and language features can detect, with reasonably high accuracy, whether a speaker has mild cognitive impairment (Roark et al., 2011), frontotemporal lobar degeneration (Pakhomov et al., 2010b), primary progressive aphasia (Fraser et al., 2014), or Alzheimer's disease (Orimaye et al., 2014; Thomas et al., 2005). These studies used manually transcribed samples of patient speech; however, it is clear that for such systems to be practical in the real world they must use automatic speech recognition (ASR). One issue that arises with ASR is the introduction of word recognition errors: insertions, deletions, and substitutions. This problem as it relates to impaired speech has been considered elsewhere (Jarrold et al., 2014; Fraser et al., 2013; Rudzicz et al., 2014), although more work is needed. Another issue, which we address here, is how ASR transcripts are divided into sentences.

The raw output from an ASR system is generally a stream of words, as shown in Figure 1. With some effort, it can be transformed into a format which is more readable by both humans and machines. Many algorithms exist for the segmentation of the raw text stream into sentences. However, there has been no previous work on how those algorithms might be applied to impaired speech.

This problem must be addressed for two reasons. First, sentence boundaries are important when analyzing the syntactic complexity of speech, which can be a strong indicator of potential impairment. Many measures of syntactic complexity are based on properties of the syntactic parse tree (e.g., Yngve depth, tree height), which first require the demarcation of individual sentences. Even very basic measures of syntactic complexity, such as the mean length of sentence, require this information. Secondly, there are many reasons to believe that existing algorithms might not perform well on impaired speech, since assumptions about normal speech do not hold true in the impaired case. For example, in normal speech, pausing is often used to indicate a boundary between syntactic units, whereas in some types of dementia or aphasia a pause may indicate word-finding difficulty instead. Other indicators of sentence boundaries, such as prosody, filled pauses, and discourse markers, can also be affected by cognitive impairments (Emmorey, 1987; Bridges and Van Lancker Sidtis, 2013).

Here we explore whether we can apply standard approaches to sentence segmentation to impaired speech, and compare our results to the segmentation of broadcast news. We then extract syntactic complexity features from the automatically segmented text, and compare the feature values with measurements taken on manually segmented text. We assess which features are most robust to the noisy segmentation, and thus could be appropriate features for future work on automatic diagnostic interfaces.

2 Background

2.1 Automatic sentence segmentation

Many approaches to the problem of segmenting recognized speech have been proposed. One popular way of framing the problem is to treat it as a sequence tagging problem, where each interword boundary must be labelled as either a sentence boundary (B) or not (NB) (Liu and Shriberg, 2007). Liu et al. (2005) showed that using a conditional random field (CRF) classifier for this problem resulted in a lower error rate than using a hidden Markov model or maximum entropy classifier. They stated that the CRF approach combined the benefits of these two other popular approaches, since it is discriminative, can handle correlated features, and uses a globally optimal sequence decoding.

The features used to train such classifiers fall broadly into two categories: word features and prosodic features. Word features can include word or part-of-speech n-grams, keyword identification, and filled pauses (Stevenson and Gaizauskas, 2000; Stolcke and Shriberg, 1996; Gavalda et al., 1997). Prosodic features include measures of pitch, energy, and duration of phonemes around the boundary, as well as the length of the silent pause between words (Shriberg et al., 2000; Wang et al., 2003).

The features which are most discriminative for the segmentation task can change depending on the nature of the speech. One important factor can be whether the speech is prepared or spontaneous. Cuendet et al. (2007) explored three different genres of speech: broadcast news, broadcast conversations, and meetings. They analyzed the effectiveness of different feature sets on each type of data. They found that pause features were the most discriminative across all groups, although the best results were achieved using a combination of lexical and prosodic features. Kolář et al. (2009) also looked at genre effects on segmentation, and found that prosodic features were more useful for segmenting broadcast news than broadcast conversations.

2.2 Primary progressive aphasia

There are many different forms of language impairment that could affect how sentence boundaries are placed in a transcript. Here, we focus on the syndrome of primary progressive aphasia (PPA). PPA is a form of frontotemporal dementia which is characterized by progressive language impairment without other notable cognitive impairment. In particular, we consider two subtypes of PPA: semantic dementia (SD) and progressive nonfluent aphasia (PNFA). SD is typically marked by fluent but empty speech, obvious word-finding difficulties, and spared grammar (Gorno-Tempini et al., 2011). In contrast, PNFA is characterized by halting and sometimes agrammatic speech, reduced syntactic complexity, and relatively spared single-word comprehension (Gorno-Tempini et al., 2011). Because syntactic impairment, including reduced syntactic complexity, is a core feature of PNFA, we expect that measures of syntactic complexity would be important for a downstream screening application. Fraser et al. (2013) presented an automatic system for classifying PPA subtypes from ASR transcripts, but they were not able to include any syntactic complexity metrics because their transcripts did not contain sentence boundaries.

3 Data

3.1 PPA data

Twenty-eight patients with PPA (11 with SD and 17 with PNFA) were recruited through three memory clinics, and 23 age- and education-matched healthy controls were recruited through a volunteer pool. All participants were native speakers of English, or had completed some of their education in English.

To elicit a sample of narrative speech, participants were asked to tell the well-known story of Cinderella. They were given a wordless picture book to remind them of the story; then the book was removed and they were asked to tell the story in their own words. This procedure, described in full by Saffran et al. (1989), is commonly used in studies of connected speech in aphasia.

4 Methods

4.1 Lexical and POS features

The lexical features are simply the unlemmatized word tokens. We do not consider word n-grams due to the small size of our PPA data set. To extract our part-of-speech (POS) features, we first tag the transcripts using the NLTK POS tagger (Bird et al., 2009). We use the POS of the current word, the next word, and the previous word as features.

4.2 Prosodic features

To calculate the prosodic features, we first perform automatic alignment of the transcripts to the audio files. This provides us with a phone-level transcription, with the start and end of each phone linked to a time in the audio file. Using this information, we are able to calculate the length of the pauses between words, which we bin into three categories based on
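The binning of interword pause lengths described in Section 4.2 can be sketched as follows. The paper does not state its actual bin boundaries at this point, so the threshold values below (in seconds) are purely illustrative assumptions.

```python
# Sketch of pause binning: map each interword silence duration to one of
# three categories. The thresholds `short` and `long` are hypothetical,
# not the paper's.

def bin_pause(pause_sec, short=0.05, long_=0.5):
    """Return a three-way categorical label for a pause duration."""
    if pause_sec < short:
        return "none"
    elif pause_sec < long_:
        return "short"
    return "long"

labels = [bin_pause(p) for p in [0.0, 0.2, 1.3]]
print(labels)  # ['none', 'short', 'long']
```

A categorical label rather than the raw duration keeps the feature robust to small alignment errors in the forced alignment step.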
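The POS features of Section 4.1 use the tags of the previous, current, and next word. Given a transcript already tagged (e.g., by the NLTK POS tagger), a minimal sketch of building that three-tag window, with hypothetical sentence-edge padding symbols, might look like this:

```python
# Build the (previous, current, next) POS-tag window used as boundary
# features. The padding symbols <S> and </S> are illustrative choices.

def pos_window(tags, i):
    """POS of the previous, current, and next word, padding the edges."""
    prev_tag = tags[i - 1] if i > 0 else "<S>"
    next_tag = tags[i + 1] if i < len(tags) - 1 else "</S>"
    return (prev_tag, tags[i], next_tag)

tags = ["VBG", "TO", "NNS", "IN"]   # hand-tagged example sequence
print(pos_window(tags, 0))          # ('<S>', 'VBG', 'TO')
print(pos_window(tags, 2))          # ('TO', 'NNS', 'IN')
```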
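The sequence-tagging framing of Section 2.1, in which each interword boundary is labelled B or NB, can be sketched as a feature-extraction step that produces one instance per boundary; a CRF or other classifier would then be trained on these instances. The feature names and values here are illustrative assumptions, not the paper's exact feature set.

```python
# One classification instance per interword boundary, combining a lexical
# feature (the surrounding words) with a binned-pause prosodic feature.

def boundary_instances(words, pause_bins):
    """Build one feature dict per interword boundary.

    words      -- the recognized word stream (no punctuation)
    pause_bins -- binned pause category after each word, e.g. 0/1/2
    """
    instances = []
    for i in range(len(words) - 1):
        instances.append({
            "word_before": words[i],
            "word_after": words[i + 1],
            "pause_bin": pause_bins[i],
        })
    return instances

words = "turning to politics for al gore".split()
pauses = [0, 0, 2, 0, 0]            # hypothetical binned pause values
X = boundary_instances(words, pauses)
print(len(X))                        # 5: one instance per boundary
```

Each instance would be paired with a gold B/NB label from the manually segmented transcripts for training.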