Feature Selection for Trainable Multilingual Broadcast News Segmentation

David D. Palmer, Marc Reichman, Elyes Yaich
Virage Advanced Technology Group
300 Unicorn Park, Woburn, MA 01801
{dpalmer,mreichman,eyaich}@virage.com

Abstract

Indexing and retrieving broadcast news stories within a large collection requires automatic detection of story boundaries. This video news story segmentation can draw on a wide range of audio, language, video, and image features. In this paper, we investigate the correlation between automatically derived multimodal features and story boundaries in seven different broadcast news sources in three languages. We identify several features that are important for all seven sources analyzed, and we discuss the contributions of other features that are important for a subset of the seven sources.

1 Introduction

Indexing and retrieving stories within a large collection of video requires automatic detection of story boundaries, and video story segmentation is an essential step toward providing the means for finding, linking, summarizing, and visualizing related parts of multimedia collections. In many cases, previous story segmentation research has focused on single-stream analysis techniques, utilizing only one of the information sources present in news broadcasts: natural language, audio, image, or video (see, for example, (Furht et al., 1995), (Fiscus and Doddington, 2002), (Greiff et al., 2001), (O'Connor et al., 2001)). Some segmentation research has included multimodal approaches capable of combining features from multiple information sources (Boykin and Merlino, 1999), (Hauptmann and Witbrock, 1998). While this work was a significant improvement over single-stream approaches, it was rarely applied to non-English sources without closed captioning.

Previous work on story segmentation has identified many features useful for finding story boundaries, but feature selection is often model-dependent and does not account for the differences between broadcast sources. The specific features useful for video story segmentation vary widely from one source to the next, and the degree to which each feature is useful also varies across sources, and even from one broadcast to the next within a single source. This variety suggests the need for trainable techniques in which salient source-specific features can be learned automatically from a set of training data. Such a data-driven approach is especially important in multilingual video processing, where native speakers may not be available to develop segmentation models for every language.

The goal of this paper is to provide a model-independent investigation of the correlation between a wide range of multimedia features and news story boundaries, in order to aid the development of improved segmentation algorithms. Our work seeks to complement recent work in model-dependent feature selection, such as (Hsu and Chang, 2003), without making assumptions about the dependencies between features.

The feature analysis we describe in this paper consisted of several steps. First, we created a data set for our experiments by capturing and digitally encoding a set of news broadcasts from seven video news sources in three languages. A native speaker manually labelled the story and commercial boundaries in each broadcast; we describe the data in Section 2. We then ran several state-of-the-art audio and video analysis software packages on each recorded broadcast to extract time-stamped multimedia metadata, and we defined a set of possible segmentation features based on the metadata values produced by the analysis software; we describe the software analysis and feature extraction in Section 3. Finally, we analyzed the patterns of occurrence of the features with respect to story and commercial boundaries in all the news broadcasts; the results of our analysis are described in Section 4.
2 Data

The data for our segmentation research consists of a set of news broadcasts recorded directly from a satellite dish between September 2002 and February 2003. The data set contains roughly equal amounts (8-12 hours) of news broadcasts from seven sources in three languages: Aljazeera (Arabic), BBC America (UK English), China Central TV (Mandarin Chinese), CNN Headline News (US English), CNN International (US/UK English), Fox News (US English), and Newsworld International (US/UK English).

Source  Lang.  Total Hours  Story Hours  Story Count  Ave. Length
ALJ     Ara    10:37        6:56         279          89 s
BBC     Eng    8:09         6:09         215          103 s
CCTV    Chi    9:05         7:14         235          111 s
CNNH    Eng    11:42        7:18         505          52 s
CNNI    Eng    10:14        9:13         299          111 s
Fox     Eng    13:13        9:14         194          171 s
NWI     Eng    8:33         6:12         198          113 s

Table 1: Data sources (broadcast source, language, total hours, hours of stories, number of stories, average story length).

Each broadcast was manually segmented with the labels "story" and "commercial" by one annotator and verified by a second, at least one of whom was a native speaker of the broadcast language. We found that a very good segmentation is possible by a non-native speaker based solely on video and acoustic cues, but a native speaker is required to verify story boundaries that require language knowledge, such as a single-shot video sequence of several stories read by a news anchor without pausing. The definition of "story" in our experiments corresponds to the Topic Detection and Tracking definition: a segment of a news broadcast with a coherent news focus, containing at least two independent, declarative clauses (LDC, 1999). Segments that briefly summarize several stories were not assigned a "story" label, nor were anchor introductions, signoffs, banter, and teasers for upcoming stories. Each individual story within a block of contiguous stories was labeled "story," while a sequence of contiguous commercials was annotated with a single "commercial" label and a single pair of boundaries for the entire block.

Table 1 shows the details of our experimental data set. The first two columns show the broadcast source and the language. The next two columns show the total number of hours and the number of hours labeled "story" for each source. It is interesting to note that the percentage of broadcast time devoted to news stories varies widely by source, from 62% for CNN Headline News to 90% for CNN International. Similarly, the average story length varies widely, as shown in the final column of Table 1, from 52 seconds per story for CNN Headline News to 171 seconds per story for Fox News. These large differences are extremely important when modeling the distributions of stories (and commercials) within news broadcasts from the various sources.
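The percentages and average lengths quoted above are derived from the raw hours and story counts in Table 1. The short Python sketch below (an editorial illustration, not part of the paper; the function and variable names are ours) recomputes them, reproducing the 62%/90% story-time extremes and the 52 s/171 s average-length extremes:

```python
# Sanity-check the derived statistics in Table 1: the fraction of air time
# devoted to stories and the average story length for each source.

def to_seconds(hhmm: str) -> int:
    """Convert an H:MM duration string (e.g., '11:42') to seconds."""
    hours, minutes = hhmm.split(":")
    return int(hours) * 3600 + int(minutes) * 60

# (total hours, story hours, story count) per source, from Table 1
sources = {
    "ALJ":  ("10:37", "6:56", 279),
    "BBC":  ("8:09",  "6:09", 215),
    "CCTV": ("9:05",  "7:14", 235),
    "CNNH": ("11:42", "7:18", 505),
    "CNNI": ("10:14", "9:13", 299),
    "Fox":  ("13:13", "9:14", 194),
    "NWI":  ("8:33",  "6:12", 198),
}

for name, (total, story, count) in sources.items():
    total_s, story_s = to_seconds(total), to_seconds(story)
    pct = 100.0 * story_s / total_s   # share of air time that is news
    avg = story_s / count             # average story length in seconds
    print(f"{name:5s} story time {pct:3.0f}%   avg story {avg:5.1f} s")

# CNNH comes out near 62% and 52 s, CNNI near 90%, and Fox near 171 s,
# matching the figures cited in the text.
```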
3 Feature extraction

In order to analyze audio and video events that are relevant to story segmentation, we encoded the news broadcasts described in Section 2 as MPEG files, then automatically processed the files using a range of media analysis software components. The software components represented state-of-the-art technology for a range of audio, language, image, and video processing applications.

The audio and video analysis produced time-stamped metadata such as "face Chuck Roberts detected at time=2:38" and "speaker Bill Clinton identified between start=12:56 and end=16:28." From the raw metadata we created a set of features that have previously been used in story segmentation work, as well as some novel features that have not appeared in previously published work. The software components and the resulting features are described in the following sections.

3.1 Audio and language processing

A great deal of the information in a news broadcast is contained in the raw acoustic portion of the news signal. Much of the information is carried in the spoken audio, both in the characteristics of the human speech signal and in the sequence of words spoken. Information can also take the form of non-spoken audio events, such as music, background noise, or even periods of silence. We ran the following audio and language processing components on each of the data sources described in Section 2.

Audio type classification segments and labels the audio signal based on a set of acoustic models: speech, music, breath, lip smack, and silence. Speaker identification models the speech-specific acoustic characteristics of the audio and seeks to identify speakers from a library of known speakers. Automatic speech recognition (ASR) provides an automatic transcript of the spoken words. Topic classification labels segments of the ASR output according to predefined categories. These audio processing components are described in detail in (Makhoul et al., 2000). Closed captioning is a human-generated transcript of the spoken words that is often embedded in a broadcast video signal.

The story segmentation features automatically extracted from the audio and language processing components were: speech segment, music segment, breath, lip smack, silence segment, topic classification segment, closed captioning segment, speaker ID segment, and speaker ID change. In addition, we analyzed the ASR word sequences in all broadcasts to automatically derive a set of source-dependent cue phrase n-gram features. To determine the cue n-grams, we extracted all relatively frequent unigrams, bigrams, and trigrams from the training data and compared the likelihood of observing each n-gram near story boundaries with its likelihood elsewhere in the broadcast.

For each feature, we calculated the maximum likelihood (ML) probability of observing the feature near a story boundary. For example, if there were 100 stories, and the anchor face detection feature was true for 50 of the stories, then p(anchor|story) = 50/100 = 0.5. We similarly calculated the ML probabilities of an anchor face detection near a commercial boundary, outside of both story and commercial, and inside a story but outside the window of time near the boundary.
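The worked example above translates directly into code. The sketch below is a minimal illustration rather than the authors' implementation: it assumes story boundaries and feature detections are available as lists of timestamps in seconds, and it treats the width of the "near the boundary" window as a free parameter, since the extracted text does not specify one.

```python
# A minimal sketch (not the paper's code) of the ML estimate described
# above: the fraction of stories for which a feature, e.g. an anchor face
# detection, fires within a window of the story's start boundary.

WINDOW = 5.0  # seconds counted as "near" a boundary (assumed value)

def p_feature_given_story(story_starts: list[float],
                          event_times: list[float],
                          window: float = WINDOW) -> float:
    """ML estimate p(feature|story): stories with at least one feature
    event within `window` seconds of the start boundary, divided by the
    total number of stories."""
    hits = sum(
        1 for start in story_starts
        if any(abs(t - start) <= window for t in event_times)
    )
    return hits / len(story_starts)

# The paper's worked example: 100 stories, anchor face detected near
# the boundary for 50 of them, so p(anchor|story) = 0.5.
story_starts = [float(60 * i) for i in range(100)]       # toy boundaries
anchor_times = [float(60 * i) + 1.0 for i in range(50)]  # fires for 50
print(p_feature_given_story(story_starts, anchor_times))  # -> 0.5
```

The same counting function can be reused against commercial boundaries, against spans outside both stories and commercials, and against story interiors away from the boundary, yielding the four ML probabilities the paper compares.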
