Unsupervised Multi-Topic Labeling for Spoken Utterances

Sebastian Weigelt, Jan Keim, Tobias Hey, Walter F. Tichy
Karlsruhe Institute of Technology
Institute for Program Structures and Data Organization
Karlsruhe, Germany
[email protected], [email protected], [email protected], [email protected]

Abstract—Systems such as Alexa, Cortana, and Siri appear rather smart. However, they only react to predefined wordings and do not actually grasp the user's intent. To overcome this limitation, a system must grasp the topics the user is talking about. Therefore, we apply unsupervised multi-topic labeling to spoken utterances. Although topic labeling is a well-studied task on textual documents, its potential for spoken input is almost unexplored. Our approach to topic labeling is tailored to spoken utterances; it copes with short and ungrammatical input. The approach is two-tiered. First, we disambiguate word senses. We utilize Wikipedia as a pre-labeled corpus to train a naïve Bayes classifier. Second, we build topic graphs based on DBpedia relations. We use two strategies to determine central terms in the graphs, i.e. the shared topics. One focuses on the dominant senses in the utterance; the other covers as many distinct senses as possible. Our approach creates multiple distinct topics per utterance and ranks the results. The evaluation shows that the approach is feasible; the word sense disambiguation achieves a recall of 0.799. Concerning topic labeling, in a user study subjects assessed that in 90.9% of the cases at least one proposed topic label among the first four is a good fit. With regard to precision, the subjects judged that 77.2% of the top-ranked labels are a good fit or good but somewhat too broad (Fleiss' kappa κ = 0.27).

I. INTRODUCTION

Conversational interfaces (CI) are a recent trend in human-computer interaction. Today, millions of users communicate with virtual assistants such as Alexa, Cortana, or Siri. However, such systems often struggle to actually grasp the user's intent. Although they appear rather smart, Alexa and the like merely react to predefined commands. Users will soon expect such systems to understand increasingly complex requests. Thus, techniques for (deep) spoken language understanding (SLU) are needed. We propose to apply topic labeling to spoken utterances as one building block of a comprehensive intent model. Topic modeling and labeling has already proved useful on textual documents; it has been applied to many tasks, such as text summarization, machine translation, and sentiment analysis [1]. However, topic labeling has rarely been adapted to spoken-utterance scenarios [2]. Most likely this is the consequence of differing boundary conditions. Spoken language is typically ungrammatical. Thus, common techniques for natural language (pre-)processing (NLP) cannot be applied. Furthermore, utterances – be it dialog acts, virtual assistant interactions, or instructions for household robots – are short in comparison to text documents. This limits the usefulness of contextual information to a minimum. An exemplary input of that kind might be, "Hey robot, take – uhm – the apple – err – the orange from the fridge." Even though the utterance is rather short, it encompasses three topics: Domestic Robotics, Fruits, and Home Appliances. Present approaches for topic labeling cannot cope with such conditions as they either rely on NLP or contextual models.

Our approach is influenced by a number of related approaches to topic labeling on documents. However, it is customized to the challenges of short, spoken utterances. Our approach is two-tiered. First, we perform word sense disambiguation (WSD). We have adapted the approach by Mihalcea [3] and Mihalcea and Csomai [4]. The approach uses Wikipedia as a pre-labeled corpus and applies a naïve Bayes classifier. Nouns are labeled with Wikipedia articles. Second, we use the word sense labels to determine topic labels. To this end, we build so-called sense graphs. Beginning with the Wikipedia articles attached to nouns in the utterance, we use relations in DBpedia to construct graphs. Afterwards, we determine the most central terms, which we take to be the topic labels for the utterance. We have implemented two different strategies for graph centrality. The first generates topic labels for dominant terms, i.e. the most frequent senses, in the utterance; the second covers all terms. Both produce multiple labels for each utterance. The labels carry confidences, which we derive from the graph centrality values. The contribution of the paper is two-fold:

1) An adaptation of the WSD approach by Mihalcea and Csomai to short utterances, including an evaluation on a Wikipedia data set plus an additional evaluation on a corpus for programming in spoken language.
2) An implementation and evaluation of unsupervised multi-topic labeling tailored to short, spoken utterances.

The remainder of the paper is structured as follows: First, we discuss related work in Section II. In Section III we introduce our approach for unsupervised multi-topic labeling and evaluate it in Section IV. Finally, we discuss areas of application (Section V) before we conclude the paper in Section VI.

II. RELATED WORK

Topic labeling is typically preceded by a topic modeling step that determines sets of terms that are supposed to share the same topic. Afterwards, meaningful labels are assigned to these topics. Many approaches rely on the so-called Latent Dirichlet Allocation (LDA) introduced by Blei et al. [5] to create a topic model [6]–[10]. LDA is a generative probabilistic model for collections of discrete data such as text documents. It uses word distributions across a given set of documents to derive topics from word occurrences. Hence, an LDA topic model comprises a fixed number of topics that consist of words which often occur together.

To determine meaningful labels, some approaches derive labels directly from the given text [8], [11], assuming that a label can be found within the given text. However, this assumption may not hold. Often, a document does not contain appropriate labels; i.e. for certain topics no abstract term is ever mentioned. Additionally, text-based approaches usually suffer from challenges such as synonyms or spelling errors. Thus, advanced approaches incorporate additional information to gain a deeper understanding of a topic. Usually, these approaches map words that represent a topic to knowledge databases. Then, they create graph or tree structures based on relations in the knowledge database (e.g. Magatti et al. [9]). Another approach of that kind was introduced by Hulpus et al. [10]. The authors calculate a topic model with LDA and then determine central nodes in a so-called topic graph, which they build from DBpedia concepts and relations. Central concepts form the topic labels. All above-mentioned approaches use LDA to some extent, which is a statistical model. Therefore, its performance depends on the available amount of data. As spoken utterances are rather short, LDA does not produce reliable results. Hence, LDA-based approaches are infeasible in our context.

Some related approaches do not rely on LDA. Coursey et al. [12] create graphs based on Wikipedia articles (nodes) and the proximity of the containing words (edges). They determine central nodes with the help of a biased PageRank algorithm and use the article names of these nodes as topic labels. Aker et al. [13] use the Markov Clustering Algorithm for topic modeling. Allahyari and Kochut [14] adapt LDA; they introduce a latent variable called concept. The concepts are DBpedia concepts and are used to build graphs. Recently, combined approaches have been used; they either join different topic labeling approaches (e.g. Gourru et al. [15]) or incorporate concepts from other research areas, e.g.

III. APPROACH

Our approach for unsupervised multi-topic labeling is inspired by topic modeling and labeling approaches for text documents. However, it does not rely on a generative probabilistic model such as LDA. This is mainly because LDA is not applicable to short documents. Additionally, LDA can only distinguish a fixed number of topics. However, in our context the number of topics is uncertain in advance. Unlike LDA-based approaches, we build topic graphs for the entire input, i.e. each spoken utterance. We use data from DBpedia to create these graphs; articles are nodes and relations form edges. We use a biased PageRank algorithm to determine multiple central articles per utterance, which we use as topic labels. Therefore, we are relieved of the challenge of creating meaningful labels. Instead, we only have to determine which term is the most fitting for a topic. The approach requires word sense labels as a starting point for the construction of sense graphs. For this, we adapt the approach by Mihalcea and Csomai [4] that uses Wikipedia as a pre-labeled corpus for WSD. It uses naïve Bayes classification to attach Wikipedia articles (as senses) to nouns. In Subsection III-A we present our adapted re-implementation of their WSD method. Afterwards, we describe our unsupervised multi-topic labeling approach for spoken utterances in detail in Subsection III-B.

A. Word Sense Disambiguation

Supervised classification tasks require manually attached labels for training, which is time-consuming and costly. Additionally, in the case of word sense disambiguation human annotators often disagree. Mihalcea and Csomai tackle this issue by using Wikipedia as a pre-labeled corpus for word senses. The basic idea is as follows. Relevant terms (mostly nouns) in a Wikipedia article each have a link attached to the respective explanatory article. Thus, links can serve as manually annotated word senses. Links are added by the article's authors (most commonly), who are supposed to be domain experts. Therefore, Mihalcea and Csomai assume that the links are correct. Also, Wikipedia is growing steadily and the quality of articles improves over time through continuous inspection by the community. Even though the latter is arguable, the quality of Wikipedia articles
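The link-based WSD idea can be made concrete with a small sketch. The Python snippet below is illustrative only: the training pairs, sense names, and bag-of-words contexts are invented stand-ins for real Wikipedia link data, and the classifier is a minimal add-one-smoothed naïve Bayes rather than the feature set used by Mihalcea and Csomai.

```python
from collections import Counter, defaultdict
from math import log

# Toy "corpus" in the spirit of Wikipedia-link-based WSD: every link pairs a
# surface word with the article it points to, so link targets act as free
# sense labels. These samples are hand-made, not real Wikipedia data.
TRAIN = [
    # (context words around the ambiguous word "apple", linked article = sense)
    (["take", "fruit", "fridge", "eat"], "Apple_(fruit)"),
    (["peel", "orange", "fruit", "juice"], "Apple_(fruit)"),
    (["iphone", "company", "stock", "mac"], "Apple_Inc."),
    (["launch", "mac", "software", "company"], "Apple_Inc."),
]

def train_nb(samples):
    """Collect per-sense word counts, sense priors, and the vocabulary."""
    word_counts = defaultdict(Counter)
    sense_counts = Counter()
    vocab = set()
    for context, sense in samples:
        sense_counts[sense] += 1
        word_counts[sense].update(context)
        vocab.update(context)
    return word_counts, sense_counts, vocab

def disambiguate(context, word_counts, sense_counts, vocab):
    """Return the sense (Wikipedia article) with the highest log-probability,
    using add-one smoothing over the training vocabulary."""
    total = sum(sense_counts.values())
    best_sense, best_score = None, float("-inf")
    for sense, n in sense_counts.items():
        score = log(n / total)  # log prior of the sense
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            score += log((word_counts[sense][w] + 1) / denom)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

model = train_nb(TRAIN)
print(disambiguate(["take", "apple", "fridge"], *model))
```

In this toy setting the fridge context pulls "apple" toward the fruit sense; the real system trains on Wikipedia link contexts at scale and attaches the winning article to each noun of the utterance.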

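The topic-graph step outlined in Section III can likewise be sketched as a personalized (biased) PageRank over a toy graph. The node names and edges below are hand-made stand-ins for DBpedia articles and relations, not actual DBpedia data; the bias vector concentrates the restart probability on the senses found in the utterance.

```python
# Toy stand-in for a DBpedia sense graph: nodes are articles, edges relations.
EDGES = {
    "Apple_(fruit)": ["Fruit", "Orange_(fruit)"],
    "Orange_(fruit)": ["Fruit", "Apple_(fruit)"],
    "Fruit": ["Apple_(fruit)", "Orange_(fruit)"],
    "Refrigerator": ["Home_appliance"],
    "Home_appliance": ["Refrigerator"],
    "Robot": ["Domestic_robot"],
    "Domestic_robot": ["Robot", "Home_appliance"],
}

def biased_pagerank(edges, bias_nodes, damping=0.85, iterations=50):
    """Personalized PageRank: the restart probability is spread only over
    the word senses observed in the utterance (the bias nodes)."""
    nodes = set(edges) | {t for ts in edges.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    bias = {n: (1.0 / len(bias_nodes) if n in bias_nodes else 0.0)
            for n in nodes}
    for _ in range(iterations):
        # Restart mass goes to the bias nodes, the rest flows along edges.
        new = {n: (1 - damping) * bias[n] for n in nodes}
        for src, targets in edges.items():
            share = damping * rank[src] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# Senses attached to the nouns of "take the apple, err, the orange from the fridge"
utterance_senses = ["Apple_(fruit)", "Orange_(fruit)", "Refrigerator"]
ranks = biased_pagerank(EDGES, utterance_senses)
labels = sorted(ranks, key=ranks.get, reverse=True)[:3]
print(labels)
```

On this toy graph the two fruit senses and their shared neighbor "Fruit" end up most central, while the unbiased robot nodes decay toward zero; the centrality scores double as the confidences attached to the proposed labels.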