Optimization of Latent Semantic Analysis Based Language Model Interpolation for Meeting Recognition

Michael Pucher†‡, Yan Huang∗, Özgür Çetin∗
‡Telecommunications Research Center Vienna, Vienna, Austria, [email protected]
†Speech and Signal Processing Lab, TU Graz, Graz, Austria
∗International Computer Science Institute, Berkeley, USA, [email protected], [email protected]

Abstract

Latent Semantic Analysis (LSA) defines a semantic similarity space using a training corpus. This semantic similarity can be used for dealing with long-distance dependencies, which are an inherent problem for traditional word-based n-gram models. This paper presents an analysis of interpolated LSA models applied to meeting recognition. For this task it is necessary to combine meeting and background models. We show the optimization of the LSA model parameters that is necessary for the interpolation of multiple LSA models. A comparison of LSA and cache-based models furthermore shows that the former contain more semantic information than is captured by the mere repetition of word forms.

1. Introduction

Word-based n-gram models are a popular and fairly successful paradigm in language modeling. With these models it is, however, difficult to model the long-distance dependencies that are present in natural language (Chelba and Jelinek, 1998).

LSA maps a corpus of documents onto a semantic vector space. Long-distance dependencies are modeled by representing the context or history of a word and the word itself as vectors in this space. The similarity between these two vectors is used to predict a word given a context. Since LSA models the context as a bag of words, it has to be combined with n-gram models to include the word-order statistics of the short-span history. Language models that combine word-based n-gram models with LSA models have been applied successfully to conversational speech recognition and to the Wall Street Journal recognition task (Bellegarda, 2000b; Deng and Khudanpur, 2003).

We conjecture that LSA-based language models can also help to improve speech recognition of recorded meetings, because meetings have clear topics and LSA models adapt dynamically to topics. Due to the sparseness of the data available for language modeling of meetings, it is important to combine meeting LSA models, which are trained on relatively small corpora, with background LSA models, which are trained on larger corpora.

LSA-based language models have several parameters, such as the length of the history and the shape of the similarity function, that need to be optimized. The interpolation of multiple LSA models introduces additional parameters that regulate the impact of the different models on a per-word and per-model basis.

2. LSA-based Language Models

2.1. Constructing the Semantic Space

In LSA the training corpus is first encoded as a word-document co-occurrence matrix W (using weighted term frequencies). This matrix has high dimension and is highly sparse. Let V be the vocabulary with |V| = M and let T be a text corpus containing N documents. Let c_ij be the number of occurrences of word i in document j, c_i the number of occurrences of word i in the whole corpus (c_i = Σ_j c_ij), and c_j the number of words in document j. The elements of W are given by

    [W]_{ij} = (1 - \epsilon_{w_i}) \frac{c_{ij}}{c_j},    (1)

where ε_{w_i} is the normalized entropy of word i over the documents,

    \epsilon_{w_i} = -\frac{1}{\log N} \sum_{j=1}^{N} \frac{c_{ij}}{c_i} \log \frac{c_{ij}}{c_i}.    (2)

ε_w will be used as a short-hand for ε_{w_i}; informative words have a low value of ε_w. A semantic space of much lower dimension is then constructed using Singular Value Decomposition (SVD) (Deerwester et al., 1990):

    W \approx \hat{W} = U \times S \times V^T.    (3)

For some order r ≪ min(M, N), U is an M × r left singular matrix, S is an r × r diagonal matrix containing r singular values, and V is an N × r right singular matrix. The vector u_i S represents word w_i, and v_j S represents document d_j.
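For concreteness, the construction described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the implementation used in the paper; the function name lsa_space and the dense numpy SVD are choices made for readability.

```python
import numpy as np

def lsa_space(counts, r):
    """Build the LSA space of Section 2.1 from a word-document count matrix.

    counts : (M, N) array with counts[i, j] = c_ij, occurrences of word i in document j
    r      : truncation order, r << min(M, N)
    """
    counts = np.asarray(counts, dtype=float)
    M, N = counts.shape
    c_i = counts.sum(axis=1, keepdims=True)   # occurrences of each word in the whole corpus
    c_j = counts.sum(axis=0, keepdims=True)   # number of words in each document

    # Normalized word entropy over documents, equation (2); 0 * log(0) is treated as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        p = counts / np.maximum(c_i, 1.0)
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    eps = -plogp.sum(axis=1) / np.log(N)      # epsilon_w, low for informative words

    # Entropy-weighted relative term frequencies, equation (1).
    W = (1.0 - eps)[:, None] * counts / np.maximum(c_j, 1.0)

    # Truncated SVD of order r, equation (3): W ~ U S V^T.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U, S, V = U[:, :r], np.diag(s[:r]), Vt[:r, :].T

    word_vectors = U @ S    # u_i S represents word w_i
    doc_vectors = V @ S     # v_j S represents document d_j
    return word_vectors, doc_vectors, eps
```

For realistic vocabularies the count matrix is large and sparse, so a sparse truncated SVD (for example scipy.sparse.linalg.svds) would replace the dense decomposition shown here.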
2.2. LSA Probability

In this semantic space the cosine similarity between words and documents is defined as

    K_{sim}(w_i, d_j) \triangleq \frac{u_i S v_j^T}{\| u_i S^{1/2} \| \cdot \| v_j S^{1/2} \|}.    (4)

Since we need a probability for the integration with the n-gram models, the similarity is converted into a probability by normalizing it. Following (Coccaro and Jurafsky, 1998), we extend the small dynamic range of the similarity function by introducing a temperature parameter γ.

We also define a pseudo-document d̃_{t-1} built from the word vectors of all words preceding w_t, i.e. w_1, ..., w_{t-1}. This is needed because the model is used to compare words with documents that have not been seen so far. In the construction of the pseudo-document we include a decay parameter δ < 1 that is multiplied with the preceding pseudo-document vector and makes words closer in the history more significant.

The conditional probability of a word w_t given a pseudo-document d̃_{t-1} is defined as

    P_{LSA}(w_t \mid \tilde{d}_{t-1}) \triangleq \frac{[K_{sim}(w_t, \tilde{d}_{t-1}) - K_{min}(\tilde{d}_{t-1})]^{\gamma}}{\sum_{w} [K_{sim}(w, \tilde{d}_{t-1}) - K_{min}(\tilde{d}_{t-1})]^{\gamma}},    (5)

where K_{min}(\tilde{d}_{t-1}) = \min_w K_{sim}(w, \tilde{d}_{t-1}) makes the resulting similarities nonnegative (Deng and Khudanpur, 2003).

2.3. Combining LSA and n-gram Models

For the interpolation of the word-based n-gram models and the LSA models we used the methods defined in Table 1. λ is a fixed constant interpolation weight, and ∝ denotes that the result is normalized by the sum over the whole vocabulary. λ_w is a word-dependent parameter defined as

    \lambda_w \triangleq \frac{1 - \epsilon_w}{2}.    (6)

This definition ensures that the n-gram model gets at least half of the weight; λ_w is higher for more informative words.

Table 1: Interpolation methods.

    Model                                                       Definition
    n-gram (baseline)                                           P_{n-gram}
    Linear interpolation (LIN)                                  λ P_{LSA} + (1 - λ) P_{n-gram}
    Information weighted geometric mean interpolation (INFG)    ∝ P_{LSA}^{λ_w} P_{n-gram}^{1-λ_w}

We used two different methods for the interpolation of n-gram models and LSA models: the information weighted geometric mean and simple linear interpolation. The information weighted geometric mean interpolation is a log-linear interpolation of normalized LSA probabilities and the standard n-gram.

2.4. Combining LSA Models

For the combination of multiple LSA models we tried two different approaches. The first approach is the linear interpolation of LSA models with optimized λ_i, where λ_{n+1} = 1 - (λ_1 + ... + λ_n):

    P_{lin} \triangleq \lambda_1 P_{LSA_1} + \ldots + \lambda_n P_{LSA_n} + \lambda_{n+1} P_{n-gram}.    (7)

The second approach is the INFG interpolation with optimized θ_i, where λ_w^{(n+1)} = 1 - (λ_w^{(1)} + ... + λ_w^{(n)}):

    P_{infg} \propto P_{LSA_1}^{\lambda_w^{(1)} \theta_1} \cdots P_{LSA_n}^{\lambda_w^{(n)} \theta_n} \, P_{n-gram}^{\lambda_w^{(n+1)} \theta_{n+1}}.    (8)

The parameters θ_i have to be optimized because the λ_w^{(k)} depend on the corpus: a corpus can receive a high weight because word w has a content-word-like distribution in it, even though the corpus as a whole does not fit the meeting domain well. In general we observed that the λ_w values were higher for the background-domain models than for the meeting models. Taking the n-gram mixtures as a guide, however, the meeting models should get a higher weight than the background models. For this reason the λ_w of the background models have to be lowered using θ.

To ensure that the n-gram model gets a certain share α of the distribution, we define λ_w^{(k)} for word w and LSA model LSA_k as

    \lambda_w^{(k)} \triangleq \frac{(1 - \epsilon_w^{(k)})(1 - \alpha)}{n},    (9)

where ε_w^{(k)} is the uninformativeness of word w in LSA model LSA_k as defined in (2) and n is the number of LSA models. This is a generalization of definition (6); through the generalization it is also possible to train α, the minimum weight of the n-gram model.

For the INFG interpolation we thus had to optimize the model parameters θ_i, the n-gram share α, and the γ exponent of each LSA model.
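A hedged sketch of how equations (5), (8), and (9) fit together is given below. It reuses the word vectors and ε values from the previous sketch; the helper names, the plain cosine similarity (a simplification of the S^{1/2}-normalized similarity in equation (4)), and the small smoothing constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lsa_probability(word_vectors, pseudo_doc, gamma):
    """Equation (5): convert similarities to a probability over the vocabulary."""
    # Cosine similarity of every word vector with the pseudo-document vector.
    sims = word_vectors @ pseudo_doc / (
        np.linalg.norm(word_vectors, axis=1) * np.linalg.norm(pseudo_doc) + 1e-12)
    shifted = (sims - sims.min()) ** gamma   # subtract K_min and apply the temperature gamma
    return shifted / shifted.sum()

def update_pseudo_doc(pseudo_doc, word_vector, delta):
    """Pseudo-document update with decay parameter delta < 1 (Section 2.2)."""
    return delta * pseudo_doc + word_vector

def infg_combine(p_lsa_list, p_ngram, eps_list, theta, alpha):
    """Equations (8) and (9): INFG combination of n LSA models with an n-gram model.

    p_lsa_list : list of (V,) arrays, P_LSA_k(w | history) for each LSA model
    p_ngram    : (V,) array of n-gram probabilities for the same history
    eps_list   : list of (V,) arrays, epsilon_w^(k) from equation (2), one per LSA model
    theta      : sequence of n + 1 corpus weights theta_1 .. theta_{n+1}
    alpha      : minimum share reserved for the n-gram model
    """
    n = len(p_lsa_list)
    lambdas = [(1.0 - eps) * (1.0 - alpha) / n for eps in eps_list]   # equation (9)
    lambda_ngram = 1.0 - sum(lambdas)                                 # lambda_w^(n+1)

    log_p = lambda_ngram * theta[n] * np.log(p_ngram + 1e-12)
    for k in range(n):
        log_p += lambdas[k] * theta[k] * np.log(p_lsa_list[k] + 1e-12)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()    # the proportionality in equation (8): normalize over the vocabulary
```

During recognition, the pseudo-document vector would be updated with update_pseudo_doc after every word, and the combined distribution would be used to rescore the n-gram hypotheses.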
3. Analysis of the models

To gain a deeper understanding of our models we analyzed the effects of the model parameters and compared our models with other similar models. For this analysis we used meeting heldout data containing four ICSI, four CMU, and four NIST meetings. The perplexities and similarities were estimated using LSA and 4-gram models trained on the Fisher conversational speech data (Bulyko et al., 2003) and on the meeting data (Table 2), minus the meeting heldout data. The models were interpolated using the INFG interpolation method (Table 1).

Table 2: Training data sources.

    Training source    # of words (×10³)
    Fisher             23357
    Meeting            880

3.1. Perplexity Space of Combined LSA Models

Figure 1 shows the perplexities for the meeting and the Fisher LSA model, which were interpolated with an n-gram model using linear interpolation (Definition 7), where λ_1 and ...

Figure 2: Perplexity space for 2 INFG-interpolated LSA models (axes: meeting model θ_1 and Fisher model θ_2, each from 0 to 1; perplexity roughly 67 to 72).
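The perplexity surface of Figure 2 is obtained by scoring the heldout meetings while sweeping the corpus weights θ_1 and θ_2. A rough sketch of that evaluation loop is shown below, assuming a hypothetical score_heldout function that returns the per-word log probabilities of the heldout data under the INFG-combined model; the grid size and function names are illustrative assumptions.

```python
import itertools
import numpy as np

def perplexity(log_probs):
    """Perplexity of the heldout text from its per-word natural-log probabilities."""
    return float(np.exp(-np.mean(log_probs)))

def theta_surface(score_heldout, grid=np.linspace(0.0, 1.0, 6)):
    """Sweep the corpus weights theta_1 (meeting) and theta_2 (Fisher), as in Figure 2.

    score_heldout(theta) is assumed to return the per-word log probabilities of the
    heldout meetings under the INFG-combined model with the given theta weights.
    """
    surface = {}
    for theta_1, theta_2 in itertools.product(grid, repeat=2):
        log_probs = score_heldout([theta_1, theta_2, 1.0])   # last entry: n-gram weight
        surface[(theta_1, theta_2)] = perplexity(log_probs)
    return surface
```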