
FBK-TR: Applying SVM with Multiple Linguistic Features for Cross-Level Semantic Similarity

Ngoc Phuoc An Vo, Fondazione Bruno Kessler, University of Trento, Trento, Italy ([email protected])
Tommaso Caselli, TrentoRISE, Trento, Italy ([email protected])
Octavian Popescu, Fondazione Bruno Kessler, Trento, Italy ([email protected])

Abstract

Recently, the task of measuring semantic similarity between given texts has drawn much attention from the Natural Language Processing community. The task becomes especially interesting when it comes to measuring the semantic similarity between texts of different sizes, e.g. paragraph-sentence, sentence-phrase and phrase-word. In this paper, we, the FBK-TR team, describe our system participating in Task 3, "Cross-Level Semantic Similarity", at SemEval 2014. We also report the results obtained by our system, compared to the baseline and to the other participating systems in this task.

1 Introduction

Measuring semantic text similarity has become a hot trend in NLP, as it can be applied to other tasks, e.g. Information Retrieval, Paraphrasing, Machine Translation Evaluation, Text Summarization, Question Answering, and others. Several approaches have been proposed to measure the semantic similarity between given texts. The first approach is based on vector space models (VSMs) (Meadow, 1992). A VSM transforms the given texts into "bags of words" and represents them as vectors. It then deploys different distance metrics to compute the closeness between the vectors, which is returned as the distance or similarity between the given texts (a minimal sketch of this idea is given at the end of this section). The next well-known approach uses text alignment: by assuming that two given texts are semantically similar, they can be aligned at the word or phrase level, and the alignment quality can serve as a similarity measure. Such a method "typically pairs words from the two texts by maximizing the summation of the word similarity of the resulting pairs" (Mihalcea et al., 2006). In contrast, the third approach uses machine learning techniques to learn models built from different lexical, semantic and syntactic features, which then predict the degree of similarity between given texts (Šarić et al., 2012).

At SemEval 2014, Task 3, "Cross-Level Semantic Similarity" (Jurgens et al., 2014), evaluates semantic similarity across texts of different sizes; in particular, a larger text is compared to a smaller one. The task consists of four types of semantic similarity comparison: paragraph to sentence, sentence to phrase, phrase to word, and word to sense. The degree of similarity ranges from 0 (different meanings) to 4 (similar meanings). Systems were evaluated, first, within each comparison type and, second, across all comparison types. Two methods were used to compare system outputs against the gold standard (human annotation): Pearson correlation and Spearman's rank correlation (rho).

The FBK-TR team participated in this task with three different runs. In this paper, we present a clear and comprehensive description of our system, which obtained competitive results. Our main approach is to use machine learning techniques to learn models over different lexical and semantic features from the training corpora and then make predictions on the test corpora. We used a support vector machine (SVM) regression model to solve the task.

The remainder of the paper is organized as follows. Section 2 presents the system overview. Sections 3, 4 and 5 describe the semantic word similarity, string similarity and other features, respectively. Section 6 discusses the SVM approach. Section 7 presents the experiment settings for each subtask. Finally, Sections 8 and 9 present the evaluation and conclusion.
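As an illustration of the VSM approach mentioned above, the following minimal Python sketch (purely illustrative, not part of the FBK-TR system; the function name and sample sentences are our own) turns two texts into bag-of-words count vectors and scores their closeness with cosine similarity:

    from collections import Counter
    import math

    def bow_cosine(text1: str, text2: str) -> float:
        """Cosine similarity between bag-of-words count vectors."""
        v1, v2 = Counter(text1.lower().split()), Counter(text2.lower().split())
        dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
        norm = math.sqrt(sum(c * c for c in v1.values())) * \
               math.sqrt(sum(c * c for c in v2.values()))
        return dot / norm if norm else 0.0

    print(bow_cosine("the cat sat on the mat", "a cat on a mat"))  # ~0.40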
"It typically pairs words from the Similarity, String Similarity and other features, re- two texts by maximizing the summation of the spectively. Section 6 discusses about SVM ap- This work is licensed under a Creative Commons At- proach. Section 7 presents the experiment settings tribution 4.0 International Licence. Page numbers and pro- ceedings footer are added by the organisers. Licence details: for each subtask. Finally, Sections 8 and 9 present http://creativecommons.org/licenses/by/4.0/ the evaluation and conclusion. 284 Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 284–288, Dublin, Ireland, August 23-24, 2014. each expressing a distinct concept) to provide short, general definitions, and record the vari- ous semantic relations between synsets. We used Perdersen’s package WordNet:Similarity (Peder- sen et al., 2004) to obtain similarity scores for the lexical items covered in WordNet. Similarity scores have been computed by means of the Lin measure (Lin, 1998). The Lin measure is built on Resnik’s measure of similarity (Resnik, 1995): 2 IC(LCS) Simlin = ∗ (1) IC(concept1) + IC(concept2) where IC(LCS) is the information content (IC) of the least common subsumer (LCS) of two con- Figure 1: System Overview. cepts. To overcome the limit in coverage of WordNet, we applied the Levenshtein distance (Levenshtein, 2 System Overview 1966). The distance between two words is defined Our system was built on different linguistic fea- by the minimum number of operations (insertions, tures as shown in Figure 1. By constructing a deletions and substitutions) needed to transform pipeline system, each linguistic feature can be one word into the other. used independently or together with others to mea- 3.3 Wikipedia Relatedness sure the semantic similarity of given texts as well as to evaluate the significance of each feature to Wikipedia Miner (Milne and Witten, 2013) is a the accuracy of system’s predictions. On top of Java-based package developed for extracting se- this, the system is expandable and scalable for mantic information from Wikipedia. Through our adopting more useful features aiming for improv- experiments, we observed that Wikipedia related- ing the accuracy. ness plays an important role for providing extra information to measure the semantic similarity be- 3 Semantic Word Similarity Measures tween words. We used the package Wikipedia Miner from University of Waikato (New Zealand) At the lexical level, we built a simple, yet effec- to extract additional relatedness scores between tive Semantic Word Similarity model consisting of words. three components: WordNet similarity, Wikipedia relatedness and Latent Semantic Analysis (LSA). 3.4 Latent Semantic Analysis (LSA) These components played important and compli- We also took advantage from corpus-based ap- mentary roles to each other. proaches to measure the semantic similarity be- tween words by using Latent Semantic Analysis 3.1 Data Processing (LSA) technique (Landauer et al., 1998). LSA as- We used the TreeTagger tool (Schmid, 1994) to sumes that similar and/or related words in terms extract Part-of-Speech (POS) from each given of meaning will occur in similar text contexts. In text, then tokenize and lemmatize it. On the basis general, a LSA matrix is built from a large cor- of the POS tags, we only picked lemmas of con- pus. Rows in the matrix represent unique words tent words (Nouns and Verbs) from the given texts and columns represent paragraphs or documents. 
3.3 Wikipedia Relatedness

Wikipedia Miner (Milne and Witten, 2013) is a Java-based package, developed at the University of Waikato (New Zealand), for extracting semantic information from Wikipedia. Through our experiments, we observed that Wikipedia relatedness plays an important role in providing extra information for measuring the semantic similarity between words. We therefore used Wikipedia Miner to extract additional relatedness scores between words.

3.4 Latent Semantic Analysis (LSA)

We also took advantage of corpus-based approaches by measuring the semantic similarity between words with the Latent Semantic Analysis (LSA) technique (Landauer et al., 1998). LSA assumes that words that are similar and/or related in meaning occur in similar textual contexts. In general, an LSA matrix is built from a large corpus: rows represent unique words, columns represent paragraphs or documents, and each cell contains the word count for the corresponding paragraph/document. The matrix size is then reduced by means of the Singular Value Decomposition (SVD) technique. Once the matrix has been obtained, the similarity and/or relatedness between words is computed by means of the cosine values (scaled between 0 and 1) of their word vectors in the matrix; values close to 1 indicate very similar/related words, values close to 0 dissimilar ones. We trained our LSA models on the British National Corpus (BNC) and Wikipedia corpora.

4 String Similarity Measures

The Longest Common Substring (LCS) is the longest string shared by two or more strings. Two given texts are considered similar if they overlap/cover each other (e.g. sentence 1 covers a part of sentence 2, or vice versa). We implemented a simple algorithm to extract the LCS of two given texts. We then divided the LCS length by the product of the normalized lengths of the two texts and used the result as a feature.

4.1 Analysis Before and After LCS

After extracting the LCS of two given texts, we also considered the similarity of the parts before and after the LCS. The similarity between the text portions before and after the LCS was obtained by means of the Lin measure and the Levenshtein distance.

5 Other Features

To take further levels of analysis into account for the semantic similarity between texts, we extended our features by means of topic modeling and named entities.

5.1 Topic Modeling (Latent Dirichlet Allocation - LDA)

We trained LDA topic models with different numbers of topics, from 20 up to a maximum of 500 (20, 50, 100, 150, 200, 250, 300, 350, 400, 450 and 500). From the proportion vectors (the distribution of documents over topics) of the given texts, we applied three different measures to compute the distance between each pair of texts: cosine similarity and the Kullback-Leibler and Jensen-Shannon divergences (Gella et al., 2013).

5.2 Named-Entity Recognition (NER)

NER aims at identifying and classifying entities in a text with respect to a predefined set of categories, such as person names, organizations, locations, time expressions, quantities, monetary values and percentages. By exploring the training set, we observed that many of the texts in this task contain named entities. We therefore deployed the Stanford Named Entity Recognizer (Finkel et al., 2005) to extract the named entities that two given texts share or overlap on, and divided the number of shared/overlapping named entities by the summed length of the two texts.

6 Support Vector Machines (SVMs)

The support vector machine (SVM) (Cortes and Vapnik, 1995) is a type of supervised learning approach. We used the LibSVM package (Chang and Lin, 2011) to learn models from the different linguistic features described above. In SVM learning, however, the problem of finding optimal kernel parameters is critical for the learning process.
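To make the learning step concrete, here is a minimal sketch of combining feature scores and fitting an SVM regressor, with a grid search over the RBF-kernel parameters to address the issue just noted. It is an approximation under stated assumptions: the paper used LibSVM directly, while the sketch uses scikit-learn's SVR (epsilon-SVR with an RBF kernel) as a stand-in, and the feature values and gold scores are invented for illustration.

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVR

    # Toy feature matrix: one row per text pair; columns stand for feature
    # scores of the kind described in Sections 3-5 (e.g. Lin similarity,
    # LSA cosine, normalized LCS). All values here are invented.
    X_train = np.array([
        [0.91, 0.84, 0.40], [0.75, 0.66, 0.25], [0.52, 0.48, 0.12],
        [0.33, 0.41, 0.08], [0.22, 0.18, 0.03], [0.80, 0.71, 0.30],
        [0.10, 0.09, 0.01], [0.60, 0.55, 0.20],
    ])
    y_train = np.array([3.8, 3.1, 2.2, 1.5, 0.9, 3.3, 0.2, 2.6])  # gold scores in [0, 4]

    # Grid-search C and gamma, since finding good kernel parameters is
    # critical for SVM learning, as noted above.
    grid = GridSearchCV(
        SVR(kernel='rbf'),
        param_grid={'C': [0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1, 1.0]},
        cv=3, scoring='neg_mean_squared_error',
    )
    grid.fit(X_train, y_train)

    X_test = np.array([[0.70, 0.62, 0.22]])        # features for a new text pair
    print(grid.best_params_, grid.predict(X_test)) # predicted similarity score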