
LT3: Applying Hybrid Terminology Extraction to Aspect-Based Sentiment Analysis

Orphée De Clercq, Marjan Van de Kauter, Els Lefever and Véronique Hoste
LT3, Language and Translation Technology Team
Department of Translation, Interpreting and Communication – Ghent University
Groot-Brittanniëlaan 45, 9000 Ghent, Belgium
[email protected]

Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 719–724, Denver, Colorado, June 4-5, 2015. © 2015 Association for Computational Linguistics

Abstract

The LT3 system perceives ABSA as a task consisting of three main subtasks, which have to be tackled incrementally, namely aspect term extraction, classification and polarity classification. For the first two steps, we see that employing a hybrid terminology extraction system leads to promising results, especially when it comes to recall. For the polarity classification, we show that it is possible to gain satisfying accuracies, even on out-of-domain data, with a basic model employing only lexical information.

1 Introduction

There exists a large interest in sentiment analysis of user-generated content. Until recently, the main research focus has been on discovering the overall polarity of a certain text or phrase. A noticeable shift has occurred to consider a more fine-grained approach, known as aspect-based sentiment analysis (ABSA). For this task the goal is to automatically identify the aspects of given target entities and the sentiment expressed towards each of them. In this paper, we present the LT3 system that participated in this year's SemEval 2015 ABSA task. Though the focus was on the same domains (restaurants and laptops) as last year's task (Pontiki et al., 2014), it differed in two ways. This time, entire reviews were to be annotated and for one subtask the systems were confronted with an out-of-domain test set, unknown to the participants.

The task ran in two phases. In the first phase (Phase A), the participants were given two test sets (one for the laptops and one for the restaurants domain). The restaurant sentences were to be annotated with automatically identified <target, aspect category> tuples, the laptop sentences only with the identified aspect categories. In the second phase (Phase B), the gold annotations for the above two datasets, as well as for a hidden domain, were given and the participants had to return the corresponding polarities (positive, negative, neutral). For more information we refer to Pontiki et al. (2015).

We tackled the problem by dividing the ABSA task into three incremental subtasks: (i) aspect term extraction, (ii) aspect term classification and (iii) aspect term polarity estimation (Pavlopoulos and Androutsopoulos, 2014). The first two are at the basis of Phase A, whereas the final one constitutes Phase B. For the first step, viz. extracting terms (or targets), we wanted to test our in-house hybrid terminology extraction system (Section 2). Next, we performed a multiclass classification task relying on a feature space containing both lexical and semantic information to aggregate the previously identified terms into the domain-specific and predefined aspects (or aspect categories) (Section 3). Finally, we performed polarity classification by deriving both general and domain-specific lexical features from the reviews (Section 4). We finish with conclusions and prospects for future work (Section 5).

2 Aspect Term Extraction

Before starting with any sort of classification, it is essential to know which entities or concepts are present in the reviews. According to Wright (1997), these "words that are assigned to concepts used in the special languages that occur in subject-field or domain-related texts" are called terms. Translated to the current challenge, we are thus looking for words or terms specific to a certain domain or interest, such as the restaurant domain.

In order to detect these terms, we tested our in-house terminology extraction system TExSIS (Macken et al., 2013), which is a hybrid system combining linguistic and statistical information. For the linguistic analysis, TExSIS relies on tokenized, Part-of-Speech tagged, lemmatized and chunked data using the LeTs Preprocess toolkit (Van de Kauter et al., 2013), which is incorporated in the architecture. Subsequently, all words and chunks matching certain Part-of-Speech patterns (i.e. nouns and noun phrases) were considered as candidate terms. In order to determine the specificity of and cohesion between these candidate terms, we combine several statistical filters to represent the termhood and unithood of the candidate terms (Kageura and Umino, 1996). To this purpose, we employed Log-likelihood (Rayson and Garside, 2000), C-value (Frantzi et al., 2000) and termhood (Vintar, 2010). All these statistical filters were calculated using the Web 1T 5-gram corpus (Brants and Franz, 2006) as a reference corpus.

After a manual inspection of the first output for the training data, we formulated some filtering heuristics. We filter out terms consisting of more than six words, terms that refer to location names or that contain sentiment words. Locations are found using the Stanford CoreNLP toolkit (Manning et al., 2014) and for the sentiment words, we filter those terms occurring in one of the following sentiment lexicons: AFINN (Nielsen, 2011), General Inquirer (Stone et al., 1966), NRC Emotion (Mohammad and Turney, 2010; Mohammad and Yang, 2011), MPQA (Wilson et al., 2005) and Bing Liu (Hu and Liu, 2004).

The terms that resulted from this filtered TExSIS output, supplemented with those terms that were annotated in the training data but not recognized by our terminology extraction system, were all considered as candidate terms. Finally, this list of candidate targets was further extended by also including coreferential links as null terms. Coreference resolution of each individual review was performed with the Stanford multi-pass sieve coreference resolution system (Lee et al., 2011). We should also point out that we only allowed terms to be identified in the test data when a sentence contains a subjective opinion. This was done by running it through the above-mentioned sentiment lexicons.

3 Phase A

Given a list of possible candidate terms, the next step consists in aggregating these terms to broader aspect categories. As our main focus was on combining aspect term extraction with classification and since no targets were annotated for the laptops, we decided to focus on the restaurants domain. The organizers provided the participants with training data consisting of 254 annotated restaurant reviews. The task was then to assign each identified term to a correct aspect category.

For the classification task, we relied on a rich feature space for each of the candidate targets and performed classification into the domain-specific categories. Whereas the annotations allow for a two-step classification procedure by first classifying the main categories and afterwards the subcategories, we chose to perform the joint classification as this yielded better results in our exploratory experiments.

3.1 Feature Extraction

For all candidate terms present in our data sets we derived a number of lexical and semantic features. For those candidate targets that have been recognized as anaphors (see Section 2), these features were derived based on the corresponding antecedent.

First of all, we derived bag-of-words token unigram features of the sentence in which a term occurs in order to represent some of the lexical information present in each of the categories.

The main part of our feature vectors, however, was made up of semantic features, which should enable us to classify our aspect terms into the predefined categories. These semantic features consist of:

1. WordNet features: for each main category, a value is derived indicating the number of (unique) terms annotated as aspect terms from that category in the training data that (1) co-occur in the synset of the candidate term or (2) which are a hyponym/hypernym of a term in the synset. In case the candidate term is a multi-word term whose full term is not found, this value is calculated for all nouns in the multi-word term and the resulting sum is divided by the number of nouns.

2. Cluster features: using the implementation of the Brown hierarchical word clustering algorithm (Brown et al., 1992) by Liang (2005), we derived clusters from the Yelp dataset. Then, we derived for each main category a value indicating the number of (unique) terms annotated as aspect terms from that category in the training data that co-occur with the candidate term in the same cluster. Since clusters can only contain single words, we calculate this value for all the nouns in a multi-word term and take the mean of the resulting sum.

3. Linked Open Data (LOD) features: using DBpedia (Lehmann et al., 2013), we included most

Whenever a category-subcategory label is predicted for a target term that is never associated with the respective target in the training dictionary, we overrule the classification output and replace it by the (most frequent) category-subcategory label that is associated with this target in the training dictionary.

The results of our system on the final test set and rank are presented in Table 1, where Slot 1 refers to the aspect category classification and Slot 2 to the task of finding the correct opinion target expressions (or terms).

Slot       Precision   Recall   F-score   Rank
Slot 1     51.54       56.00    53.68     8/15
Slot 2     36.47       79.34    49.97     13/21
Slot 1,2   29.44       44.73    35.51     6/13

Table 1: Results of the LT3 system on Phase A

For the design of our system we wanted to focus on the combination of Slot 1 and 2, i.e.
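To give a concrete idea of the kind of termhood filtering described in Section 2, the sketch below computes the Rayson and Garside (2000) log-likelihood score for a candidate term from its frequency in a domain corpus versus a reference corpus. All counts are invented for illustration; the actual TExSIS implementation and its thresholds are not part of this paper.

```python
import math

def log_likelihood(freq_domain: int, size_domain: int,
                   freq_ref: int, size_ref: int) -> float:
    """Log-likelihood termhood score (Rayson and Garside, 2000).

    Compares a candidate term's frequency in the domain corpus
    against its frequency in a large reference corpus: the higher
    the score, the more domain-specific the term.
    """
    total = size_domain + size_ref
    # Expected frequencies under the null hypothesis that the term
    # is equally likely in both corpora.
    expected_domain = size_domain * (freq_domain + freq_ref) / total
    expected_ref = size_ref * (freq_domain + freq_ref) / total
    ll = 0.0
    if freq_domain > 0:
        ll += freq_domain * math.log(freq_domain / expected_domain)
    if freq_ref > 0:
        ll += freq_ref * math.log(freq_ref / expected_ref)
    return 2 * ll

# A term that is frequent in a (hypothetical) restaurant-review corpus
# but rare in the reference corpus receives a high score.
print(log_likelihood(40, 100_000, 5, 10_000_000))
```

When the term occurs with the same relative frequency in both corpora, the score is zero; the score grows as the domain corpus over-uses the term relative to the reference corpus.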
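The filtering heuristics from Section 2 (length, location names, sentiment words) can be sketched as follows. The lexicon and location list below are toy stand-ins: the paper uses five published sentiment lexicons and Stanford CoreNLP named entity recognition for locations.

```python
# Toy stand-ins for the five sentiment lexicons and the CoreNLP NER output.
SENTIMENT_WORDS = {"great", "terrible", "delicious", "awful"}
LOCATION_NAMES = {"brooklyn", "manhattan"}

def keep_candidate(term: str) -> bool:
    """Apply the three filtering heuristics from Section 2."""
    tokens = term.lower().split()
    if len(tokens) > 6:                               # more than six words
        return False
    if any(tok in LOCATION_NAMES for tok in tokens):  # refers to a location
        return False
    if any(tok in SENTIMENT_WORDS for tok in tokens): # contains a sentiment word
        return False
    return True

candidates = ["tuna sashimi", "terrible service", "brooklyn",
              "the best little pizza place in all of town"]
print([t for t in candidates if keep_candidate(t)])  # only "tuna sashimi" survives
```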
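The WordNet feature described under (1) in Section 3.1 can be sketched as below. To keep the example self-contained, a toy dictionary stands in for WordNet synset and hyper-/hyponym lookups, the per-category training terms are invented, and all tokens of a multi-word term are treated as nouns for simplicity.

```python
# Toy stand-in for WordNet: each term maps to the set of terms sharing a
# synset with it or appearing as a hyponym/hypernym of a synset member.
RELATED = {
    "sushi": {"sashimi", "food", "maki"},
    "waiter": {"server", "staff"},
}

# Aspect terms observed per main category in the training data (invented).
CATEGORY_TERMS = {
    "FOOD": {"sashimi", "maki", "pizza"},
    "SERVICE": {"server", "staff"},
}

def wordnet_feature(term: str, category: str) -> float:
    """Count unique training terms of `category` related to `term`.

    For a multi-word term not found as a whole, the value is computed
    per token and the resulting sum divided by the number of tokens,
    mirroring the per-noun averaging described in the paper.
    """
    if term in RELATED:
        return len(RELATED[term] & CATEGORY_TERMS[category])
    tokens = term.split()
    total = sum(len(RELATED.get(tok, set()) & CATEGORY_TERMS[category])
                for tok in tokens)
    return total / len(tokens)

print(wordnet_feature("sushi", "FOOD"))     # two related FOOD training terms
print(wordnet_feature("sushi", "SERVICE"))  # no related SERVICE training terms
```

In the actual system these lookups would go through a WordNet interface rather than a hand-built dictionary.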
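The cluster feature under (2) follows the same pattern with Brown clusters instead of synsets. The mapping below is invented (Liang's wcluster implementation emits one word per line with its bit-string cluster path), and again every token is treated as a noun for simplicity.

```python
# Toy word-to-cluster mapping, as would be induced by Brown clustering
# over the Yelp data (cluster ids are bit-string paths in the real output).
CLUSTER = {"sushi": "0110", "sashimi": "0110", "pizza": "0111",
           "server": "1010", "staff": "1010"}

# Aspect terms observed per main category in the training data (invented).
CATEGORY_TERMS = {
    "FOOD": {"sashimi", "pizza"},
    "SERVICE": {"server", "staff"},
}

def cluster_feature(term: str, category: str) -> float:
    """Mean number of unique training terms of `category` that share a
    Brown cluster with the tokens of `term` (clusters hold single words,
    hence the per-token computation and averaging)."""
    tokens = term.split()
    values = []
    for tok in tokens:
        cid = CLUSTER.get(tok)
        shared = {t for t in CATEGORY_TERMS[category]
                  if cid is not None and CLUSTER.get(t) == cid and t != tok}
        values.append(len(shared))
    return sum(values) / len(values)

print(cluster_feature("sushi", "FOOD"))  # shares cluster "0110" with "sashimi"
```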
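The training-dictionary overrule step can be sketched as follows. The dictionary contents, counts, and the FOOD#QUALITY-style labels are illustrative (the labels follow the SemEval 2015 restaurant annotation scheme but these particular entries are invented).

```python
from collections import Counter

# Training dictionary: category-subcategory labels observed per target
# in the training data, with invented counts.
TRAIN_DICT = {
    "sushi": Counter({"FOOD#QUALITY": 5, "FOOD#PRICES": 1}),
}

def overrule(target: str, predicted: str) -> str:
    """If the predicted label was never associated with this target in
    training, replace it by the target's most frequent training label."""
    seen = TRAIN_DICT.get(target)
    if seen is not None and predicted not in seen:
        return seen.most_common(1)[0][0]
    return predicted

print(overrule("sushi", "SERVICE#GENERAL"))  # overruled to FOOD#QUALITY
print(overrule("sushi", "FOOD#PRICES"))      # seen in training, kept
```

Targets unseen in training are left untouched, so the overrule only ever tightens predictions for known targets.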