Neural Semantic Role Labeling Using Verb Sense Disambiguation


Domenico Alfano, Roberto Abbruzzese, Donato Cappetta
Eustema S.p.A.
[email protected], [email protected], [email protected]

Abstract

The Natural Language Processing (NLP) community has recently experienced a growing interest in Semantic Role Labeling (SRL). The increased availability of annotated resources enables the development of statistical approaches specifically for SRL. This holds potential impact in NLP applications. We examine and reproduce Marcheggiani's system and its individual components, including its annotated resources, parser, classification system, the features used and the results obtained by the system. Then, we explore different solutions in order to achieve better results by approaching Verb Sense Disambiguation (VSD). VSD is a sub-problem of the Word Sense Disambiguation (WSD) problem, which tries to identify in which sense a polysemous word is used in a given sentence. Thus a sense inventory for each word (or lemma) must be used. Finally, we also assess the challenges in SRL and identify opportunities for useful further research.

1 Introduction

One of the fields where AI is gaining great importance is NLP. Nowadays, NLP has many applications: search engines (semantic/topic search rather than word matching), automated speech translation, automatic summarization, etc. Therefore, there are many sub-tasks for natural language applications that have already been studied. An example is the syntactic analysis of the words of a sentence. The object of this research study is the realization of a system able to perform SRL.

An SRL system does nothing more than take a set of input phrases and, for each of them, determine the various components that could play a semantic role. A component of a proposition that plays a semantic role is called a constituent. Once the possible candidates are determined, Machine Learning techniques are used to label them with the right role.

This task becomes important for advanced applications where it is also necessary to process the semantic meaning of a sentence. Moreover, all these applications have to deal with ambiguity. Ambiguity is the term used to describe the fact that a certain expression can be interpreted in more than one way. In NLP, ambiguity is present at several stages in the processing of a text or a sentence, such as: tokenization, sentence splitting, part-of-speech (POS) tagging, syntactic parsing and semantic processing.

Semantic ambiguity is usually the last to be addressed by NLP systems, and it tends to be one of the hardest to solve among all the types of ambiguity mentioned. For this type of ambiguity, the sentence has already been parsed and, even if its syntactic analysis (parse tree) is unique and correct, some words may feature more than one meaning for the grammatical category they were tagged with. Usually this difference in meaning is associated with syntactic properties. In order to overcome these issues, this research study approaches the VSD task. The majority of the systems used in the VSD task are based on Machine Learning techniques (Witten, 2011). We approach both tasks by following two different solutions.

Copyright (c) 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

2 Related Work

2.1 SRL Approaches

Until recently, state-of-the-art Semantic Role Labeling (SRL) systems relied on complex sets of lexico-syntactic features (Pradhan, 2005) as well as declarative constraints (Punyakanok, 2008). Neural SRL models, instead, exploit the induction capabilities of neural networks, largely eliminating the need for complex "hand-made" features. Recently, it has been shown that an accurate span-based SRL model can be constructed without relying on syntactic features (Jie Zhou, 2015). In particular, Roth and Lapata (Roth and Lapata, 2016) argue that syntactic features are necessary for dependency-based SRL and show that the performance of their model degrades dramatically if syntactic paths between arguments and predicates are not provided as input.

Recent studies (Luheng He, 2018) propose an end-to-end approach for jointly predicting all predicates, argument spans, and the relations between them. The model makes independent decisions about what relationship, if any, holds between every possible word-span pair, and learns contextualized span representations that provide rich, shared input features for each decision.

2.2 WSD Approaches

An overview of the most used techniques and features for WSD was also conducted, based on the systems evaluated at SensEval3. The most common learning algorithms (Witten, 2011) used at SensEval3 are the following:

• The Naive Bayes algorithm, which estimates the most probable sense for a given word w based on the prior probability of each sense and the conditional probability of each of the features in that context.

• The Decision List algorithm (Yarowsky, 1995), which builds a list of rules, ordered from the highest to the lowest weighted feature. The correct sense of the word is determined by the first rule that is matched.

• The Vector Space Model algorithm, which considers the features of the context as binary values in a vector. In the training phase, a centroid is calculated for each possible sense of the word. These centroids are then compared with the feature vectors of the testing examples using the cosine function.

• Support Vector Machines, the most widely used classification technique in WSD at SensEval3 (Agirre, 2004); (Lee, 2004); (Villarejo, 2004), a classification method that finds the maximal-margin hyperplane that best separates the positive from the negative examples. In the particular case of WSD, this has to be slightly tuned for multi-class classification. Usually, methods like one-against-all are used, which lead to the creation of one classifier per class.

The features most commonly used by the systems proposed and presented at SensEval3 can be divided as follows:

• Collocations: n-grams (usually bi-grams or tri-grams) around the target word are collected. The information stored for the n-grams is composed of the lemma, word form and part-of-speech tag of each word.

• Syntactic dependencies: syntactic dependencies are extracted among the words around the target word. The relations most commonly used are subject, object and modifier. However, depending on the system, other dependencies might also be extracted.

• Surrounding context: single words in a defined window size are extracted and used in a bag-of-words approach.

• Knowledge-based information: some systems also make use of information such as WordNet's domains, FrameNet's syntactic patterns or annotated examples, among others.

3 Data

The dataset used is the CoNLL 2009 Shared Task dataset, built on the CoNLL 2008 task, which has been extended to multiple languages. The core of the task was to predict syntactic and semantic dependencies and their labeling. Data was provided for both statistical training and evaluation, in order to extract these labelled dependencies from manually annotated treebanks such as the Penn Treebank for English, the Prague Dependency Treebank for Czech and similar treebanks for Catalan, Chinese, German, Japanese and Spanish, enriched with semantic relations. Great effort has been dedicated to providing the participants with a common and relatively simple data representation for all the languages, similar to the 2008 English data. Role-annotated data opens up many research opportunities in SRL, including a broad spectrum of probabilistic and machine learning approaches.

We have introduced the dataset associated with SRL; we are now prepared to discuss the approaches to automatic SRL and VSD.

4 Metrics

For many of these subtasks there are standard evaluation techniques and corpora. Standard evaluation metrics from information retrieval include precision, recall and a combined metric called the F1 measure (Jurafsky, 2000). Precision is a measure of how much of the information that the system returned is correct, also known as accuracy. Recall is a measure of how much relevant information the system has extracted from the text, thus a measure of the system's coverage. The F1 measure balances recall and precision.

A corpus is often divided into three sets: a training set, a development set and a testing set. The training set is used for training systems, whereas the development set is used to tune the parameters of the learning systems and to select the best model.

In the system examined, each word is represented by the concatenation of:

• a randomly initialized word embedding x_re ∈ R^(d_w);

• a pre-trained word embedding x_pe ∈ R^(d_w);

• a randomly initialized part-of-speech tag embedding x_pos ∈ R^(d_p);

• a randomly initialized lemma embedding x_le ∈ R^(d_l) that is only active if the word is one of the predicates.

Then, Predicate-Specific Encoding has been used. Specifically, when identifying the arguments of a given predicate, the authors added a predicate-specific feature to the representation of each word in the sentence by concatenating a binary flag to the word representation. The flag is set to 1 for the word corresponding to the currently considered predicate, and to 0 otherwise. In this way, sentences with more than one predicate will be re-encoded by bidirectional LSTMs multiple times.

5.2 Encoder

Recurrent neural networks (RNNs) (Elman, 1990), more precisely Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), are one of the most effective ways to model sequences. Formally, the LSTM is a function that takes as input the sequence and returns a hidden state. This state can be regarded as a representation of the sequence read up to that point.
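The Vector Space Model algorithm described in the WSD overview can be sketched as follows. The feature dimensions and the sense labels for "bank" are illustrative assumptions, not data from SensEval3:

```python
import math
from collections import defaultdict

def centroid(vectors):
    """Average a list of equal-length binary feature vectors."""
    n = len(vectors)
    return [sum(dims) / n for dims in zip(*vectors)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def train(examples):
    """examples: list of (binary feature vector, sense label) pairs.
    Returns one centroid per sense."""
    by_sense = defaultdict(list)
    for vec, sense in examples:
        by_sense[sense].append(vec)
    return {sense: centroid(vecs) for sense, vecs in by_sense.items()}

def disambiguate(centroids, vec):
    """Pick the sense whose centroid is closest to vec under cosine similarity."""
    return max(centroids, key=lambda s: cosine(centroids[s], vec))

# Toy binary context features for the polysemous word "bank":
# dimensions = [money, river, loan, water]  (illustrative only)
training = [
    ([1, 0, 1, 0], "bank/finance"),
    ([1, 0, 0, 0], "bank/finance"),
    ([0, 1, 0, 1], "bank/river"),
    ([0, 1, 0, 0], "bank/river"),
]
model = train(training)
print(disambiguate(model, [1, 0, 0, 0]))  # -> bank/finance
```

The same skeleton extends to richer features (collocations, syntactic dependencies) by growing the vector dimensions.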
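The metrics defined in Section 4 can be made concrete with a minimal sketch; the argument counts below are invented for illustration:

```python
def precision_recall_f1(true_positives, false_positives, false_negatives):
    """Standard IR metrics: precision = TP/(TP+FP), recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# A system that labels 80 arguments correctly, 20 wrongly, and misses 40:
p, r, f1 = precision_recall_f1(80, 20, 40)
print(round(p, 3), round(r, 3), round(f1, 3))  # -> 0.8 0.667 0.727
```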
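The word representation and predicate-specific flag described above can be sketched as follows. The embedding dimensionalities, vocabulary and random initialization scheme are illustrative assumptions, not the authors' actual hyperparameters:

```python
import random

random.seed(0)
D_W, D_P, D_L = 4, 2, 3  # embedding sizes (illustrative, not the paper's)

def rand_vec(dim):
    return [random.uniform(-0.1, 0.1) for _ in range(dim)]

# Lookup tables, one vector per vocabulary item.
vocab = ["the", "cat", "chased", "mice"]
word_emb  = {w: rand_vec(D_W) for w in vocab}
pre_emb   = {w: rand_vec(D_W) for w in vocab}  # stand-in for pre-trained vectors
pos_emb   = {t: rand_vec(D_P) for t in ["DT", "NN", "VBD", "NNS"]}
lemma_emb = {l: rand_vec(D_L) for l in ["chase"]}

def represent(word, pos, lemma, is_predicate, predicate_flag):
    """Concatenate the four embeddings; the lemma embedding is active only
    for predicates, and a binary flag marks the predicate being encoded."""
    lemma_part = lemma_emb[lemma] if is_predicate else [0.0] * D_L
    return (word_emb[word] + pre_emb[word] + pos_emb[pos]
            + lemma_part + [float(predicate_flag)])

sentence = [("the", "DT", None), ("cat", "NN", None),
            ("chased", "VBD", "chase"), ("mice", "NNS", None)]
# One encoding pass per predicate; here "chased" (index 2) is the predicate.
encoded = [represent(w, t, l, l is not None, i == 2)
           for i, (w, t, l) in enumerate(sentence)]
print(len(encoded[0]))  # -> 4 + 4 + 2 + 3 + 1 = 14
```

With several predicates, the sentence is re-encoded once per predicate, each time moving the flag, exactly as described for the predicate-specific encoding.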
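The bidirectional LSTM encoding mentioned in Section 5.2 can be illustrated with a bare-bones cell over plain Python lists. This is a pedagogical sketch of the general mechanism, not the authors' architecture or hyperparameters:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (unoptimized, for illustration only)."""
    def __init__(self, in_dim, hid_dim):
        self.hid_dim = hid_dim
        # One weight matrix and bias per gate: input, forget, output, candidate.
        self.W = {g: [[random.uniform(-0.1, 0.1) for _ in range(in_dim + hid_dim)]
                      for _ in range(hid_dim)] for g in "ifoc"}
        self.b = {g: [0.0] * hid_dim for g in "ifoc"}

    def step(self, x, h, c):
        z = x + h  # concatenated input and previous hidden state
        def gate(g, f):
            return [f(sum(w * v for w, v in zip(row, z)) + b)
                    for row, b in zip(self.W[g], self.b[g])]
        i, f, o = gate("i", sigmoid), gate("f", sigmoid), gate("o", sigmoid)
        c_tilde = gate("c", math.tanh)
        c_new = [ft * ct + it * gt for ft, ct, it, gt in zip(f, c, i, c_tilde)]
        h_new = [ot * math.tanh(cn) for ot, cn in zip(o, c_new)]
        return h_new, c_new

def encode(cell, sequence):
    """Run the cell over the sequence, collecting the hidden state at each step."""
    h, c = [0.0] * cell.hid_dim, [0.0] * cell.hid_dim
    states = []
    for x in sequence:
        h, c = cell.step(x, h, c)
        states.append(h)
    return states

# Bidirectional encoding: one cell reads left-to-right, another right-to-left;
# the two hidden states are concatenated at each position.
fwd, bwd = LSTMCell(3, 2), LSTMCell(3, 2)
seq = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]]
states = [hf + hb for hf, hb in
          zip(encode(fwd, seq), reversed(encode(bwd, seq[::-1])))]
print(len(states), len(states[0]))  # -> 3 4
```

Each position thus receives a hidden state summarizing both its left and right context, which is what makes BiLSTM states useful input features for argument labeling.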
