
Sequential Tagging of Semantic Roles on Chinese FrameNet

Jihong LI, Ruibo WANG, Yahui GAO
Computer Center, Shanxi University
[email protected], {wangruibo,gaoyahui}@sxu.edu.cn

Abstract

In this paper, semantic role labeling (SRL) on Chinese FrameNet is divided into the subtasks of boundary identification (BI) and semantic role classification (SRC). Both subtasks are regarded as sequential tagging problems at the word level. We use the conditional random fields (CRFs) model to train and test on a two-fold cross-validation data set. The extracted features include 11 word-level features and 15 shallow syntactic features derived from automatic base chunk parsing. We use an orthogonal array from statistics to arrange the experiments so that the best feature template is selected. The experimental results show that, given the target word within a sentence, the best F-measure of SRL reaches 60.42%. For the BI and SRC subtasks, the best F-measures are 70.55% and 81%, respectively. A statistical t-test shows that the improvement of our SRL model after appending the base chunk features is not significant.

1 Introduction

Semantic parsing is important in natural language processing, and it has attracted an increasing number of studies in recent years. Currently, its most important aspect is the formalization of the propositional meaning of a sentence through semantic role labeling. Therefore, many large human-annotated corpora have been constructed to support related research, such as FrameNet (Baker et al., 1998), PropBank (Kingsbury and Palmer, 2002), NomBank (Meyers et al., 2004), and so on. On this basis, several international semantic evaluations have been organized, including Senseval 3 (Litkowski, 2004), SemEval 2007 (Baker et al., 2007), CoNLL 2008 (Surdeanu et al., 2008), and CoNLL 2009 (Hajic et al., 2009).

The first SRL model on FrameNet was proposed by Gildea and Jurafsky (2002). The model consists of the two subtasks of boundary identification (BI) and semantic role classification (SRC). Both subtasks were implemented on the pretreatment results of the full parse tree, and many lexical and syntactic features were extracted to improve the accuracy of the model. On the FrameNet test data, the system achieved 65% precision and 61% recall.

Most work on SRL has followed Gildea's framework for processing the SRL task on English FrameNet and PropBank: models are built on the full parse tree, and features are selected with various machine learning methods to improve the accuracy of SRL. Many attempts have made significant progress, such as the works of Pradhan et al. (2005), Surdeanu et al. (2007), and so on. Other researchers regarded the SRL task as a sequential tagging problem and employed shallow chunking techniques to solve it, as described by Marquez et al. (2008).

Although the SRL model based on a full parse tree performs well for English, this processing method is not as effective for other languages, especially Chinese. A systematic study of Chinese SRL was done by Xue et al. (2008). Like the English SRL procedure, they removed many uncorrelated constituents of the parse tree and relied on the remainder to identify the semantic roles with a maximum entropy model. When the human-corrected parse is used, the F-measures on Chinese PropBank and Chinese NomBank reach 92.0% and 69.6%, respectively. However, when an automatic full parse is used, the F-measures reach only 71.9% and 60.4%, respectively. This significant decrease prompts us to analyze its causes and to find a potential solution.

First, the Chinese human-annotated resources of semantic roles are relatively small. Sun and Gildea studied the SRL of only 10 Chinese verbs and extracted 1,138 sentences from the Chinese TreeBank. The Chinese PropBank and Chinese NomBank used in Xue's paper are significantly smaller than the resources used in English-language studies. Moreover, more verbs exist in Chinese than in English, which increases the sparsity of Chinese semantic role data. The same problem exists in our experiment: the current corpus of Chinese FrameNet includes about 18,322 human-annotated sentences for 1,671 target words, an average of fewer than 10 sentences per target word. To reduce the influence of data sparsity, we adopt a two-fold cross-validation technique for training and testing.

Second, because of the lack of morphological clues in Chinese, the accuracy of a state-of-the-art parsing system decreases significantly when it is used in a realistic scenario. In the preliminary stage of building an SRL model for CFN, we employed the Stanford full parser to parse all sentences in the corpus and applied the traditional SRL technique to our data set. However, the result was unsatisfactory: only 76.48% of the semantic roles in the data set have a constituent with the same text span in the parse tree, and the F-measure of BI reaches only 54%. Therefore, we attempted another processing technique for SRL on CFN. We formalized SRL on CFN as a sequential tagging problem at the word level, first extracting 11 word features for the baseline model and then adding 15 base chunk features to the SRL model.

In this paper, the SRL task of CFN comprises two subtasks, BI and SRC, both regarded as sequential tagging problems at the word level. The conditional random fields (CRFs) model is employed to train the model and to predict the tags of unlabeled sentences. To improve the accuracy of the model, base chunk features are introduced, and a feature selection method based on an orthogonal array is adopted. The experimental results illustrate that the F-measure of our SRL model reaches 60.42%. This is the best SRL result on CFN so far.
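This excerpt does not name the CRF toolkit or enumerate the 11 word-level features, so the following sketch only illustrates the general setup: each word is described by a small feature dictionary, and a linear-chain CRF is trained to emit word-level boundary tags. The sklearn-crfsuite package, the feature names, and the B/I/O boundary tag set used here are assumptions made for the illustration, not details taken from the paper.

import sklearn_crfsuite

def word_features(sentence, i, target_index):
    # Illustrative word-level features; "sentence" is a list of (word, pos) pairs.
    word, pos = sentence[i]
    feats = {
        "word": word,
        "pos": pos,
        "rel_position": str(i - target_index),  # position relative to the target word
        "is_target": i == target_index,
    }
    if i > 0:
        feats["prev_word"], feats["prev_pos"] = sentence[i - 1]
    if i < len(sentence) - 1:
        feats["next_word"], feats["next_pos"] = sentence[i + 1]
    return feats

def sentence_to_features(sentence, target_index):
    return [word_features(sentence, i, target_index) for i in range(len(sentence))]

# Toy training data: one segmented, POS-tagged sentence (target word at index 2)
# with word-level boundary tags for the BI stage.
train_sentences = [([("第1", "m"), ("章", "q"), ("介绍", "v"), ("算法", "n"),
                     ("与", "c"), ("数据", "n"), ("结构", "n"), (";", "w")], 2)]
train_tags = [["B", "I", "O", "B", "I", "I", "I", "O"]]

X_train = [sentence_to_features(s, t) for s, t in train_sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, train_tags)
print(crf.predict(X_train))  # predicted boundary tag sequences, one list per sentence

In the same spirit, the SRC stage described later can reuse these feature dictionaries, only replacing the plain boundary tags with role-bearing tags.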
The paper is organized as follows. In Section 2, we describe the situation of CFN and introduce SRL on CFN. In Section 3, we propose our SRL model in detail. In Section 4, the candidate feature set is proposed, and the orthogonal-array-based feature selection method is introduced. In Section 5, we describe the experimental setup used throughout this paper. In Section 6, we list our experimental results and provide a detailed analysis. The conclusions and several further directions are given at the end of this paper.

2 CFN and Its SRL task

Chinese FrameNet (CFN) (You et al., 2005) is a research project developed at Shanxi University that builds an FN-styled lexicon for Chinese, based on the theory of Frame Semantics (Fillmore, 1982) and supported by corpus evidence. The results of the CFN project include a lexical resource, called the CFN database, and associated software tools. Many natural language processing (NLP) applications, such as information retrieval and machine translation, will benefit from this resource. In FN, the semantic roles of a predicate are called the frame elements of a frame. A frame has different frame elements, and a group of lexical units (LUs) that evoke the same frame share the same frame element names. The CFN project currently contains more than 1,671 LUs and more than 219 semantic frames, and it has exemplified more than 18,322 annotated sentences. In addition to correct segmentation and part of speech, every sentence in the database is marked up to exemplify the semantic and syntactic information of the target word. Each annotated sentence contains only one target word.

(a). <medium-np-subj 第1/m 章/q > <tgt="陈述" 介绍/v > <msg-np-obj 算法/n 与/c 数据/n 结构/n > ;/w

The CFN corpus is currently at an early stage, and the available CFN resources are relatively limited, so the SRL task on CFN is described as follows: given a Chinese sentence, a target word, and its frame, we identify the boundaries of the frame elements within the sentence and label them with the appropriate frame element names. This is the same as the task in Senseval-3.
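Example (a) shows the bracketed form in which an annotated CFN sentence marks its target word and frame elements. A word-level sequential tagger needs this turned into one tag per word; the sketch below illustrates such a conversion. It is not the authors' tool: the bracket format is copied from example (a), while the B-/I-/O encoding, the TGT tag for the target word, and the way the frame element name is taken from the first part of the chunk label are assumptions made for the illustration.

import re

def to_word_tags(annotated):
    # Convert '<role word/pos ... > ... <tgt="frame" word/pos >' into (word, tag) pairs.
    pairs = []
    for chunk, outside in re.findall(r'<([^>]*)>|(\S+)', annotated):
        if outside:  # token outside any chunk, e.g. the final punctuation
            pairs.append((outside.split('/')[0], 'O'))
            continue
        parts = chunk.split()
        label, tokens = parts[0], parts[1:]
        if label.startswith('tgt'):
            # the target word gets its own tag so later stages can condition on it
            pairs.extend((tok.split('/')[0], 'TGT') for tok in tokens)
            continue
        role = label.split('-')[0]  # e.g. "medium-np-subj" -> frame element "medium"
        for i, tok in enumerate(tokens):
            pairs.append((tok.split('/')[0], ('B-' if i == 0 else 'I-') + role))
    return pairs

example_a = '<medium-np-subj 第1/m 章/q > <tgt="陈述" 介绍/v > <msg-np-obj 算法/n 与/c 数据/n 结构/n > ;/w'
print(to_word_tags(example_a))
# [('第1', 'B-medium'), ('章', 'I-medium'), ('介绍', 'TGT'),
#  ('算法', 'B-msg'), ('与', 'I-msg'), ('数据', 'I-msg'), ('结构', 'I-msg'), (';', 'O')]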
3 Shallow SRL Models

This section proposes our SRL model architecture and describes the stages of our model in detail.

3.1 SRL Model Architecture

A family of SRL models can be constructed using only shallow syntactic information as input. The main differences between the models in this family concern the following two aspects:
i) model strategy: whether to combine the subtasks of BI and SRC;
ii) tagging unit: whether the word or the chunk is used as the tagging unit.

3.2 BI

To avoid the problem of data sparsity, we use all sentences in our training data set to train the BI model.

3.3 SRC

After predicting the boundaries of the semantic role chunks in a sentence, the proper semantic role types should be assigned in the SRC step. Although this can easily be modeled as a classification problem, we regard it as a sequential tagging problem at the word level. An additional constraint is employed in this step: the boundary tags of the sequence predicted at this stage should be consistent with the output of the BI stage; one simple way to enforce this is sketched below.

One intuitive reason for this model strategy is that the SRC step can use the same feature set as BI, and it can further prove the rationality of our feature optimization method.
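This excerpt states the consistency constraint but does not say how it is enforced. The sketch below shows one straightforward way to realize it, as an assumption rather than the authors' mechanism: the chunk spans proposed by BI are kept fixed, and each span receives a single role by majority vote over the SRC tagger's word-level predictions, so the final role segments always share the BI boundaries. The helper names and the majority-vote rule are illustrative.

from collections import Counter

def bi_chunks(bi_tags):
    # Turn BI output such as ['B', 'I', 'O', 'B', 'I'] into (start, end) spans.
    spans, start = [], None
    for i, tag in enumerate(bi_tags):
        if tag == 'B':
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == 'O':
            if start is not None:
                spans.append((start, i))
                start = None
    if start is not None:
        spans.append((start, len(bi_tags)))
    return spans

def constrained_src(bi_tags, src_tags):
    # Assign one role per BI chunk by majority vote over the SRC word-level tags.
    final = ['O'] * len(bi_tags)
    for start, end in bi_chunks(bi_tags):
        roles = [t.split('-', 1)[1] for t in src_tags[start:end] if '-' in t]
        if not roles:  # SRC proposed no role inside this chunk; leave it unlabeled
            continue
        role = Counter(roles).most_common(1)[0][0]
        for i in range(start, end):
            final[i] = ('B-' if i == start else 'I-') + role
    return final

bi = ['B', 'I', 'O', 'B', 'I', 'I', 'I', 'O']
src = ['B-medium', 'I-medium', 'O', 'B-msg', 'I-msg', 'I-msg', 'I-medium', 'O']
print(constrained_src(bi, src))
# ['B-medium', 'I-medium', 'O', 'B-msg', 'I-msg', 'I-msg', 'I-msg', 'O']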
3.4 Postprocessing