
Word Sense Disambiguation Incorporating Lexical and Structural Semantic Information

Takaaki Tanaka†  Francis Bond♦  Timothy Baldwin♠  Sanae Fujita†  Chikara Hashimoto♣
† {takaaki, sanae}@cslab.kecl.ntt.co.jp  ♦ [email protected][email protected][email protected]

† NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation ♦ National Institute of Information and Communications Technology ♠ The University of Melbourne ♣ Yamagata University

Abstract

We present results that show that incorporating lexical and structural semantic information is effective for word sense disambiguation. We evaluated the method using precise information from a large treebank and an ontology automatically created from dictionary sentences. Exploiting rich semantic and structural information improves precision by 2–3%. The largest gains are seen with verbs, with an improvement of 5.7% over a model using only bag-of-words and n-gram features.

1 Introduction

Recently, significant improvements have been made in combining symbolic and statistical approaches to various natural language processing tasks. In parsing, for example, symbolic grammars are being combined with stochastic models (Riezler et al., 2002; Oepen et al., 2002; Malouf and van Noord, 2004). Statistical techniques have also been shown to be useful for word sense disambiguation (Stevenson, 2003). However, to date, there have been few combinations of sense information together with symbolic grammars and statistical models.

Klein and Manning (2003) show that much of the gain in statistical parsing using lexicalized models comes from the use of a small set of function words. Features based on general relations provide little improvement, presumably because the data is too sparse: in the Penn treebank normally used to train and test statistical parsers, stocks and skyrocket never appear together. They note that this should motivate the use of similarity and/or class-based approaches: the superordinate concepts capital (⊃ stocks) and move upward (⊃ skyrocket) frequently appear together. However, there has been little success in this area to date. For example, Xiong et al. (2005) use semantic knowledge to parse Chinese, but gain only a marginal improvement. Focusing on WSD, Stevenson (2003) and others have shown that the use of syntactic information (predicate-argument relations) improves the quality of word sense disambiguation (WSD). McCarthy and Carroll (2003) have shown the effectiveness of selectional preference information for WSD. However, there is still little work on combining WSD and parse selection.

We hypothesize that one of the reasons for this lack of success is that there has been no resource annotated with both syntactic (or structural semantic) information and lexical semantic information. For English, there is the SemCor corpus (Fellbaum, 1998), which is annotated with parse trees and WordNet senses, but it is fairly small and does not explicitly include any structural semantic information. Therefore, we decided to construct and use a treebank with both syntactic information (e.g. HPSG parses) and lexical semantic information (e.g. sense tags): the Hinoki treebank (Bond et al., 2004). This can be used to train word sense disambiguation and parse ranking models using both syntactic and lexical semantic features. In this paper, we discuss only word sense disambiguation. Parse ranking is discussed in Fujita et al. (2007).

2 The Hinoki Corpus

The Hinoki corpus consists of the Lexeed Semantic Database of Japanese (Kasahara et al., 2004) and corpora annotated with syntactic and semantic information.

2.1 Lexeed

Lexeed is a database built from a dictionary, which defines the word senses used in the Hinoki corpus and has around 49,000 dictionary definition sentences and 46,000 example sentences which are syntactically and semantically annotated. Lexeed consists of all words with a familiarity greater than or equal to five on a scale of one to seven. This gives a fundamental vocabulary of 28,000 words, divided into 46,347 different senses. Each sense has a definition and example sentence written using only these 28,000 familiar words (and some function words). Many senses have more than one sentence in the definition: there are 75,000 defining sentences in all.

A (simplified) example of the entry for 運転手 untenshu "chauffeur" is given in Figure 1. Each word contains the word itself, its part of speech (POS) and lexical type(s) in the grammar, and the familiarity score. Each sense then contains definition and example sentences, links to other senses in the lexicon (such as hypernym), and links to other resources, such as the Goi-Taikei (Ikehara et al., 1997) and WordNet (Fellbaum, 1998). Each content word in the definition and example sentences is annotated with sense tags from the same lexicon.

2.2 Lexical Annotation

The lexical semantic annotation uses the sense inventory from Lexeed. All words in the fundamental vocabulary are tagged with their sense. For example, the word 大きい ookii "big" (in ookiku naru "grow up") is tagged as sense 5 in the example sentence (Figure 1), with the meaning "elder, older".

Each word was annotated by five annotators. We use the majority choice in case of disagreements (Tanaka et al., 2006). Inter-annotator agreement among the five annotators ranges from 78.7% to 83.3%: the lowest agreement is for the Lexeed definition sentences and the highest is for the Kyoto corpus (newspaper text). These agreements reflect the difficulty of disambiguating word senses over each corpus and can be considered the upper bound of precision for WSD.

Table 1 shows the distribution of word senses according to word familiarity in Lexeed.

Fam     #Words   Polysemous   #WS   #Monosemous (%)
6.5–       368          182   4.0        186 (50.5)
6.0–     4,445        1,902   3.4      2,543 (57.2)
5.5–     9,814        3,502   2.7      6,312 (64.3)
5.0–    11,430        3,457   2.5      7,973 (69.8)

Table 1: Word Senses in Lexeed

2.3 Ontology

The Hinoki corpus comes with an ontology semi-automatically constructed from the parse results of definitions in Lexeed (Nichols and Bond, 2005). The ontology includes more than 80 thousand relationships between word senses, e.g. hypernym, abbreviation, etc. The hypernym relation for 運転手 untenshu "chauffeur" is shown in Figure 1. Hypernym or synonym relations exist for almost all content words.

2.4 Thesaurus

As part of the ontology verification, all nominal and most verbal word senses in Lexeed were linked to semantic classes in the Japanese thesaurus Nihongo Goi-Taikei (Ikehara et al., 1997). These were then hand verified. Goi-Taikei has about 400,000 words, including proper nouns; most nouns are classified into about 2,700 semantic classes. These semantic classes are arranged in a hierarchical structure (11 levels). The Goi-Taikei semantic class for 運転手 untenshu "chauffeur" is shown in Figure 1: ⟨C292:driver⟩ at level 9, which is subordinate to ⟨C4:person⟩.

2.5 Syntactic and Structural Semantics Annotation

Syntactic annotation is done by selecting the best parse (or parses) from the full analyses derived by a broad-coverage precision grammar. The grammar is an HPSG implementation (JACY: Siegel and Bender, 2002), which provides a high level of detail, marking not only dependency and constituent structure but also detailed semantic relations. As the grammar is based on a monostratal theory of grammar (HPSG: Pollard and Sag, 1994), it is possible to simultaneously annotate syntactic and semantic structure without overburdening the annotator. Using a grammar enforces treebank consistency — all annotated sentences are guaranteed to have well-formed parses.


[Figure 1 (simplified dictionary entry):]

INDEX 運転手 untenshu
POS noun   LEX-TYPE noun-lex   FAMILIARITY 6.2 [1–7] (≥ 5)
SENSE 1:
  DEFINITION 電車1 や 自動車1 を 運転1 する 人4  "a person who drives trains and cars"
  EXAMPLE (sense-tagged)  "I dream of growing up and becoming a train driver"
  HYPERNYM 人4 hito "person"
  SEM. CLASS ⟨292:driver⟩ (⊂ ⟨4:person⟩)
  WORDNET motorman1

Figure 1: Dictionary Entry for 運転手1 untenshu "chauffeur"

The flip side to this is that any sentences which the parser cannot parse remain unannotated, at least unless we were to fall back on full manual mark-up of their analyses. The actual annotation process uses the same tools as the Redwoods treebank of English (Oepen et al., 2002).

There were 4 parses for the definition sentence shown in Figure 1. The correct parse, shown as a phrase structure tree, is given in Figure 2. The two sources of ambiguity are the conjunction and the relative clause. The parser also allows the conjunction to join 電車 densha and 人 hito. In Japanese, relative clauses can have gapped and non-gapped readings. In the gapped reading (selected here), 人 hito is the subject of 運転 unten "drive". In the non-gapped reading there is some underspecified relation between the thing and the verb phrase. This is similar to the difference in the two readings of the day he knew in English: "the day that he knew about" (gapped) vs "the day on which he knew (something)" (non-gapped). Such semantic ambiguity is resolved by selecting the correct derivation tree, which includes the rules applied in building the tree.

The parse results can be given automatically by the HPSG parser PET (Callmeier, 2000) with the Japanese grammar JACY. The current parse ranking model has an accuracy of 70%: the correct tree is ranked first 70% of the time (for Lexeed definition sentences) (Fujita et al., 2007).

[Figure 2 (phrase structure tree):]

電車     や    自動車    を      運転   する   人
densha   ya    jidousha  o       unten  suru   hito
train    or    car       ACC     drive  do     person
(N       CONJ  N         CASE-P  V      V      N)

Figure 2: Syntactic View of the Definition of 運転手1 untenshu "chauffeur": "a person who drives a train or car"

The full parse is an HPSG sign, containing both syntactic and semantic information. A view of the semantic information is given in Figure 3.¹

The semantic view shows that some ambiguity has been resolved that is not visible in the purely syntactic view.

The semantic view can be further simplified into a dependency representation, abstracting away from quantification, as shown in Figure 4. One of the advantages of the HPSG sign is that it contains all this information, making it possible to extract the particular view needed. In order to make linking to other resources (such as the sense annotation) easier, predicates are labeled with pointers back to their position in the original surface string. For example, the predicate densha_n_1 links to the surface characters between positions 0 and 3: 電車.

¹ The specific meaning representation language used in JACY is Minimal Recursion Semantics (Copestake et al., 2005).


[Figure 3 shows the full MRS: TOP h1 and RELS containing the proposition_m_rel, unknown_rel, _densha_n, _ya_p, _jidousha_n, _unten_s and _hito_n predications with their udef_rel quantifiers, linked by qeq constraints in HCONS.]

Figure 3: Semantic View of the Definition of 運転手1 untenshu "chauffeur"

_1:proposition_m<0:13>[MARG e2:unknown]
e2:unknown<0:13>[ARG x5:_hito_n]
x7:udef<0:3>[]
x7:_densha_n_1<0:3>
x12:udef<4:7>[]
x12:_jidousha_n<4:7>
x13:_ya_p_conj<0:4>[L-INDEX x7:_densha_n_1, R-INDEX x12:_jidousha_n]
e23:_unten_s_2<8:10>[ARG1 x5:_hito_n, ARG2 x13:_ya_p_conj]
x5:udef<12:13>[]
_2:proposition_m<0:13>[MARG e23:_unten_s_2]

Figure 4: Dependency View of the Definition of 運転手1 untenshu "chauffeur"
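Concretely, each predication in this view can be held in a small record carrying its predicate name, its character span, and its argument links. The following is a minimal sketch of such a record; the class and field names are ours for illustration, not part of JACY or the Hinoki tools:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ElementaryDependency:
    """One predication from the dependency view, e.g. Figure 4's
    e23:_unten_s_2<8:10>[ARG1 x5:_hito_n, ARG2 x13:_ya_p_conj]."""
    var: str                 # MRS variable, e.g. "e23"
    pred: str                # predicate name, e.g. "_unten_s_2"
    start: int               # character span linking the predicate
    end: int                 #   back to the original surface string
    args: Dict[str, str] = field(default_factory=dict)  # role -> variable

# The unten "drive" predication from Figure 4:
unten = ElementaryDependency("e23", "_unten_s_2", 8, 10,
                             {"ARG1": "x5", "ARG2": "x13"})
```

The character span is what lets a predicate be aligned with the sense annotation over the same surface string.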

3 Task

We define the task in this paper as "allocating the word sense tags for all content words included in Lexeed as headwords, in each input sentence". This task is a kind of all-words task; a unique point, however, is that we focus on the fundamental vocabulary (basic words) in Lexeed and ignore other words. We use Lexeed as the sense inventory. There are two problems in resolving the task: how to build the model, and how to assign word senses using the model. We describe the word sense selection model in Section 4 and the method of word sense assignment in Section 5.

4 Word Sense Selection Model

All content words (i.e. basic words) in Lexeed are classified into six groups by part-of-speech: noun, verb, verbal noun, adjective, adverb, and others. We treat the first five groups as targets of sense disambiguation, and build five word sense models corresponding to these groups. A model contains senses for various words; however, features for a word are kept distinct from those for other words, so that senses irrelevant to a target word are not selected. For example, an n-gram feature "has-a-tail" following the target word dog is distinct from the same feature for cat.

In the remainder of this section, we describe the features used in word sense disambiguation. First we used simple n-gram collocations, then a bag of words of all words occurring in the sentence. This was then enhanced using ontological information and predicate-argument relations.

4.1 Word Collocations

Word collocations (WORD-Col) are basic and effective cues for WSD. They can be modelled by n-gram and bag-of-words features, which are easily extracted from a corpus. We used all unigrams, bigrams and trigrams which precede and follow the target words (N-gram), and all content words in the sentences where the target words occur (BOW).
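As an illustration of these features, the following sketch extracts N-gram and BOW features for one target word, keyed by the target as described above. It assumes a sentence arrives as (word, POS) pairs; the function and feature-label names are ours, not the authors' implementation:

```python
CONTENT_POS = {"noun", "verb", "verbal-noun", "adjective", "adverb"}

def word_col_features(tokens, i):
    """N-gram and bag-of-words (WORD-Col) features for target tokens[i]."""
    words = [w for w, pos in tokens]
    target = words[i]
    feats = []
    # All unigrams, bigrams and trigrams preceding and following the target.
    for n in (1, 2, 3):
        left = words[max(0, i - n):i]
        right = words[i + 1:i + 1 + n]
        if len(left) == n:
            feats.append("NGRAM-L:" + "_".join(left))
        if len(right) == n:
            feats.append("NGRAM-R:" + "_".join(right))
    # Bag of words: all content words co-occurring in the sentence.
    for j, (w, pos) in enumerate(tokens):
        if j != i and pos in CONTENT_POS:
            feats.append("BOW:" + w)
    # Key every feature by the target word, so that e.g. an n-gram feature
    # seen next to "dog" stays distinct from the same n-gram next to "cat".
    return [target + "#" + f for f in feats]
```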


4.2 Semantic Features

We use the semantic information (sense tags and ontologies) in two ways. One is to enhance the collocations and the other is to enhance the dependency relations.

4.2.1 Semantic Collocations

Word surface features like N-gram and BOW inevitably suffer from data sparseness; therefore, we generalize them to more abstract words or concepts, and also consider words having the same meanings. We used the ontology described in Section 2.3 to get hypernyms and synonyms, and the Goi-Taikei thesaurus to abstract the words to semantic classes. The superordinate classes at levels 3, 4 and 5 are also added in addition to the original semantic class. For example, 電車 densha "train" and 自動車 jidousha "automobile" are both generalized to the semantic class ⟨C988:land vehicle⟩ (level 7). The superordinate classes are also used: ⟨C706:inanimate⟩ (level 3), ⟨C760:artifact⟩ (level 4) and ⟨C986:vehicle⟩ (level 5).

#   sample features
C1  ⟨COLWS:人4⟩
C2  ⟨COLWSSC:C33:other person⟩
C3  ⟨COLWSHYP:人間1⟩
C4  ⟨COLWSHYPSC:C5:person⟩

C1  ⟨COLWS:電車1⟩
C2  ⟨COLWSSC:C988:land vehicle⟩
C3  ⟨COLWSHYP:…1⟩
C4  ⟨COLWSHYPSC:C988:land vehicle⟩

C1  ⟨COLWS:自動車1⟩
C2  ⟨COLWSSC:C988:land vehicle⟩
C3  ⟨COLWSHYP:…2⟩
C4  ⟨COLWSHYPSC:C988:land vehicle⟩

Table 2: Example semantic collocation features (SEM-Col) extracted from the word sense tagged corpus, the dictionary (Lexeed and Goi-Taikei) and the ontology, which have the word senses and the semantic classes linked to the semantic tags. The first column numbers the feature template corresponding to each example.

4.2.2 Semantic Dependencies

The semantic dependency features are based on a predicate and its arguments taken from the elementary dependencies. For example, consider the semantic dependency representation for densha ya jidousha-wo unten suru hito "a person who drives a train or car" given in Figure 4. The predicate unten "drive" has two arguments: ARG1 hito "person" and ARG2 ya "or". The coordinate conjunction is expanded out into its children, giving ARG2 densha "train" and jidousha "automobile".

From these, we produce several features; a sample of them is shown in Table 3. One has all arguments and their labels (D11). We also produce various back-offs, for example the predicate with only one argument at a time (D1–D3). Each combination of a predicate and its related argument(s) becomes a feature.

For the next class of features, we used the sense information from the corpus combined with the semantic classes in the dictionary to replace each predicate by its disambiguated sense, its hypernym, its synonym (if any) and its semantic class. The semantic classes for 電車1 and 自動車1 are both ⟨988:land vehicle⟩, while 運転1 is ⟨2003:motion⟩ and 人4 is ⟨4:human⟩. We also expand 自動車1 into its synonym モーターカー1 mōtākā "motor car".

The semantic class features provide semantic smoothing, as words are binned into the 2,700 classes. The hypernym/synonym features provide even more smoothing. Both have the effect of making more training data available for the disambiguator.

#     sample features for 運転手1
D1    ⟨PRED:運転, ARG1:人⟩
D1    ⟨PRED:運転, ARG2:電車⟩
D1    ⟨PRED:運転, ARG2:自動車⟩
D2    ⟨PRED:運転, ARG1:人4⟩
D2    ⟨PRED:運転, ARG2:電車1⟩
D2    ⟨PRED:運転, ARG2:自動車1⟩
D3    ⟨PRED:運転, ARG1SC:C33⟩
D3    ⟨PRED:運転, ARG2SC:C988⟩
D4    ⟨PRED:運転, ARG2SYN:モーターカー1⟩
D5    ⟨PRED:運転, ARG1HYP:人間1⟩
D5    ⟨PRED:運転, ARG2HYP:…1⟩
D5    ⟨PRED:運転, ARG2HYP:…2⟩
D6    ⟨PRED:運転, ARG1HYPSC:C5⟩
D6    ⟨PRED:運転, ARG2HYPSC:C988⟩
D11   ⟨PRED:運転, ARG1:人, ARG2:電車⟩
D22   ⟨PRED:運転, ARG1:人4, ARG2:電車1⟩
D23   ⟨PRED:運転, ARG1:人4, ARG2:C1460⟩
D24   ⟨PRED:運転, ARG1:人4, ARG2SYN:モーターカー1⟩
D32   ⟨PRED:運転, ARG1:C5, ARG2:電車1⟩
D33   ⟨PRED:運転, ARG1:C5, ARG2:C988⟩
D55   ⟨PRED:運転, ARG1HYP:人間4, ARG2HYP:…1⟩
D56   ⟨PRED:運転, ARG1HYP:人間4, ARG2HYPSC:C988⟩
D65   ⟨PRED:運転, ARG1HYPSC:C5, ARG2HYP:…1⟩
D322  ⟨PRED:C2003, ARG1:人4, ARG2:電車1⟩

Table 3: Example semantic features extracted from the dependency tree in Figure 4. The first column numbers the feature template corresponding to each example.
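To make the feature templates concrete, the following sketch generates Table 3-style back-offs from a single predicate-argument structure. The lookup dictionaries stand in for the Lexeed ontology and the Goi-Taikei links, and all names here are illustrative assumptions rather than the authors' implementation:

```python
def sem_dep_features(pred, args, hypernym, synonym, sem_class):
    """pred: a predicate sense, e.g. "unten_2"; args: role -> argument sense,
    e.g. {"ARG1": "hito_4", "ARG2": "densha_1"} (conjunctions already
    expanded into their children); hypernym/synonym/sem_class: dicts from
    senses to their generalizations, standing in for the ontology links."""
    feats = []
    for role, sense in args.items():
        feats.append(f"PRED:{pred},{role}:{sense}")                   # D2-style
        if sense in sem_class:
            feats.append(f"PRED:{pred},{role}SC:{sem_class[sense]}")  # D3-style
        if sense in synonym:
            feats.append(f"PRED:{pred},{role}SYN:{synonym[sense]}")   # D4-style
        if sense in hypernym:
            feats.append(f"PRED:{pred},{role}HYP:{hypernym[sense]}")  # D5-style
    # D11/D22-style: the predicate together with all of its arguments.
    all_args = ",".join(f"{r}:{s}" for r, s in sorted(args.items()))
    feats.append(f"PRED:{pred},{all_args}")
    return feats
```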

4.3 Domain

Domain information is a simple and sometimes strong cue for disambiguating the target words (Gliozzo et al., 2005). For instance, the sense of the word "record" is likely to be different in a musical context, which is recalled by domain-specific words like "orchestra" and "guitar", than in a sporting context. We use 12 domain categories, such as "culture/art" and "sport", similar to the ones used in directory-search web sites. About 6,000 words are automatically classified into one of the 12 domain categories according to their distributions in web sites (Hashimoto and Kurohashi, 2007), and 10% of them are manually checked. Polysemous words which belong to multiple domains, and neutral words, are not classified into any domain.

5 Search Algorithm

The conditional probability of the word sense for each word is given by the word sense selection model described in Section 4. In the initial state, some of the semantic features, e.g. semantic collocations (SEM-Col) and word sense extensions for semantic dependencies (SEM-Dep), are not available, since no word senses for polysemous words have been determined yet. It is not practical to score all combinations of word senses for the target words; therefore, we first decide the sense of whichever ambiguous word can be disambiguated most plausibly, and then disambiguate the next word using that sense. We use a beam search algorithm, similar to that used in decoders for statistical machine translation (Watanabe, 2004), to find a plausible combination of word sense tags.

The algorithm is described as follows. For the set of polysemous words {w1, ..., wn} in an input sentence, twik is the k-th word sense of word wi, W is the set of words still to be disambiguated, and T is a list of resolved word senses. A search node N is defined as [W, T], and the score s(N) of a node N is defined as the probability that the word sense set T occurs in the sentence. The beam search proceeds as follows (with beam width b):

1. Create an initial node N0 = [W0, T0] (W0 = {w1, ..., wn}, T0 = {}) and insert it into an initial queue Q0.

2. For each node N in the queue Q, do the following steps:
   • For each wi (∈ W), create Wi′ by picking out wi from W.
   • Create new lists T1′, ..., Tl′ by adding one of the word sense candidates twi1, ..., twil for wi to T.
   • Create new nodes [Wi′, T1′], ..., [Wi′, Tl′] and insert them into the queue Q′.

3. Sort the nodes in Q′ by the score s(N).

4. If the W of the top node in the queue Q′ is empty, adopt its T as the combination of word senses and terminate. Otherwise, pick out the top b nodes from Q′, insert them into a new queue Q, and go back to step 2.
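A minimal sketch of this search follows, assuming a `score` function that stands in for the MEMD model's probability of a partial sense assignment; the names are illustrative, not the authors' code:

```python
def beam_search(words, senses, score, b=5):
    """Find a plausible sense assignment for the polysemous words.
    words: the polysemous words w1..wn; senses(w): candidate senses of w;
    score(T): the model's probability that the sense list T occurs in the
    sentence; b: beam width."""
    if not words:
        return ()
    queue = [(frozenset(words), ())]          # initial node N0 = [W0, T0]
    while True:
        expanded = []
        for W, T in queue:
            for w in W:                       # pick out wi from W ...
                rest = W - {w}
                for s in senses(w):           # ... and extend T with each
                    expanded.append((rest, T + ((w, s),)))  # candidate sense
        # Sort the new nodes by the score s(N).
        expanded.sort(key=lambda node: score(node[1]), reverse=True)
        if not expanded[0][0]:                # top node's W is empty: done
            return expanded[0][1]
        queue = expanded[:b]                  # keep the top b nodes
```

With b = 1 this reduces to the greedy strategy described above: fix the single most plausible sense first, then move on to the next word.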

6 Evaluation

We trained and tested on the Lexeed dictionary definition (LXD-DEF) and example (LXD-EX) sections of the Hinoki corpus (Bond et al., 2007). These have about 75,000 definition and 46,000 example sentences respectively. Some 54,000 and 36,000 of these sentences are treebanked, i.e., they have syntactic trees and structural semantic information. We used the sentences with this complete information, selecting 1,000 sentences from each section as test sets (LXD-DEFtest, LXD-EXtest); the remainder was combined and used as a training set (LXD-ALL). We also tested on 1,000 sentences from the Kyoto Corpus of newspaper text (KYOTOtest). These sentences have between 3.4 (LXD-EXtest) and 5.2 (KYOTOtest) polysemous words per sentence on average.

We use a maximum entropy / minimum divergence (MEMD) modeler to train the word sense selection model. We use the open-source Maximum Entropy Modeling Toolkit² for training, determining the best-performing convergence thresholds and prior sizes experimentally. The models for the five different POSs were trained on each training set: the base model is the word collocation model (WORD-Col), and the semantic models are built by combining WORD-Col with the semantic collocations, the semantic dependencies or the domain features (+SEM-Col, +SEM-Dep and +DOMAIN).

² http://homepages.inf.ed.ac.uk/s0450736/maxent_toolkit.html

7 Results and Discussion

Table 4 shows the precision of word sense disambiguation trained on the combination of LXD-DEF and LXD-EX (LXD-ALL). The baseline method selects the sense occurring most frequently in the training corpus. Each column indicates the results using the baseline, word collocations (WORD-Col), the combinations of WORD-Col with one of the semantic feature sets (+SEM-Col, +SEM-Dep and +DOMAIN) — e.g., +SEM-Col gives the results using WORD-Col and SEM-Col — and all features (FULL).

There are significant improvements over the baseline on all corpora. Basic word collocation features (WORD-Col) give a vast improvement. Extending this with the ontological information (+SEM-Col) gives a further improvement over WORD-Col. Adding the predicate-argument relationships (+SEM-Dep) improves the results even more.

Table 6 shows the statistics of the target corpora. The best result on LXD-DEFtest (80.7%) surpasses the inter-annotator agreement (78.7%) in building the Hinoki Sensebank. However, there is a wide gap between the best result on KYOTOtest (60.4%) and the inter-annotator agreement (83.3%); this suggests that other information, such as semantic classes for named entities (including proper nouns and multiword expressions (MWEs)) and broader context, is required. A model built on dictionary sentences lacks these features; even so, there is some improvement.

The domain features (+DOMAIN) make only a small contribution to precision, since only intra-sentence context is counted in this experiment. Unfortunately, dictionary definition and example sentences do not really have a useful context. We expect broader context to make the domain features more effective on newspaper text (e.g. as in Stevenson (2003)).

Table 5 shows a comparison of the results across POSs. The semantic features (+SEM-Col and +SEM-Dep) are particularly effective for verbs, and also give moderate improvements on the results for the other POSs.

Figure 5 shows the precision on LXD-DEFtest as the size of the training corpus, divided into five partitions, is varied. The precision saturates at four partitions (264,000 tokens).

[Figure 5: Learning Curve]

These results on the dictionary sentences are close to the best published results for the SENSEVAL-2 task (79.3% by Murata et al. (2003), using a combination of simple Bayes learners). However, we are using a different sense inventory (Lexeed, not Iwanami (Nishio et al., 1994)) and testing over a different corpus, so the results are not directly comparable. In future work, we will test over the SENSEVAL-2 data so that we can compare directly.

None of the SENSEVAL-2 systems used ontological information, despite the fact that the dictionary definition sentences were made available, and there are several algorithms describing how to extract such information from MRDs (Tsurumaru et al., 1991; Wilkes et al., 1996; Nichols et al., 2005).

Model     Test          Baseline  WORD-Col  +SEM-Col  +SEM-Dep  +DOMAIN  FULL
LXD-ALL   LXD-DEFtest       72.8      78.4      79.8      80.2     78.1  80.7
          LXD-EXtest        70.4      75.6      78.7      77.9     76.0  78.8
          KYOTOtest         55.6      58.5      60.0      58.8     59.8  60.4

Table 4: The Precision of WSD

POS    Baseline  WORD-Col  +SEM-Col  +SEM-Dep  +DOMAIN  FULL
Noun       65.5      68.7      69.6      69.4     68.9  69.8
Verb       60.3      66.9      71.0      70.6     67.7  72.6
VN         72.6      76.2      77.7      74.6     77.6  77.5
Adj        59.9      67.2      69.5      68.9     68.9  69.5
Adv        74.4      78.6      79.8      79.2     78.6  79.8

Table 5: The Precision of WSD (per Part-of-Speech)

We hypothesize that this is partly due to the way the task was presented: there was not enough time to extract and debug an ontology as well as build a disambiguation system, and there was no ontology distributed. The CRL system (Murata et al., 2003) used a syntactic dependency parser as one source of features (KNP: Kurohashi and Nagao (2003)); removing it decreased performance by around 0.6%.

8 Conclusions

We used the Hinoki corpus to test the importance of lexical and structural information in word sense disambiguation. We found that basic n-gram features and collocations provided a great deal of useful information, but that better results could be gained by using ontological information and semantic dependencies.

Acknowledgements

We would like to thank the other members of the NTT Natural Language Research Group at NTT Communication Science Laboratories for their support. We would also like to express our gratitude to the reviewers for their valuable comments, and to Professor Zeng Guangping, Wang Daliang and Shen Bin of the University of Science and Technology Beijing (USTB) for building the demo system.

484 Annotated Agreement Corpus Tokens #WS token (type) %Other Sense %Homonym %MWE %Proper Noun LXD-DEF 199,268 5.18 .787 (.850) 4.2 0.084 1.5 0.046 LXD-EX 126,966 5.00 .820 (.871) 2.3 0.035 0.4 0.0018 KYOTO 268,597 3.93 .833 (.828) 9.8 3.3 7.9 5.5

Table 6: Corpus Statistics

References

Francis Bond, Sanae Fujita, Chikara Hashimoto, Kaname Kasahara, Shigeko Nariyama, Eric Nichols, Akira Ohtani, Takaaki Tanaka, and Shigeaki Amano. 2004. The Hinoki treebank: A treebank for text understanding. In Proceedings of the First International Joint Conference on Natural Language Processing (IJCNLP-04), pages 554–559. Hainan Island.

Francis Bond, Sanae Fujita, and Takaaki Tanaka. 2007. The Hinoki syntactic and semantic treebank of Japanese. Language Resources and Evaluation. (Special issue on Asian language technology).

Ulrich Callmeier. 2000. PET — a platform for experimentation with efficient HPSG processing techniques. Natural Language Engineering, 6(1):99–108.

Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal Recursion Semantics: An introduction. Research on Language and Computation, 3(4):281–332.

Christine Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.

Sanae Fujita, Francis Bond, Stephan Oepen, and Takaaki Tanaka. 2007. Exploiting semantic information for HPSG parse selection. In ACL 2007 Workshop on Deep Linguistic Processing, pages 25–32. Prague, Czech Republic.

Alfio Massimiliano Gliozzo, Claudio Giuliano, and Carlo Strapparava. 2005. Domain kernels for word sense disambiguation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005). Ann Arbor, U.S.

Chikara Hashimoto and Sadao Kurohashi. 2007. Construction of domain dictionary for fundamental vocabulary. In Proceedings of the ACL 2007 Main Conference Poster Sessions. Association for Computational Linguistics, Prague, Czech Republic.

Satoru Ikehara, Masahiro Miyazaki, Satoshi Shirai, Akio Yokoo, Hiromi Nakaiwa, Kentaro Ogura, Yoshifumi Ooyama, and Yoshihiko Hayashi. 1997. Goi-Taikei — A Japanese Lexicon. Iwanami Shoten, Tokyo. 5 volumes/CDROM.

Kaname Kasahara, Hiroshi Sato, Francis Bond, Takaaki Tanaka, Sanae Fujita, Tomoko Kanasugi, and Shigeaki Amano. 2004. Construction of a Japanese semantic lexicon: Lexeed. In IPSJ SIG: 2004-NLC-159, pages 75–82. Tokyo. (in Japanese).

Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Erhard Hinrichs and Dan Roth, editors, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430. URL http://www.aclweb.org/anthology/P03-1054.pdf.

Sadao Kurohashi and Makoto Nagao. 2003. Building a Japanese parsed corpus — while improving the parsing system. In Anne Abeillé, editor, Treebanks: Building and Using Parsed Corpora, chapter 14, pages 249–260. Kluwer Academic Publishers.

Robert Malouf and Gertjan van Noord. 2004. Wide coverage parsing with stochastic attribute value grammars. In IJCNLP-04 Workshop: Beyond Shallow Analyses — Formalisms and Statistical Modeling for Deep Analyses. JST CREST. URL http://www-tsujii.is.s.u-tokyo.ac.jp/bsa/papers/malouf.pdf.

Diana McCarthy and John Carroll. 2003. Disambiguating nouns, verbs and adjectives using automatically acquired selectional preferences. Computational Linguistics, 29(4):639–654.

Masaaki Murata, Masao Utiyama, Kiyotaka Uchimoto, Qing Ma, and Hitoshi Isahara. 2003. CRL at Japanese dictionary-based task of SENSEVAL-2. Journal of Natural Language Processing, 10(3):115–143. (in Japanese).

Eric Nichols and Francis Bond. 2005. Acquiring ontologies using deep and shallow processing. In 11th Annual Meeting of the Association for Natural Language Processing, pages 494–498. Takamatsu.

Eric Nichols, Francis Bond, and Daniel Flickinger. 2005. Robust ontology acquisition from machine-readable dictionaries. In Proceedings of the International Joint Conference on Artificial Intelligence IJCAI-2005, pages 1111–1116. Edinburgh.

Minoru Nishio, Etsutaro Iwabuchi, and Shizuo Mizutani. 1994. Iwanami Kokugo Jiten Dai Go Han [Iwanami Japanese Dictionary Edition 5]. Iwanami Shoten, Tokyo. (in Japanese).

Stephan Oepen, Kristina Toutanova, Stuart Shieber, Christopher D. Manning, Dan Flickinger, and Thorsten Brants. 2002. The LinGO Redwoods treebank: Motivation and preliminary applications. In 19th International Conference on Computational Linguistics: COLING-2002, pages 1253–1257. Taipei, Taiwan.

Carl Pollard and Ivan A. Sag. 1994. Head Driven Phrase Structure Grammar. University of Chicago Press, Chicago.

Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In 40th Annual Meeting of the Association for Computational Linguistics: ACL-2002, pages 271–278.

Melanie Siegel and Emily M. Bender. 2002. Efficient deep processing of Japanese. In Proceedings of the 3rd Workshop on Asian Language Resources and International Standardization at the 19th International Conference on Computational Linguistics, pages 1–8. Taipei.

Mark Stevenson. 2003. Word Sense Disambiguation. CSLI Publications.

Takaaki Tanaka, Francis Bond, and Sanae Fujita. 2006. The Hinoki sensebank — a large-scale word sense tagged corpus of Japanese. In Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006, pages 62–69. Sydney. URL http://www.aclweb.org/anthology/W/W06/W06-0608. (ACL Workshop).

Hiroaki Tsurumaru, Katsunori Takesita, Itami Katsuki, Toshihide Yanagawa, and Sho Yoshida. 1991. An approach to thesaurus construction from Japanese language dictionary. In IPSJ SIGNotes Natural Language, volume 83-16, pages 121–128. (in Japanese).

Taro Watanabe. 2004. Example-based Statistical Machine Translation. Ph.D. thesis, Kyoto University.

Yorick A. Wilkes, Brian M. Slator, and Louise M. Guthrie. 1996. Electric Words. MIT Press.

Deyi Xiong, Shuanglong Li, Qun Liu, Shouxun Lin, and Yueliang Qian. 2005. Parsing the Penn Chinese treebank with semantic knowledge. In Robert Dale, Kam-Fai Wong, Jian Su, and Oi Yee Kwong, editors, Natural Language Processing — IJCNLP 2005: Second International Joint Conference Proceedings, pages 70–81. Springer-Verlag.
