
Computing with a Thesaurus

Word Senses and Relations

Terminology: lemma and wordform

• A lemma or citation form
  • Same stem, part of speech, rough semantics
• A wordform
  • The inflected word as it appears in text

Wordform   Lemma
banks      bank
sung       sing
duermes    dormir

Lemmas have senses

• One lemma “bank” can have many meanings:
  • Sense 1: “…a bank can hold the investments in a custodial account…”
  • Sense 2: “…as agriculture burgeons on the east bank the river will shrink even more”
• Sense (or word sense)
  • A discrete representation of an aspect of a word’s meaning.
• The lemma bank here has two senses

Homonymy

Homonyms: words that share a form but have unrelated, distinct meanings:

• bank1: financial institution, bank2: sloping land
• bat1: club for hitting a ball, bat2: nocturnal flying mammal
1. Homographs (bank/bank, bat/bat)
2. Homophones:
   1. Write and right
   2. Piece and peace

Homonymy causes problems for NLP applications

• Information retrieval
  • “bat care”
• Machine Translation
  • bat: murciélago (animal) or bate (for baseball)
• Text-to-Speech
  • bass (stringed instrument) vs. bass (fish)

Polysemy

• 1. The bank was constructed in 1875 out of local red brick.
• 2. I withdrew the money from the bank.
• Are those the same sense?
  • Sense 2: “A financial institution”
  • Sense 1: “The building belonging to a financial institution”
• A polysemous word has related meanings
• Most non-rare words have multiple meanings

Metonymy or Systematic Polysemy: A systematic relationship between senses

• Lots of types of polysemy are systematic
  • School, university, hospital
  • All can mean the institution or the building.
• A systematic relationship:
  • Building ↔ Organization
• Other such kinds of systematic polysemy:
  • Author (Jane Austen wrote Emma) ↔ Works of Author (I love Jane Austen)
  • Tree (Plums have beautiful blossoms) ↔ Fruit (I ate a preserved plum)

How do we know when a word has more than one sense?

• The “zeugma” test: Two senses of serve?
  • Which flights serve breakfast?
  • Does Lufthansa serve Philadelphia?
  • ?Does Lufthansa serve breakfast and San Jose?
• Since this conjunction sounds weird, we say that these are two different senses of “serve”

• Words that have the same meaning in some or all contexts.
  • filbert / hazelnut
  • couch / sofa
  • big / large
  • automobile / car
  • vomit / throw up

  • Water / H2O
• Two words are synonyms
  • if they can be substituted for each other in all situations
  • If so they have the same propositional meaning

Synonyms

• But there are few (or no) examples of perfect synonymy. • Even if many aspects of meaning are identical • Still may not preserve the acceptability based on notions of politeness, slang, register, genre, etc. • Example:

  • Water/H2O
  • Big/large
  • Brave/courageous

Synonymy is a relation between senses rather than words

• Consider the words big and large • Are they synonyms? • How big is that plane? • Would I be flying on a large or small plane? • How about here: • Miss Nelson became a kind of big sister to Benjamin. • ?Miss Nelson became a kind of large sister to Benjamin. • Why? • big has a sense that means being older, or grown up • large lacks this sense Antonyms

• Senses that are opposites with respect to one feature of meaning
• Otherwise, they are very similar!
  dark/light, short/long, fast/slow, rise/fall, hot/cold, up/down, in/out
• More formally: antonyms can
  • define a binary opposition or be at ends of a scale
    • long/short, fast/slow
  • be reversives:
    • rise/fall, up/down

• One sense is a hyponym of another if the first sense is more specific, denoting a subclass of the other • car is a hyponym of vehicle • mango is a hyponym of fruit • Conversely hypernym/superordinate (“hyper is super”) • vehicle is a hypernym of car • fruit is a hypernym of mango

Superordinate/hypernym   vehicle   fruit   furniture
Subordinate/hyponym      car       mango   chair

Hyponymy more formally

• Extensional:
  • The class denoted by the superordinate extensionally includes the class denoted by the hyponym
• Entailment:
  • A sense A is a hyponym of sense B if being an A entails being a B
• Hyponymy is usually transitive
  • (A hypo B and B hypo C entails A hypo C)
• Another name: the IS-A hierarchy
  • A IS-A B (or A ISA B)
  • B subsumes A

Hyponyms and Instances

• WordNet has both classes and instances. • An instance is an individual, a proper noun that is a unique entity • San Francisco is an instance of city • But city is a class • city is a hyponym of municipality...location...

Meronymy

• The part-whole relation • A leg is part of a chair; a wheel is part of a car. • Wheel is a meronym of car, and car is a holonym of wheel.

Computing with a Thesaurus: WordNet

WordNet 3.0

• A hierarchically organized lexical database
• On-line thesaurus + aspects of a dictionary
• Some other languages available or under development
  • (Arabic, Finnish, German, Portuguese…)

Category    Unique Strings
Noun        117,798
Verb         11,529
Adjective    22,479
Adverb        4,481

Senses of “bass” in WordNet

How is “sense” defined in WordNet?

• The synset (synonym set), the set of near-synonyms, instantiates a sense or concept, with a gloss
• Example: chump as a noun with the gloss: “a person who is gullible and easy to take advantage of”
• This sense of “chump” is shared by 9 words: chump1, fool2, gull1, mark9, patsy1, fall guy1, sucker1, soft touch1, mug2
• Each of these senses has this same gloss
• (Not every sense; sense 2 of gull is the aquatic bird)
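If NLTK and its WordNet data are installed, the synset-and-gloss structure can be inspected directly; a minimal sketch (exact sense numbering may differ across WordNet versions):

```python
# Minimal sketch (assumes NLTK is installed and the WordNet data downloaded
# via nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

for synset in wn.synsets('chump', pos=wn.NOUN):
    print(synset.name())                               # e.g. chump.n.01
    print('  lemmas:', [l.name() for l in synset.lemmas()])
    print('  gloss: ', synset.definition())
```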

WordNet Hypernym Hierarchy for “bass”

WordNet Noun Relations

Relation             Also Called     Definition                           Example
Hypernym             Superordinate   From concepts to superordinates      breakfast1 → meal1
Hyponym              Subordinate     From concepts to subtypes            meal1 → lunch1
Instance Hypernym    Instance        From instances to their concepts     Austen1 → author1
Instance Hyponym     Has-Instance    From concepts to concept instances   composer1 → Bach1
Member Meronym       Has-Member      From groups to their members         faculty2 → professor1
Member Holonym       Member-Of       From members to their groups         copilot1 → crew1
Part Meronym         Has-Part        From wholes to parts                 table2 → leg3
Part Holonym         Part-Of         From parts to wholes                 course7 → meal1
Substance Meronym                    From substances to their subparts    water1 → oxygen1
Substance Holonym                    From parts of substances to wholes   gin1 → martini1
Antonym                              Semantic opposition between lemmas   leader1 ↔ follower1
Derivationally
Related Form                         Lemmas w/same morphological root     destruction1 ↔ destroy1

Figure 16.2  Noun relations in WordNet.

Relation         Definition                                                      Example
Hypernym         From events to superordinate events                             fly9 → travel5
Troponym         From events to subordinate events (often via specific manner)   walk1 → stroll1
Entails          From verbs (events) to the verbs (events) they entail           snore1 → sleep1
Antonym          Semantic opposition between lemmas                              increase1 ↔ decrease1
Derivationally
Related Form     Lemmas with same morphological root                             destroy1 ↔ destruction1

Figure 16.3  Verb relations in WordNet.

Each synset is related to its immediately more general and more specific synsets through direct hypernym and hyponym relations. These relations can be followed to produce longer chains of more general or more specific synsets. Figure 16.4 shows hypernym chains for bass3 and bass7. In this depiction of hyponymy, successively more general synsets are shown on successive indented lines. The first chain starts from the concept of a human bass singer. Its immediate superordinate is a synset corresponding to the generic concept of a singer. Following this chain leads eventually to concepts such as entertainer and person. The second chain, which starts from the musical instrument sense, has a completely different path leading eventually to such concepts as musical instrument, device, and physical object. Both paths do eventually join at the very abstract synset whole, unit, and then proceed together to entity, which is the top (root) of the noun hierarchy (in WordNet this root is generally called the unique beginner).

16.4 Word Sense Disambiguation: Overview

Our discussion of compositional semantic analyzers in Chapter 15 pretty much ignored the issue of lexical ambiguity. It should be clear by now that this is an unreasonable approach. Without some means of selecting correct senses for the words in an input, the enormous amount of homonymy and polysemy in the lexicon would quickly overwhelm any approach in an avalanche of competing interpretations.


WordNet: Viewed as a graph


Fig. 3. An excerpt of the WordNet semantic network.

We note that each word sense univocally identifies a single synset. For instance, given car_n^1 the corresponding synset {car_n^1, auto_n^1, automobile_n^1, machine_n^4, motorcar_n^1} is univocally determined. In Figure 3 we report an excerpt of the WordNet semantic network containing the car_n^1 synset. For each synset, WordNet provides the following information:

—A gloss, that is, a textual definition of the synset possibly with a set of usage examples (e.g., the gloss of car_n^1 is “a 4-wheeled motor vehicle; usually propelled by an internal combustion engine; ‘he needs a car to get to work’ ”).7

—Lexical and semantic relations, which connect pairs of word senses and synsets, respectively: while semantic relations apply to synsets in their entirety (i.e., to all members of a synset), lexical relations connect word senses included in the respective synsets. Among the latter we have the following:

—Antonymy: X is an antonym of Y if it expresses the opposite concept (e.g., good_a^1 is the antonym of bad_a^1). Antonymy holds for all parts of speech.

—Pertainymy: X is an adjective which can be defined as “of or pertaining to” a noun (or, rarely, another adjective) Y (e.g., dental_a^1 pertains to tooth_n^1).

—Nominalization: a noun X nominalizes a verb Y (e.g., service_n^2 nominalizes the verb serve_v^4).

Among the semantic relations we have the following:

—Hypernymy (also called kind-of or is-a): Y is a hypernym of X if every X is a (kind of) Y (motor vehicle_n^1 is a hypernym of car_n^1). Hypernymy holds between pairs of nominal or verbal synsets.

7Recently, Princeton University released the Princeton WordNet Gloss Corpus, a corpus of manually and automatically sense-annotated glosses from WordNet 3.0, available from the WordNet Web site.

Supersenses

• The top-level hypernyms in the WordNet hierarchy, sometimes called semantic fields, designed to be broad enough to encompass all nouns and verbs (Miller, 1990; Fellbaum, 1990)
• 26 noun and 15 verb supersense categories
• E.g., the N:PERSON category contains synsets for both principal and student, while a different sense of principal falls under N:POSSESSION
• (counts below from Schneider and Smith 2013's STREUSLE corpus)

Table 1: Summary of noun and verb supersense categories. Each entry shows the label along with the count and most frequent lexical item in the STREUSLE corpus. (COMMUNIC. is short for COMMUNICATION.)

Noun                             Verb
GROUP          1469  place      STATIVE       2922  is
PERSON         1202  people     COGNITION     1093  know
ARTIFACT        971  car        COMMUNIC.      974  recommend
COGNITION       771  way        SOCIAL         944  use
FOOD            766  food       MOTION         602  go
ACT             700  service    POSSESSION     309  pay
LOCATION        638  area       CHANGE         274  fix
TIME            530  day        EMOTION        249  love
EVENT           431  experience PERCEPTION     143  see
COMMUNIC.       417  review     CONSUMPTION     93  have
POSSESSION      339  price      BODY            82  get...done
ATTRIBUTE       205  quality    CREATION        64  cook
QUANTITY        102  amount     CONTACT         46  put
ANIMAL           88  dog        COMPETITION     11  win
BODY             87  hair       WEATHER          0  —
STATE            56  pain       all 15 VSSTs  7806
NATURAL OBJ.     54  flower
RELATION         35  portion
SUBSTANCE        34  oil
FEELING          34  discomfort
PROCESS          28  process
MOTIVE           25  reason
PHENOMENON       23  result
SHAPE             6  square
PLANT             5  tree
OTHER             2  stuff
all 26 NSSTs   9018

• In WordNet/NLTK, the lexname function returns a synset's lexicographer file, i.e. its supersense (see the sketch below)
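A small sketch of looking up supersenses this way via NLTK's Synset.lexname() (assumes the WordNet data package is installed; the senses printed depend on the WordNet version):

```python
# Sketch: a synset's lexicographer file name is its supersense.
from nltk.corpus import wordnet as wn

for word in ['car', 'review', 'recommend']:
    for synset in wn.synsets(word)[:2]:        # first couple of senses per word
        print(word, synset.name(), synset.lexname())
# e.g. the first sense of "car" falls under noun.artifact (the ARTIFACT supersense)
```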

SemEval 2016 Task 10: Detecting Minimal Semantic Units and their Meanings (DiMSUM)

Task Home Page

• A word's supersense can be a useful coarse-grained representation of word meaning for NLP tasks

The DiMSUM shared task at SemEval 2016 is concerned with predicting, given an English sentence, a broad-coverage representation of lexical semantics. The representation consists of two closely connected facets: a segmentation into minimal semantic units, and a labeling of some of those units with semantic classes known as supersenses. For example, given the POS-tagged sentence

I/PRP googled/VBD restaurants/NNS in/IN the/DT area/NN and/CC Fuji/NNP Sushi/NNP came/VBD up/RB and/CC reviews/NNS were/VBD great/JJ so/RB I/PRP made/VBD a/DT carry/VB out/RP order/NN

the goal is to predict the representation

I googled|communication restaurants|GROUP in the area|LOCATION and Fuji_Sushi|GROUP came_up|communication and reviews|COMMUNICATION were|stative great so I made_ a carry_out|possession _order|communication

where lowercase labels are verb supersenses, UPPERCASE labels are noun supersenses, | attaches a label to the unit it tags, and _ joins tokens within a multiword expression. (carry_out|possession and made_order|communication are separate MWEs.)
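One way to hold this representation in code (purely illustrative; this is not the official DiMSUM file format): each token carries an optional MWE group id and an optional supersense label.

```python
# Hypothetical encoding of a fragment of the example above.
# Tokens sharing an mwe_id form one multiword expression; the supersense
# label (lowercase = verb, UPPERCASE = noun) attaches to the unit as a whole.
fragment = [
    # token     mwe_id  supersense
    ('Fuji',     1,     'GROUP'),           # Fuji_Sushi: named-entity MWE
    ('Sushi',    1,     None),
    ('came',     2,     'communication'),   # came_up: verb-particle MWE
    ('up',       2,     None),
    ('and',      None,  None),
    ('reviews',  None,  'COMMUNICATION'),
]
```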

The two facets of the representation are discussed in greater detail below. Systems are expected to produce both facets, though the manner in which they do this (e.g., pipeline vs. joint model) is up to you.

Gold standard training data labeled with the combined representation will be provided in two domains: online reviews and tweets. (Rules for using other data resources in data conditions.) Blind test data will be in these two domains as well as a third, surprise domain. The domain will not be indicated as part of the input at test time. The three test domains will have equal weight in the overall system scores (details of the scoring procedure will be announced at a future time).

Minimal semantic units

The word tokens of the sentence are partitioned into basic units of lexical meaning. Equivalently, where multiple tokens function together as an idiomatic whole, they are grouped together into a multiword expression (MWE). MWEs include: nominal compounds like hot dog; verbal expressions like do away with 'eliminate', make decisions 'decide', kick the bucket 'die'; PP idioms like at all and on the spot 'without planning'; multiword prepositions/connectives like in front of and due to; multiword named entities; and many other kinds.

Input word tokens are never subdivided. Grouped tokens do not have to be contiguous; e.g., verb-particle constructions are annotated whether they are contiguous (make up the story) or gappy (make the story up). There are, however, formal constraints on gaps to facilitate sequence tagging. Combinations considered to be statistical collocations (yet compositional in meaning) are called "weak" MWEs.

WordNet 3.0

• Where it is:
  • http://wordnetweb.princeton.edu/perl/webwn
• Libraries
  • Python: WordNet from NLTK
    • http://www.nltk.org/Home
  • Java:
    • JWNL, extJWNL on sourceforge
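A short sketch of what the NLTK route looks like in practice (assumes nltk.download('wordnet') has been run; sense numbering can vary by WordNet version):

```python
from nltk.corpus import wordnet as wn

# All synsets for the lemma "bass", with their glosses
for s in wn.synsets('bass'):
    print(s.name(), '-', s.definition())

# Hypernym chain(s) from one chosen sense up to the root entity.n.01
sense = wn.synsets('bass')[0]     # pick one sense arbitrarily
for path in sense.hypernym_paths():
    print(' -> '.join(s.name() for s in path))
```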

Other (domain specific) thesauri

MeSH: Medical Subject Headings thesaurus from the National Library of Medicine

• MeSH (Medical Subject Headings)
• 177,000 entry terms that correspond to 26,142 biomedical “headings”

• Hemoglobins
  • Synset (“Entry Terms”): Eryhem, Ferrous Hemoglobin, Hemoglobin
  • Definition: The oxygen-carrying proteins of ERYTHROCYTES. They are found in all vertebrates and some invertebrates. The number of globin subunits in the hemoglobin quaternary structure differs between species. Structures range from monomeric to a variety of multimeric arrangements.

The MeSH Hierarchy


Uses of the MeSH Ontology

• Provide synonyms (“entry terms”)
  • E.g., glucose and dextrose
• Provide hypernyms (from the hierarchy)
  • E.g., glucose ISA monosaccharide
• Indexing in MEDLINE/PubMED database
  • NLM’s bibliographic database:
    • 20 million journal articles
    • Each article hand-assigned 10-20 MeSH terms

Computing with a Thesaurus: Word Similarity (Thesaurus Methods)

Word Similarity

• Synonymy: a binary relation
  • Two words are either synonymous or not
• Similarity (or distance): a looser metric
  • Two words are more similar if they share more features of meaning
• Similarity is properly a relation between senses
  • The word “bank” is not similar to the word “slope”
  • Bank1 is similar to fund3
  • Bank2 is similar to slope5
• But we’ll compute similarity over both words and senses

Why word similarity?

• A practical component in lots of NLP tasks
  • Question answering
  • Natural language generation
  • Automatic essay grading
  • Plagiarism detection
• A theoretical component in many linguistic and cognitive tasks
  • Historical semantics
  • Models of human word learning
  • Morphology and grammar induction

Word similarity and word relatedness

• We often distinguish word similarity from word relatedness
  • Similar words: near-synonyms
  • Related words: can be related any way
  • car, bicycle: similar
  • car, gasoline: related, not similar

Two classes of similarity algorithms

• Thesaurus-based algorithms
  • Are words “nearby” in hypernym hierarchy?
  • Do words have similar glosses (definitions)?
• Distributional algorithms
  • Do words have similar distributional contexts?
  • Distributional (Vector) semantics on Thursday!

Path based similarity

• Two concepts (senses/synsets) are similar if they are near each other in the thesaurus hierarchy
  • = have a short path between them
  • concepts have path 1 to themselves

Refinements to path-based similarity

• pathlen(c1,c2) = 1 + number of edges in the shortest path in the hypernym graph between sense nodes c1 and c2
• simpath(c1,c2) = 1 / pathlen(c1,c2)
  • ranges from 0 to 1 (1 = identical concepts)

• wordsim(w1,w2) = max over c1 ∈ senses(w1), c2 ∈ senses(w2) of sim(c1,c2)

Example: path-based similarity

simpath(c1,c2) = 1/pathlen(c1,c2)

simpath(nickel,coin) = 1/2 = .5
simpath(fund,budget) = 1/2 = .5
simpath(nickel,currency) = 1/4 = .25
simpath(nickel,money) = 1/6 = .17
simpath(coinage,Richter scale) = 1/6 = .17
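NLTK's path_similarity implements essentially this measure (1 divided by one plus the number of edges); a sketch, with sense numbers that may differ across WordNet versions:

```python
from nltk.corpus import wordnet as wn

nickel = wn.synset('nickel.n.02')     # intended: the coin sense
coin   = wn.synset('coin.n.01')
money  = wn.synset('money.n.01')
print(nickel.path_similarity(coin))   # short path -> higher score
print(nickel.path_similarity(money))  # longer path -> lower score

# Word-level similarity: maximize over all sense pairs of the two words
def wordsim(w1, w2):
    return max((s1.path_similarity(s2) or 0.0)
               for s1 in wn.synsets(w1)
               for s2 in wn.synsets(w2))

print(wordsim('nickel', 'budget'))
```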

Problem with basic path-based similarity

• Assumes each link represents a uniform distance
  • But nickel to money seems to us to be closer than nickel to standard
  • Nodes high in the hierarchy are very abstract
• We instead want a metric that
  • Represents the cost of each edge independently
  • Words connected only through abstract nodes are less similar

Information content similarity metrics (Resnik 1995)

• Let’s define P(c) as:
  • The probability that a randomly selected word in a corpus is an instance of concept c
  • Formally: there is a distinct random variable, ranging over words, associated with each concept in the hierarchy
  • for a given concept, each observed noun is either
    • a member of that concept with probability P(c)
    • not a member of that concept with probability 1-P(c)
• All words are members of the root node (Entity)
  • P(root) = 1
• The lower a node in the hierarchy, the lower its probability

Information content similarity

(figure: a fragment of the WordNet hierarchy — entity … geological-formation, whose descendants include natural elevation, cave, shore, hill, ridge, grotto, and coast)

• Train by counting in a corpus
  • Each instance of hill counts toward the frequency of natural elevation, geological-formation, entity, etc.
• Let words(c) be the set of all words that are children of node c
  • words(“geo-formation”) = {hill, ridge, grotto, coast, cave, shore, natural elevation}
  • words(“natural elevation”) = {hill, ridge}
• P(c) = ( Σ_{w ∈ words(c)} count(w) ) / N

Information content similarity

• WordNet hierarchy augmented with probabilities P(c)
• D. Lin. 1998. An Information-Theoretic Definition of Similarity. ICML 1998

Information content and probability

• The self-information of an event, also called its surprisal:
  • how surprised we are to know it; how much we learn by knowing it.
  • The more surprising something is, the more it tells us when it happens
• We’ll measure self-information in bits: I(w) = -log2 P(w)
• I flip a coin; P(heads) = 0.5
  • How many bits of information do I learn by flipping it?
  • I(heads) = -log2(0.5) = -log2(1/2) = log2(2) = 1 bit
• I flip a biased coin: P(heads) = 0.8; I don’t learn as much
  • I(heads) = -log2(0.8) = .32 bits
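A toy sketch tying these pieces together: corpus counts propagate up a hand-built hierarchy to give P(c), and IC(c) = -log2 P(c). The tree fragment and the counts here are made up purely for illustration.

```python
import math

parents = {                       # child -> parent, a tiny illustrative hierarchy
    'hill': 'natural elevation', 'ridge': 'natural elevation',
    'grotto': 'cave', 'coast': 'shore',
    'natural elevation': 'geological-formation', 'cave': 'geological-formation',
    'shore': 'geological-formation', 'geological-formation': 'entity',
    'car': 'artifact', 'artifact': 'entity',
}
word_counts = {'hill': 120, 'ridge': 30, 'grotto': 5, 'coast': 200,
               'shore': 90, 'cave': 55, 'car': 500}
N = sum(word_counts.values())

# Each word occurrence counts toward every concept on its path up to the root.
concept_counts = {}
for word, count in word_counts.items():
    node = word
    while node is not None:
        concept_counts[node] = concept_counts.get(node, 0) + count
        node = parents.get(node)

P  = {c: n / N for c, n in concept_counts.items()}
IC = {c: -math.log2(p) for c, p in P.items()}
print(P['entity'])                    # 1.0: every word is an instance of the root
print(IC['geological-formation'])     # 1.0 bit with these made-up counts
```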

Information content: definitions

• Information content: IC(c) = -log P(c)
• Most informative subsumer (lowest common subsumer):
  LCS(c1,c2) = the most informative (lowest) node in the hierarchy subsuming both c1 and c2
  (example IC values annotated in the figure: 1.3, 5.9, 9.1, and 15.7 bits)

Using information content for similarity: the Resnik method

Philip Resnik. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. IJCAI 1995.
Philip Resnik. 1999. Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language. JAIR 11, 95-130.

• The similarity between two words is related to their common information
• The more two words have in common, the more similar they are
• Resnik: measure common information as:
  • The information content of the most informative (lowest) subsumer (MIS/LCS) of the two nodes

• simresnik(c1,c2) = -log P(LCS(c1,c2))
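In NLTK the same quantity is available as res_similarity, given an information-content table estimated from a corpus; a sketch (assumes the 'wordnet' and 'wordnet_ic' data packages have been downloaded):

```python
from nltk.corpus import wordnet as wn, wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')        # P(c) estimated from the Brown corpus
hill, coast = wn.synset('hill.n.01'), wn.synset('coast.n.01')
print(hill.res_similarity(coast, brown_ic))     # = IC of their lowest common subsumer
```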

Dekang Lin method

Dekang Lin. 1998. An Information-Theoretic Definition of Similarity. ICML 1998.

• Intuition: Similarity between A and B is not just what they have in common
• The more differences between A and B, the less similar they are:
  • Commonality: the more A and B have in common, the more similar they are
  • Difference: the more differences between A and B, the less similar
• Commonality: IC(common(A,B))
• Difference: IC(description(A,B)) - IC(common(A,B))

Dekang Lin similarity theorem

• The similarity between A and B is measured by the ratio between the amount of information needed to state the commonality of A and B and the information needed to fully describe what A and B are:

  simLin(A,B) ∝ IC(common(A,B)) / IC(description(A,B))

• Lin (altering Resnik) defines IC(common(A,B)) as 2 x information of the LCS

  simLin(c1,c2) = 2 log P(LCS(c1,c2)) / ( log P(c1) + log P(c2) )

Lin similarity function

simLin(c1,c2) = 2 log P(LCS(c1,c2)) / ( log P(c1) + log P(c2) )

simLin(hill, coast) = 2 log P(geological-formation) / ( log P(hill) + log P(coast) )

                   = 2 ln(0.00176) / ( ln(0.0000189) + ln(0.0000216) ) = .59
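The corresponding NLTK call is lin_similarity; a sketch (the value depends on which corpus the information content comes from, so it need not match the .59 worked out above):

```python
from nltk.corpus import wordnet as wn, wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')
hill, coast = wn.synset('hill.n.01'), wn.synset('coast.n.01')
print(hill.lin_similarity(coast, brown_ic))
```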

The (extended) Lesk Algorithm

• A thesaurus-based measure that looks at glosses
• Two concepts are similar if their glosses contain similar words
  • Drawing paper: paper that is specially prepared for use in drafting
  • Decal: the art of transferring designs from specially prepared paper to a wood or glass or metal surface
• For each n-word phrase that’s in both glosses
  • Add a score of n²
  • paper (1²) and specially prepared (2²): 1 + 2² = 5
• Compute overlap also for other relations
  • glosses of hypernyms and hyponyms
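A rough sketch of the phrase-overlap scoring (my own simplified implementation, not the published extended Lesk code; it greedily matches the longest shared phrases first, scores each as n², and ignores stop-word handling):

```python
def lesk_overlap(gloss1, gloss2):
    """Score shared phrases between two glosses: each maximal common
    n-word phrase contributes n**2 to the total."""
    w1, w2 = gloss1.lower().split(), gloss2.lower().split()
    score = 0
    for n in range(min(len(w1), len(w2)), 0, -1):   # longest phrases first
        for i in range(len(w1) - n + 1):
            phrase = w1[i:i + n]
            if None in phrase:
                continue                            # part of an already-matched phrase
            for j in range(len(w2) - n + 1):
                if w2[j:j + n] == phrase:
                    score += n * n
                    w1[i:i + n] = [None] * n        # blank out so words aren't reused
                    w2[j:j + n] = [None] * n
                    break
    return score

drawing_paper = "paper that is specially prepared for use in drafting"
decal = ("the art of transferring designs from specially prepared paper "
         "to a wood or glass or metal surface")
print(lesk_overlap(drawing_paper, decal))           # 5: 'specially prepared' (4) + 'paper' (1)
```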

Summary: thesaurus-based similarity

simpath(c1,c2) = 1 / pathlen(c1,c2)

simresnik(c1,c2) = -log P(LCS(c1,c2))

simlin(c1,c2) = 2 log P(LCS(c1,c2)) / ( log P(c1) + log P(c2) )

simjiangconrath(c1,c2) = 1 / ( log P(c1) + log P(c2) - 2 log P(LCS(c1,c2)) )

simeLesk(c1,c2) = Σ_{r,q ∈ RELS} overlap( gloss(r(c1)), gloss(q(c2)) )

Libraries for computing thesaurus-based similarity

• NLTK • http://nltk.github.com/api/nltk.corpus.reader.html?highlight=similarity - nltk.corpus.reader.WordNetCorpusReader.res_similarity

• WordNet::Similarity • http://wn-similarity.sourceforge.net/ • Web-based interface: • http://marimba.d.umn.edu/cgi-bin/similarity/similarity.cgi

Evaluating similarity

• Extrinsic (task-based, end-to-end) Evaluation: • Question Answering • Spell Checking • Essay grading • Intrinsic Evaluation: • Correlation between algorithm and human word similarity ratings • Wordsim353: 353 noun pairs rated 0-10. sim(plane,car)=5.77 • Taking TOEFL multiple-choice tests • Levied is closest in meaning to: imposed, believed, requested, correlated
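A sketch of the intrinsic evaluation loop: score each pair with the algorithm and rank-correlate with the human ratings (assumes SciPy and NLTK's WordNet data; the pairs and ratings below are illustrative stand-ins for the full WordSim-353 set, apart from plane/car = 5.77 quoted above):

```python
from scipy.stats import spearmanr
from nltk.corpus import wordnet as wn

# (word1, word2, human similarity rating on a 0-10 scale) -- illustrative values
pairs = [('plane', 'car', 5.77), ('coast', 'shore', 9.1), ('noon', 'string', 0.5)]

def wordsim(w1, w2):
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores) if scores else 0.0

system = [wordsim(w1, w2) for w1, w2, _ in pairs]
human  = [h for _, _, h in pairs]
print(spearmanr(system, human).correlation)
```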