Adjusting sense representations for knowledge-based word sense disambiguation and automatic pun interpretation

Tristan Miller

Presented at the School of Data Analysis and Artificial Intelligence, National Research University Higher School of Economics, 25 May 2017

25 May 2017 | Ubiquitous Knowledge Processing Lab | Department of Computer Science | Tristan Miller

Biography

Technische Universität Darmstadt


§ argumentation mining
§ language technology for the digital humanities
§ lexical-semantic resources and algorithms
§ text mining and analytics
§ writing assistance and language learning


University of Regina
University of Toronto


[Image: sample pages from the author’s “Ludic linguistics” puzzle columns in Babel: The Language Magazine (2014–2016), including “Lexical lapses”, a Greek alphabet and Braille puzzle, “Loanwords in Hawaiian”, and “Zoltán’s Zoo”; illustrations by Claudio Rodriguez Valdes, GentleSquid Studios.]

Agenda

Introduction

Knowledge-based word sense disambiguation

Pun interpretation

Conclusion

Introduction

Polysemy is a characteristic of all natural languages.

“He hit the ball with the bat.”


Word sense disambiguation (WSD) is the task of determining which of a word’s senses is intended in a given context.

Applications of word sense disambiguation

[Diagram: German translations of “bat”: Fledermaus, Schläger, Schlagstock, Brandschiefer, Brennbuch, schlagen, blinzeln]

Machine translation
Information retrieval
Information extraction
Spelling correction

Why is WSD hard? Many different formalizations and parameterizations

Why is WSD hard? Nature of word senses unclear

Why is WSD hard? Knowledge acquisition bottleneck

Why is WSD hard? Some usages are deliberately ambiguous

Approaches to word sense disambiguation

supervised:
  input: manually annotated training examples
  pros: better performance
  cons: knowledge acquisition bottleneck

knowledge-based:
  input: machine-readable dictionaries (MRDs), lexical-semantic resources (LSRs), unannotated corpora
  pros: wider applicability
  cons: informational gap problem

Motivation and contributions

Problem: Low accuracy of knowledge-based WSD due to the informational gap.
Contribution 1: Bridge the gap through distributional semantics.
Contribution 2: Bridge the gap by aligning lexical-semantic resources.


Problem: Sense distinctions are too subtle for accurate WSD.
Contribution 3: Use the alignments to coarsen the original sense inventory.


Problem: Traditional WSD is incapable of processing intentional ambiguity.
Contribution 4: Adapt WSD to puns using the above three contributions.

Agenda

Introduction

Knowledge-based word sense disambiguation

Pun interpretation

Conclusion

Knowledge-based disambiguation: The Lesk family of algorithms

Simplified Lesk: overlap between context and dictionary definitions


“He hit the ball with the bat.”

bat
1. A small, nocturnal flying mammal of order Chiroptera.
2. A wooden club used to hit a ball in various sports.
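As a minimal sketch (not the exact implementation evaluated in the talk), simplified Lesk can be written as a bag-of-words overlap between the context and each sense definition. The sense labels below are illustrative; the glosses are the ones from the example above.

```python
# Minimal simplified-Lesk sketch: score each sense by the size of the
# multiset intersection between its definition and the context.
from collections import Counter


def overlap(bag_a, bag_b):
    """Cardinality of the multiset intersection of two bags of words."""
    return sum((Counter(bag_a) & Counter(bag_b)).values())


def simplified_lesk(context, senses):
    """Return the sense whose definition overlaps most with the context.

    `senses` maps a (hypothetical) sense label to its definition string.
    The talk breaks ties randomly; max() here just keeps the first.
    """
    ctx = context.lower().split()
    return max(senses, key=lambda s: overlap(ctx, senses[s].lower().split()))


senses = {
    "bat%1": "a small nocturnal flying mammal of order Chiroptera",
    "bat%2": "a wooden club used to hit a ball in various sports",
}
print(simplified_lesk("he hit the ball with the bat", senses))  # bat%2
```

The club sense wins on the overlapping words “hit” and “ball”; the animal sense shares no content words with the context, which is exactly the lexical gap problem discussed next.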


Lexical gap problem: Because the context and definitions are usually quite short, it is often the case that there are no overlapping words at all.


“The loan interest is paid monthly.”

interest
1. A fixed charge for borrowing money.
2. A sense of concern with something.


How can we bridge the lexical gap?

Bridging the lexical gap, Solution 1: Lexical expansion

The loan interest is paid monthly.

interest 1. a fixed charge for borrowing money


The loan interest is paid monthly.
  loan → mortgage, loans, debt, financing, mortgages, credit, lease, bond, grant, funding
  paid → paying, pay, pays, owed, generated, invested, spent, collected, raised, reimbursed
  monthly → annual, weekly, yearly, quarterly, hefty, daily, regular, additional, substantial, recent

interest 1. a fixed charge for borrowing money
  fixed → solved, hefty, resolved, monthly, additional, existing, reduced, done, current, substantial
  charge → charges, counts, charging, cost, conviction, allegation, pay, suspicion, count, part
  borrowing → spending, borrow, lending, borrowed, debt, investment, raising, inflows, investing, borrowings
  money → dollars, cash, funds, billions, monies, millions, trillions, funding, resources, donations


Distributional similarity


§ A distributional thesaurus (DT) provides a ranked list of similar words for every word in a lexicon
§ Distributional hypothesis: words that tend to appear in similar contexts have similar meanings (Firth, 1957)
§ Syntagmatic and paradigmatic relations between signs (de Saussure, 1916)

Distributional thesauri


§ Heretofore used in WSD only as a heuristic (“one sense per collocation”, etc.) or to construct (dense) vector representations (LSA, LDA, etc.)
§ Advantages of DTs over dense vector representations:
  § Easy to retrieve the top n most similar terms
  § Sparse vectors too inefficient; dense vectors inherently lossy
  § Symbolic, interpretable representations
  § Similarity lists not polluted by infrequent terms
  § No sampling errors when representing rare topics

Construction of the distributional thesaurus

§ 10-million-sentence, automatically parsed English news corpus
§ Use collapsed dependencies to extract features for words
§ Count the frequency of each feature for each word
§ Rank features by significance, prune to 300 per word
§ Word similarity = count of common features
§ Final DT contains ≥ 5 similar terms for a vocabulary of over 150 000 words
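The similarity step above can be sketched as follows. This is toy data, not the real DT: feature sets here stand in for each word’s pruned list of significant dependency features, written in a simplified “lemma/relation” form.

```python
# Sketch of DT similarity: after pruning each word's feature list, the
# similarity of two words is simply the number of features they share.
features = {  # hypothetical pruned feature sets
    "paper":     {"read/dobj", "published/prep_in", "edition/prep_of", "local/amod"},
    "newspaper": {"read/dobj", "published/prep_in", "edition/prep_of", "editor/poss"},
    "book":      {"read/dobj", "published/prep_in", "author/prep_of"},
}


def similarity(w1, w2):
    """Count of common features, as in the construction recipe above."""
    return len(features[w1] & features[w2])


def most_similar(word, n=10):
    """Ranked list of the n most similar words, like one DT entry."""
    others = (w for w in features if w != word)
    return sorted(others, key=lambda w: similarity(word, w), reverse=True)[:n]


print(most_similar("paper"))  # ['newspaper', 'book']
```

With these toy sets, “newspaper” shares three features with “paper” and “book” shares two, mirroring the 45/300 vs. 33/300 scores in the DT excerpt on the next slide.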

Excerpt of the DT entry for the noun paper

term       score   shared features
newspaper  45/300  told VBD -dobj, column NN -prep|in, local JJ amod, editor NN -poss, edition NN -prep|of, editor NN -prep|of, hometown NN nn, industry NN -nn, clips NNS -nn, shredded JJ amod, pick VB -dobj, news NNP appos, daily JJ amod, writes VBZ -nsubj, write VB -prep|for, wrote VBD -prep|for, wrote VBD -prep|in, wrapped VBN -prep|in, reading VBG -prep|in, reading VBG -dobj, read VBD -prep|in, read VBD -dobj, read VBP -prep|in, read VB -dobj, read VB -prep|in, record NN prep|of, article NN -prep|in, reports VBZ -nsubj, reported VBD -nsubj, printed VBN amod, printed VBD -nsubj, printed VBN -prep|in, published VBN -prep|in, published VBN partmod, published VBD -nsubj, sunday NNP nn, section NN -prep|of, school NN nn, saw VBD -prep|in, ad NN -prep|in, copy NN -prep|of, page NN -prep|of, pages NNS -prep|of, morning NN nn, story NN -prep|in
book       33/300  recent JJ amod, read VB -dobj, read VBD -dobj, reading VBG -dobj, edition NN -prep|of, printed VBN amod, industry NN -nn, described VBN -prep|in, writing VBG -dobj, wrote VBD -prep|in, wrote VBD rcmod, write VB -dobj, written VBN rcmod, written VBN -dobj, wrote VBD -dobj, pick VB -dobj, photo NN nn, co-author NN -prep|of, co-authored VBN -dobj, section NN -prep|of, published VBN -dobj, published VBN -nsubjpass, published VBD -dobj, published VBN partmod, copy NN -prep|of, buying VBG -dobj, buy VB -dobj, author NN -prep|of, bag NN -nn, bags NNS -nn, page NN -prep|of, pages NNS -prep|of, titled VBN partmod

Experiments: Overlap and use of distributional information

§ Remove occurrences of the disambiguation target
§ For each content word in the context and sense definition, retrieve the n most similar terms from the DT and add them to the text
§ Separate runs for n = 0, 10, 20, …, 100
§ No lemmatization or stop word filtering
§ Expanded context and definitions treated as bags of words
§ Overlap is the cardinality of the intersection between the two bags of words
§ Ties between senses broken by choosing randomly
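The expansion-and-overlap procedure above can be sketched with a miniature DT (the entries below are illustrative, not from the real thesaurus):

```python
# Sketch of the lexical expansion experiment: expand each content word in
# the context and in each sense definition with its top-n DT terms, then
# score senses by bag-of-words overlap.
from collections import Counter

dt = {  # toy distributional thesaurus: word -> ranked similar terms
    "loan": ["mortgage", "debt", "financing", "credit"],
    "paid": ["pay", "owed", "collected"],
    "monthly": ["annual", "weekly", "yearly", "quarterly"],
    "charge": ["cost", "fee"],
    "borrowing": ["lending", "debt", "borrowed"],
    "money": ["cash", "funds"],
}


def expand(tokens, n):
    """Add the top-n DT expansions of every token to the bag of words."""
    bag = list(tokens)
    for t in tokens:
        bag.extend(dt.get(t, [])[:n])
    return bag


def score(context, definition, n):
    a, b = Counter(expand(context, n)), Counter(expand(definition, n))
    return sum((a & b).values())


ctx = ["loan", "paid", "monthly"]               # content words of the context
sense1 = ["fixed", "charge", "borrowing", "money"]
sense2 = ["sense", "concern", "something"]
print(score(ctx, sense1, 0), score(ctx, sense1, 4))  # 0 1
```

Without expansions the financial sense scores zero against the context; with expansions the two sides meet on the shared expansion “debt”, bridging the lexical gap.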

WSD accuracy (F1) on SemEval-2007 by number of expansions

[Line plot: F-score (62–82) versus number of expansions n (0–100) for SL+n and SEL+n.]

SemEval-2007 accuracy by part of speech, and comparison with state of the art and baselines

system                       adj.   noun   adv.   verb    all
MFS baseline                84.25  77.44  87.50  75.30  78.89
random baseline             68.54  61.96  69.15  52.81  61.28
SL+0                        75.32  69.71  69.75  59.46  67.92
SL+100                      82.18  76.31  78.85  66.07  74.81
SEL+0                       87.19  81.52  74.87  72.26  79.40
SEL+100                     88.40  83.45  80.29  72.25  81.03
Anaya-Sánchez et al., 2007  78.73  70.76  74.04  62.61  70.21
Li et al., 2010             82.04  80.05  82.21  70.73  78.14
Ponzetto & Navigli, 2010      —    79.4     —      —      —
Chen et al., 2014             —    81.6     —      —    75.8


Bridging the lexical gap, Solution 2: Enrich sense definitions by aligning complementary LSRs


Solution: Automatically merge existing pairwise alignments

Step A: Collect pairwise alignments

Wiktionary to WordNet (Meyer & Gurevych, 2011):
3198:0:2 ↔ 09828216n
3198:0:3 ↔ 01322221n
3198:0:3 ↔ 09918554n
4487:0:1 ↔ 09918248n
4487:0:2 ↔ 09918762n

Wikipedia to WordNet (Matuschek & Gurevych, 2013):
Child ↔ 09918248n
Child ↔ 09918554n
Child ↔ 09918762n
Infant ↔ 09827363n
Infant ↔ 09827519n
Infant ↔ 09827683n
Infant ↔ 09828216n
Infant ↔ 10353016n

Step B: Build a graph of aligned senses

[Graph: Wiktionary senses 3198:0:2, 3198:0:3, 4487:0:1, 4487:0:2 and Wikipedia articles Child and Infant, linked by alignment edges to WordNet synsets 01322221n, 09827363n, 09827519n, 09827683n, 09828216n, 09918248n, 09918554n, 09918762n, 10353016n.]


Step C: Find connected components to cluster word senses

[The same alignment graph; its connected components form the sense clusters.]
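Steps A–C can be sketched as a union–find over the alignment pairs; each connected component becomes one sense cluster. The pairs below are a subset of the example alignments, used only for illustration.

```python
# Merge pairwise LSR alignments into clusters by taking connected
# components of the alignment graph (union-find with path halving).
from collections import defaultdict


def cluster(alignments):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in alignments:      # Step B: add edges
        parent[find(a)] = find(b)

    groups = defaultdict(set)    # Step C: collect components
    for node in parent:
        groups[find(node)].add(node)
    return list(groups.values())


pairs = [("3198:0:2", "09828216n"), ("Infant", "09828216n"),
         ("Infant", "09827363n"), ("Child", "09918248n"),
         ("4487:0:1", "09918248n")]
for component in cluster(pairs):
    print(sorted(component))
```

Here the transitive closure puts the Wiktionary sense 3198:0:2 and the Wikipedia article Infant into one cluster because both align to synset 09828216n, even though no alignment links them directly.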

Results on Senseval-3 all-words data set

glosses                       coverage  precision  recall  F-score
SL+0                             26.85      69.23   18.59    29.30
SL+0 with enriched glosses       29.17      67.26   19.62    30.38
SEL+30                           98.61      53.46   52.71    53.08
SEL+30 with enriched glosses     98.76      51.07   50.44    50.75

§ Improvement to simplified Lesk is modest but statistically significant (McNemar’s χ² = 6.22, df = 1, critical value χ²(1, 0.95) = 3.84)
§ Method is not compatible with the lexical expansion method, particularly for rarer and more polysemous words

Agenda

Introduction

Knowledge-based word sense disambiguation

Pun interpretation

Conclusion

Motivation

§ Traditional WSD assumes every word carries a single meaning
§ In punning, words are used in a deliberately ambiguous manner:


The electric company to a customer: “We would be delighted if you send in your bill. However, if you don’t, you will be.” (Aarons, 2012)


Digital humanities
Machine(-assisted) translation
Sentiment analysis
Human–computer interaction

Data set


§ 1607 short punning jokes, each with one homographic pun
§ WordNet 3.1 annotations applied by three human judges
§ Good interannotator agreement (Krippendorff’s α = 0.777)
§ Pun senses often transcend part of speech

Algorithms


§ Lack of training data rules out supervised approaches
§ Naïve adaptations of SL, SEL, SL+100, and random/MFS baselines (select the top two senses returned by the algorithm)
§ Pun-specific adaptations of SEL:
  POS: favour senses that match the pun’s putative POS
  cluster: also ensure the two senses are in different clusters
§ We tested our 3-way WordNet–Wikipedia–Wiktionary clustering as well as a 2-way WordNet–OmegaWiki clustering (Matuschek et al., 2014)
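The cluster-constrained selection can be sketched as follows: take the top-scoring sense, then the best remaining sense from a different cluster, so the two answers reflect genuinely distinct meanings. The scores are those from the “bank” example slides; the cluster labels are hypothetical.

```python
# Sketch of pun sense selection with a sense-cluster constraint.
def top_two_clustered(scores, cluster_of):
    """scores: sense -> Lesk score; cluster_of: sense -> cluster id."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    first = ranked[0]
    second = next((s for s in ranked[1:]
                   if cluster_of[s] != cluster_of[first]), None)
    return first, second


scores = {"09213565n": 5, "09213434n": 2, "08462066n": 1,
          "08420278n": 7, "02787772n": 5, "00169305n": 0}
cluster_of = {"09213565n": "riverside", "09213434n": "riverside",
              "08462066n": "row", "08420278n": "institution",
              "02787772n": "institution", "00169305n": "maneuver"}
print(top_two_clustered(scores, cluster_of))
# ('08420278n', '09213565n')
```

Without the constraint, the two highest-scoring senses could be “financial institution” (7) and “bank building” (5), which belong to the same cluster; the constraint instead pairs the financial sense with “sloping land”, the two meanings the pun actually plays on.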

Using LSR alignments for sense clustering

Where do otters keep their money? At the bank!


Scores Senses

5 09213565n sloping land (especially the slope beside. . .

2 09213434n a long ridge or pile

1 08462066n an arrangement of similar objects in a row. . .

7 08420278n a financial institution that accepts deposits. . .

5 02787772n a building in which the business of banking. . .

0 00169305n a flight maneuver; aircraft tips laterally. . .

25 May 2017 | Ubiquitous Knowledge Processing Lab | Department of Computer Science | Tristan Miller | 32 Using LSR alignments for sense clustering

Where do otters keep their money? At the bank!

Score  Sense
  5    09213565n  sloping land (especially the slope beside…)
  2    09213434n  a long ridge or pile
  1    08462066n  an arrangement of similar objects in a row…
  7    08420278n  a financial institution that accepts deposits…
  5    02787772n  a building in which the business of banking…
  0    00169305n  a flight maneuver; aircraft tips laterally…
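The cluster-constrained selection step can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual SEL+cluster implementation: the scores are those shown for “bank” above, while the cluster labels are hypothetical stand-ins for the clusters induced from the LSR alignments.

```python
def top_two_senses(scored, cluster_of):
    """Return the two highest-scoring senses that lie in different clusters.

    scored: list of (score, sense_id) pairs; cluster_of: sense_id -> cluster.
    Hypothetical sketch of the cluster constraint, not the real system's code.
    """
    ranked = sorted(scored, reverse=True)
    _, best = ranked[0]
    for _, sense in ranked[1:]:
        if cluster_of[sense] != cluster_of[best]:
            return best, sense
    return best, ranked[1][1]  # no second cluster found: fall back to naive top two

# Scores for "bank" from the slide; the cluster labels are illustrative only.
scores = [(5, "09213565n"), (2, "09213434n"), (1, "08462066n"),
          (7, "08420278n"), (5, "02787772n"), (0, "00169305n")]
clusters = {"09213565n": "terrain", "09213434n": "terrain",
            "08462066n": "arrangement", "08420278n": "finance",
            "02787772n": "finance", "00169305n": "aviation"}

print(top_two_senses(scores, clusters))  # ('08420278n', '09213565n')
```

With these scores the financial-institution and sloping-land senses win out, the two meanings the pun actually plays on; had the banking-building sense 02787772n (also in the finance cluster) outscored the riverbank sense, the constraint would have skipped it rather than return two finance senses.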

Results
System        Coverage  Precision  Recall  F-score
SL+0             35.52      19.74    7.01    10.35
SEL+0            42.45      19.96    8.47    11.90
SL+100           98.69      13.43   13.25    13.34
SEL+POS          59.94      21.21   12.71    15.90
SEL+cluster3     66.33      20.67   13.71    16.49
SEL+cluster2     68.10      20.70   14.10    16.77
random          100.00       9.31    9.31     9.31
MFS             100.00      13.25   13.25    13.25

§ Pun “disambiguation” is much harder than traditional WSD
§ Pun-adapted SEL as good as supervised baseline(!)

Future directions
§ Work so far assumes that:
  § Location of the pun is given
  § Pun is homographic (“perfect”)
§ Further research problems:
  § Pun detection
  § Processing of imperfect puns

Pun typology

                 homophonic                           heterophonic
homographic      A political prisoner is one who      A lumberjack’s world revolves
                 stands behind her convictions.       on its axes.
heterographic    She fell through the window          The sign at the nudist camp
                 but felt no pane.                    read, “Clothed until April.”

Sound similarity

§ Any pair of words can be characterized by their (perceived) similarity in terms of sound or pronunciation.
§ Studying pairs with a phonologically constrained relationship can help us model that relationship.
§ Conversely, a model that quantifies perceived sound differences between words can assess the probability of a given relationship.
§ In particular, a model of sound similarity could help detect or generate puns.


Early similarity models

§ “Predicted phonetic distance” or “PPD” (Vitz & Winkler, 1973):
  1. Optimally align two phonemic sequences
  2. Compute the relative Hamming distance (i.e., the proportion of non-matching phoneme positions)

  #∅∅∅∅∅ɹəleʃn#  relation
  #ʌndəɹɹɪ∅∅tn#  underwritten

  PPD = 9 / 11 ≈ 0.818

§ Method works better when it is applied separately to the syllable onset, nucleus, and coda.
§ Aligning the sequences is a nontrivial task.
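As a concrete illustration, the second step of PPD can be sketched in a few lines of Python, assuming the alignment (the hard part) is already given and the slide’s ∅ gaps are encoded as None. This is a reconstruction of the metric as described, not Vitz & Winkler’s own implementation.

```python
def ppd(aligned_a, aligned_b):
    """Relative Hamming distance over two pre-aligned phoneme sequences.

    Gaps (the slide's ∅) are encoded as None; any position where the two
    sequences differ, including gap-vs-phoneme, counts as a mismatch.
    """
    assert len(aligned_a) == len(aligned_b), "sequences must be pre-aligned"
    mismatches = sum(a != b for a, b in zip(aligned_a, aligned_b))
    return mismatches / len(aligned_a)

# "relation" vs. "underwritten", aligned as on the slide:
relation     = [None, None, None, None, None, "ɹ", "ə", "l", "e", "ʃ", "n"]
underwritten = ["ʌ", "n", "d", "ə", "ɹ", "ɹ", "ɪ", None, None, "t", "n"]

print(round(ppd(relation, underwritten), 3))  # 0.818
```

Only two of the eleven aligned positions match (the medial ɹ and the final n), so nine mismatches over eleven positions gives PPD = 9/11 ≈ 0.818.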

Sound similarity based on phonemic features
§ Many models compute similarity in terms of the classic feature matrix (Chomsky & Halle, 1968).
§ These models often fail to account for many common cases.

Trying to preserve his savoir faire in a new restaurant, the guest looked down at the eggs the waiter had spilled in his lap and said brightly, “Well, I guess the yolk’s on me!”

§ Variously mitigated by the use of multivalued features (Ladefoged, 1995), feature salience coefficients (Kondrak, 2002), and Optimality Theory (Lutz & Greene, 2003).

Similarity models based on puns
§ Hausmann (1974) observed an absolute phonemic distance of no more than four
§ Lagerquist (1980): puns tend not to insert or delete syllables, nor to change syllable stress; sound changes tend to occur on the stressed syllable
§ Zwicky & Zwicky (1986): certain segments do not appear equally often in puns and targets: Y “ousts” X when Y appears as a pun substitute for the latent target X significantly more often than the reverse
§ Sobkowiak (1991): pun understandability is maximized when the consonantal skeleton is kept largely intact

How to proceed?
§ Past phonological analyses tend to agree
§ How to consolidate and implement them computationally?
§ Hempelmann (2003) turned Sobkowiak’s data into a cost function that could conceivably be used in a pun generator or detector, but this has not yet been done
§ Shared task on pun detection and interpretation

Agenda

Introduction

Knowledge-based word sense disambiguation

Pun interpretation

Conclusion

Summary
This talk has presented…
§ …a method for applying lexical expansions to knowledge-based WSD, resulting in state-of-the-art performance
§ …a method for combining arbitrary pairwise alignments of lexical-semantic resources, useful for:
  § (slightly) improving the accuracy of knowledge-based WSD
  § inducing a clustering of senses in LSRs
§ …a sense-annotated data set for puns, and pioneering algorithms for “disambiguating” them
§ …some background on the phonology of imperfect puns and ideas for implementing them computationally

References I

§ Aarons, D. (2012). Jokes and the Linguistic Mind. Routledge.
§ Aarons, D. (2017). Puns and Tacit Linguistic Knowledge. In: S. Attardo (ed.), Handbook of Language and Humor. Routledge, pp. 80–94.
§ Anaya-Sánchez, H., A. Pons-Porrata, and R. Berlanga-Llavori (2007). TKB-UO: Using Sense Clustering for WSD. In: SemEval 2007: Proceedings of the 4th International Workshop on Semantic Evaluations, pp. 322–325.
§ Biemann, C. and M. Riedl (2013). Text: Now in 2D! A Framework for Lexical Expansion with Contextual Similarity. Journal of Language Modelling 1(1), pp. 55–95.
§ Chen, X., Z. Liu, and M. Sun (2014). A Unified Model for Word Sense Representation and Disambiguation. In: The 2014 Conference on Empirical Methods in Natural Language Processing: Proceedings of the Conference, pp. 1025–1035.
§ Cholakov, K., C. Biemann, J. Eckle-Kohler, and I. Gurevych (2014). Lexical Substitution Dataset for German. In: LREC 2014, Ninth International Conference on Language Resources and Evaluation, pp. 2524–2531.

References II

§ Firth, J. R. (1957). A Synopsis of Linguistic Theory, 1930–1955. In: Studies in Linguistic Analysis. Basil Blackwell, pp. 1–32.
§ Hempelmann, C. and T. Miller (2017). Puns: Taxonomy and Phonology. In: S. Attardo (ed.), Handbook of Language and Humor. Routledge, pp. 95–108.
§ Li, L., B. Roth, and C. Sporleder (2010). Topic Models for Word Sense Disambiguation and Token-based Idiom Detection. In: 48th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, pp. 1138–1147.
§ Matuschek, M. and I. Gurevych (2013). Dijkstra-WSA: A Graph-based Approach to Word Sense Alignment. Transactions of the Association for Computational Linguistics 1, pp. 151–164.
§ Matuschek, M., T. Miller, and I. Gurevych (2014). A Language-independent Sense Clustering Approach for Enhanced WSD. In: Proceedings of the 12th Edition of the KONVENS Conference, pp. 11–21.
§ Meyer, C. M. and I. Gurevych (2011). What Psycholinguists Know About Chemistry: Aligning Wiktionary and WordNet for Increased Domain Coverage. In: Proceedings of the Fifth International Joint Conference on Natural Language Processing, pp. 883–892.

References III

§ Ponzetto, S. P. and R. Navigli (2010). Knowledge-rich Word Sense Disambiguation Rivaling Supervised Systems. In: 48th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, pp. 1522–1531.
§ Raskin, V. (1985). Semantic Mechanisms of Humor. Springer.

Credits

§ TU Darmstadt S103 ErhoehtVonS208 © 2007 ThomasGP. CC BY-SA 4.0.
§ Robert-Piloty-Gebäude, TU Darmstadt © 2006 S. Kasten. CC BY-SA 4.0.
§ Darmstadt 2006 121 © 2006 derbrauni. CC BY-SA 4.0.
§ Darmstadt TU 1 © 2011 Andreas Pfaefcke. CC BY 3.0.
§ University College Front Facade © 2004 Nuthingoldstays. CC BY-SA 3.0.
§ First Nations University 3 © 2013 . CC BY-SA 3.0.
§ Woman and laptop © 2012 Shopware. CC BY-SA 3.0.
§ Firefox OS Emjois © 2015 Mozilla Foundation. CC BY 4.0.
§ Duck, Rabbit! Duck! © 2015 Taro Istok. CC BY-SA 4.0.

Thank you!

Questions?
