Teaching FORGe to Verbalize DBpedia Properties in Spanish

Total Pages: 16

File Type: PDF, Size: 1020 KB

Teaching FORGe to Verbalize DBpedia Properties in Spanish

Simon Mille (Universitat Pompeu Fabra, Barcelona, Spain), Stamatia Dasiopoulou (Independent Researcher, Barcelona, Spain), Beatriz Fisas (Universitat Pompeu Fabra, Barcelona, Spain), Leo Wanner (ICREA and Universitat Pompeu Fabra, Barcelona, Spain)

Abstract

Statistical generators increasingly dominate the research in NLG. However, grammar-based generators that are grounded in a solid linguistic framework remain very competitive, especially for generation from deep knowledge structures. Furthermore, if built modularly, they can be ported to other genres and languages with a limited amount of work, without the need of annotating a considerable amount of training data. One of these generators is FORGe, which is based on the Meaning-Text Model. In the recent WebNLG challenge (the first comprehensive task addressing the mapping of RDF triples to text), FORGe ranked first with respect to the overall quality in the human evaluation. We extend the coverage of FORGe's open-source grammatical and lexical resources for English, so as to further improve the English outcome, and port them to Spanish, to achieve a comparable quality. This confirms that, as already observed in the case of SimpleNLG, a robust universal grammar-driven framework and a systematic organization of the linguistic resources can be an adequate choice for NLG applications.

1 Introduction

The origins of Natural Language Generation (NLG) are in rule-based sentence/text generation from numerical data or deep semantic structures. With the availability of large-scale syntactically annotated corpora and the lack of publicly available knowledge repositories, the focus had shifted to statistical surface generation. However, thanks to Semantic Web (SW) initiatives such as the W3C Linking Open Data Project (https://www.w3.org/wiki/SweoIG/TaskForces/CommunityProjects/LinkingOpenData), a tremendous amount of structured knowledge has been made publicly available as language-independent triples; the Linked Open Data (LOD) cloud currently contains over one thousand interlinked datasets (e.g., DBpedia, Wikidata), which cover a large range of domains and amount to billions of different triples. The verbalization of LOD triples, i.e., their mapping onto sentences in natural languages, has been attracting growing interest in the past years, as shown by the organization of dedicated events such as the WebNLG 2016 workshop (Gardent and Gangemi, 2016) and the 2017 WebNLG challenge (Gardent et al., 2017b). As a result, a variety of new NLG systems designed specifically for handling structured data have emerged, most of them statistical, as seen in the 2017 WebNLG challenge, although a number of rule-based generators have also been presented. All systems focus on English, mainly because no training data other than for English are available yet. Given the high cost of creating training data, this state of affairs is likely to persist for some time. Therefore, the question of the competitiveness of rule-based generators arises.

One of the rule-based generators presented at WebNLG was FORGe (Mille and Dasiopoulou, 2017), which ranked first with respect to overall quality in the human evaluation. FORGe is grounded in the linguistic model of the Meaning-Text Theory (Mel'čuk, 1988). The multistratal nature of this model allows for a modular organization of blocks of graph-transduction rules, from blocks that are universal, i.e., multilingual, to blocks that are language-specific. The graph-transduction framework MATE (Bohnet and Wanner, 2010) furthermore facilitates systematic hierarchical rule writing and testing. SimpleNLG (Gatt and Reiter, 2009) demonstrated that a well-defined generation infrastructure, along with a transparent, easy-to-handle rule and structure format, is key to its take-up and use for the creation of generation modules for multiple languages. In what follows, we aim to demonstrate that the FORGe generator can also serve well as a portable multilingual text generator for the verbalization of structured data, and that its lexical and grammatical resources can be easily extended to reach a higher coverage of linguistic constructions. For this, we extend its publicly available resources for English, so as to improve the quality of the English texts, and port the resources to Spanish with a comparable output quality.

In the next section, we summarize the related work. Section 3 introduces FORGe. In Section 4, we outline our work on the extension of the available English resources and on the adaptation of FORGe to Spanish. Section 5 presents the results of the automatic evaluation of the extended system, and Section 6 a qualitative evaluation of the outputs in both languages. Section 7, finally, draws some conclusions and presents the future work.
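To make the verbalization task discussed above concrete, the following minimal Python sketch shows one plausible in-memory representation of a WebNLG-style data-text pair (cf. Figure 1 below). The Triple class, the field names, and the property labels are assumptions made for this illustration only; they are not the official WebNLG release format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """One subject-property-object triple, as used in the WebNLG task."""
    subject: str
    prop: str   # a DBpedia property name, e.g. "leader" (assumed label)
    obj: str

# One input (a small set of triples about Belgium/Antwerp) paired with a
# human-written reference verbalization.
data_text_pair = {
    "triples": [
        Triple("Belgium", "leader", "Charles_Michel"),
        Triple("Belgium", "language", "German_language"),
        Triple("Antwerp", "country", "Belgium"),
        Triple("Antwerp_International_Airport", "cityServed", "Antwerp"),
    ],
    "reference": (
        "Charles Michel is the leader of Belgium where the German language is "
        "spoken. Antwerp is located in the country and served by Antwerp "
        "International airport."
    ),
}

print(len(data_text_pair["triples"]), "input triples")
```

The generation task is then to map the list of triples onto a fluent text that covers the same content as the reference.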
2 Related Work

The most prominent recent illustration of the portability of a generation framework is SimpleNLG. Originally developed for the generation of English in practical applications (Gatt and Reiter, 2009), it has in the meantime been ported to generate, among others, Brazilian Portuguese (De Oliveira and Sripada, 2014), Dutch (de Jong and Theune, 2018), German (Bollmann, 2011), Italian (Mazzei et al., 2016), and Spanish (Soto et al., 2017). However, while SimpleNLG is a framework for surface generation, usually with limited coverage, we are interested in a portable multilingual framework for large-scale text generation from structured data, more precisely, from DBpedia properties (Lehmann et al., 2015).

Although most existing NLG generators combine different techniques, there are three main approaches to generating texts from an input sequence of structured data (Bouayad-Agha et al., 2014; Gatt and Krahmer, 2018): (i) filling slot values in predefined sentence templates (Androutsopoulos et al., 2013), (ii) applying grammars (rules) that encode different types of linguistic knowledge (Wanner et al., 2010), and (iii) statistically predicting the most appropriate output (Gardent et al., 2017b; Belz et al., 2011). Template-based systems are very robust, but also limited in terms of portability, since new templates need to be defined for every new domain, style, language, etc. Statistical systems have the best coverage, but the relevance and the quality of the produced texts cannot be ensured; furthermore, they are fully dependent on the available (still scarce and mostly monolingual) training data. The development of grammar-based systems is time-consuming and they usually have coverage issues; however, they do not require training material, they allow for greater control over the outputs (e.g., for mitigating errors or tuning the output to a desired style), and the linguistic knowledge used for one domain or language can be reused for other domains and languages. In addition, a number of systems actually address the whole sequence as one step, by combining approaches (i) and (iii) and filling the slot values of pre-existing templates using neural network techniques (Nayak et al., 2017). A minimal sketch of approach (i) is given below.
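The following short Python sketch illustrates approach (i), i.e., filling slot values in predefined, per-property sentence templates. The templates and property names are illustrative assumptions for this sketch; this is a baseline strategy described in the survey above, not FORGe's grammar-based pipeline.

```python
# One hand-written sentence pattern per DBpedia property (assumed labels).
TEMPLATES = {
    "leader": "{obj} is the leader of {subj}.",
    "language": "{obj} is spoken in {subj}.",
    "country": "{subj} is located in {obj}.",
}

def verbalize(subj, prop, obj):
    """Render one triple as a sentence, falling back to a generic pattern."""
    template = TEMPLATES.get(prop, "The {prop} of {subj} is {obj}.")
    tidy = lambda s: s.replace("_", " ")   # DBpedia labels use underscores
    return template.format(subj=tidy(subj), obj=tidy(obj), prop=tidy(prop))

print(verbalize("Belgium", "leader", "Charles_Michel"))
# Charles Michel is the leader of Belgium.
print(verbalize("Belgium", "currency", "Euro"))
# The currency of Belgium is Euro.   (generic fallback for an uncovered property)
```

The fallback line makes the portability problem of this approach visible: any property without a hand-written template degrades to a stilted generic sentence, and the whole table has to be rewritten for each new language.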
In the WebNLG challenge (Gardent et al., 2017a), systems of types (ii) and (iii) were presented. The task consisted in generating texts from up to 7 DBpedia triples from 15 categories, covering in total 373 distinct DBpedia properties. Nine categories appeared in the training data ('Astronaut', 'Building', 'University', etc.), i.e., were "seen", and five categories were "unseen", i.e., they did not appear in the training data ('Athlete', 'Artist', etc.). At the time of the challenge, the WebNLG dataset contained about 10K distinct inputs and 25K data-text pairs; a sample data-text pair is shown in Figure 1. The neural generator ADAPT (Elder et al., 2018) performed best on seen data, and FORGe on unseen data and overall. In what follows, we aim to improve the performance of FORGe on seen data for English and furthermore port it to Spanish.

[Figure 1: Sample pair of data (subject-property-object) and human-produced texts (references). Reference 1: "Charles Michel is the leader of Belgium where the German language is spoken. Antwerp is located in the country and served by Antwerp International airport." Reference 2: "Antwerp International Airport serves the city of Antwerp which is a popular tourist destination in Belgium. One of the languages spoken in Belgium is German, and the leader is Charles Michel."]

3 Overview of FORGe

FORGe is an open-source generator implemented in terms of graph transducers; it covers the last two typical NLG tasks (text planning and linguistic generation). Following the Meaning-Text Theory (Mel'čuk, 1988), FORGe is based on the notion of linguistic dependencies, that is, the semantic, syntactic and morphological relations between the components of the sentence. Input predicate-argument structures are mapped onto sentences by applying a series of rule-based graph transducers.

[Figure 2: Sample PredArg templates corresponding to the leader (top) and language (bottom) properties. Recoverable elements include A1/A2 argument slots linking subject and object, the features dpos=NP, definiteness=DEF, class=Person, Location, and the lexical unit lex=speak_VB_02.]

The generator handles Semantic Web inputs by [...] pertinent subject/object class labels, [...]
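A simplified sketch of what a per-property predicate-argument template could look like is given below, in the spirit of Figure 2. The real FORGe/MATE resources are graph-transduction rules with a richer feature inventory, and the exact feature assignment in Figure 2 is not fully recoverable from the extracted text, so the dictionary encoding, the feature placement, and the slot names below are indicative assumptions only.

```python
# Each entry pairs a DBpedia property (assumed label) with a predicate lexeme
# and the features imposed on its A1/A2 argument slots (cf. Figure 2).
PREDARG_TEMPLATES = {
    "leader": ("leader", {"A1": {"dpos": "NP", "definiteness": "DEF"},
                          "A2": {"dpos": "NP", "class": "Person"}}),
    "language": ("speak_VB_02", {"A1": {"dpos": "NP"},
                                 "A2": {"dpos": "NP", "definiteness": "DEF"}}),
}

def to_predarg(subj, prop, obj):
    """Instantiate the predicate-argument structure (here a nested dict) for one triple."""
    predicate, slots = PREDARG_TEMPLATES[prop]
    return {"predicate": predicate,
            "A1": {"lex": subj, **slots["A1"]},
            "A2": {"lex": obj, **slots["A2"]}}

print(to_predarg("Belgium", "leader", "Charles_Michel"))
```

The point of such templates is that they stop at the predicate-argument level: turning the resulting structure into an actual English or Spanish sentence is then left to the language-specific blocks of graph-transduction rules, which is what makes the approach portable across languages.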
Recommended publications
  • Abstraction and Generalisation in Semantic Role Labels: PropBank
    Abstraction and Generalisation in Semantic Role Labels: PropBank, VerbNet or both?
    Paola Merlo, Lonneke Van Der Plas (Linguistics Department, University of Geneva, 5 Rue de Candolle, 1204 Geneva, Switzerland)
    Abstract: Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describing the interface between grammar and meaning. In this paper, we compare two annotation schemes, PropBank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning.
    The role of theories of semantic role lists is to obtain a set of semantic roles that can apply to any argument of any verb, to provide an unambiguous identifier of the grammatical roles of the participants in the event described by the sentence (Dowty, 1991). Starting from the first proposals (Gruber, 1965; Fillmore, 1968; Jackendoff, 1972), several approaches have been put forth, ranging from a combination of very few roles to lists of very fine-grained specificity (see Levin and Rappaport Hovav (2005) for an exhaustive review). In NLP, several proposals have been put forth in recent years and adopted in the annotation of large samples of text (Baker et al., 1998; Palmer et al., 2005; Kipper, 2005; Loper et al., 2007).
  • A Novel Large-Scale Verbal Semantic Resource and Its Application to Semantic Role Labeling
    VerbAtlas: a Novel Large-Scale Verbal Semantic Resource and Its Application to Semantic Role Labeling
    Andrea Di Fabio, Simone Conia, Roberto Navigli (Department of Computer Science / Department of Literature and Modern Cultures, Sapienza University of Rome, Italy), {difabio,conia,navigli}@di.uniroma1.it
    Abstract: We present VerbAtlas, a new, hand-crafted lexical-semantic resource whose goal is to bring together all verbal synsets from WordNet into semantically-coherent frames. The frames define a common, prototypical argument structure while at the same time providing new concept-specific information. In contrast to PropBank, which defines enumerative semantic roles, VerbAtlas comes with an explicit, cross-frame set of semantic roles linked to selectional preferences expressed in terms of WordNet synsets, and is the first resource enriched with semantic information about implicit, shadow, and default arguments. We demonstrate the effectiveness of VerbAtlas in the task of dependency-based Semantic Role Labeling and show how its integration...
    The automatic identification and labeling of argument structures is a task pioneered by Gildea and Jurafsky (2002) called Semantic Role Labeling (SRL). SRL has become very popular thanks to its integration into other related NLP tasks such as machine translation (Liu and Gildea, 2010), visual semantic role labeling (Silberer and Pinkal, 2018) and information extraction (Bastianelli et al., 2013). In order to be performed, SRL requires the following core elements: 1) a verb inventory, and 2) a semantic role inventory. However, the current verb inventories used for this task, such as PropBank (Palmer et al., 2005) and FrameNet (Baker et al., 1998), are language-specific and lack high-quality interoperability with existing knowledge bases.
  • An Arabic Wordnet with Ontologically Clean Content
    The Arabic Ontology – An Arabic Wordnet with Ontologically Clean Content
    Mustafa Jarrar (Birzeit University, Palestine). Applied Ontology (2021), IOS Press.
    Abstract: We present a formal Arabic wordnet built on the basis of a carefully designed ontology hereby referred to as the Arabic Ontology. The ontology provides a formal representation of the concepts that the Arabic terms convey, and its content was built with ontological analysis in mind, and benchmarked to scientific advances and rigorous knowledge sources as much as this is possible, rather than to only speakers' beliefs as lexicons typically are. A comprehensive evaluation was conducted thereby demonstrating that the current version of the top levels of the ontology can top the majority of the Arabic meanings. The ontology consists currently of about 1,300 well-investigated concepts in addition to 11,000 concepts that are partially validated. The ontology is accessible and searchable through a lexicographic search engine (http://ontology.birzeit.edu) that also includes about 150 Arabic-multilingual lexicons, and which are being mapped and enriched using the ontology. The ontology is fully mapped with Princeton WordNet, Wikidata, and other resources.
    Keywords: Linguistic Ontology, WordNet, Arabic Wordnet, Lexicon, Lexical Semantics, Arabic Natural Language Processing
    1. Introduction: The importance of linguistic ontologies and wordnets is increasing in many application areas, such as multilingual big data (Oana et al., 2012; Ceravolo, 2018), information retrieval (Abderrahim et al., 2013), question-answering and NLP-based applications (Shinde et al., 2012), data integration (Castanier et al., 2012; Jarrar et al., 2011), multilingual web (McCrae et al., 2011; Jarrar, 2006), among others.
  • Concepticon: a Resource for the Linking of Concept Lists
    Concepticon: A Resource for the Linking of Concept Lists
    Johann-Mattis List (CRLAO/EHESS and AIRE/UPMC, Paris), Michael Cysouw (Forschungszentrum Deutscher Sprachatlas, Marburg), Robert Forkel (Max Planck Institute for the Science of Human History, Jena)
    Abstract: We present an attempt to link the large amount of different concept lists which are used in the linguistic literature, ranging from Swadesh lists in historical linguistics to naming tests in clinical studies and psycholinguistics. This resource, our Concepticon, links 30 222 concept labels from 160 concept lists to 2495 concept sets. Each concept set is given a unique identifier, a unique label, and a human-readable definition. Concept sets are further structured by defining different relations between the concepts. The resource can be used for various purposes. Serving as a rich reference for new and existing databases in diachronic and synchronic linguistics, it allows researchers quick access to studies on semantic change, cross-linguistic polysemies, and semantic associations.
    Keywords: concepts, concept list, Swadesh list, naming test, word norms, cross-linguistically linked data
    1. Introduction: In 1950, Morris Swadesh (1909-1967) proposed the idea that certain parts of the lexicon of human languages are universal, stable over time, and rather resistant to borrowing. As a result, he claimed that this part of the lexicon, which... [the excerpt continues] ...in the languages they were working on, or that they turned out to be not as stable and universal as Swadesh had claimed (Matisoff, 1978; Alpher and Nash, 1999). Up to today, dozens of different concept lists have been compiled for various purposes.
  • VerbNet Based Citation Sentiment Class Assignment Using Machine Learning
    VerbNet based Citation Sentiment Class Assignment using Machine Learning
    Zainab Amjad, Imran Ihsan (Department of Creative Technologies, Air University, Islamabad, Pakistan). (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 11, No. 9, 2020.
    Abstract: Citations are used to establish a link between articles. This intent has changed over the years, and citations are now being used as a criterion for evaluating the research work or the author, and have become one of the most important criteria for granting rewards or incentives. As a result, many unethical activities related to the use of citations have emerged. That is why content-based citation sentiment analysis techniques are developed on the hypothesis that all citations are not equal. There are several pieces of research to find the sentiment of a citation; however, only a handful of techniques have used citation sentences for this purpose. In this research, we have proposed a verb-oriented citation sentiment classification for researchers by semantically analyzing verbs within a citation text using VerbNet Ontology, natural language processing & four...
    ...time-consuming and complicated. To resolve this issue there exist many researchers [7]-[9] who deal with the sentiment analysis of citation sentences to improve bibliometric measures. Such applications can help scholars in the period of research to identify the problems with the present approaches, unaddressed issues, and the present research gaps [10]. There are two existing approaches for Citation Sentiment Analysis: Qualitative and Quantitative [7]. Quantitative approaches consider that all citations are equally important, while qualitative approaches believe that all citations are not equally important [9]. The quantitative approach uses citation count to rank a research paper [8], while the qualitative approach analyzes the nature of citation [10].
  • Generating Lexical Representations of Frames Using Lexical Substitution
    Generating Lexical Representations of Frames using Lexical Substitution
    Saba Anwar, Chris Biemann (Universität Hamburg, Germany); Artem Shelmanov, Alexander Panchenko (Skolkovo Institute of Science and Technology, Russia)
    Abstract: Semantic frames are formal linguistic structures describing situations/actions/events, e.g. Commercial transfer of goods. Each frame provides a set of roles corresponding to the situation participants, e.g. Buyer and Goods, and lexical units (LUs) – words and phrases that can evoke this particular frame in texts, e.g. Sell. The scarcity of annotated resources hinders wider adoption of frame semantics across languages and domains. We investigate a simple yet effective method, lexical substitution with word representation models, to automatically expand a small set of frame-annotated sentences with new words for their respective roles and LUs. We evaluate the expansion quality using FrameNet.
    Table 1 (an example of the induced lexical representation, i.e. roles and LUs, of the Assistance FrameNet frame using lexical substitutes from a single seed sentence): Seed sentence: "I hope Patti [Helper] can help [Assistance] you [Benefited party] soon [Time]." Substitutes for Assistance: assist, aid. Substitutes for Helper: she, I, he, you, we, someone, they, it, lori, hannah, paul, sarah, melanie, pam, riley. Substitutes for Benefited party: me, him, folk, her, everyone, people. Substitutes for Time: tomorrow, now, shortly, sooner, tonight, today, later.
    ...annotated resources. Some publicly available resources are FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2005), yet for many lan...
  • 'Face' in Vietnamese
    Body part extensions with mặt 'face' in Vietnamese
    Annika Tjuka
    1 Introduction: In many languages, body part terms have multiple meanings. The word tongue, for example, refers not only to the body part but also to a landscape or object feature, as in tongue of the ocean or tongue of flame. These examples illustrate that body part terms are commonly mapped to concrete objects. In addition, body part terms can be transferred to abstract domains, including the description of emotional states (e.g., Yu 2002). The present study investigates the mapping of the body part term face to the concrete domain of objects. Expressions like mouth of the river and foot of the bed reveal the different analogies for extending human body parts to object features. In the case of mouth of the river, the body part is extended on the basis of the function of the mouth as an opening. In contrast, the expression foot of the bed refers to the part of the bed where your feet are if you lie in it. Thus, the body part is mapped according to the spatial alignment of the body part and the object. Ambiguous words challenge our understanding of how the mental lexicon is structured. Two processes could lead to the activation of a specific meaning: i) all meanings of a word are stored in one lexical entry and the relevant one is retrieved, ii) only a core meaning is stored and the other senses are generated by lexical rules (for a description of both models, see Pustejovsky 1991). In the following qualitative analysis, I concentrate on the different meanings of the body part face and discuss the implications for its representation in the mental lexicon.
  • Towards a Cross-Linguistic VerbNet-Style Lexicon for Brazilian Portuguese
    Towards a cross-linguistic VerbNet-style lexicon for Brazilian Portuguese
    Carolina Scarton, Sandra Aluísio (Center of Computational Linguistics (NILC), University of São Paulo, Av. Trabalhador São-Carlense, 400, 13560-970, São Carlos/SP, Brazil)
    Abstract: This paper presents preliminary results of the Brazilian Portuguese VerbNet (VerbNet.Br). This resource is being built by using other existing Computational Lexical Resources via a semi-automatic method. We identified, automatically, 5688 verbs as candidate members of VerbNet.Br, which are distributed in 257 classes inherited from VerbNet. These preliminary results give us some directions of future work and, since the results were automatically generated, a manual revision of the complete resource is highly desirable.
    1. Introduction: The task of building Computational Lexical Resources (CLRs) and making them publicly available is one of the most important tasks of the Natural Language Processing (NLP) area. CLRs are used in many other applications in NLP, such as automatic summarization, machine translation and opinion mining. Specially, CLRs that treat the syntactic and semantic behaviour of verbs are very important to the tasks of information retrieval (Croch and King...
    ...the verb to load. To fulfill this gap, VerbNet has mappings to WordNet, which has deeper semantic relations. The Brazilian Portuguese language lacks CLRs. There are some initiatives like WordNet.Br (Dias da Silva et al., 2008), that is based on and aligned to WordNet. This resource is the most complete for the Brazilian Portuguese language. However, only the verb database is in an advanced stage (it is finished, but without manual validation), currently consisting of 5,860 verbs in 3,713 synsets.
  • Distributional Semantics, Wordnet, Semantic Role
    Semantics. Kevin Duh, Intro to NLP, Fall 2019 (lecture slides).
    Outline: Challenges; Distributional Semantics; Word Sense; Semantic Role Labeling.
    The challenge of designing semantic representations: Q: What is semantics? A: The study of meaning. Q: What is meaning? A: ... we know it when we see it. These sentences/phrases all have the same meaning: "XYZ corporation bought the stock." / "The stock was bought by XYZ corporation." / "The purchase of the stock by XYZ corporation..." / "The stock purchase by XYZ corporation..." But how to formally define it? Example representations for the sentence "I have a car" include a logic formula, a graph representation, key-value records, or a translation into another language ("Ich habe ein Auto"). There is no single agreed-upon representation that works in all cases: approaches differ in emphasis (words or sentences; syntax-semantics interface, logical inference, etc.) and in aims ("deep and narrow" vs. "shallow and broad"), e.g. for "Show me all flights from BWI to NRT", do we link to actual flight records, or to the general concept of flying machines?
    Distributional semantics is learned from large text datasets: given sentences such as "Your pet dog is so cute" / "Your pet cat is so cute" and "The dog ate my homework" / "The cat ate my homework", the neighbours of dog overlap with the neighbours of cat, so the meaning of dog is taken to be similar to the meaning of cat. Word2Vec (Mikolov et al., 2013, "Efficient Estimation of Word Representations in Vector Space") implements distributional semantics by predicting a word from its context, or the context from the word, e.g. "Your pet ___ is so" → dog. Latent Semantic Analysis (LSA) also implements distributional semantics; the lecturer's pet peeve: fundamentally, neural approaches aren't so different from classical LSA. (A minimal co-occurrence sketch of this idea appears after this list.)
  • Modeling and Encoding Traditional Wordlists for Machine Applications
    Modeling and Encoding Traditional Wordlists for Machine Applications
    Shakthi Poornima, Jeff Good (Department of Linguistics, University at Buffalo, Buffalo, NY, USA)
    Abstract: This paper describes work being done on the modeling and encoding of a legacy resource, the traditional descriptive wordlist, in ways that make its data accessible to NLP applications. We describe an abstract model for traditional wordlist entries and then provide an instantiation of the model in RDF/XML which makes clear the relationship between our wordlist database and interlingua approaches aimed towards machine translation, and which also allows for straightforward interoperation with data from full lexicons.
    1 Introduction: When looking at the relationship between NLP and linguistics, it is typical to focus on the different approaches taken with respect to issues...
    Clearly, descriptive linguistic resources can be of potential value not just to traditional linguistics, but also to computational linguistics. The difficulty, however, is that the kinds of resources produced in the course of linguistic description are typically not easily exploitable in NLP applications. Nevertheless, in the last decade or so, it has become widely recognized that the development of new digital methods for encoding language data can, in principle, not only help descriptive linguists to work more effectively but also allow them, with relatively little extra effort, to produce resources which can be straightforwardly repurposed for, among other things, NLP (Simons et al., 2004; Farrar and Lewis, 2007). Despite this, it has proven difficult to create significant electronic descriptive resources due to the complex and specific problems inevitably associated with the conversion of legacy data.
  • TEI and the Documentation of Mixtepec-Mixtec Jack Bowers
    Language Documentation and Standards in Digital Humanities: TEI and the documentation of Mixtepec-Mixtec
    Jack Bowers. Doctoral thesis in Computation and Language [cs.CL], École Pratique des Hautes Études, 2020. English. HAL Id: tel-03131936, https://tel.archives-ouvertes.fr/tel-03131936, submitted on 4 Feb 2021.
    Prepared at the École Pratique des Hautes Études (doctoral school no. 472, École doctorale de l'École Pratique des Hautes Études; specialty: Linguistics) and defended by Jack Bowers on 8 October 2020. Thesis committee: Guillaume Jacques (Research Director, CNRS), chair; Alexis Michaud (Research Scientist, CNRS), reviewer; Tomaž Erjavec (Senior Researcher, Jožef Stefan Institute), reviewer; Enrique Palancar (Research Director, CNRS), examiner; Karlheinz Moerth (Senior Researcher, Austrian Center for Digital Humanities and Cultural Heritage), examiner; Emmanuel Schang (Associate Professor, Université d'Orléans), examiner; Benoît Sagot (Research Scientist, Inria), examiner; Laurent Romary (Research Director, Inria), thesis supervisor.
  • Foundations of Natural Language Processing Lecture 16 Semantic
    Foundations of Natural Language Processing, Lecture 16: Semantic Role Labelling and Argument Structure. Alex Lascarides (slides based on those of Schneider, Koehn, Lascarides), 13 March 2020.
    Language is flexible: often we want to know who did what to whom (when, where, how and why), but the same event and its participants can have different syntactic realizations: "Sandy broke the glass." vs. "The glass was broken by Sandy."; "She gave the boy a book." vs. "She gave a book to the boy." Instead of focusing on syntax, consider the semantic roles (also called thematic roles) defined by each event.
    Argument structure and alternations: "Mary opened the door" / "The door opened"; "John slices bread with a knife" / "This bread slices easily" / "The knife slices cleanly"; "Mary loaded the truck with hay" / "Mary loaded hay onto the truck" / "The truck was loaded with hay (by Mary)" / "The hay was loaded onto the truck (by Mary)"; "John gave a present to Mary" / "John gave Mary a present" (cf. "Mary ate the sandwich with Kim"!).
    Syntax-semantics relationship: syntax ≠ semantics. The semantic roles played by different participants in the sentence are not trivially inferable from syntactic relations, though there are patterns. The idea of semantic roles can be combined with other aspects of meaning (beyond this course).
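As a small companion to the distributional-semantics entry above, here is a minimal, assumption-laden Python sketch of the underlying idea: words that occur with similar neighbours get similar vectors. It uses raw co-occurrence counts and cosine similarity only; word2vec and LSA are more refined realizations of the same principle, and the toy corpus is the one quoted in that entry.

```python
from collections import Counter
from math import sqrt

corpus = [
    "your pet dog is so cute",
    "your pet cat is so cute",
    "the dog ate my homework",
    "the cat ate my homework",
]

def context_vector(word, window=2):
    """Count the words that appear within +/- window tokens of `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(context_vector("dog"), context_vector("cat")))   # high: identical contexts
print(cosine(context_vector("dog"), context_vector("cute")))  # lower: little shared context
```

Because "dog" and "cat" occur in exactly the same contexts in this toy corpus, their count vectors are identical and the first similarity is 1.0, while "dog" and "cute" share only part of their contexts and score lower.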