A Large, Interlinked, Syntactically-Rich Lexical Resource for Ontologies


lemonUby - a large, interlinked, syntactically-rich lexical resource for ontologies

Semantic Web, IOS Press.

Judith Eckle-Kohler (a,*), John Philip McCrae (b) and Christian Chiarcos (c)

(a) Ubiquitous Knowledge Processing (UKP) Lab, Department of Computer Science, Technische Universität Darmstadt, and Information Center for Education, German Institute for International Educational Research, Germany, http://www.ukp.tu-darmstadt.de
(b) Cognitive Interaction Technology (CITEC), Semantic Computing Group, Universität Bielefeld, Germany, http://www.sc.cit-ec.uni-bielefeld.de
(c) Applied Computational Linguistics (ACoLi), Department of Computer Science and Mathematics, Goethe-University Frankfurt am Main, Germany, http://acoli.cs.uni-frankfurt.de
(*) Corresponding author, http://www.ukp.tu-darmstadt.de

Abstract. We introduce lemonUby, a new lexical resource integrated in the Semantic Web, which is the result of converting data extracted from the existing large-scale linked lexical resource UBY to the lemon lexicon model. The following data from UBY were converted: WordNet, FrameNet, VerbNet, the English and German Wiktionary, and the English and German entries of OmegaWiki, as well as links between pairs of these lexicons at the word sense level (links between VerbNet and FrameNet, VerbNet and WordNet, WordNet and FrameNet, WordNet and Wiktionary, and WordNet and German OmegaWiki). We linked lemonUby to other lexical resources and linguistic terminology repositories in the Linguistic Linked Open Data cloud and outline possible applications of this new dataset.

Keywords: Lexicon model, lemon, UBY-LMF, UBY, OLiA, ISOcat, WordNet, VerbNet, FrameNet, Wiktionary, OmegaWiki

1. Introduction

Recently, the language resource community has begun to explore the opportunities offered by the Semantic Web, led by the formation of the Linguistic Linked Open Data (LLOD) cloud and an increasing interest in making use of Linked Open Data principles in the context of Natural Language Processing (NLP) and Linguistics [7]. The use of RDF supports data integration and offers a large body of tools for accessing this data. Furthermore, the linked data approach gives rise to novel research questions in the context of language resources and their application.

For lexical resources, data integration has been a focus of interest for many years, resulting in numerous mappings and linkings of lexica, as well as in standards for representing lexical resources, such as the ISO 24613:2008 Lexical Markup Framework (LMF) [13]. In this context, the LLOD cloud can be considered a new data integration platform, enabling linkings not only between lexical resources, but also between lexical resources and other language resources.

We extend the LLOD cloud by a new lexical resource called lemonUby (http://www.lemon-model.net/lexica/uby/), which is the result of converting data extracted from the existing large-scale linked lexical resource UBY [14] (http://www.ukp.tu-darmstadt.de/uby/) to the lemon lexicon model. UBY has been developed independently from Semantic Web technology. It is LMF-based, and a subset of the LMF-compliant UBY lexicons is pairwise linked at the word sense level. The lemon lexicon model, in turn, has been developed for lexical resource integration on the Semantic Web [18]. This lexicon model serves as a common interchange format for lexical resources on the Semantic Web and has been designed to represent and share lexical resources that are linked to ontologies, i.e., ontology lexica.
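To make the idea of an ontology lexicon concrete, the following is a minimal sketch (not taken from lemonUby itself) of how a lemon lexical entry ties a word to an ontology concept. The entry, form and ontology URIs below are invented for illustration; only the lemon core vocabulary terms (LexicalEntry, canonicalForm, writtenRep, sense, reference) come from the model.

```python
# Emit a tiny lemon-style ontology-lexicon entry as N-Triples,
# using only the Python standard library.

LEMON = "http://lemon-model.net/lemon#"
EX = "http://example.org/lexicon/"      # hypothetical lexicon namespace
ONT = "http://example.org/ontology/"    # hypothetical ontology namespace

def triple(s, p, o):
    """Render one N-Triples statement; the object may be a URI or an English literal."""
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"@en'
    return f"<{s}> <{p}> {obj} ."

entry = EX + "like"
triples = [
    triple(entry, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", LEMON + "LexicalEntry"),
    triple(entry, LEMON + "canonicalForm", EX + "like#form"),
    triple(EX + "like#form", LEMON + "writtenRep", "like"),
    # The sense points into the ontology: this is the lexicon/ontology separation
    # that lemon is designed around.
    triple(entry, LEMON + "sense", EX + "like#sense1"),
    triple(EX + "like#sense1", LEMON + "reference", ONT + "likes"),
]
print("\n".join(triples))
```

Because the lexicon layer only *refers* to the ontology, the same entry could be re-pointed at a different ontology without touching its forms or syntactic information.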
Making use of a lexicon interchange format such as lemon is not only important for data integration, but also for the reuse of lexicons.

While many lexical resources have already been included in the LLOD cloud, e.g., [3,19,23,8,20], the LLOD cloud is still missing a large-scale lexical resource rich in lexical information on verbs, including aspects such as the syntactic behaviour and the semantic roles of a verb's arguments. Such information is crucial for lexicalizing relational knowledge: e.g., the relation like(Experiencer, Theme) can be lexicalized syntactically with a verb, as in "NP likes NP".

The new resource lemonUby addresses this gap: along with resources for word-level semantics (WordNet [12], the English and German Wiktionary (http://www.wiktionary.org), and the English and German entries of OmegaWiki (http://www.omegawiki.org)), we converted two syntactically rich resources from UBY to the lemon format: FrameNet [2] and VerbNet [15]. For further data integration, we established links between lemonUby and other language resources in the LLOD cloud.

2. Representing lexical-semantic resources as Linked Data: lemon

There has been significant work towards integrating lexical resources using RDF and Semantic Web principles [6], and many resources are already available as Linked Data. Yet, representing lexical resources in RDF does not per se make them semantically interoperable. Consider, for instance, existing conversions of WordNet and FrameNet [26,22], where a simple mapping to RDF is provided and augmented with OWL semantics, so that reasoning can be applied to the structure of the resource. However, the formats chosen for the RDF versions of WordNet and FrameNet are specific to the underlying data models of these two resources. Although the lexicons are complementary resources [1], it is difficult (i) to link them in this form on the Semantic Web, and (ii) to use them as interchangeable modules in NLP applications.

In order to overcome this difficulty, the lemon model [18] was proposed as a common interchange format for lexical resources on the Semantic Web. lemon has its historical roots in LMF and thus allows easy conversion from LMF-like, non-linked data resources. It links to data categories in annotation terminology repositories, and, most of all, it realises a separation of the lexicon and ontology layers, so that lemon lexica can be linked to existing ontologies in the linked data cloud.

The core model is illustrated in Fig. 1; it defines the basic elements used by all lexica published as linked data. In addition to this, there are a number of modules used to model linguistic description, syntax, morphology and relationships between lexica (more detail on the model and descriptions of the modules can be found at http://lemon-model.net).

[Fig. 1. The core of the lemon model: a LexicalEntry (a Word, Phrase or Part) has forms (canonicalForm, otherForm, abstractForm, each with a written representation) and senses (LexicalSense) whose references (prefRef, altRef, hiddenRef) point into the ontology.]

lemon has been used as a basis for integrating the data of the English Wiktionary with the RDF version of WordNet [19]. lemon's similarity to the WordNet model made this conversion straightforward, with only the need for a slight change in modelling to accommodate inflectional variants of lexical entries.

3. Large-scale integration of lexical-semantic resources: UBY and UBY-LMF

UBY is both a network of interlinked lexical-semantic resources and a project on the continuous integration and linking of lexical resources for NLP applications. It is motivated by the observation that an essential requirement in NLP is the availability of a wide range of lexical resources that can be used for many different NLP tasks.
In a continuous process, such resources are integrated into UBY by means of (i) making them interoperable and (ii) linking them to other resources in UBY at the sense level.

In UBY, interoperability is achieved by standardizing lexical resources according to UBY-LMF [9,10], a lexicon model which is an instantiation of LMF specifically designed for NLP. UBY-LMF has been developed to fully cover a wide range of heterogeneous lexical resources without information loss, which resulted in a fine-grained model of lexical information types (documented by manually maintained data categories from ISOcat, the implementation of the ISO 12620:2009 Data Category Registry) and was accompanied by an extension of the ISO standard LMF by a few elements. The extensibility of UBY-LMF was a primary design principle, in order to enable the integration of further (in particular automatically acquired) lexical resources.

As the mapping of UBY-LMF to lemon preserves this linking to ISOcat data categories, lemonUby is linked to ISOcat as well. The content of ISOcat is also available as Linked Data [27] and therefore provides a possible and direct way to interconnect lemonUby with other LLOD resources at the level of linguistic data categories.

However, ISOcat is not a formal ontology, but only a semistructured collection of terms, and while it serves as a repository of definitions, it does not provide a formal data model that can be applied to a resource: ISOcat contains doublets created by different data providers, and such superficially similar categories may actually have incompatible definitions, e.g., gerundive [DC-1294] is an "adjective formed from a verb" (excluding verbal nouns), whereas gerundive [DC-2243] is a "non-finite form (...) other than the infinitive" (including verbal nouns). Hierarchical relations between ISOcat terms are possible, but not obligatory, and, when compared with a full-fledged ontology, ISOcat terms that represent superconcepts for a bundle of features (e.g., ActiveVoice [DC-3064] dcif:isA VoiceProperty [DC-3551]) do not distinguish relational and categorial aspects: VoiceProperty could be either a property that assigns ActiveVoice to a particular unit of annotation, or a concept that defines the range of such a relation.

The mapping from UBY-LMF to lemon is motivated by an increase in interoperability with the Semantic Web and its resources, thereby making UBY available to a new group of potential users and novel applications. Beyond this, mapping UBY-LMF to lemon is an in-