From DBpedia and WordNet Hierarchies to LinkedIn and Twitter


Aonghus McGovern, Alexander O'Connor, Vincent Wade
ADAPT Centre, Trinity College Dublin, Dublin, Ireland
[email protected], Alex.OConnor@scss.tcd.ie, Vincent.Wade@cs.tcd.ie

Proceedings of the 4th Workshop on Linked Data in Linguistics (LDL-2015), pages 1-10, Beijing, China, July 31, 2015. © 2015 Association for Computational Linguistics and Asian Federation of Natural Language Processing.

Abstract

Previous research has demonstrated the benefits of using linguistic resources to analyze a user's social media profiles in order to learn information about that user. However, numerous linguistic resources exist, raising the question of which resource to choose. This paper compares Extended WordNet Domains with DBpedia. The comparison takes the form of an investigation of the relationship between users' descriptions of their knowledge and background on LinkedIn and their descriptions of the same characteristics on Twitter. The analysis applied in this study consists of four parts. First, information a user has shared on each service is mined for keywords. These keywords are then linked with terms in DBpedia/Extended WordNet Domains. These terms are ranked in order to generate separate representations of the user's interests and knowledge for LinkedIn and Twitter. Finally, the relationship between these separate representations is examined. In a user study with eight participants, the performance of this analysis using DBpedia is compared with its performance using Extended WordNet Domains. The best results were obtained when DBpedia was used.

1 Introduction

Natural Language Processing (NLP) techniques have been shown in studies such as Gao et al. (2012) and Vosecky et al. (2013) to be successful in extracting information from a user's social media profiles. In these studies, linguistic resources are employed to perform NLP tasks such as concept identification, Named Entity Recognition, etc. However, as Chiarcos et al. (2014) argue, significant interoperability issues are posed by the fact that linguistic resources 'are not only growing in number, but also in their heterogeneity'. They further argue that the best way to overcome these issues is to use linguistic resources that conform to Linked Open Data principles. Chiarcos et al. divide such resources into two categories, distinguishing between strictly lexical resources such as WordNet and general knowledge bases such as DBpedia.

The study described in this paper examines a single resource of each type. WordNet is considered a representative example of a purely lexical resource given the extent of its use in research (a list of publications involving WordNet is maintained at http://lit.csci.unt.edu/~wordnet/). DBpedia is considered a representative example of a knowledge base because its information is derived from Wikipedia, the quality of whose knowledge almost matches that of Encyclopedia Britannica (Giles, 2005). These resources are compared by means of an investigation of the relationship between users' descriptions of their knowledge and background on LinkedIn and their descriptions of the same characteristics on Twitter.

Both LinkedIn and Twitter allow users to describe their interests and knowledge by (i) filling in profile information and (ii) posting status updates. However, the percentage of users who post status updates on LinkedIn is significantly lower than the percentage who do so on Twitter (Bullas, 2015). On the other hand, LinkedIn users fill in far more of their profiles on average than Twitter users do (Abel, Henze, Herder, and Krause, 2010).
Given the different ways in which users use these services, it is possible that they provide different representations of their interests and knowledge on each one. For example, a user may indicate a new-found interest in 'Linguistics' through their tweets before they list this subject on their LinkedIn profile. This study examines the relationship between users' descriptions of their interests and knowledge on each service.

2 Related Work

Hauff and Houben (2011) describe a study that investigates whether a user's bookmarking profile on the sites Bibsonomy, CiteULike and LibraryThing can be inferred using information obtained from that user's tweets. Hauff and Houben generate separate 'knowledge profiles' for Twitter and for the bookmarking sites. These profiles consist of weighted lists of the terms that appear in the user's tweets and bookmarking profiles, respectively. The authors' approach is hindered by noise introduced by tweets that are unrelated to the user's learning activities. This problem could be addressed by enriching the information found in a user's profiles with structured data from a linguistic resource.

However, there are often multiple possible interpretations of a term. For example, the word 'bank' has an entirely different interpretation when it appears to the right of the word 'river' than when it appears to the right of the word 'merchant'. When linking a word with information contained in a linguistic resource, the correct interpretation of the word must be chosen. The NLP technique of Word Sense Disambiguation (WSD) addresses this issue. Two different approaches to WSD are described below.

Magnini et al. (2002) perform WSD using WordNet together with the domain labels provided by the WordNet Domains project (http://wndomains.fbk.eu/). This project assigned domain labels to WordNet synsets in accordance with the Dewey Decimal Classification. However, WordNet has been updated with new synsets since Magnini et al.'s study. Therefore, the study described in this paper uses the Extended WordNet Domains labels created by González et al. (2012). Not only do these labels provide greater synset coverage, but González et al. also report better WSD performance with Extended WordNet Domains than with the original WordNet Domains.

Mihalcea and Csomai (2007) describe an approach for identifying the Wikipedia articles relevant to a piece of text. Their approach employs a combination of the Lesk algorithm and Naïve Bayes classification. Since DBpedia URIs are created from Wikipedia article titles, this approach can also be used to identify DBpedia entities in text.
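The gloss-overlap idea behind the Lesk component can be illustrated compactly. The following Python sketch, which assumes NLTK and its WordNet data are installed, shows only simplified Lesk; it is not Mihalcea and Csomai's system, which additionally trains a Naïve Bayes classifier on Wikipedia data. The function name and the toy contexts are ours, for illustration.

```python
# A minimal simplified-Lesk sketch using NLTK's WordNet interface
# (requires: pip install nltk, then nltk.download('wordnet')).
# Illustrative only: Mihalcea and Csomai's system combines Lesk
# with a Naive Bayes classifier trained on Wikipedia data.
from nltk.corpus import wordnet as wn

def simplified_lesk(word, context_words):
    """Return the synset of `word` whose gloss/examples overlap the context most."""
    context = {w.lower() for w in context_words}
    best_synset, best_overlap = None, -1
    for synset in wn.synsets(word):
        # The sense's 'signature': its definition plus example sentences.
        signature = set(synset.definition().lower().split())
        for example in synset.examples():
            signature.update(example.lower().split())
        overlap = len(signature & context)
        if overlap > best_overlap:
            best_synset, best_overlap = synset, overlap
    return best_synset

# 'bank' beside 'river' vs. beside 'merchant', as in the example above:
print(simplified_lesk("bank", ["river", "water", "shore"]))
print(simplified_lesk("bank", ["merchant", "money", "deposit"]))
```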
Magnini et al.'s approach offers a means of analysing the skill, interest and course lists on a user's LinkedIn profile with regard to both WordNet and DBpedia. Unambiguous items in these lists can be linked directly with a WordNet synset or DBpedia URI. These unambiguous items can then provide a basis for interpreting ambiguous terms. For example, the unambiguous 'XML' could be linked with the 'Computer Science' domain, providing a basis for interpreting the ambiguous 'Java'.

The above analysis allows items in a user's tweets and LinkedIn profile to be linked with entities in WordNet/DBpedia. The labels associated with these entities can then be collected to form separate term-list representations of a user's tweets and of their LinkedIn profile information.

Plumbaum et al. (2011) describe a Social Web User Model as consisting of the following attributes: Personal Characteristics, Interests, Knowledge and Behavior, Needs and Goals, and Context. Inferring 'Personal Characteristics' (i.e. demographic information) from either a user's tweets or their LinkedIn profile information would require a very different kind of analysis from that described in this paper, for example that performed by Schler and Koppel (2006). As Plumbaum et al. define 'Behaviour' and 'Needs and Goals' as system-specific characteristics, information about a specific system would be required to infer them. Inferring 'Context' requires information such as the user's role in their social network.

Given the facts stated in the previous paragraph, the term lists generated by the analysis in this study are taken as describing neither 'Personal Characteristics', 'Behavior', 'Needs and Goals', nor 'Context'. However, a term-list format has been used to represent interests by Ma, Zeng, Ren, and Zhong (2011) and knowledge by Hauff and Houben (2011). As mentioned previously, users' Twitter and LinkedIn profiles can contain information about both characteristics. Thus, the term lists generated in this study are taken as representing a combination of the user's interests and knowledge.

3 Research Questions

3.1 Research Question 1

This question investigates the possibility that a user may represent their interests and knowledge differently on different social media services. It is as follows:

RQ1. To what extent does a user's description of their interests and knowledge through their tweets correspond with their description of the same characteristics through their profile information on LinkedIn?

For example, 'Linguistics' may be the most discussed item in the user's LinkedIn profile, but only the third most discussed item in their tweets. This question is similar to that investigated by Hauff and Houben (2011).

3.2 Research Question 2

Information describing a user's interests and knowledge can be used to recommend items to them, e.g. news articles. Furthermore, as user activity is significantly higher on Twitter than on LinkedIn (Bullas, 2015), users may discuss recent interests on the former without updating the latter. The second research question of this study is as follows:

RQ2. Can information obtained from a user's tweets be used to recommend items for that user's LinkedIn page?

These questions aim to investigate: (i) the variation between a user's description of their interests and knowledge through their LinkedIn profile and their description of these characteristics through their tweets; (ii) whether a user's tweets can be used to augment the information in their LinkedIn profile.

4 Method

The user study method is applied in this research. This decision is taken with reference to work such as Lee and Brusilovsky (2009) and Reinecke and Bernstein (2009), whose authors employ user studies in order to determine the accuracy with which their systems infer information about users.

4.1 Analysis

The analysis adopted in this study consists of four stages (illustrative sketches follow the list):

1. Identify keywords in the information the user has shared on each service.
2. Link these keywords to labels in DBpedia/Extended WordNet Domains.
3. Rank the linked terms to generate separate representations of the user's interests and knowledge for LinkedIn and Twitter.
4. Examine the relationship between these separate representations.
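As a compact illustration of how stages 1 to 3 might fit together, here is a minimal Python sketch. The tokenisation and the LABELS table are toy stand-ins of our own: in the study itself, keywords are linked to DBpedia entities or Extended WordNet Domains labels and disambiguated as described in Section 2, not looked up in a hand-made dictionary.

```python
# A toy sketch of stages 1-3, under assumed inputs. The LABELS table
# is a hypothetical stand-in for DBpedia/Extended WordNet Domains
# linking; note how the unambiguous 'xml' and the ambiguous 'java'
# both map to 'Computer Science', echoing the anchoring example above.
from collections import Counter

LABELS = {
    "xml": "Computer Science",
    "java": "Computer Science",
    "syntax": "Linguistics",
    "corpus": "Linguistics",
}

def extract_keywords(texts):
    """Stage 1: mine the user's shared text for keywords (bare tokenisation here)."""
    return [tok.strip(".,:;!?").lower() for text in texts for tok in text.split()]

def link_keywords(keywords):
    """Stage 2: link keywords to resource labels where a mapping exists."""
    return [LABELS[kw] for kw in keywords if kw in LABELS]

def rank_labels(labels):
    """Stage 3: rank labels by frequency, most discussed first."""
    return [label for label, _ in Counter(labels).most_common()]

tweets = ["Wrestling with Java and XML all day", "New corpus for syntax research!"]
linkedin_profile = ["Skills: Java, XML", "Interests: corpus linguistics"]

twitter_terms = rank_labels(link_keywords(extract_keywords(tweets)))
linkedin_terms = rank_labels(link_keywords(extract_keywords(linkedin_profile)))
print(twitter_terms)   # ['Computer Science', 'Linguistics']
print(linkedin_terms)  # ['Computer Science', 'Linguistics']
```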
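How stage 4 examines the relationship between the two ranked representations is not specified in this excerpt, so the following is only one plausible way to quantify their agreement: a mean rank difference over shared terms. The metric choice and the names here are our assumptions, not the study's published method.

```python
# Stage 4 (illustrative assumption): quantify how far the two ranked
# term lists diverge. 0.0 means the shared terms appear in identical
# positions; larger values mean greater divergence.

def mean_rank_difference(terms_a, terms_b):
    """Average absolute rank difference over the terms both lists share."""
    shared = set(terms_a) & set(terms_b)
    if not shared:
        return None  # no common terms, nothing to compare
    diffs = [abs(terms_a.index(t) - terms_b.index(t)) for t in shared]
    return sum(diffs) / len(diffs)

# Echoing RQ1's example: 'Linguistics' is the top item on LinkedIn
# but only the third most discussed item in the user's tweets.
linkedin_terms = ["Linguistics", "Computer Science", "Education"]
twitter_terms = ["Computer Science", "Education", "Linguistics"]
print(mean_rank_difference(linkedin_terms, twitter_terms))  # 1.33...
```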