Aggregated Semantic Matching for Short Text Entity Linking


Aggregated Semantic Matching for Short Text Entity Linking

Feng Nie1,∗ Shuyan Zhou2,∗ Jing Liu3,∗ Jinpeng Wang4, Chin-Yew Lin4, Rong Pan1
1Sun Yat-Sen University  2Harbin Institute of Technology  3Baidu Inc.  4Microsoft Research Asia
{fengniesysu, zhoushuyan555}@gmail.com, [email protected], {jinpwa, cyl}@microsoft.com, [email protected]

Abstract

The task of entity linking aims to identify concepts mentioned in a text fragment and link them to a reference knowledge base. Entity linking in long text has been well studied in previous work. However, short text entity linking is more challenging since the texts are noisy and less coherent. To better utilize the local information provided in short texts, we propose a novel neural network framework, Aggregated Semantic Matching (ASM), in which two different aspects of semantic information between the local context and the candidate entity are captured via representation-based and interaction-based neural semantic matching models, and the two matching signals then work jointly for disambiguation with a rank aggregation mechanism. Our evaluation shows that the proposed model outperforms the state of the art on public tweet datasets.

Tweet:       The vile #Trump humanity raises its gentle face in Canada ... chapeau to #Trudeau
Candidates:  Donald Trump, Trump (card games), ...

Table 1: An illustration of short text entity linking, with the mention Trump underlined.

1 Introduction

The task of entity linking aims to link a mention that appears in a piece of text to an entry (i.e., entity) in a knowledge base. For example, as shown in Table 1, given the mention Trump in a tweet, it should be linked to the entity Donald Trump1 in Wikipedia. Recent research has shown that entity linking helps to better understand the text of a document (Schuhmacher and Ponzetto, 2014) and benefits several tasks, including named entity recognition (Luo et al.) and information retrieval (Xiong et al., 2017b). The research on entity linking mainly considers two types of documents: long text (e.g., news articles and web documents) and short text (e.g., tweets). In this paper, we focus on short text, particularly tweet entity linking.

One of the major challenges in the entity linking task is ambiguity, where an entity mention can denote multiple entities in a knowledge base. As shown in Table 1, the mention Trump can refer to the U.S. president Donald Trump and also to the card-game term Trump (card games). Many recent approaches for long text entity linking take advantage of global context, which captures the coherence among the mapped entities for a set of related mentions in a single document (Cucerzan, 2007; Han et al., 2011; Globerson et al., 2016; Heinzerling et al., 2017). However, short texts like tweets are often concise and less coherent, and thus lack the information these global methods need. In the NEEL dataset (Weller et al., 2016), there are only 3.4 mentions in each tweet on average. Several studies (Liu et al., 2013; Huang et al., 2014) investigate collective tweet entity linking by pre-collecting and considering multiple tweets simultaneously. However, multiple texts are not always available for collection, and the process is time-consuming. Thus, we argue that an efficient entity disambiguation method which requires only a single short text (e.g., a tweet) and can well utilize local contexts is better suited to real-world applications.

In this paper, we investigate entity disambiguation in a setting where only local information is available.

∗ The corresponding author is Rong Pan. This work was done when the first and second authors were interns and the third author was an employee at Microsoft Research Asia.
Recent neural approaches have shown their superiority in capturing rich semantic similarities from mention contexts and entity contents. Sun et al. (2015) and Francis-Landau et al. (2016) proposed using convolutional neural networks (CNNs) with a Siamese (symmetric) architecture to capture the similarity between texts. These approaches can be viewed as representation-focused semantic matching models. A representation-focused model first builds a representation for a single text (e.g., a context or an entity description) with a neural network, and then conducts matching between the abstract representations of the two pieces of text. Even though such models capture distinguishable information from both the mention and the entity side, some concrete matching signals (e.g., exact match) are lost, since the matching between the two texts happens only after their individual abstract representations have been obtained. To enhance the representation-focused models, inspired by recent advances in information retrieval (Lu and Li, 2013; Guo et al., 2016; Xiong et al., 2017a), we propose using an interaction-focused approach to capture the concrete matching signals. The interaction-focused method builds local interactions (e.g., cosine similarities) between two pieces of text, and then uses neural networks to learn the final matching score based on these local interactions.

The representation- and interaction-focused approaches capture abstract- and concrete-level matching signals respectively, so they can complement each other if designed appropriately. One straightforward way to combine multiple semantic matching signals is to apply a linear regression layer that learns a static weight for each matching signal (Francis-Landau et al., 2016). However, we observe that the importance of the different signals varies from case to case. For example, as shown in Table 1, the context word Canada is the most important word for the disambiguation of Trudeau; in this case, the concrete-level matching signal is required. In contrast, for the tweet "#StarWars #theForceAwakens #StarWarsForceAwakens @StarWars", @StarWars is linked to the entity Star Wars2. Here the whole tweet describes the same topic, "Star Wars", so the abstract-level semantic matching signal is helpful. To address this issue, we propose a rank aggregation method to dynamically combine multiple semantic matching signals for disambiguation.

In summary, we focus on entity disambiguation by leveraging only the local information. Specifically, we propose using both a representation-focused model and an interaction-focused model for semantic matching and view them as complementary to each other. To overcome the issue of static weights in linear regression, we apply rank aggregation to combine the multiple semantic matching signals captured by the two neural models on multiple text pairs. We conduct extensive experiments to examine the effectiveness of our proposed approach, ASM, on both the NEEL dataset and the MSR tweet entity linking (MSR-TEL for short) dataset.

1 https://en.wikipedia.org/wiki/Donald_Trump
2 https://en.wikipedia.org/wiki/Star_Wars

Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018), pages 476–485, Brussels, Belgium, October 31 – November 1, 2018. © 2018 Association for Computational Linguistics.

2 Background

2.1 Notations

Given a tweet t, it contains a set of identified queries Q = (q1, ..., qn). Each query q in a tweet t consists of m and ctx, where m denotes an entity mention and ctx denotes the context of the mention, i.e., a piece of text surrounding m in the tweet t. An entity is an unambiguous page (e.g., Donald Trump) in a referent knowledge base (KB). Each entity e consists of ttl and desc, where ttl denotes the title of e and desc denotes the description of e (e.g., the article defining e).

2.2 An Overview of the Linking System

Typically, an entity linking system consists of three components: mention detection, candidate generation and entity disambiguation. In this section, we briefly present the existing solutions for the first two components. In the next section, we introduce our proposed aggregated semantic matching for entity disambiguation.

2.2.1 Mention Detection

Given a tweet t with a sequence of words w1, ..., wn, our goal is to identify the possible entity mentions in the tweet t. Specifically, every word wi in the tweet t requires a label to indicate whether it is an entity mention word or not. Therefore, we view mention detection as a traditional named entity recognition (NER) problem and use the BIO tagging schema. Given the tweet t, we aim to assign labels y = (y1, ..., yn) to the words in the tweet t:

    yi = B  if wi is the begin word of a mention;
         I  if wi is a non-begin word of a mention;
         O  if wi is not a mention word.

In our implementation, we apply an LSTM-CRF based NER tagging model which automatically learns contextual features for sequence tagging via recurrent neural networks (Lample et al., 2016).

2.2.2 Candidate Generation

Given a mention m, we use several heuristic rules to generate candidate entities, similar to (Bunescu and Pasca, 2006; Huang et al., 2014; Sun et al., 2015).

[Figure 1 (model overview): An overview of aggregated semantic matching for entity disambiguation. Mention detection and candidate generation run over the tweet data and the knowledge base; semantic matching is performed by a convolutional neural network with max-pooling and a neural relevance model with kernel-pooling; rank aggregation combines their signals to produce the linking results.]

We then apply rank aggregation to combine the matching signals captured by the two neural models on four text pairs.

3.1 Semantic Matching

Formally, given two texts T1 and T2, the semantic similarity of the two texts is measured as a score produced by a matching function based on the representation of each text:

    match(T1, T2) = F(Φ(T1), Φ(T2))    (1)

where Φ is a function to learn the text representation, and F is the matching function based on the interaction between the representations.

Existing neural semantic matching models can be categorized into two types: (a) the representation-focused model, which takes a complex representation learning function and uses a relatively simple matching function; and (b) the interaction-focused model, which usually takes a
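The two model types can be made concrete with a toy instantiation of Eq. (1). The sketch below uses mean pooling for Φ and cosine similarity for F on the representation side, and a pooled token-by-token cosine matrix on the interaction side; these pooling and similarity choices are illustrative assumptions, not the paper's actual CNN or kernel-pooling models.

```python
import numpy as np

def rep_focused_match(ctx_vecs: np.ndarray, ent_vecs: np.ndarray) -> float:
    """Representation-focused: compress each text (num_tokens x dim word
    vectors) into one abstract vector, then match the two vectors."""
    u = ctx_vecs.mean(axis=0)   # Phi(T1): abstract context representation
    v = ent_vecs.mean(axis=0)   # Phi(T2): abstract entity representation
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))  # F: cosine

def int_focused_match(ctx_vecs: np.ndarray, ent_vecs: np.ndarray) -> float:
    """Interaction-focused: build token-level local interactions first,
    then reduce the interaction matrix to a single score."""
    cn = ctx_vecs / np.linalg.norm(ctx_vecs, axis=1, keepdims=True)
    en = ent_vecs / np.linalg.norm(ent_vecs, axis=1, keepdims=True)
    interactions = cn @ en.T    # cosine similarity of every token pair
    # pool: best entity-side match per context token, averaged over context
    return float(interactions.max(axis=1).mean())
```

Note how an exact token overlap survives as a 1.0 entry in the interaction matrix, whereas the representation route blurs it into the pooled vectors before any comparison happens, which is the concrete- vs. abstract-level distinction described above.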
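The BIO schema from the mention detection step (Section 2.2.1) can be illustrated with a small helper that turns known mention token spans into a label sequence; this is a sketch of the labeling scheme itself, whereas the paper's LSTM-CRF tagger predicts these labels rather than deriving them from given spans.

```python
def bio_labels(tokens, mention_spans):
    """Map token-index mention spans (start, end), end exclusive, to B/I/O labels."""
    labels = ["O"] * len(tokens)          # default: not a mention word
    for start, end in mention_spans:
        labels[start] = "B"               # begin word of a mention
        for i in range(start + 1, end):
            labels[i] = "I"               # non-begin word of the same mention
    return labels
```

For example, `bio_labels("The vile #Trump humanity".split(), [(2, 3)])` yields `["O", "O", "B", "O"]`, matching the single-token mention Trump from Table 1.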
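The heuristic candidate generation of Section 2.2.2 can be sketched as a surface-form dictionary lookup. The dictionary contents and the tweet-specific rule of stripping a leading '@' or '#' are our own illustrative assumptions, not the exact rules of the cited work.

```python
def generate_candidates(mention, surface_dict, max_candidates=20):
    """Look up candidate entities for a mention in a surface-form dictionary,
    e.g. one built from KB titles, redirects, and anchor texts (assumed here)."""
    key = mention.lower().strip()
    candidates = set(surface_dict.get(key, []))
    if key.startswith(("@", "#")):        # tweet handles and hashtags
        candidates |= set(surface_dict.get(key.lstrip("@#"), []))
    return sorted(candidates)[:max_candidates]
```

With a toy dictionary `{"trump": ["Donald Trump", "Trump (card games)"]}`, both the mention "Trump" and the hashtag "#Trump" map to the two candidates shown in Table 1, which the disambiguation stage then ranks.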