This Thesis Titled, “MULTIMODAL LEGAL INFORMATION RETRIEVAL” and the Work Presented in It Are My Own

Total Pages: 16

File Type: pdf, Size: 1020 KB

PhD-FSTC-2018-31
The Faculty of Sciences, Technology and Communication, Université du Luxembourg
University of Bologna, Law School

DISSERTATION

Defence held on 27/04/2018 in Bologna to obtain the degree of DOCTEUR DE L’UNIVERSITÉ DU LUXEMBOURG EN INFORMATIQUE and DOTTORE DI RICERCA in Law, Science and Technology

by Kolawole John ADEBAYO, born on 31 January 1986 in Oyo (Nigeria).

MULTIMODAL LEGAL INFORMATION RETRIEVAL

Dissertation Defence Committee:
Dr. Leon van der Torre, Dissertation Supervisor (Professor, Université du Luxembourg, Luxembourg)
Dr. Guido Boella, Dissertation Supervisor (Professor, Università degli Studi di Torino, Italy)
Dr. Marie-Francine Moens, Chairman (Professor, Katholieke Universiteit, Belgium)
Dr. Henry Prakken, Vice-Chairman (Professor, Universiteit Utrecht, Netherlands)
Dr. Erich Schweighofer, Member (Professor, Universität Wien, Austria)
Dr. Monica Palmirani, Discussant (Professor, Università di Bologna, Italy)
Dr. Luigi Di Caro, Discussant (Professor, Università degli Studi di Torino, Italy)

Declaration of Authorship

I, Kolawole John ADEBAYO, declare that this thesis titled, “MULTIMODAL LEGAL INFORMATION RETRIEVAL” and the work presented in it are my own. I confirm that:

• This work was done wholly or mainly while in candidature for a research degree at this University.
• Where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated.
• Where I have consulted the published work of others, this is always clearly attributed.
• Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.
• I have acknowledged all main sources of help.
• Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself.

Signed:
Date:

“The limits of my language are the limits of my world.” (Ludwig Wittgenstein)

Abstract

Kolawole John ADEBAYO
MULTIMODAL LEGAL INFORMATION RETRIEVAL

The goal of this thesis is to present a multifaceted way of inducing semantic representation from legal documents as well as accessing information in a precise and timely manner. The thesis explores approaches to semantic information retrieval (IR) in the legal context with a technique that maps specific parts of a text to the relevant concept. The technique relies on text segments: it uses Latent Dirichlet Allocation (LDA), a topic modeling algorithm, to perform text segmentation, expands the concepts with Natural Language Processing techniques, and then associates the text segments with the concepts using a semi-supervised text similarity technique. This addresses two problems, user specificity in formulating queries and information overload, because querying a large document collection with a set of concepts is more fine-grained: specific information, rather than full documents, is retrieved. The second part of the thesis describes our Neural Network Relevance Model for E-Discovery Information Retrieval. Our algorithm is essentially a feature-rich ensemble system in which different component neural networks extract different relevance signals. The model has been trained and evaluated on the TREC Legal Track 2010 data. The performance of our models across the board shows that they capture the semantics and relatedness between query and document, which is important in the Legal Information Retrieval domain.

Subject: Legal Informatics.
Keywords: Convolutional Neural Network, Concept, Concept-based IR, CNN, E-Discovery, Eurovoc, EurLex, Information Retrieval, Document Retrieval, Legal Information Retrieval, Semantic Annotation, Semantic Similarity, Latent Dirichlet Allocation, LDA, Long Short-Term Memory, LSTM, Natural Language Processing, Neural Information Retrieval, Neural Networks, Text Segmentation, Topic Modeling
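To make the retrieval pipeline sketched in the abstract more concrete, here is a minimal, hypothetical illustration in Python: paragraphs of a made-up legal document are topic-modelled with LDA and then linked to made-up concept descriptions by TF-IDF cosine similarity. It is a sketch of the general idea only, using scikit-learn; it is not the implementation, concept inventory, or data used in the thesis.

```python
# A minimal sketch (not the thesis implementation): topic-model paragraphs with LDA,
# then link each paragraph to the closest legal concept by TF-IDF cosine similarity.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical inputs: paragraphs of a legal document and a small concept inventory
# (in the thesis the concepts come from EuroVoc; these strings are placeholders).
paragraphs = [
    "The controller shall obtain consent from the data subject before processing.",
    "Penalties for infringement are determined by the supervisory authority.",
    "The tenant must pay rent on the first day of each month.",
]
concepts = {
    "data protection": "personal data consent processing controller data subject",
    "sanctions": "penalty fine infringement supervisory authority enforcement",
    "tenancy": "lease rent tenant landlord payment obligation",
}

# 1) LDA over the paragraphs: adjacent units with similar topic mixtures could be
#    merged into segments (the merge step is omitted here for brevity).
counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(paragraphs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mix = lda.fit_transform(X)          # one topic distribution per paragraph

# 2) Associate each paragraph/segment with its most similar concept description.
tfidf = TfidfVectorizer(stop_words="english")
T = tfidf.fit_transform(list(concepts.values()) + paragraphs)
C, P = T[: len(concepts)], T[len(concepts):]
sims = cosine_similarity(P, C)
for text, row in zip(paragraphs, sims):
    best = list(concepts)[row.argmax()]
    print(f"{best:15s} <- {text[:60]}")
```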
Riassunto

The goal of this thesis is to present a multifaceted way of accessing information from a corpus of legal documents in a precise and efficient manner. The work begins with an exploration of approaches to semantic information retrieval (IR) in the legal context, with a technique that maps parts of a text to specific concepts of an ontology, based on a semantic segmentation of the texts. Natural language processing techniques are then used to associate the text segments with the concepts through a text similarity technique. Thus, by querying a large legal document with a set of concepts it is possible to retrieve finer-grained text segments rather than the complete original documents. The thesis concludes with the description of a neural network classifier for E-Discovery. This model was trained and evaluated on the data of the TREC 2010 Legal Track, obtaining a performance that shows that recent neural computing techniques can provide good solutions to information retrieval tasks ranging from document management to enterprise and scenario information related to E-Discovery.

Subject: Legal Informatics.
Keywords: Information retrieval, Semantic annotation, Semantic similarity, Textual alignment, Automated answering of legal questions, Neural networks, EurLex, Eurovoc, Keyphrase extraction.

Acknowledgements

I would like to thank the almighty God, the giver of life and the one in whom absolute power and grace reside. Many people have made the completion of this doctoral programme a reality. My thanks go to Prof. Monica Palmirani, the coordinator of the LAST-JD programme, and the other academic committee members for finding me suitable for this doctoral programme. I thank my supervisors, Prof. Guido Boella, Prof. Leon Van Der Torre, and Dr. Luigi Di Caro for their guidance, advice, countless periods of discussion, and for putting up with my numerous deadline-day paper review requests. Guido has been a father figure, and I could not have asked for more. Leon gave me tremendous support throughout the Ph.D. journey; I have participated in many conferences partly thanks to him. Luigi, a big part of the compliment goes to you! You never stopped reminding me that I could do better. I thank my wife, Oluwatumininu, and my beautiful daughters, Joyce and Hillary, for their love and understanding even when I am miles and months away. This thesis is only possible because of you! I thank my mum, Modupe, and my siblings, Olaide, Adeboyin, Adefemi, Abiola and Oluwatobi, for their love and support at all times. I do not forget other family members whom I cannot mention for the sake of space. God keep and bless you all.
I thank the friends and colleagues that I have met during the period of the doctoral research; you all have made my stay in the many countries where I have worked memorable. Marc Beninati, thanks for being a friend and a brother. Livio Robaldo, thanks for your encouragement; you gave me fire when I needed it the most! Dina Ferrari, thanks for your usual support. Rohan, thanks for our many useful discussions. To Antonia, my landlady in Barcelona, and Prof. Roig, my host: you made Barcelona eternally engraved in my heart. Lastly, I thank the European Commission, without whose scholarship I would not have been a part of this prestigious academic programme.

Contents

Declaration of Authorship
Abstract
Acknowledgements
Contents
List of Figures
List of Tables

I General Introduction

1 INTRODUCTION
1.1 Introduction
1.2 Legal Information Retrieval
1.3 Motivation
1.4 Problem Statement
1.5 Thesis Question
1.5.1 Research Approach
1.5.2 Research Goal
1.6 Contribution to Knowledge
1.7 Thesis Outline
1.8 Publication
1.9 Chapter Summary

2 INFORMATION RETRIEVAL AND RETRIEVAL MODELS
2.1 What is Information Retrieval?
2.2 The Goal of an Information Retrieval System
2.3 Desiderata of an Information Retrieval System
2.4 Definition
2.4.1 Document
2.4.2 Electronically Stored Information (ESI)
2.4.3 Collection
2.5 The Notion of Relevance
2.6 Information Retrieval Models
2.6.1 Boolean Model
2.6.2 Vector Space Model
Term Weighting Approaches
Latent Semantic Indexing
2.6.3 Probabilistic Models
2.6.4 Language Models
2.7 Evaluation of Information Retrieval Systems
2.7.1 Precision
2.7.2 Recall
2.7.3 F-Measure
2.7.4 Mean Average Precision
2.7.5 Normalized Discounted Cumulative Gain
2.7.6 Accuracy
2.8 Approaches for Improving
Recommended publications
  • Domain-Based Sense Disambiguation in Multilingual Structured Data
    Domain-Based Sense Disambiguation in Multilingual Structured Data. Gábor Bella, Alessio Zamboni and Fausto Giunchiglia.
    Abstract. Natural language text is pervasive in structured data sets—relational database tables, spreadsheets, XML documents, RDF graphs, etc.—requiring data processing operations to possess some level of natural language understanding capability. This, in turn, involves dealing with aspects of diversity present in structured data such as multilingualism or the coexistence of data from multiple domains. Word sense disambiguation is an essential component of natural language understanding processes. State-of-the-art WSD techniques, however, were developed to operate on single languages and on corpora that are considerably different from structured data sets, such as articles, newswire, web pages, forum posts, or tweets. In this paper we present a WSD method that is designed for short text typically present in structured data, applicable to multiple languages.
    Techniques such as cross-lingual semantic matching [2], semantic search [7], or semantic service integration [13] were designed to tackle diversity in data and therefore invariably have some kind of built-in meaning extraction capabilities. In semantic search, natural language queries should be interpreted and matched to data in a robust way so that search is based on meaning and not on surface forms of words (a tourist’s query on ‘bars’ should also return establishments indicated as ‘winebar’, cf. fig. 1 a, but preferably no results on the physical unit of pressure). In classification tasks, natural-language labels indicating classes need to be formalised and compared to each other (establishments categorised as ‘malga’, i.e., Alpine huts specific to the region of Trento, should be classified as lodging facilities, cf.
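For contrast with the short-text setting this abstract describes, below is a minimal baseline sketch of dictionary-based WSD using NLTK's classic Lesk implementation; the label and its context are invented, and this is not the authors' method (their point is precisely that such generic techniques struggle on short structured-data labels).

```python
# A minimal baseline sketch (not the authors' method): classic Lesk disambiguation
# from NLTK applied to a short attribute value, the kind of text found in structured data.
import nltk
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)   # one-off corpus downloads
nltk.download("omw-1.4", quiet=True)

label = "cosy bar serving local wine near the station"   # hypothetical structured-data label
context = label.split()                # short labels give very little context for Lesk
sense = lesk(context, "bar", pos=wn.NOUN)
print(sense, "-", sense.definition() if sense else "no sense found")
```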
  • Role of Natural Language Processing in Information Retrieval; Challenges and Opportunities
    World Academy of Science, Engineering and Technology, International Journal of Computer and Information Engineering, Vol. 8, No. 12, 2014. Role of Natural Language Processing in Information Retrieval; Challenges and Opportunities. Khaled M. Alhawiti.
    Abstract—This paper aims to analyze the role of natural language processing (NLP). The paper will discuss the role in the context of automated data retrieval, automated question answer, and text structuring. NLP techniques are gaining wider acceptance in real life applications and industrial concerns. There are various complexities involved in processing the text of natural language that could satisfy the need of decision makers. This paper begins with the description of the qualities of NLP practices. The paper then focuses on the challenges in natural language processing. The paper also discusses major techniques of NLP. The last section describes opportunities and challenges for future research.
    Keywords—Data Retrieval, Information Retrieval, Natural Language Processing, Text Structuring.
    …business intelligence. The enterprise systems of searching the knowledge base may be developed based on ontologies or computing that is meaning-based. The textual information in these technologies is indexed and the knowledge base is tagged. However, tagging in the current web is not in a proper way semantically. Hence enterprise search methods do not result in meaningful information retrieval. There is a need of effective search methods for extracting the best and relevant information that could improve the decision making process [3]. One of the approaches of NLP is evidence-based NLP. It consists of three integrative iterative processes. The first step filters the search results to obtain a set of information that is
  • Topical Sequence Profiling and Cluster Labeling
    Topical Sequence Profiling. Tim Gollub, Nedim Lipka, Eunyee Koh, Erdan Genc, and Benno Stein. Bauhaus-Universität Weimar (<first name>.<last name>@uni-weimar.de) and Adobe Systems.
    Abstract—This paper introduces the problem of topical sequence profiling. Given a sequence of text collections such as the annual proceedings of a conference, the topical sequence profile is the most diverse explicit topic embedding for that text collection sequence that is both representative and minimal. Topic embeddings represent a text collection sequence as numerical topic vectors by storing the relevance of each text collection for each topic. Topic embeddings are called explicit if human readable labels are provided for the topics. A topic embedding is representative for a sequence, if for each text collection the percentage of documents that address at least one of the topics exceeds a predefined threshold. If no topic can be removed from the embedding without losing representativeness, the embedding is minimal. From the set of all minimal representative embeddings, the one with the highest mean topic variance is sought and termed as the topical sequence profile.
    …the goal of topical sequence profiling is to showcase research topics that peak as “hot topic” in distinct years but show a significant decline throughout the remaining years. We argue that, in contrast to topics that never peak or that constantly belong to the “usual suspects”, especially from these topics valuable insights can be expected.
    A. Problem Definition. The problem of topical sequence profiling can be stated as follows. Given a sequence of text collections D, D = (D1, D2, ..., Dn), where each D ∈ D is a set of documents, find the most diverse, explicit topic embedding T* for the sequence
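The definitions above lend themselves to a small worked sketch: a boolean "document addresses topic" tensor, a representativeness check against a coverage threshold, a greedy pruning pass towards a minimal embedding, and the mean-topic-variance score. The random data, the 0.6 threshold, and the greedy pruning order are illustrative assumptions, not the authors' algorithm.

```python
# A toy sketch of the stated definitions (not the authors' algorithm): given a
# boolean tensor saying which documents of which collection address which topic,
# test representativeness, greedily prune to a minimal embedding, and score it
# by mean topic variance. Shapes, data and the greedy order are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_collections, n_docs, n_topics = 4, 50, 6
addresses = rng.random((n_collections, n_docs, n_topics)) < 0.25  # doc "addresses" topic
threshold = 0.6                                                   # required coverage per collection

def representative(topic_idx):
    covered = addresses[:, :, topic_idx].any(axis=2)       # doc covered by any kept topic
    return (covered.mean(axis=1) >= threshold).all()       # coverage holds in every collection

# Topic embedding: relevance of each collection for each topic
# (here simply the share of its documents that address the topic).
embedding = addresses.mean(axis=1)                          # shape: collections x topics

# Greedy pruning towards a minimal representative embedding.
kept = list(range(n_topics))
for t in sorted(kept, key=lambda t: embedding[:, t].var()): # try low-variance topics first
    trial = [k for k in kept if k != t]
    if trial and representative(trial):
        kept = trial

profile_score = embedding[:, kept].var(axis=0).mean()       # mean topic variance of the profile
print("kept topics:", kept, "mean topic variance:", round(float(profile_score), 4))
```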
  • BI SEARCH AND TEXT ANALYTICS: New Additions to the BI Technology Stack
    SECOND QUARTER 2007. TDWI BEST PRACTICES REPORT. BI SEARCH AND TEXT ANALYTICS: New Additions to the BI Technology Stack. By Philip Russom.
    Research Sponsors: Business Objects, Cognos, Endeca, FAST, Hyperion Solutions Corporation, Sybase, Inc.
    Table of Contents:
    Research Methodology and Demographics
    Introduction to BI Search and Text Analytics
    Defining BI Search
    Defining Text Analytics
    The State of BI Search and Text Analytics
    Quantifying the Data Continuum
    New Data Warehouse Sources from the Data Continuum
    Ramifications of Increasing Unstructured Data Sources
    Best Practices in BI Search
    Potential Benefits of BI Search
    Concerns over BI Search
    The Scope of BI Search
    Use Cases for BI Search: Searching for Reports in a Single BI Platform; Searching for Reports in Multiple BI Platforms; Searching Report Metadata versus Other Report Content; Searching for Report Sections; Searching non-BI Content along with Reports; BI Search as a Subset of Enterprise Search; Searching for Structured Data
    BI Search and the Future of BI
    Best Practices in Text Analytics
    Potential Benefits of Text Analytics
    Entity Extraction
    Use Cases for Text Analytics: Entity Extraction as the Foundation of Text Analytics; Entity Clustering and Taxonomy Generation as Advanced Text Analytics; Text Analytics Coupled with Predictive Analytics; Text Analytics Applied to Semi-structured Data; Processing Unstructured Data in a DBMS
    Text Analytics and the Future of BI
  • Auto-Relevancy and Responsiveness Baseline II
    Auto-Relevancy and Responsiveness Baseline II: Improving Concept Search to Establish a Subset with Maximized Recall for Automated First Pass and Early Assessment Using Latent Semantic Indexing [LSI], Bigrams and WordNet 3.0 Seeding. Cody Bennett, TREC Legal Track (Automatic; TCDI): TCDI - http://www.tcdi.com
    Abstract. We experiment with manipulating the features at build time by indexing bigrams created from EDRM data and seeding the LSI index with thesaurus-like WordNet 3.0 strata. From experimentation, this produces fewer false positives and a smaller, more focused relevant set. The method allows concept searching using bigrams and WordNet senses in addition to singular terms increasing polysemous value and precision; steps towards a unification of Semantic and Statistical. Also, because of LSI and WordNet senses, WSD appears enhanced. We then apply an automated method for selecting search criteria, query expansion and concept searching from Reviewer Guidelines and the original Request for Production thereby returning a search result with scores across the Enron corpus for each topic. The result of the normalized cosine distance score for each document in each topic is then shifted based on the foundation of primes, golden standard, and golden ratio. …focus will be heavily skewed on the probability of attaining high Recall for the creation of a subset of the corpus.
    Main Experiment Methods. See the TREC website for details on the mock Requests for Production, Reviewer Guidelines per topic and other information regarding scoring and assessing. Team TCDI’s participation will be discussed without the repetition of most of that information.
    Baseline Participation. TCDI’s baseline submissions assume that by building a blind automated mechanism, the result is a distribution useful as a statistical snapshot, part of a knowledge and/or eDiscovery paradigm, and/or ongoing quality assurance and control within
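In a similar spirit (though not the paper's actual pipeline, corpus, or parameters), a small gensim sketch can index bigrams next to unigrams, seed documents with thesaurus expansions (a tiny hand-made stand-in for WordNet 3.0), build an LSI space, and rank documents against a query; the documents, thesaurus, and settings below are invented.

```python
# A rough sketch in the spirit of the paper (not its implementation): index bigrams
# alongside unigrams, seed documents with thesaurus expansions, build an LSI space,
# and rank documents for a query by cosine similarity in that space.
from gensim.corpora import Dictionary
from gensim.models import LsiModel, Phrases, TfidfModel
from gensim.similarities import MatrixSimilarity

docs = [
    "please review the power trading contract before disclosure",
    "the energy trading agreement was reviewed by counsel",
    "lunch menu for the houston office cafeteria",
]
thesaurus = {"contract": ["agreement"], "review": ["assessment"]}   # WordNet stand-in

tokenized = [d.split() for d in docs]
bigrams = Phrases(tokenized, min_count=1, threshold=1)              # detect collocations
expanded = []
for tokens in tokenized:
    extra = [s for t in tokens for s in thesaurus.get(t, [])]
    expanded.append(list(bigrams[tokens]) + extra)                   # unigrams+bigrams+seeds

dictionary = Dictionary(expanded)
bow = [dictionary.doc2bow(t) for t in expanded]
tfidf = TfidfModel(bow)
lsi = LsiModel(tfidf[bow], id2word=dictionary, num_topics=2)
index = MatrixSimilarity(lsi[tfidf[bow]])

query = dictionary.doc2bow("trading agreement review".split())
for doc_id, score in sorted(enumerate(index[lsi[tfidf[query]]]), key=lambda x: -x[1]):
    print(round(float(score), 3), docs[doc_id])
```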
  • Semantic Analysis of Tweets Using LSA and SVD
    International Journal of Emerging Trends & Technology in Computer Science (IJETTCS), Web Site: www.ijettcs.org, Volume 5, Issue 4, July - August 2016, ISSN 2278-6856. Semantic Analysis of Tweets using LSA and SVD. S. G. Chodhary (R.D.I.K. & K.D. College, Badnera-Amravati, Maharashtra, India), Rajkumar S. Jagdale and Sachin N. Deshmukh (Department of CS & IT, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, Maharashtra, India).
    Abstract: This paper presents experimental results on Twitter data using an information retrieval technique called LSA, used to improve searching in text. LSA consists of a matrix operation known as SVD, the main component of LSA, which is used to reduce noise in text. The proposed methodology has given an accuracy of 84% on Twitter data.
    Keywords: Semantic Search, Latent Semantic Analysis, Singular Value Decomposition.
    1. INTRODUCTION. The aim of semantic search is to enhance search accuracy by realizing the intent of search and related meaning of the word in the searching document or in a large text file. …of search, concept matching, variation of words, synonyms, general and special queries and natural language queries to provide accurate search results [2]. Most of the advanced semantic search engines integrate some of the above factors to provide the most relevant result. There are two types of searching, navigational and information search [3]. Navigational search is intended to find a particular document, while in information search the search engine is provided with a broad topic for which there may be a large number of relevant results. This approach of semantic search is closely related with exploratory search. Also there is a list of semantic search systems which
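The LSA-plus-SVD pipeline described in the abstract reduces to TF-IDF weighting, truncated SVD, and cosine ranking; a compact illustration with invented tweets and two latent dimensions (not the paper's setup or data) follows.

```python
# A compact illustration of the LSA-via-SVD idea: TF-IDF, truncated SVD, cosine ranking.
# The tweets, the query, and the choice of 2 latent dimensions are assumptions.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

tweets = [
    "traffic jam on the highway again this morning",
    "huge congestion near the city centre, avoid main street",
    "loving the new coffee place downtown",
    "best espresso in town, great little cafe",
]
lsa = make_pipeline(TfidfVectorizer(stop_words="english"),
                    TruncatedSVD(n_components=2, random_state=0),
                    Normalizer(copy=False))
X = lsa.fit_transform(tweets)                    # tweets in the 2-D latent space
query_vec = lsa.transform(["coffee shop recommendations"])
for score, tweet in sorted(zip(cosine_similarity(query_vec, X)[0], tweets), reverse=True):
    print(round(float(score), 3), tweet)
```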
  • Unsupervised Extraction and Clustering of Key Phrases from Scientific Publications
    Unsupervised Extraction and Clustering of Key Phrases from Scientific Publications. Xiajing Li, Uppsala University, Department of Linguistics and Philology, Master Programme in Language Technology. Master’s Thesis in Language Technology, 30 ECTS credits, September 25, 2020. Supervisors: Dr. Fredrik Wahlberg, Uppsala University; Dr. Marios Daoutis, Ericsson AB.
    Abstract. Mapping a research domain can be of great significance for understanding and structuring the state-of-art of a research area. Standard techniques for systematically reviewing scientific literature entail extensive selection and intensive reading of manuscripts, a laborious and time consuming process performed by human experts. Researchers have spent efforts on automating methods in one or more sub-tasks of the reviewing process. The main challenge of this work lies in the gap in semantic understanding of text and background domain knowledge. In this thesis we investigate the possibility of extracting keywords from scientific abstracts in an automated way. We intended to use the categories of these keywords to form a basis of a classification scheme in the context of systematically mapping studies. We propose a framework by joint unsupervised keyphrase extraction and semantic keyphrase clustering. Specifically, we (1) explore the effect of domain relevance and phrase quality measures in keyphrase extraction; (2) explore the effect of knowledge graph based word embedding in embedding representation of phrase semantics; (3) explore the effect of clustering for grouping semantically related keyphrases. Experiments are conducted on a dataset of publications pertaining the domain of “Explainable Artificial Intelligence (XAI)”. We further test the performance of clustering using terms and labels from publicly available academic taxonomies and keyword databases.
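A rough, self-contained illustration of the extract-then-cluster idea is sketched below; it is not the thesis framework: plain TF-IDF vectors stand in for the knowledge-graph embeddings, the candidate selection is a crude top-n-gram heuristic, and the abstracts are invented.

```python
# Illustrative only: score candidate n-grams per abstract, keep the top ones,
# then group them with K-means. TF-IDF vectors stand in for KG embeddings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "explainable artificial intelligence methods for deep neural networks",
    "post hoc explanation and feature attribution for black box models",
    "graph embeddings for semantic clustering of research keywords",
]
# 1) Candidate keyphrases: highest TF-IDF uni/bi-grams of each abstract.
vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vec.fit_transform(abstracts)
terms = np.array(vec.get_feature_names_out())
candidates = sorted({terms[i] for row in X.toarray() for i in row.argsort()[-5:]})

# 2) Cluster the keyphrases in the same vector space (a stand-in for KG embeddings).
phrase_vecs = vec.transform(candidates)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(phrase_vecs)
for cluster in range(3):
    print(cluster, [c for c, l in zip(candidates, labels) if l == cluster])
```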
  • Analysis on Semantic Web Information Retrieval Methodologies
    Advances in Engineering Research (AER), volume 142. International Conference for Phoenixes on Emerging Current Trends in Engineering and Management (PECTEAM 2018). Literature Survey: Analysis on Semantic Web Information Retrieval Methodologies. K. Ezhilarasi (Research Scholar, Anna University, India), G. Maria Kalavathy (Professor, St. Joseph’s College of Engineering, India).
    Abstract: The Semantic web is the extension of an existing web that defines a standard by which, information is given in well-defined meaning and enables the machines to understand information. Many kinds of research are going in semantic web space. Researchers follow different approaches to retrieve data from the semantic web. This paper is to investigate the existing situation of the Semantic Web with the focus on effective information retrieval. Based on the architecture of semantic search engine, list of parameters (like ranking algorithm, reasoning mechanism) is framed to carry out a systematic analysis of different techniques proposed by the researchers. This survey identifies around 20 unique models that retrieve the data from the semantic web and information systems.
    The focus of semantic web is to make the web as machine-understandable by declaring annotations, where automated agents will be able to understand the content on the web, establish the relationship between them and take logical decisions to accomplish the complex task with minimum human interaction. Ontology is generally defined as formal vocabulary and is considered as one of the main pillars of the semantic web. This is used to define the world by declaring the concepts and their relationships in an unambiguous way. The Semantic Web search engines aim is to fetch the reliable, precise, relevant information what actually user
  • Reducing Polysemy in Wordnet
    UNIVERSITY OF TRENTO, DIPARTIMENTO DI INGEGNERIA E SCIENZA DELL’INFORMAZIONE, 38050 Povo – Trento (Italy), Via Sommarive 14, http://www.disi.unitn.it. REDUCING POLYSEMY IN WORDNET. Kanjana Jiamjitvanich and Mikalai Yatskevich. December 2008. Technical Report # DISI-08-085.
    Reducing polysemy in WordNet. Kanjana Jiamjitvanich, Mikalai Yatskevich, Department of Information and Communication Technology, University of Trento, Italy.
    Abstract. A known problem of WordNet is that it is too fine-grained in its sense definitions. For instance, it does not distinguish between homographs and polysemes. This distinction is crucial in many natural language processing tasks. In this paper we propose to distinguish only between homographs within WordNet data while merging all polysemous senses. The ultimate goal of this exercise is to compute a more coarse-grained version of linguistic database. In order to achieve this task we propose to merge all polysemous senses according to similarity scores computed by a hybrid algorithm. The key idea of the algorithm is to combine the similarity scores produced by diverse semantic similarity algorithms. We implemented the algorithm and evaluated it on the dataset extracted from the WordNet. The evaluation results are promising in comparison to the other state of the art approaches.
    1 Introduction. WordNet [14] is an electronic lexical database for English which stores lemmas and exceptional forms of words, word senses, sense glosses, semantic and syntactic relations between words, and other information related to the structure and use of the language. WordNet data is used by Controlled Vocabulary (CV) [?] component of Sweb [?] knowledge management system. CV stores background knowledge to be used by the other components of the system.
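The core operation, merging senses whose pairwise similarity exceeds a threshold, can be sketched with NLTK's WordNet interface; here Wu-Palmer similarity stands in for the report's hybrid score, and the 0.9 threshold is an arbitrary assumption.

```python
# A minimal sketch of the core idea (merging polysemous senses whose pairwise
# similarity is high); the hybrid scoring of the report is replaced here by
# NLTK's Wu-Palmer similarity, and the 0.9 threshold is an arbitrary assumption.
from itertools import combinations

import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

senses = wn.synsets("bank", pos=wn.NOUN)
merged = []
for s1, s2 in combinations(senses, 2):
    sim = s1.wup_similarity(s2) or 0.0
    if sim >= 0.9:                      # treat near-identical senses as one coarse sense
        merged.append((s1.name(), s2.name(), round(sim, 2)))

print(f"{len(senses)} fine-grained noun senses of 'bank'")
for pair in merged:
    print("merge candidates:", pair)
```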
  • TWO-LAYERED ARCHITECTURE FOR PEER-TO-PEER CONCEPT SEARCH, Janakiram Dharanipragada, Fausto Giunchiglia, Harisankar Haridas and Uladzimir Kharkevich
    DISI, Via Sommarive 14, 38123 Povo, Trento (Italy), http://www.disi.unitn.it. TWO-LAYERED ARCHITECTURE FOR PEER-TO-PEER CONCEPT SEARCH. Janakiram Dharanipragada, Fausto Giunchiglia, Harisankar Haridas and Uladzimir Kharkevich. February 2011. Technical Report # DISI-11-351. Also: submitted to the 4th International Semantic Search Workshop, Semsearch'11, co-located with WWW'11.
    Two-layered architecture for peer-to-peer concept search. Janakiram Dharanipragada (Indian Institute of Technology, Madras), Fausto Giunchiglia (University of Trento, Italy), Harisankar Haridas (Indian Institute of Technology, Madras), Uladzimir Kharkevich (University of Trento, Italy).
    ABSTRACT. The current search landscape consists of a small number of centralized search engines posing serious issues including centralized control, resource scalability, power consumption and inability to handle long tail of user interests. Since, the major search engines use syntactic search techniques, the quality of search results are also low, as the meanings of words are not considered effectively. A collaboratively managed peer-to-peer semantic search engine realized using
    1. INTRODUCTION. Global scale search has become a vital infrastructure for the society in the current information age. But, the current search landscape is almost fully controlled by a small number of centralized search engines. Centralized search engines crawl the content available in the world wide web, index and maintain it in centralized data-centers. The user queries are directed towards the data-centers where the queries are processed internally and results are provided to the users.
  • Falcons Concept Search: a Practical Exploring Engine for Net Ontologies
    P. Sreedhar Reddy et al. / (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 3 (6), 2012, 5476-5481. Falcons Concept Search: A Practical Exploring Engine for Net Ontologies. P. Sreedhar Reddy, K. Morarjee, Department of CSE, CMRIT, Dist: R.R, A.P, India.
    Abstract—Reuse of knowledge bases and the semantic web are two promising areas in knowledge technologies. Given some user requirements, finding the suitable ontologies is an important task in both these areas. This paper discusses our work on Onto Search, a kind of “ontology Google”, which can help users find ontologies on the Internet. Onto Search combines Google Web APIs with a hierarchy visualization technique. It allows the user to perform keyword searches on certain types of “ontology” files, and to visually inspect the files to check their relevance. Onto Search system is based on Java, JSP, Jena and JBoss technologies.
    Keywords—Indexing, Ontologies ranking, Ontologies search, snippet generation, virtual document.
    …the results to the ones in a specific ontology. Within such a mode of interaction, users can quickly compare Ontologies and determine whether these Ontologies satisfy their needs by checking query-relevant Concepts as well as their contexts, i.e., structured snippets. The system also provides the detailed RDF description of each Concept and a summary of each ontology on demand. A demonstration of the system is given in the following.
    A Panoramic Approach to Integrated Evaluation of Ontologies in the Semantic Web: As the sheer volume of new knowledge increases, there is a need to find effective ways to convey and correlate
  • Semantic Similarity Analysis and Application in Knowledge Graphs
    Universidad Politécnica de Madrid, Escuela Técnica Superior de Ingenieros de Telecomunicación. SEMANTIC SIMILARITY ANALYSIS AND APPLICATION IN KNOWLEDGE GRAPHS. Doctoral Thesis. Ganggao Zhu, Máster en Software y Sistemas. 2017. Departamento de Ingeniería de Sistemas Telemáticos, Escuela Técnica Superior de Ingenieros de Telecomunicación, Universidad Politécnica de Madrid. Tutor: Carlos Ángel Iglesias Fernández, Doctor Ingeniero de Telecomunicación.
    Examining committee appointed by the Rector of the Universidad Politécnica de Madrid on the day __ of __ of __. President: Member: Member: Member: Secretary: Substitute: Substitute: The act of defence and reading of the Thesis took place on the day __ of __ of __ at the E.T.S.I. Telecomunicación, having obtained the grade of __. THE PRESIDENT, THE MEMBERS, THE SECRETARY.
    To my mother, to my father, thank you for everything.
    Acknowledgements. Foremost, I would like to express my sincere gratitude to my adviser, Carlos A. Iglesias, for the continuous support of my Ph.D. study and research, for his patience, motivation, enthusiasm, and immense knowledge. He provided me with an excellent atmosphere of freedom for doing research and implementing new ideas. His excellent guidance helped me in all the time of research and writing of this thesis. Besides, I would like to thank my fellow labmates for the discussions, lunch talk, pizza time, morning coffee game, working together before deadlines, and for all the fun we have had in the last five years.