INFO310 0 Advanced Topics in Model-Based Information Systems — Candidate 102

SEMANTIC TECHNOLOGIES IN SEARCH ENGINES: GOOGLE AND COMPETITORS

MARIO MARTINEZ REQUENA
[email protected]
Student number: 248948

Index

1. Introduction
2. Semantic search in Google
   1. How Google search engine works
   2. Knowledge Graph
   3. Knowledge Vault
   4. Google Hummingbird
   5. Minor semantic patents
      1. Identification of semantic units from within a search query
      2. Inferring User Interests
3. Competitors
   1. Kngine
   2. Wolfram Alpha
   3. Comparative Study
4. Conclusion and future directions
5. Personal opinion and difficulties throughout this work
6. References

1. Introduction

We are all living in the information age. Digital components are everywhere, from our cars to our health trackers, and our smartphones have become almost a part of us. Smartphones are now the primary way humans interact with the digital world: the number of "traditional computers" in use was surpassed by smartphones in 2011 [1]. This is the first thing to understand about this new era of human-to-machine interaction. If the smartphone can be seen as part of the "new human being", then our relationship with it needs to become more user-friendly, more organic. Applied to search engines, which nowadays serve as a kind of access point to our collective memory, this means they need a "question and answer" dynamic, a "human touch", and this is achieved in part by introducing elements of semantic search into traditional search engines. Semantic search, according to the definition provided by Wikipedia [2], aims to improve search accuracy by analysing the context and intent of the user. Both concepts matter because they can radically change what counts as the correct answer to the same search question; a conventional keyword engine would not even notice the difference.
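To make this concrete, here is a toy illustration in Python of the difference between matching a query as a string and matching it against the user's likely intent. The documents, the sense table and the disambiguation rule are all invented for the example; no real engine works this simply.

# Toy contrast between keyword matching and intent-aware matching.
# Everything here is illustrative: the documents, the entity senses
# and the context words are made up for this example.

DOCS = {
    "doc1": "The Jaguar XF reaches a top speed of 250 km/h.",
    "doc2": "The jaguar is the fastest big cat in the Americas.",
}

# A tiny "knowledge base": senses of the ambiguous word, each with
# context words that hint at that sense.
SENSES = {
    "jaguar (car)": {"speed", "engine", "horsepower", "km/h"},
    "jaguar (animal)": {"habitat", "prey", "cat", "americas"},
}

def keyword_match(query: str) -> list[str]:
    """Return every document containing any query word (pure string matching)."""
    words = set(query.lower().split())
    return [d for d, text in DOCS.items()
            if words & set(text.lower().split())]

def semantic_match(query: str) -> str:
    """Pick the sense whose context words overlap the query the most."""
    words = set(query.lower().split())
    return max(SENSES, key=lambda s: len(SENSES[s] & words))

print(keyword_match("jaguar speed"))   # ['doc1', 'doc2'] -- both match the string
print(semantic_match("jaguar speed"))  # 'jaguar (car)' -- "speed" reveals the intent

The keyword engine cannot tell the two documents apart; the intent-aware one uses the surrounding context word to decide which real-world entity the user means.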
This is why the world's leading search companies are introducing it into their powerful engines, and this paper discusses why and how.

2. Semantic search in Google

According to this ranking [3], Google is by far the leading search engine on the internet, which makes it the first subject of analysis. The Google search engine has been upgraded continuously: over its 18 years, the algorithm behind the engine has been changed many times, and the big changes are announced publicly by Google. One of the first big semantic changes Google introduced was the Knowledge Graph, on May 16, 2012, which aims to give users more contextual information and entity recognition around the search the user performs [4]. Apart from the Knowledge Graph, Google has also changed the engine itself. The most recent changes have been Google Caffeine, designed to return results faster by changing the way the crawlers index pages; Google Panda, which aimed to display higher-quality sites first; Google Penguin, which corrects errors from the Panda update and penalises sites that artificially inflate the rank of their pages; and finally, announced on September 26, 2013, the biggest algorithm change since 2001: Google Hummingbird. This upgrade aims, beyond the synonym handling already in place, to understand the context and intent of the user. In other words, it introduced semantic search into the algorithm. Even if Hummingbird is probably the biggest semantic change to the search engine, semantics had been present in Google for a long time before it; the earlier changes are not as big as Hummingbird, but they all help to make Google more semantic.

1. How Google search engine works

The whole process behind a Google search cannot be drawn as a single line, because half of the process runs constantly: crawling and indexing. Google sends out crawlers, called Googlebots, to surf the internet. They get through the web by following links from page to page. Apart from traditional links, they also crawl through books, maps, Wikipedia, the CIA World Factbook, etc. It is a continuous process, and because of that, sites that are updated frequently get crawled more often. A copy of each page is stored in a gigantic index, together with some data about it; this index also contains images.
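As a rough sketch of this crawl-and-index half of the pipeline, the following Python fragment follows links from page to page and builds an inverted index (term → pages containing it). The three pages stand in for the web and are entirely invented; a real crawler additionally handles politeness, deduplication, freshness and billions of pages.

# Minimal crawl-and-index sketch: follow links, map each term to the
# set of pages that contain it. PAGES simulates the web.
from collections import defaultdict

PAGES = {
    "a.html": ("semantic search engines", ["b.html"]),
    "b.html": ("knowledge graph entities", ["a.html", "c.html"]),
    "c.html": ("search engine ranking", []),
}

def crawl_and_index(seed: str) -> dict[str, set[str]]:
    index: dict[str, set[str]] = defaultdict(set)
    seen, frontier = set(), [seed]
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        text, links = PAGES[url]      # "fetch" the page (simulated)
        for term in text.split():     # record term -> page
            index[term].add(url)
        frontier.extend(links)        # follow the outgoing links
    return index

index = crawl_and_index("a.html")
print(sorted(index["search"]))        # ['a.html', 'c.html']

Because crawling never stops, the real index is constantly being refreshed while queries are served against it.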
From the search perspective, a user performs a search query. Google analyses and corrects the string of characters, voice command or image, and tries to understand it; this is the part that Hummingbird upgraded. Then, based on this analysis, it pulls pages from its index and ranks them using more than 200 internal parameters. These parameters are kept almost entirely secret; among them are the quality and freshness of a page and the number of users who visit it, among others. This is where SEO experts work, trying to perfect the details that make Google consider a page "good quality" in order to push it higher up the list. After this ranking, Google picks relevant pieces of each page to show, according to the search, and assembles the results page itself.

2. Knowledge Graph

One of Google's main statements is the following: "Google's mission is to organize the world's information and make it universally accessible and useful." The introduction of the Knowledge Graph follows from this statement. It is not a remarkable change to the search algorithm itself, but it is one of the first big steps Google has taken towards semantic technologies in its search engine. The Knowledge Graph is a knowledge base that contains information about entities and the relationships between them; it extracts this information from the text of Wikipedia, Wikidata and the CIA World Factbook. Essentially, it does not process the subject of your search as a string of characters to be found in a database; it treats your query as an entity, a real-world object or person, and, like any real object, that entity is related to other entities (a toy sketch of this idea closes this section). The entities can be classified by the way they are obtained:

Explicit entities: extracted directly, with semantic web technologies, from the structured mark-up of a webpage.

Implicit entities: referred to in, or derived from, the text of a page. Natural language processing algorithms are used to get these entities out of the text.

This type of knowledge graph has been used by other companies in different fields. Bing is the second-largest search engine and works very similarly to Google, so in 2013 Microsoft announced its Satori knowledge base, with close to no public information about how it works. Another popular search engine such as Yahoo!
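As promised above, here is a toy sketch of the entity idea: facts stored as (subject, relation, object) triples, a query resolved to a canonical entity instead of being matched as a string, and related facts collected as for an information panel. The triples and the alias table are invented examples, not Google's actual data or internal representation.

# Toy knowledge graph: triples plus entity resolution. All data is
# made up; real knowledge graphs hold billions of such facts.

TRIPLES = [
    ("Leonardo da Vinci", "painted", "Mona Lisa"),
    ("Mona Lisa", "located in", "Louvre"),
    ("Louvre", "located in", "Paris"),
]

# Different query strings that name the same real-world entity.
ALIASES = {"mona lisa": "Mona Lisa", "la gioconda": "Mona Lisa"}

def resolve(query: str) -> str | None:
    """Map a query string to a canonical entity (entity recognition)."""
    return ALIASES.get(query.lower())

def related(entity: str) -> list[tuple[str, str]]:
    """Collect every fact the entity takes part in, as for an info panel."""
    facts = [(r, o) for s, r, o in TRIPLES if s == entity]
    facts += [(f"{r} (of)", s) for s, r, o in TRIPLES if o == entity]
    return facts

entity = resolve("La Gioconda")   # two different strings, one entity
print(entity, related(entity))
# Mona Lisa [('located in', 'Louvre'), ('painted (of)', 'Leonardo da Vinci')]

Two syntactically unrelated queries ("mona lisa", "la gioconda") resolve to the same node and therefore to the same answers, which is exactly what string matching cannot do.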