Wikipedia and Wikidata Help Search Engines Understand Your Organization
PDF · 16 pages · 1,020 KB
Recommended publications
What Do Wikidata and Wikipedia Have in Common? An Analysis of Their Use of External References
Alessandro Piscopo, Pavlos Vougiouklis, Lucie-Aimée Kaffee, Christopher Phethean, Jonathon Hare, and Elena Simperl (University of Southampton, United Kingdom)

Abstract: Wikidata is a community-driven knowledge graph, strongly linked to Wikipedia. However, the connection between the two projects has been sporadically explored. We investigated the relationship between the two projects in terms of the information they contain by looking at their external references. Our findings show that while only a small number of sources is directly reused across Wikidata and Wikipedia, references often point to the same domain. Furthermore, Wikidata appears to use less Anglo-American-centred sources. These results …

From the first page: … one hundred thousand. These users have gathered facts about around 24 million entities and are able, at least theoretically, to further expand the coverage of the knowledge graph and continuously keep it updated and correct. This is an advantage compared to a project like DBpedia, where data is periodically extracted from Wikipedia and must first be modified on the online encyclopedia in order to be corrected. Another strength is that all the data in Wikidata can be openly reused and shared without requiring any attribution, as it is released under a CC0 licence.
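To make the notion of external references concrete: in Wikidata's RDF model every statement can carry reference nodes, and their URLs can be inspected directly from the public query service. The sketch below is a minimal illustration of that idea, not the authors' analysis pipeline; the example item Q42 and the SPARQLWrapper dependency are choices made here for illustration.

```python
# Minimal sketch (not the paper's pipeline): list the external reference URLs
# attached to the statements of a single Wikidata item via the public endpoint.
# Requires: pip install SPARQLWrapper
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://query.wikidata.org/sparql"
ITEM = "wd:Q42"  # example item (Douglas Adams); swap in any Wikidata Q-id

query = f"""
SELECT DISTINCT ?property ?refURL WHERE {{
  {ITEM} ?p ?statement .                  # statement node for each claim
  ?statement prov:wasDerivedFrom ?ref .   # reference node of the statement
  ?ref pr:P854 ?refURL .                  # P854 = "reference URL"
  ?property wikibase:claim ?p .
}}
LIMIT 50
"""

sparql = SPARQLWrapper(ENDPOINT, agent="reference-inspection-sketch/0.1")
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["property"]["value"], "->", row["refURL"]["value"])
```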
A Comparative Evaluation of Geospatial Semantic Web Frameworks for Cultural Heritage
Ikrom Nishanbaev, Erik Champion, and David A. McMeekin
Curtin University, Perth, Australia (School of Media, Creative Arts, and Social Inquiry; School of Earth and Planetary Sciences; School of Electrical Engineering, Computing and Mathematical Sciences); honorary affiliations at the CDHR (Acton, Australia) and the University of Western Australia
Heritage, journal article. Received: 14 July 2020; Accepted: 4 August 2020; Published: 12 August 2020

Abstract: Recently, many Resource Description Framework (RDF) data generation tools have been developed to convert geospatial and non-geospatial data into RDF data. Furthermore, there are several interlinking frameworks that find semantically equivalent geospatial resources in related RDF data sources. However, many existing Linked Open Data sources are currently sparsely interlinked. Also, many RDF generation and interlinking frameworks require a solid knowledge of Semantic Web and Geospatial Semantic Web concepts to successfully deploy them. This article comparatively evaluates features and functionality of the current state-of-the-art geospatial RDF generation tools and interlinking frameworks. This evaluation is specifically performed for cultural heritage researchers and professionals who have limited expertise in computer programming. Hence, a set of criteria has been defined to facilitate the selection of tools and frameworks. …
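As a concrete picture of what the surveyed generation tools produce, the following sketch publishes a single heritage site as geospatial RDF with a GeoSPARQL WKT geometry using rdflib. The namespace, site, and coordinates are invented for illustration and do not come from the article.

```python
# Minimal sketch: describe one heritage site as geospatial RDF (GeoSPARQL WKT).
# Requires: pip install rdflib. All URIs and coordinates below are illustrative.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

GEO = Namespace("http://www.opengis.net/ont/geosparql#")
EX = Namespace("http://example.org/heritage/")   # hypothetical namespace

g = Graph()
g.bind("geo", GEO)
g.bind("ex", EX)

site = EX["fremantle-prison"]
geom = EX["fremantle-prison/geometry"]

g.add((site, RDF.type, GEO.Feature))
g.add((site, RDFS.label, Literal("Fremantle Prison")))
g.add((site, GEO.hasGeometry, geom))
g.add((geom, RDF.type, GEO.Geometry))
g.add((geom, GEO.asWKT, Literal("POINT(115.7526 -32.0555)",
                                datatype=GEO.wktLiteral)))

# Serialize as Turtle; an interlinking framework could then align this
# resource with equivalent resources in other Linked Open Data sources.
print(g.serialize(format="turtle"))
```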
Wikipedia Knowledge Graph with DeepDive
The Workshops of the Tenth International AAAI Conference on Web and Social Media, Wiki: Technical Report WS-16-17
Thomas Palomares, Youssef Ahres, Juhana Kangaspunta, and Christopher Ré

Abstract: Despite the tremendous amount of information on Wikipedia, only a very small amount is structured. Most of the information is embedded in unstructured text and extracting it is a non trivial challenge. In this paper, we propose a full pipeline built on top of DeepDive to successfully extract meaningful relations from the Wikipedia text corpus. We evaluated the system by extracting company-founders and family relations from the text. As a result, we extracted more than 140,000 distinct relations with an average precision above 90%.

This paper is organized as follows: first, we review the related work and give a general overview of DeepDive. Second, starting from the data preprocessing, we detail the general methodology used. Then, we detail two applications that follow this pipeline along with their specific challenges and solutions. Finally, we report the results of these applications and discuss the next steps to continue populating Wikidata and improve the current system to extract more relations with a high precision.

Introduction: With the perpetual growth of web usage, the amount of unstructured data grows exponentially. Extracting facts and assertions to store them in a struc…

Background & Related Work: Until recently, populating the large knowledge bases relied on direct contributions from human volunteers as well as integration of existing repositories such as Wikipedia info boxes. These methods are limited by the available structured data and by human power.
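DeepDive itself specifies extraction and inference declaratively, so the sketch below is only a rough approximation of the candidate-generation step the paper builds on: pair ORG and PERSON mentions that co-occur in a sentence containing founder-like wording. It uses spaCy instead of DeepDive and omits the distant supervision and probabilistic inference that give the paper its precision.

```python
# Rough approximation of candidate generation for company-founder relations
# (not DeepDive itself): pair ORG and PERSON mentions that co-occur in a
# sentence containing a founder-like cue word.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
FOUNDER_CUES = {"founded", "founder", "founders", "cofounder", "established"}

def founder_candidates(text):
    doc = nlp(text)
    candidates = []
    for sent in doc.sents:
        tokens = {t.lower_ for t in sent}
        if not (tokens & FOUNDER_CUES):
            continue  # keep only sentences with a founder-like cue word
        orgs = [e.text for e in sent.ents if e.label_ == "ORG"]
        people = [e.text for e in sent.ents if e.label_ == "PERSON"]
        candidates.extend((org, person) for org in orgs for person in people)
    return candidates

print(founder_candidates("Larry Page and Sergey Brin founded Google in 1998."))
# A real system would score these candidates with distant supervision and
# probabilistic inference instead of accepting every co-occurrence.
```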
Knowledge Graphs on the Web – An Overview
Nicolas Heist, Sven Hertling, Daniel Ringler, and Heiko Paulheim
Data and Web Science Group, University of Mannheim, Germany
January 2020; arXiv:2003.00719v3 [cs.AI], 12 Mar 2020

Abstract: Knowledge Graphs are an emerging form of knowledge representation. While Google coined the term Knowledge Graph first and promoted it as a means to improve their search results, they are used in many applications today. In a knowledge graph, entities in the real world and/or a business domain (e.g., people, places, or events) are represented as nodes, which are connected by edges representing the relations between those entities. While companies such as Google, Microsoft, and Facebook have their own, non-public knowledge graphs, there is also a larger body of publicly available knowledge graphs, such as DBpedia or Wikidata. In this chapter, we provide an overview and comparison of those publicly available knowledge graphs, and give insights into their contents, size, coverage, and overlap.

Keywords: Knowledge Graph, Linked Data, Semantic Web, Profiling

1. Introduction: Knowledge Graphs are increasingly used as means to represent knowledge. Due to their versatile means of representation, they can be used to integrate different heterogeneous data sources, both within as well as across organizations. [8,9] Besides such domain-specific knowledge graphs which are typically developed for specific domains and/or use cases, there are also public, cross-domain knowledge graphs encoding common knowledge, such as DBpedia, Wikidata, or YAGO. [33] Such knowledge graphs may be used, e.g., for automatically enriching data with background knowledge to be used in knowledge-intensive downstream applications.
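The kind of profiling the chapter reports (contents, size, coverage) can be approximated against the public endpoints of these graphs. The sketch below is an illustrative probe of DBpedia, not the chapter's methodology; the class list is an arbitrary sample and heavy queries may time out on the public endpoint.

```python
# Illustrative profiling query (not the chapter's methodology): count instances
# of a few sample classes on the public DBpedia SPARQL endpoint.
# Requires: pip install SPARQLWrapper
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)

for cls in ["dbo:Person", "dbo:Place", "dbo:Organisation"]:  # arbitrary sample
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT (COUNT(?s) AS ?n) WHERE {{ ?s a {cls} . }}
    """)
    result = sparql.query().convert()
    print(cls, result["results"]["bindings"][0]["n"]["value"])
```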
Are Encyclopedias Dead? Evaluating the Usefulness of a Traditional Reference Resource
St. Cloud State University, theRepository at St. Cloud State, Library Faculty Publications, Library Services, 2012. Part of the Library and Information Science Commons.
Rachel S. Wexelbaum, Collection Management Librarian and Assistant Professor, Saint Cloud State University, Saint Cloud, Minnesota

Recommended citation: Wexelbaum, Rachel S., "Are Encyclopedias Dead? Evaluating the Usefulness of a Traditional Reference Resource" (2012). Library Faculty Publications. 26. https://repository.stcloudstate.edu/lrs_facpubs/26

Abstract:
Purpose – To examine past, current, and future usage of encyclopedias.
Design/methodology/approach – Review the history of encyclopedias, their composition, and usage by focusing on select publications covering different subject areas.
Findings – Due to their static nature, traditionally published encyclopedias are not always accurate, objective information resources. Intentions of editors and authors also come into question. A researcher may find more value in using encyclopedias as historical documents rather than resources for quick facts. …
Good Faith Collaboration: The Culture of Wikipedia
Joseph Michael Reagle Jr., foreword by Lawrence Lessig
The MIT Press, Cambridge, MA. Web edition, Copyright © 2011 by Joseph Michael Reagle Jr., CC-NC-SA 3.0

Wikipedia's style of collaborative production has been lauded, lambasted, and satirized. Despite unease over its implications for the character (and quality) of knowledge, Wikipedia has brought us closer than ever to a realization of the centuries-old pursuit of a universal encyclopedia. Good Faith Collaboration: The Culture of Wikipedia is a rich ethnographic portrayal of Wikipedia's historical roots, collaborative culture, and much debated legacy.

Contents include: Foreword; Preface to the Web Edition; Preface; 1. Nazis and Norms; 2. The Pursuit of the Universal Encyclopedia; 3. Good Faith Collaboration; 4. The Puzzle of Openness; …

"Reagle offers a compelling case that Wikipedia's most fascinating and unprecedented aspect isn't the encyclopedia itself — rather, it's the collaborative culture that underpins it: brawling, self-reflexive, funny, serious, and full-tilt committed to the project, even if it means setting aside personal differences. Reagle's position as a scholar and a member of the community makes him uniquely situated to describe this culture." —Cory Doctorow, Boing Boing

"Reagle provides ample data regarding the everyday practices and cultural norms of the community which collaborates to produce Wikipedia. His rich research and nuanced appreciation of the complexities of cultural digital media research are well presented. …"
Semantic Search
Philippe Cudré-Mauroux

Definition: Semantic Search regroups a set of techniques designed to improve traditional document or knowledge base search. Semantic Search aims at better grasping the context and the semantics of the user query and/or of the indexed content by leveraging Natural Language Processing, Semantic Web and Machine Learning techniques to retrieve more relevant results from a search engine.

Overview: Semantic Search is an umbrella term regrouping various techniques for retrieving more relevant content from a search engine. Traditional search techniques focus on ranking documents based on a set of keywords appearing both in the user's query and in the indexed content. Semantic Search, instead, attempts to better grasp the semantics (i.e., meaning) and the context of the user query and/or of the indexed content in order to retrieve more meaningful results.

Semantic Search techniques can be broadly categorized into two main groups depending on the target content:
• techniques improving the relevance of classical search engines, where the query consists of natural language text (e.g., a list of keywords) and results are a ranked list of documents (e.g., webpages);
• techniques retrieving semi-structured data (e.g., entities or RDF triples) from a knowledge base (e.g., a knowledge graph or an ontology) given a user query formulated either as natural language text or using a declarative query language like SPARQL.

Those two groups are described in more detail in the following section. For each group, a wide variety of techniques have been proposed, ranging from Natural Language Processing (to better grasp … grammatical tags (such as noun, conjunction or verb) to individual words) …
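The second group above can be illustrated end to end with a small sketch: a keyword-style query is first mapped to candidate entities via Wikidata's search API, and structured facts about the top candidate are then retrieved with SPARQL. This is a toy illustration of knowledge base search, not a full semantic search engine.

```python
# Toy illustration of knowledge-base search (the second group above):
# keyword query -> candidate entity lookup -> structured facts via SPARQL.
# Requires: pip install requests
import requests

HEADERS = {"User-Agent": "semantic-search-sketch/0.1"}

def search_entities(keywords, limit=3):
    # Wikidata's entity search API returns ranked candidate entities for a string.
    r = requests.get("https://www.wikidata.org/w/api.php", params={
        "action": "wbsearchentities", "search": keywords,
        "language": "en", "format": "json", "limit": limit,
    }, headers=HEADERS)
    r.raise_for_status()
    return r.json().get("search", [])

def describe(qid, limit=10):
    # Fetch a few direct ("truthy") statements of the entity via SPARQL.
    query = f"""
    SELECT ?p ?value WHERE {{
      wd:{qid} ?p ?value .
      FILTER(STRSTARTS(STR(?p), "http://www.wikidata.org/prop/direct/"))
    }} LIMIT {limit}
    """
    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": query, "format": "json"},
                     headers=HEADERS)
    r.raise_for_status()
    return [(b["p"]["value"], b["value"]["value"])
            for b in r.json()["results"]["bindings"]]

hits = search_entities("Eiffel Tower")       # step 1: entity candidates
if hits:
    top = hits[0]
    print(top["id"], "-", top.get("description", ""))
    for prop, value in describe(top["id"]):  # step 2: structured facts
        print(" ", prop, "=", value)
```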
SEKI@home, or Crowdsourcing an Open Knowledge Graph
Thomas Steiner (Universitat Politècnica de Catalunya, Department LSI, Barcelona, Spain) and Stefan Mirea (Computer Science, Jacobs University Bremen, Germany)

Abstract: In May 2012, the Web search engine Google has introduced the so-called Knowledge Graph, a graph that understands real-world entities and their relationships to one another. It currently contains more than 500 million objects, as well as more than 3.5 billion facts about and relationships between these different objects. Soon after its announcement, people started to ask for a programmatic method to access the data in the Knowledge Graph, however, as of today, Google does not provide one. With SEKI@home, which stands for Search for Embedded Knowledge Items, we propose a browser extension-based approach to crowdsource the task of populating a data store to build an Open Knowledge Graph. As people with the extension installed search on Google.com, the extension sends extracted anonymous Knowledge Graph facts from Search Engine Results Pages (SERPs) to a centralized, publicly accessible triple store, and thus over time creates a SPARQL-queryable Open Knowledge Graph. We have implemented and made available a prototype browser extension tailored to the Google Knowledge Graph, however, note that the concept of SEKI@home is generalizable for other knowledge bases.

1 Introduction. 1.1 The Google Knowledge Graph: With the introduction of the Knowledge Graph, the search engine Google has made a significant paradigm shift towards "things, not strings" [7], as a post on the official Google blog states. Entities covered by the Knowledge Graph include landmarks, celebrities, cities, sports teams, buildings, movies, celestial objects, works of art, and more. …
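The server side of such a crowdsourcing setup boils down to a collector that accepts extracted facts and writes them into a publicly accessible triple store. The sketch below shows only that step, batching (subject, predicate, object) facts into a SPARQL 1.1 INSERT DATA request; the endpoint URL and the example facts are placeholders, not the original SEKI@home infrastructure.

```python
# Sketch of the collection step only: push extracted (s, p, o) facts into a
# SPARQL 1.1 Update endpoint with INSERT DATA. The endpoint URL and facts
# below are placeholders, not the original SEKI@home infrastructure.
# Requires: pip install requests
import requests

UPDATE_ENDPOINT = "http://localhost:3030/okg/update"  # hypothetical Fuseki-style endpoint

def insert_facts(facts):
    # Each fact is (subject_uri, predicate_uri, object_term); the object is
    # passed already serialized as either <uri> or a quoted literal.
    triples = "\n".join(f"  <{s}> <{p}> {o} ." for s, p, o in facts)
    update = f"INSERT DATA {{\n{triples}\n}}"
    r = requests.post(
        UPDATE_ENDPOINT,
        data=update.encode("utf-8"),
        headers={"Content-Type": "application/sparql-update"},
    )
    r.raise_for_status()

# Example batch, as an extension might report it after parsing a results page.
insert_facts([
    ("http://example.org/entity/Eiffel_Tower",
     "http://example.org/prop/height",
     '"324 m"'),
    ("http://example.org/entity/Eiffel_Tower",
     "http://example.org/prop/architect",
     "<http://example.org/entity/Stephen_Sauvestre>"),
])
```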
Voice Assistant SEO Report for Brands, July 2019
Giving Voice to a Revolution.

Table of Contents: An Introduction to Voice Search Today // 3; Consumer Adoption Trends // 10; Voice Search Results Analysis // 16; Grading the Experts // 32; Practical Voice Assistant SEO Strategies // 36; Additional Resources // 39.

About Voicebot: Voicebot produces the leading independent research, online publication, newsletter and podcast focused on the voice and AI industries. Thousands of entrepreneurs, developers, investors, analysts and other industry leaders look to Voicebot each week for the latest news, data, analysis and insights defining the trajectory of the next great computing platform. At Voicebot, we give voice to a revolution.

About Magic + Co.: Magic + Co. is a next-gen creative services firm whose core expertise is in designing, producing, and executing on conversational strategy, the technology behind it, and associated marketing campaigns. Facts: 25 people, technologists, strategists, and creatives with focus on conversational design and technologies; 1 Webby Award Nominee and 3 Honorees; 3 years old and global leader; we've worked primarily with CPG, but our practice extends into utilities, governments, banking, entertainment, healthcare and is international in scope.

Methodology: Consumer survey data was collected online during the first week of January 2019 and included 1,038 U.S. adults age 18 or older that were representative of U.S. Census demographic averages. Findings were compared to earlier survey data collected in the first week of September 2018 that included responses from 1,040 U.S. adults. Voice search data results from Amazon Echo Smart Speakers with Alexa, Apple HomePod with Siri, Google Home and a Smartphone with Google Assistant, and Samsung Galaxy S10 with Bixby, were collected in Q2 2019. …
Applied Federated Learning: Improving Google Keyboard Query Suggestions
Timothy Yang*, Galen Andrew*, Hubert Eichner*, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, and Françoise Beaufays
Google LLC, Mountain View, CA, U.S.A.
arXiv:1812.02903v1 [cs.LG], 7 Dec 2018

Abstract: Federated learning is a distributed form of machine learning where both the training data and model training are decentralized. In this paper, we use federated learning in a commercial, global-scale setting to train, evaluate and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data. We describe our observations in federated training, compare metrics to live deployments, and present resulting quality increases. In whole, we demonstrate how federated learning can be applied end-to-end to both improve user experiences and enhance user privacy.

From the introduction: … timely suggestions are necessary in order to maintain relevance. On-device inference and training through FL enable us to both minimize latency and maximize privacy. In this paper, we use FL in a commercial, global-scale setting to train and deploy a model to production for inference – all without access to the underlying user data. Our use case is search query suggestions [4]: when a user enters text, Gboard uses a baseline model to determine and possibly surface search suggestions relevant to the input. For instance, typing "Let's eat at Charlie's" may display a web query suggestion to search for nearby restaurants of that name; other types of suggestions include GIFs and Stickers. Here, we …
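The aggregation step at the heart of this setup, federated averaging, is compact enough to sketch: each client computes an update on its own data, and the server only ever sees a weighted average of client weights. The NumPy sketch below is a generic illustration on a synthetic least-squares task, not Gboard's production system.

```python
# Generic federated averaging sketch (not Gboard's production system):
# clients train locally, the server only sees weighted-averaged weights.
import numpy as np

def local_update(weights, client_data, lr=0.1):
    # Placeholder local step: one gradient step of least-squares on (X, y).
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, clients):
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_weights.copy(), (X, y)))
        sizes.append(len(y))
    # Server aggregation: average client weights, weighted by local dataset size.
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 200, 80):  # unbalanced client datasets, as on real devices
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
print("estimated weights:", w)
```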
Falcon 2.0: An Entity and Relation Linking Tool over Wikidata
Ahmad Sakor (L3S Research Center and TIB, University of Hannover, Germany), Kuldeep Singh (Cerence GmbH and Zerotha Research, Aachen, Germany), Anery Patel (TIB, University of Hannover, Germany), and Maria-Esther Vidal (L3S Research Center and TIB, University of Hannover, Germany)
In Proceedings of the ACM International Conference on Information and Knowledge Management (CIKM '20), October 19–23, 2020, Virtual Event, Ireland. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3340531.3412777

Abstract: The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from a natural language text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, there are still limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata.

1 Introduction: Entity Linking (EL), also known as Named Entity Disambiguation (NED), is a well-studied research domain for aligning unstructured text to its structured mentions in various knowledge repositories (e.g., Wikipedia, DBpedia [1], Freebase [4] or Wikidata [28]). Entity linking comprises two sub-tasks. The first task is Named Entity Recognition (NER), in which an approach aims to identify entity labels (or surface forms) in an input sentence. …
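Falcon 2.0 is exposed as a web service, so a client can send short text and receive ranked Wikidata candidates back. The endpoint URL, query parameter, and response field names in the sketch below are assumptions based on the tool's public demo and may differ from the actual API.

```python
# Sketch of calling a Falcon 2.0-style linking service over HTTP. The endpoint
# URL and the response fields used below are assumptions and may differ from
# the actual API exposed by the tool.
# Requires: pip install requests
import requests

FALCON_ENDPOINT = "https://labs.tib.eu/falcon/falcon2/api"  # assumed endpoint

def link(text):
    r = requests.post(
        FALCON_ENDPOINT,
        params={"mode": "long"},            # assumed parameter
        json={"text": text},
        headers={"Content-Type": "application/json"},
    )
    r.raise_for_status()
    payload = r.json()
    # Assumed response shape: ranked Wikidata candidates for entities and relations.
    return payload.get("entities_wikidata", []), payload.get("relations_wikidata", [])

entities, relations = link("Who founded the Mozilla Foundation?")
print("entities: ", entities)
print("relations:", relations)
```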
New Generation of Social Networks Based on Semantic Web Technologies: The Importance of Social Data Portability
Workshop on Adaptation and Personalization for Web 2.0, UMAP'09, June 22–26, 2009
Liana Razmerita (Copenhagen Business School, Denmark), Martynas Jusevičius and Rokas Firantas (IT University of Copenhagen, Denmark)

Abstract: This article investigates several well-known social network applications such as Last.fm, Flickr and identifies social data portability as one of the main technical issues that need to be addressed in the future. We argue that this issue can be addressed by building social networks as Semantic Web applications with FOAF, SIOC, and Linked Data technologies, and prove it by implementing a prototype application using Java and core Semantic Web standards. Furthermore, the developed prototype shows how features from semantic websites such as Freebase and DBpedia can be reused in social applications and lead to more relevant content and stronger social connections.

1 Introduction: Social networking sites are developing at a very fast pace, attracting more and more users. Their application domain can be music sharing, photos sharing, videos sharing, bookmarks sharing, professional networks, and others. Despite their tremendous success social networking websites have a number of limitations that are identified and discussed within this article. Most of the social networking applications are "walled websites" and the "online communities are like islands in a sea" [1]. Lack of interoperability between data in different social networks applications limits access to relevant content available on different social networking sites, and limits the integration and reuse of available data and information.
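The portability argument rests on profiles being expressed in shared vocabularies such as FOAF, so that any application can re-import them instead of rebuilding the social graph. The sketch below builds such a profile with rdflib in Python rather than the authors' Java stack; the person, URIs, and mailbox are invented for illustration.

```python
# Minimal portable FOAF profile (illustrative; the paper's prototype used Java).
# Requires: pip install rdflib. All names, URIs, and the mailbox are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/people/")  # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)

alice = EX["alice#me"]
bob = URIRef("http://example.org/people/bob#me")

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice Example")))
g.add((alice, FOAF.mbox, URIRef("mailto:alice@example.org")))
g.add((alice, FOAF.homepage, URIRef("http://example.org/~alice")))
g.add((alice, FOAF.knows, bob))  # social connection, portable across sites

# Any FOAF/SIOC-aware application can re-import this Turtle document instead
# of asking the user to recreate their profile and connections from scratch.
print(g.serialize(format="turtle"))
```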