A Graph-Structured Dataset for Wikipedia Research

A graph-structured dataset for Wikipedia research
Nicolas Aspert, Volodymyr Miz, Benjamin Ricaud, and Pierre Vandergheynst
LTS2, EPFL, Station 11, CH-1015 Lausanne, Switzerland
[email protected]

ABSTRACT

Wikipedia is a rich and invaluable source of information. Its central place on the Web makes it a particularly interesting object of study for scientists. Researchers from different domains have used various complex datasets related to Wikipedia to study language, social behavior, knowledge organization, and network theory. While being a scientific treasure, the large size of the dataset hinders pre-processing and may be a challenging obstacle for potential new studies. This issue is particularly acute in scientific domains where researchers may not be technically and data-processing savvy. On the one hand, the size of Wikipedia dumps is large, which makes the parsing and extraction of relevant information cumbersome. On the other hand, the API is straightforward to use but restricted to a relatively small number of requests. The middle ground is the mesoscopic scale, where researchers need a subset of Wikipedia ranging from thousands to hundreds of thousands of pages, but no efficient solution exists at this scale.

In this work, we propose an efficient data structure to make requests and access subnetworks of Wikipedia pages and categories. We provide convenient tools for accessing and filtering viewership statistics, or "pagecounts", of Wikipedia web pages. The dataset organization leverages principles of graph databases that allow rapid and intuitive access to subgraphs of Wikipedia articles and categories. The dataset and deployment guidelines are available on the LTS2 website https://lts2.epfl.ch/Datasets/Wikipedia/.

[Figure 1: A subset of Wikipedia web pages with viewership activity (pagecounts). Left: the Wikipedia hyperlink network, where nodes correspond to Wikipedia articles and edges represent hyperlinks between the articles. Right: hourly page-view statistics of the same articles.]

KEYWORDS

Dataset, Graph, Wikipedia, Temporal Network, Web Logs

ACM Reference Format:
Nicolas Aspert, Volodymyr Miz, Benjamin Ricaud, and Pierre Vandergheynst. 2019. A graph-structured dataset for Wikipedia research. In Companion Proceedings of the 2019 World Wide Web Conference (WWW '19 Companion), May 13–17, 2019, San Francisco, CA, USA. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3308560.3316757

arXiv:1903.08597v1 [cs.IR] 20 Mar 2019. This paper is published under the Creative Commons Attribution 4.0 International (CC-BY 4.0) license. © 2019 IW3C2 (International World Wide Web Conference Committee). ACM ISBN 978-1-4503-6675-5/19/05.

1 INTRODUCTION

Wikipedia is one of the most visited websites in the world. Millions of people use it every day, searching for answers to questions ranging from biographies of popular figures to definitions of complex scientific concepts. Like any other website, Wikipedia stores web logs that contain viewership statistics of every page. The worldwide popularity of this free encyclopedia makes these records an invaluable resource of data for the research community. In this work, we present a convenient and intuitive graph-based toolset for researchers that eases access to this data and its further analysis.

The Wikimedia Foundation, the organization that hosts Wikipedia, makes the web activity records and the hyperlink structure of Wikipedia publicly available, so anyone can access the records either through an API or through the database dump files. Even though the data is well structured, efficient pre-processing and wrangling require data engineering skills. First, the dumps are very large, and it takes researchers a long time to load and filter them to get what they need to study a particular question. Second, although the API is well documented and easy to use, the number of queries and the response size are very limited.
Even though the API is quite convenient, it can cause reproducibility issues. The network of hyperlinks evolves with time, so the API can only provide the latest network configuration. To solve this problem, as a workaround, researchers use static pre-processed datasets. Two of the most popular datasets for Wikipedia network research are available on the SNAP archive: the Wikipedia Network of Hyperlinks [11] and the Wikipedia Network of Top Categories [6, 10, 16]. The initial publications referring to these datasets have been cited more than 1000 times, showing the high interest in such data. These archives were created from Wikipedia dumps in 2011 and 2013, respectively. However, Wikipedia has evolved since then, and the Wikipedia research community would benefit from being able to access more recent data.

Multiple studies have analyzed Wikipedia from a network science perspective and have used its network structure to improve Wikipedia itself or to gain insights into the collective behavior of its users. In [17], Zesch and Gurevych used the Wikipedia category graph as a natural language processing resource. Buriol et al. [4] studied the temporal evolution of the Wikipedia hyperlink graph. Bellomi and Bonato conducted a study [3] of the macro-structure of the English Wikipedia network and cultural biases related to specific topics. West et al. proposed an approach enabling the identification of missing hyperlinks in Wikipedia to improve the navigation experience [12].

Another direction of Wikipedia research focuses on pagecounts analysis. Moat et al. [9] used Wikipedia viewership statistics to gain insights into stock markets. Yasseri et al. [15] studied editorial wars in Wikipedia by analyzing activity patterns in the viewership dynamics of articles that describe controversial topics. Mestyán et al. [7] demonstrated that Wikipedia pagecounts can be used to predict the popularity of a movie. The collective memory phenomenon was studied in [5], where the authors analyzed visitor activity to evaluate the reaction of Wikipedia users to aircraft incidents.

The hyperlink network structure, on the one hand, and the viewership statistics (pagecounts) of Wikipedia articles, on the other hand, have attracted significant attention from the research community. Recent studies open new directions where these two datasets are combined. The emerging field of spatio-temporal data mining [2] has highlighted an increasing interest and a need for reproducible network datasets that contain dynamically changing components. Miz et al. [8] adopted an anomaly detection approach on graphs to analyze visitor activity in relation to real-world events.

Following the recent advances of scientific research on Wikipedia, in this work we focus on two components: the spatial component (the Wikipedia hyperlink network) and the temporal component (pagecounts). We design a database that allows querying this hybrid data structure conveniently (see Fig. 1). Since Wikipedia web logs are continuously updating, we designed this database in a way that makes its maintenance as easy and fast as possible.

[Figure 2: Wikipedia graph structure. In blue: articles and the hyperlinks referring to them. In red: category pages and the hyperlinks connecting pages or subcategories to parent categories. In green: a redirected article, i.e. Article 1 refers to Article 2 via the redirected page. In black: a redirection link. The blue dashed line is the new link created from the redirection.]

2 DATASET

There are multiple ways to access Wikipedia data, but none of them provides native support for a graph data structure. Therefore, if researchers want to study Wikipedia from the network science perspective, they have to create the graph themselves, which is usually very time-consuming. To do that, they need to pre-process large dumps of data or to use the limited API.

In spatio-temporal data mining [2], researchers are most interested in the dynamics of networks. Hence, when it comes to Wikipedia analysis, one needs to merge the hyperlink network with the page-view statistics of the web pages. This is another large chunk of data, which requires another round of time-consuming pre-processing. After the pre-processing and the merge are completed, researchers usually realize that they do not need the full network and the entire history of visitor activity. Our database addresses this mesoscopic scale, making it possible to:

• Get relatively large subgraphs of Wikipedia pages (1K–100K nodes) without redirects.
• Use filters by the number of page views, category/sub-category, or graph measures (n-hop neighborhood of a node, node degree, PageRank, centrality measures, and others).
• Get viewership statistics for a subset/subgraph of Wikipedia pages.
• Get a subgraph of pages with a number of visits higher than a threshold, in a predefined range of dates.
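To make these query types concrete, the sketch below emulates two of them on a toy in-memory graph with networkx: selecting the subgraph of pages whose view counts exceed a threshold over a date range, and extracting an n-hop neighborhood. It is an illustration only, not the dataset's actual query interface; the article names and counts are invented.

```python
# Minimal sketch (not the dataset's query interface): emulate two query types
# from the list above on a toy graph. Article names and counts are invented.
import networkx as nx

# Hyperlink graph: nodes are articles, directed edges are hyperlinks.
G = nx.DiGraph()
G.add_edges_from([
    ("Article_A", "Article_B"),
    ("Article_B", "Article_C"),
    ("Article_C", "Article_A"),
    ("Article_A", "Article_D"),
])

# Pagecounts, simplified here to daily totals per article.
pagecounts = {
    "Article_A": {"2019-03-01": 1200, "2019-03-02": 900},
    "Article_B": {"2019-03-01": 50,   "2019-03-02": 80},
    "Article_C": {"2019-03-01": 700,  "2019-03-02": 1100},
    "Article_D": {"2019-03-01": 10,   "2019-03-02": 5},
}

def subgraph_by_views(graph, counts, dates, threshold):
    """Induced subgraph of pages whose total views over `dates` exceed `threshold`."""
    keep = [n for n in graph
            if sum(counts.get(n, {}).get(d, 0) for d in dates) > threshold]
    return graph.subgraph(keep)

hot = subgraph_by_views(G, pagecounts, ["2019-03-01", "2019-03-02"], 1000)
print(sorted(hot.nodes()))  # ['Article_A', 'Article_C']

# n-hop neighborhood of a node (here n = 1), another filter listed above.
print(sorted(nx.ego_graph(G, "Article_A", radius=1).nodes()))
```

On the actual dataset these operations would be expressed as queries against the graph database rather than computed in memory; the point of the mesoscopic design is that such filters return a workable subgraph instead of the full dump.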
The database allows its users to retrieve subgraphs with millions of links. However, requesting a large subgraph from the database may take several hours. Besides, it may require a large amount of memory on the hosting server, and such queries may overload a database server that has to process queries from multiple users at the same time. Therefore, instead of setting up a remote database server, we have decided to provide the code to deploy a local or cloud-based one from Wikipedia dumps. This allows researchers to explore the dataset on their own server, create new queries and, possibly, contribute to the project.

Lastly, the database will be updated every month and will be consistent with the latest Wikipedia dumps. This gives researchers the ability to reproduce previous studies on Wikipedia data and to conduct new experiments on the latest data. The dataset and the deployment instructions are available online [1].

3 FRAMEWORK
Recommended publications
  • A Topic-Aligned Multilingual Corpus of Wikipedia Articles for Studying Information Asymmetry in Low Resource Languages
    Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 2373–2380, Marseille, 11–16 May 2020. © European Language Resources Association (ELRA), licensed under CC-BY-NC.
    Dwaipayan Roy, Sumit Bhatia, Prateek Jain (GESIS - Cologne, IBM Research - Delhi, IIIT - Delhi). [email protected], [email protected], [email protected]
    Abstract: Wikipedia is the largest web-based open encyclopedia covering more than three hundred languages. However, different language editions of Wikipedia differ significantly in terms of their information coverage. We present a systematic comparison of information coverage in English Wikipedia (most exhaustive) and Wikipedias in eight other widely spoken languages (Arabic, German, Hindi, Korean, Portuguese, Russian, Spanish and Turkish). We analyze the content present in the respective Wikipedias in terms of the coverage of topics as well as the depth of coverage of topics included in these Wikipedias. Our analysis quantifies and provides useful insights about the information gap that exists between different language editions of Wikipedia and offers a roadmap for the Information Retrieval (IR) community to bridge this gap.
    Keywords: Wikipedia, Knowledge base, Information gap
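As a toy illustration of the coverage comparison this abstract describes (not the authors' code), each edition can be treated as a set of topic-aligned article titles and the overlap measured; the sets below are invented stand-ins:

```python
# Toy sketch of cross-language coverage comparison: treat each Wikipedia
# edition as a set of topics and measure how much one edition covers of the
# other. The title sets are invented stand-ins for real topic-aligned data.
english = {"Climate change", "Monsoon", "Cricket", "Diwali", "Photosynthesis"}
hindi = {"Climate change", "Monsoon", "Diwali"}

overlap = english & hindi
coverage = len(overlap) / len(english)         # share of English topics also in Hindi
jaccard = len(overlap) / len(english | hindi)  # symmetric overlap measure
print(f"coverage of English topics: {coverage:.0%}, Jaccard: {jaccard:.2f}")
```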
  • The Culture of Wikipedia
    Good Faith Collaboration: The Culture of Wikipedia. Joseph Michael Reagle Jr. Foreword by Lawrence Lessig. The MIT Press, Cambridge, MA. Web edition, Copyright © 2011 by Joseph Michael Reagle Jr., CC-NC-SA 3.0.
    Wikipedia's style of collaborative production has been lauded, lambasted, and satirized. Despite unease over its implications for the character (and quality) of knowledge, Wikipedia has brought us closer than ever to a realization of the centuries-old pursuit of a universal encyclopedia. Good Faith Collaboration: The Culture of Wikipedia is a rich ethnographic portrayal of Wikipedia's historical roots, collaborative culture, and much debated legacy.
    Contents: Foreword; Preface to the Web Edition; Praise for Good Faith Collaboration; Preface; Extended Table of Contents; 1. Nazis and Norms; 2. The Pursuit of the Universal Encyclopedia; 3. Good Faith Collaboration; 4. The Puzzle of Openness.
    "Reagle offers a compelling case that Wikipedia's most fascinating and unprecedented aspect isn't the encyclopedia itself — rather, it's the collaborative culture that underpins it: brawling, self-reflexive, funny, serious, and full-tilt committed to the project, even if it means setting aside personal differences. Reagle's position as a scholar and a member of the community makes him uniquely situated to describe this culture." —Cory Doctorow, Boing Boing
    "Reagle provides ample data regarding the everyday practices and cultural norms of the community which collaborates to produce Wikipedia. His rich research and nuanced appreciation of the complexities of cultural digital media research are well presented.
  • How to Contribute Climate Change Information to Wikipedia : a Guide
    HOW TO CONTRIBUTE CLIMATE CHANGE INFORMATION TO WIKIPEDIA
    Emma Baker, Lisa McNamara, Beth Mackay, Katharine Vincent; © 2021, CDKN
    This work is licensed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/legalcode), which permits unrestricted use, distribution, and reproduction, provided the original work is properly credited.
    IDRC Grant / Subvention du CRDI: 108754-001-CDKN knowledge accelerator for climate compatible development
    How to contribute climate change information to Wikipedia: a guide for researchers, practitioners and communicators
    Contents: About this guide; 1 Why Wikipedia is an important tool to communicate climate change information; 1.1 Enhancing the quality of online climate change information; 1.2 Sharing your work more widely; 1.3 Why researchers should
  • Semantically Annotated Snapshot of the English Wikipedia
    Semantically Annotated Snapshot of the English Wikipedia
    Jordi Atserias, Hugo Zaragoza, Massimiliano Ciaramita, Giuseppe Attardi (Yahoo! Research Barcelona; U. Pisa, on sabbatical at Yahoo! Research). C/Ocata 1, Barcelona 08003, Spain. {jordi, hugoz, massi}@yahoo-inc.com, [email protected]
    Abstract: This paper describes SW1, the first version of a semantically annotated snapshot of the English Wikipedia. In recent years Wikipedia has become a valuable resource for both the Natural Language Processing (NLP) community and the Information Retrieval (IR) community. Although NLP technology for processing Wikipedia already exists, not all researchers and developers have the computational resources to process such a volume of information. Moreover, the use of different versions of Wikipedia processed differently might make it difficult to compare results. The aim of this work is to provide easy access to syntactic and semantic annotations for researchers of both NLP and IR communities by building a reference corpus to homogenize experiments and make results comparable. These resources, a semantically annotated corpus and an "entity containment" derived graph, are licensed under the GNU Free Documentation License and available from http://www.yr-bcn.es/semanticWikipedia.
    1. Introduction: Wikipedia, the largest electronic encyclopedia, has become a widely used resource for different Natural Language Processing tasks, e.g. Word Sense Disambiguation (Mihalcea, 2007), Semantic Relatedness (Gabrilovich and Markovitch, 2007) or the Multilingual Question Answering task at the Cross-Language Evaluation Forum (CLEF).
    2. Processing: Starting from the XML Wikipedia source we carried out a number of data processing steps: • Basic preprocessing: stripping the text from the XML tags and dividing the obtained text into sentences.
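The "basic preprocessing" step named in this excerpt can be sketched with the standard library; this is a simplified stand-in for the authors' pipeline, with a naive regex sentence splitter and a made-up XML fragment:

```python
# Simplified stand-in for the "basic preprocessing" step described above:
# strip the text out of a Wikipedia XML fragment, then split it into sentences.
import re
from xml.etree import ElementTree

fragment = """<page><title>Graph</title>
  <text>A graph is a structure made of vertices and edges. Graphs are studied
  in discrete mathematics. They model pairwise relations.</text>
</page>"""

text = ElementTree.fromstring(fragment).findtext("text")
text = re.sub(r"\s+", " ", text).strip()      # normalize whitespace
sentences = re.split(r"(?<=[.!?])\s+", text)  # naive sentence splitter
for s in sentences:
    print(s)
```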
  • Lessons from Citizendium
    Lessons from Citizendium. Wikimania 2009, Buenos Aires, 28 August 2009. HaeB [[de:Benutzer:HaeB]], [[en:User:HaeB]]. Please don't take photos during this talk.
    Citizendium timeline:
    ● September 2006: Citizendium announced. Sole founder: Larry Sanger, known as former editor-in-chief of Nupedia, chief organizer of Wikipedia (2001–2002), and later as Wikipedia critic.
    ● October 2006: Started non-public pilot phase.
    ● January 2007: "Big Unfork": all unmodified copies of Wikipedia articles deleted.
    ● March 2007: Public launch.
    ● December 2007: Decision to use CC-BY-3.0, after debate about commercial reuse and compatibility with Wikipedia.
    ● Mid-2009: Sanger largely inactive on Citizendium, focuses on WatchKnow.
    ● August 2009: Larry Sanger announces he will step down as editor-in-chief soon (as committed to in 2006).
    Citizendium and Wikipedia, similarities: encyclopedia; free license; open (anyone can contribute); created by amateurs; MediaWiki-based collaboration; non-profit. Differences: strict real-names policy; special role for experts ("editors" can issue content decisions, binding to non-editors); governance by social contract, with elements of a constitutional republic.
    Wikipedian views of Citizendium: competitor for readers and contributions; ally with the common goal of creating free encyclopedic content; "Who?"; in this talk: a long-time experiment testing several fundamental policy changes, in a framework still similar enough to Wikipedia's to generate valuable evidence as to what their effect might be on WP.
    Active editors: waiting to explode. Sanger (October 2007): "At some point, possibly very soon, the Citizendium will grow explosively-- say, quadruple the number of its active contributors, or even grow by an order of magnitude ...."
    [Chart: number of users that made at least one edit in each month. © Aleksander Stos, CC-BY 3.0]
    Article creation rate: still muddling. Sanger (October 2007): "It's still possible that the project will, from here until eternity, muddle on creating 14 articles per day.
  • Wikipedia Matters∗
    Wikipedia Matters
    Marit Hinnosaar, Toomas Hinnosaar, Michael Kummer, Olga Slivko. September 29, 2017
    Abstract: We document a causal impact of online user-generated information on real-world economic outcomes. In particular, we conduct a randomized field experiment to test whether additional content on Wikipedia pages about cities affects tourists' choices of overnight visits. Our treatment of adding information to Wikipedia increases overnight visits by 9% during the tourist season. The impact comes mostly from improving the shorter and incomplete pages on Wikipedia. These findings highlight the value of content in digital public goods for informing individual choices.
    JEL: C93, H41, L17, L82, L83, L86. Keywords: field experiment, user-generated content, Wikipedia, tourism industry
    1 Introduction: Asymmetric information can hinder efficient economic activity. In recent decades, the Internet and new media have enabled greater access to information than ever before. However, the digital divide, language barriers, Internet censorship, and technological constraints still create inequalities in the amount of accessible information. How much does it matter for economic outcomes? In this paper, we analyze the causal impact of online information on real-world economic outcomes. In particular, we measure the impact of information on one of the primary economic decisions: consumption. As the source of information, we focus on Wikipedia. It is one of the most important online sources of reference. It is the fifth most
    Acknowledgments (footnote): We are grateful to Irene Bertschek, Avi Goldfarb, Shane Greenstein, Tobias Kretschmer, Thomas Niebel, Marianne Saam, Greg Veramendi, Joel Waldfogel, and Michael Zhang as well as seminar audiences at the Economics of Network Industries conference in Paris, ZEW Conference on the Economics of ICT, and Advances with Field Experiments 2017 Conference at the University of Chicago for valuable comments.
  • Simple English Wikipedia: a New Text Simplification Task
    Simple English Wikipedia: A New Text Simplification Task
    William Coster and David Kauchak, Computer Science Department, Pomona College, Claremont, CA 91711. [email protected], [email protected]
    Abstract: In this paper we examine the task of sentence simplification, which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based translation approach for simplification.
    1 Introduction: The task of text simplification aims to reduce the … gadda et al., 2009; Vickrey and Koller, 2008; Chandrasekar and Srinivas, 1997). Finally, models for text simplification are similar to models for sentence compression; advances in simplification can benefit compression, which has applications in mobile devices, summarization and captioning (Knight and Marcu, 2002; McDonald, 2006; Galley and McKeown, 2007; Nomoto, 2009; Cohn and Lapata, 2009). One of the key challenges for text simplification is data availability. The small amount of simplification data currently available has prevented the application of data-driven techniques like those used in other text-to-text translation areas (Och and Ney, 2004; Chiang, 2010). Most prior techniques for text simplification have involved either hand-crafted rules (Vickrey and Koller, 2008; Feng, 2008) or rules learned within a very restricted rule space (Chandrasekar and Srinivas, 1997).
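To make the inventory of simplification operations concrete, this small sketch (not from the paper) recovers them from one aligned English/Simple English sentence pair with Python's difflib; the sentence pair is invented:

```python
# Sketch: recover simplification operations (replace/delete/insert) from an
# aligned normal/simple sentence pair. The pair is invented; the paper's
# corpus aligns such pairs between English and Simple English Wikipedia.
from difflib import SequenceMatcher

normal = "The ubiquitous feline reclined upon the woven mat .".split()
simple = "The cat sat on the mat .".split()

for tag, i1, i2, j1, j2 in SequenceMatcher(a=normal, b=simple).get_opcodes():
    if tag != "equal":
        print(f"{tag:8s} {' '.join(normal[i1:i2])!r} -> {' '.join(simple[j1:j2])!r}")
```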
  • Critical Point of View: a Wikipedia Reader
    Critical Point of View: A Wikipedia Reader
    Editors: Geert Lovink and Nathaniel Tkacz. Editorial Assistance: Ivy Roberts, Morgan Currie. Copy-Editing: Cielo Lutino. Design: Katja van Stiphout. Cover Image: Ayumi Higuchi. Printer: Ten Klei Groep, Amsterdam. Publisher: Institute of Network Cultures, Amsterdam 2011. ISBN: 978-90-78146-13-1. INC Reader #7.
    Contact: Institute of Network Cultures. Phone: +3120 5951866. Fax: +3120 5951840. Email: [email protected]. Web: http://www.networkcultures.org
    Order a copy of this book by sending an email to: [email protected]. A pdf of this publication can be downloaded freely at: http://www.networkcultures.org/publications. Join the Critical Point of View mailing list at: http://www.listcultures.org
    Supported by: the School for Communication and Design at the Amsterdam University of Applied Sciences (Hogeschool van Amsterdam DMCI), the Centre for Internet and Society (CIS) in Bangalore and the Kusuma Trust. Thanks to Johanna Niesyto (University of Siegen), Nishant Shah and Sunil Abraham (CIS Bangalore), Sabine Niederer and Margreet Riphagen (INC Amsterdam) for their valuable input and editorial support. Thanks to Foundation Democracy and Media, Mondriaan Foundation and the Public Library Amsterdam (Openbare Bibliotheek Amsterdam) for supporting the CPOV events in Bangalore, Amsterdam and Leipzig. (http://networkcultures.org/wpmu/cpov/) Special thanks to all the authors for their contributions and to Cielo Lutino, Morgan Currie and Ivy Roberts for their careful copy-editing.
  • Assessing the Accuracy and Quality of Wikipedia Entries Compared to Popular Online Encyclopaedias
    Assessing the accuracy and quality of Wikipedia entries compared to popular online encyclopaedias: a preliminary comparative study across disciplines in English, Spanish and Arabic
    Imogen Casebourne, Dr. Chris Davies, Dr. Michelle Fernandes, Dr. Naomi Norman
    Casebourne, I., Davies, C., Fernandes, M., Norman, N. (2012) Assessing the accuracy and quality of Wikipedia entries compared to popular online encyclopaedias: A comparative preliminary study across disciplines in English, Spanish and Arabic. Epic, Brighton, UK. Retrieved from: http://commons.wikimedia.org/wiki/File:EPIC_Oxford_report.pdf
    This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported Licence: http://creativecommons.org/licences/by-sa/3.0/
    Supplementary information: additional information, datasets and supplementary materials from this study can be found at http://meta.wikimedia.org/wiki/Research:Accuracy_and_quality_of_Wikipedia_entries/Data
    Contents: Executive Summary; 1. Introduction; 2. Aims and Objectives
  • Analyzing Wikidata Transclusion on English Wikipedia
    Analyzing Wikidata Transclusion on English Wikipedia
    Isaac Johnson, Wikimedia Foundation, [email protected]
    Abstract: Wikidata is steadily becoming more central to Wikipedia, not just in maintaining interlanguage links, but in automated population of content within the articles themselves. It is not well understood, however, how widespread this transclusion of Wikidata content is within Wikipedia. This work presents a taxonomy of Wikidata transclusion from the perspective of its potential impact on readers and an associated in-depth analysis of Wikidata transclusion within English Wikipedia. It finds that Wikidata transclusion that impacts the content of Wikipedia articles happens at a much lower rate (5%) than previous statistics had suggested (61%). Recommendations are made for how to adjust current tracking mechanisms of Wikidata transclusion to better support metrics and patrollers in their evaluation of Wikidata transclusion.
    Keywords: Wikidata · Wikipedia · Patrolling
    1 Introduction: Wikidata is steadily becoming more central to Wikipedia, not just in maintaining interlanguage links, but in automated population of content within the articles themselves. This transclusion of Wikidata content within Wikipedia can help to reduce maintenance of certain facts and links by shifting the burden of maintaining up-to-date, referenced material from each individual Wikipedia to a single repository, Wikidata. Current best estimates suggest that, as of August 2020, 62% of Wikipedia articles across all languages transclude Wikidata content. This statistic ranges from Arabic Wikipedia (arwiki) and Basque Wikipedia (euwiki), where nearly 100% of articles transclude Wikidata content in some form, to Japanese Wikipedia (jawiki) at 38% of articles and many small wikis that lack any Wikidata transclusion.
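One simple form of the tracking discussed above is scanning wikitext for the Wikibase parser functions; the sketch below (not the paper's method) does exactly that, and by design misses transclusion that happens through Lua modules, one reason naive counts can mislead:

```python
# Crude transclusion-tracking heuristic: flag articles whose wikitext calls
# the Wikibase parser functions {{#property:...}} or {{#statements:...}}.
# Transclusion via Lua modules (mw.wikibase) is NOT caught, so this both
# under-counts some transclusion and says nothing about reader-visible impact.
import re

WIKIBASE_CALL = re.compile(r"\{\{#(property|statements):", re.IGNORECASE)

articles = {  # toy wikitext snippets
    "Douglas Adams": "{{Infobox person}} Born {{#statements:P569}}.",
    "Graph theory": "A graph is a pair ''(V, E)'' of vertices and edges.",
}

transcluding = [t for t, wt in articles.items() if WIKIBASE_CALL.search(wt)]
print(transcluding, f"{len(transcluding) / len(articles):.0%}")
```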
  • An Analysis of Topical Coverage of Wikipedia
    Journal of Computer-Mediated Communication
    An Analysis of Topical Coverage of Wikipedia
    Alexander Halavais, School of Communications, Quinnipiac University; Derek Lackaff, Department of Communication, State University of New York at Buffalo
    Many have questioned the reliability and accuracy of Wikipedia. Here a different, but closely related, issue is examined: how broad is the coverage of Wikipedia? Differences in the interests and attention of Wikipedia's editors mean that some areas (the traditional sciences, for example) are better covered than others. Two approaches to measuring this coverage are presented. The first maps the distribution of topics on Wikipedia to the distribution of books published. The second compares the distribution of topics in three established, field-specific academic encyclopedias to the articles found in Wikipedia. Unlike the top-down construction of traditional encyclopedias, Wikipedia's topical coverage is driven by the interests of its users, and as a result, the reliability and completeness of Wikipedia is likely to be different depending on the subject area of the article. doi:10.1111/j.1083-6101.2008.00403.x
    Introduction: "We are stronger in science than in many other areas. We come from geek culture, we come from the free software movement, we have a lot of technologists involved. That's where our major strengths are. We know we have systemic biases." - Jimmy Wales, Wikipedia founder, Wikimania 2006 keynote address
    Satirist Stephen Colbert recently noted that he was a fan of "any encyclopedia that has a longer article on 'truthiness' [a term he coined] than on Lutheranism" (July 31, 2006). The focus of the encyclopedia away from the appraisal of experts and toward the interests of its contributors stands out as one of the significant advantages of Wikipedia, "the free encyclopedia that anyone can edit," but also places it in sharp contrast with what many expect of an encyclopedia.
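The first approach amounts to comparing two categorical distributions, Wikipedia's share of articles per topic area against the share of published books per area; a minimal sketch with invented proportions:

```python
# Minimal sketch of the first measurement approach: compare the share of
# Wikipedia articles per topic area with the share of published books per
# area. All proportions are invented for illustration.
wikipedia_share = {"science": 0.40, "history": 0.25, "arts": 0.15, "sports": 0.20}
books_share = {"science": 0.25, "history": 0.30, "arts": 0.30, "sports": 0.15}

for topic in wikipedia_share:
    ratio = wikipedia_share[topic] / books_share[topic]  # >1: over-represented on Wikipedia
    print(f"{topic:8s} Wikipedia/books ratio: {ratio:.2f}")
```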