Simple English Wikipedia: A New Text Simplification Task

Total Pages: 16

File Type: PDF, Size: 1020 KB

Simple English Wikipedia: A New Text Simplification Task

William Coster and David Kauchak
Computer Science Department, Pomona College, Claremont, CA 91711
[email protected], [email protected]

Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Short Papers, pages 665–669, Portland, Oregon, June 19-24, 2011. © 2011 Association for Computational Linguistics.

Abstract

In this paper we examine the task of sentence simplification, which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based translation approach for simplification.

1 Introduction

The task of text simplification aims to reduce the complexity of text while maintaining the content (Chandrasekar and Srinivas, 1997; Carroll et al., 1998; Feng, 2008). In this paper, we explore the sentence simplification problem: given a sentence, the goal is to produce an equivalent sentence where the vocabulary and sentence structure are simpler.

Text simplification has a number of important applications. Simplification techniques can be used to make text resources available to a broader range of readers, including children, language learners, the elderly, the hearing impaired and people with aphasia or cognitive disabilities (Carroll et al., 1998; Feng, 2008). As a preprocessing step, simplification can improve the performance of NLP tasks, including parsing, semantic role labeling, machine translation and summarization (Miwa et al., 2010; Jonnalagadda et al., 2009; Vickrey and Koller, 2008; Chandrasekar and Srinivas, 1997). Finally, models for text simplification are similar to models for sentence compression; advances in simplification can benefit compression, which has applications in mobile devices, summarization and captioning (Knight and Marcu, 2002; McDonald, 2006; Galley and McKeown, 2007; Nomoto, 2009; Cohn and Lapata, 2009).

One of the key challenges for text simplification is data availability. The small amount of simplification data currently available has prevented the application of data-driven techniques like those used in other text-to-text translation areas (Och and Ney, 2004; Chiang, 2010). Most prior techniques for text simplification have involved either hand-crafted rules (Vickrey and Koller, 2008; Feng, 2008) or rules learned within a very restricted rule space (Chandrasekar and Srinivas, 1997).

We have generated a data set consisting of 137K aligned simplified/unsimplified sentence pairs by pairing documents, then sentences, from English Wikipedia (http://en.wikipedia.org/) with corresponding documents and sentences from Simple English Wikipedia (http://simple.wikipedia.org). Simple English Wikipedia contains articles aimed at children and English language learners and contains similar content to English Wikipedia but with simpler vocabulary and grammar.

Figure 1 shows example sentence simplifications from the data set. Like machine translation and other text-to-text domains, text simplification involves the full range of transformation operations including deletion, rewording, reordering and insertion.

Figure 1: Example sentence simplifications extracted from Wikipedia. Normal refers to a sentence in an English Wikipedia article and Simple to a corresponding sentence in Simple English Wikipedia.

a. Normal: As Isolde arrives at his side, Tristan dies with her name on his lips.
   Simple: As Isolde arrives at his side, Tristan dies while speaking her name.

b. Normal: Alfonso Perez Munoz, usually referred to as Alfonso, is a former Spanish footballer, in the striker position.
   Simple: Alfonso Perez is a former Spanish football player.

c. Normal: Endemic types or species are especially likely to develop on islands because of their geographical isolation.
   Simple: Endemic types are most likely to develop on islands because they are isolated.

d. Normal: The reverse process, producing electrical energy from mechanical energy, is accomplished by a generator or dynamo.
   Simple: A dynamo or an electric generator does the reverse: it changes mechanical movement into electric energy.

2 Previous Data

Wikipedia and Simple English Wikipedia have both received some recent attention as a useful resource for text simplification and the related task of text compression. Yamangil and Nelken (2008) examine the history logs of English Wikipedia to learn sentence compression rules. Yatskar et al. (2010) learn a set of candidate phrase simplification rules based on edits identified in the revision histories of both Simple English Wikipedia and English Wikipedia. However, they only provide a list of the top phrasal simplifications and do not utilize them in an end-to-end simplification system. Finally, Napoles and Dredze (2010) provide an analysis of the differences between documents in English Wikipedia and Simple English Wikipedia, though they do not view the data set as a parallel corpus.

Although the simplification problem shares some characteristics with the text compression problem, existing text compression data sets are small and contain a restricted set of possible transformations (often only deletion). Knight and Marcu (2002) introduced the Ziff-Davis corpus, which contains 1K sentence pairs. Cohn and Lapata (2009) manually generated two parallel corpora from news stories totaling 3K sentence pairs. Finally, Nomoto (2009) generated a data set based on RSS feeds containing 2K sentence pairs.

3 Simplification Corpus Generation

We generated a parallel simplification corpus by aligning sentences between English Wikipedia and Simple English Wikipedia. We obtained complete copies of English Wikipedia and Simple English Wikipedia in May 2010. We first paired the articles by title, then removed all article pairs where either article contained only a single line, was flagged as a stub, was flagged as a disambiguation page, or was a meta-page about Wikipedia. After pairing and filtering, 10,588 aligned content article pairs remained (a 90% reduction from the original 110K Simple English Wikipedia articles). Throughout the rest of this paper we will refer to unsimplified text from English Wikipedia as normal and to the simplified version from Simple English Wikipedia as simple.
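As an illustration of this pairing and filtering step, the following is a minimal sketch, assuming the two snapshots have already been parsed into dictionaries mapping article titles to wiki text. The stub, disambiguation, and meta-page tests (the template strings and the "wikipedia:" title prefix) are hypothetical stand-ins, since the paper does not specify how those flags were detected.

```python
from typing import Dict, Tuple

# Hypothetical markers: the paper does not say how stub, disambiguation, or
# meta pages were identified, so these checks are illustrative only.
STUB_MARKER = "stub}}"
DISAMBIG_MARKER = "{{disambiguation"
META_PREFIX = "wikipedia:"

def should_drop(title: str, text: str) -> bool:
    """Return True if an article fails one of the filters described in the paper."""
    if len([line for line in text.splitlines() if line.strip()]) <= 1:
        return True                      # contains only a single line
    lowered = text.lower()
    if STUB_MARKER in lowered or DISAMBIG_MARKER in lowered:
        return True                      # flagged as a stub or disambiguation page
    if title.lower().startswith(META_PREFIX):
        return True                      # meta-page about Wikipedia
    return False

def pair_articles(normal: Dict[str, str],
                  simple: Dict[str, str]) -> Dict[str, Tuple[str, str]]:
    """Pair articles by title, dropping pairs where either side is filtered."""
    pairs = {}
    for title, simple_text in simple.items():
        normal_text = normal.get(title)
        if normal_text is None:
            continue
        if should_drop(title, simple_text) or should_drop(title, normal_text):
            continue
        pairs[title] = (normal_text, simple_text)
    return pairs
```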
To generate aligned sentence pairs from the aligned document pairs, we followed an approach similar to those utilized in previous monolingual alignment problems (Barzilay and Elhadad, 2003; Nelken and Shieber, 2006). Paragraphs were identified based on formatting information available in the articles. Each simple paragraph was then aligned to every normal paragraph where the TF-IDF cosine similarity was over a threshold of 0.5. We initially investigated the paragraph clustering preprocessing step of Barzilay and Elhadad (2003), but did not find a qualitative difference and opted for the simpler similarity-based alignment approach, which does not require manual annotation.
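A minimal sketch of this paragraph alignment step is shown below, using scikit-learn's TfidfVectorizer and cosine_similarity as stand-ins. The paper does not specify the exact tokenization or TF-IDF weighting used, so the defaults here are assumptions; only the 0.5 threshold comes from the text.

```python
from typing import List, Tuple

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def align_paragraphs(normal_paras: List[str],
                     simple_paras: List[str],
                     threshold: float = 0.5) -> List[Tuple[int, List[int]]]:
    """Align each simple paragraph to every normal paragraph whose TF-IDF
    cosine similarity exceeds the threshold (0.5 in the paper)."""
    vectorizer = TfidfVectorizer()
    # Fit a single vocabulary over all paragraphs so the vectors are comparable.
    vectors = vectorizer.fit_transform(normal_paras + simple_paras)
    normal_vecs = vectors[: len(normal_paras)]
    simple_vecs = vectors[len(normal_paras):]
    sims = cosine_similarity(simple_vecs, normal_vecs)

    alignments = []
    for i, row in enumerate(sims):
        matches = [j for j, score in enumerate(row) if score > threshold]
        if matches:
            # one simple paragraph aligned to one or more normal paragraphs
            alignments.append((i, matches))
    return alignments
```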
For each aligned paragraph pair (i.e., a simple paragraph and one or more normal paragraphs), we then used a dynamic programming approach to find the best global sentence alignment, following Barzilay and Elhadad (2003). Specifically, given n normal sentences to align to m simple sentences, we find a(n, m) using the following recurrence:

a(i, j) = \max \begin{cases}
  a(i, j-1) - \text{skip penalty} \\
  a(i-1, j) - \text{skip penalty} \\
  a(i-1, j-1) + \mathrm{sim}(i, j) \\
  a(i-1, j-2) + \mathrm{sim}(i, j) + \mathrm{sim}(i, j-1) \\
  a(i-2, j-1) + \mathrm{sim}(i, j) + \mathrm{sim}(i-1, j) \\
  a(i-2, j-2) + \mathrm{sim}(i, j-1) + \mathrm{sim}(i-1, j)
\end{cases}

where each line above corresponds to a sentence alignment operation: skip the simple sentence, skip the normal sentence, align one normal to one simple, align one normal to two simple, align two normal to one simple, and align two normal to two simple. sim(i, j) is the similarity between the ith normal sentence and the jth simple sentence and was calculated using TF-IDF cosine similarity. We set skip penalty = 0.0001 manually.

Barzilay and Elhadad (2003) further discourage aligning dissimilar sentences by including a "mismatch penalty" in the similarity measure. Instead, we included a filtering step removing all sentence pairs with a normalized similarity below a threshold of 0.5.

In a manual evaluation of 100 sampled aligned sentence pairs, 91/100 were identified as correct, though many of the remaining 9 also had some partial content overlap. We also repeated the experiment using only those sentences with a similarity above 0.75 (rather than 0.50 in the original data). This reduced the number of pairs from 137K to 90K, but the evaluators identified 98/100 as correct. The analysis throughout the rest of this section is for the threshold of 0.5, though similar results were also seen for the threshold of 0.75.

Although the average simple article contained approximately 40 sentences, we extracted an average of 14 aligned sentence pairs per article. Qualitatively, it is rare to find a simple article that is a direct translation of the normal article, that is, a simple article that was generated by only making sentence-level changes to the normal document. However, there is a strong relationship between the two data sets: 27% of our aligned sentences were identical between simple and normal. We left these identical sentence pairs in our data set since not all sentences need to be simplified and it is important for any simplification algorithm to be able to handle this case.

Much of the content without direct correspondence is removed during paragraph alignment: 65% of the simple paragraphs do not align to a normal paragraph and are ignored. On top of this, within aligned paragraphs, there are a large number of sentences that do not align. Table 1 shows the proportion of the different sentence-level alignment operations.
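The dynamic program defined by the recurrence above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes sim(i, j) is supplied as a function over 1-based sentence indices (e.g., the TF-IDF cosine similarity described earlier) and records backpointers so the chosen operations can be recovered from a(n, m).

```python
from typing import Callable, Dict, Tuple

def align_sentences(n: int, m: int,
                    sim: Callable[[int, int], float],
                    skip_penalty: float = 0.0001):
    """Fill the a(i, j) table for n normal and m simple sentences and return
    the best global score plus backpointers describing each operation."""
    NEG = float("-inf")
    a = [[NEG] * (m + 1) for _ in range(n + 1)]
    a[0][0] = 0.0
    back: Dict[Tuple[int, int], Tuple[str, Tuple[int, int]]] = {}

    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            options = []  # (score, operation, predecessor cell)
            if j >= 1:
                options.append((a[i][j - 1] - skip_penalty, "skip simple", (i, j - 1)))
            if i >= 1:
                options.append((a[i - 1][j] - skip_penalty, "skip normal", (i - 1, j)))
            if i >= 1 and j >= 1:
                options.append((a[i - 1][j - 1] + sim(i, j), "1 normal - 1 simple", (i - 1, j - 1)))
            if i >= 1 and j >= 2:
                options.append((a[i - 1][j - 2] + sim(i, j) + sim(i, j - 1), "1 normal - 2 simple", (i - 1, j - 2)))
            if i >= 2 and j >= 1:
                options.append((a[i - 2][j - 1] + sim(i, j) + sim(i - 1, j), "2 normal - 1 simple", (i - 2, j - 1)))
            if i >= 2 and j >= 2:
                options.append((a[i - 2][j - 2] + sim(i, j - 1) + sim(i - 1, j), "2 normal - 2 simple", (i - 2, j - 2)))
            score, op, prev = max(options, key=lambda option: option[0])
            a[i][j] = score
            back[(i, j)] = (op, prev)

    return a[n][m], back
```

Tracing the backpointers from (n, m) yields the sequence of skip and alignment operations; pairs produced by the alignment operations can then be filtered by the 0.5 normalized-similarity threshold described above.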