Using Natural Language Generation to Bootstrap Missing Wikipedia Articles: a Human-Centric Perspective
Total Pages: 16
File Type: PDF, Size: 1020 KB

Recommended publications
- Improving Wikimedia Projects Content through Collaborations: Waray Wikipedia Experience, 2014–2017
Harvey Fiji, Michael Glen U. Ong, Jojit F. Ballesteros and Bel B. Ballesteros. ESEAP Conference 2018, 5–6 May 2018, Bali, Indonesia. On 8 November 2013, typhoon Haiyan devastated the Eastern Visayas region of the Philippines when it made landfall in Guiuan, Eastern Samar and Tolosa, Leyte. The typhoon affected about 16 million individuals in the Philippines, and Tacloban City in Leyte was one of the worst affected areas [1]. Eastern Visayas, specifically the provinces of Biliran, Leyte, Northern Samar, Samar and Eastern Samar, is home to the Waray speakers in the Philippines [2][3]. The presentation covers the background of Waray Wikipedia, the collaborations made by the Sinirangan Bisaya Wikimedia Community, problems encountered, lessons learned and future plans. Waray Wikipedia (https://war.wikipedia.org) was proposed on or about 23 June 2005 and created on or about 24 September 2005. Lsjbot was deployed from February 2013 to November 2015, creating 1,143,071 Waray articles about flora and fauna. As of 24 April 2018 it had a total of 1,262,945 articles, created by lsjbot (90.5%) and by humans (9.5%); as of 31 March 2018 it received 401 views per hour. The Sinirangan Bisaya (Eastern Visayas) Wikimedia Community is the offline community that continuously improves Waray Wikipedia and related Wikimedia projects. Its collaborations include work with private and national government organizations, beginning with introductory letters and a series of meetings and communications […]
- Włodzimierz Lewoniewski, Metoda porównywania i wzbogacania informacji w wielojęzycznych serwisach wiki na podstawie analizy ich jakości (The Method of Comparing and Enriching Information in Multilingual Wikis Based on the Analysis of Their Quality)
Doctoral dissertation (in Polish), Poznań, 2018. Supervisor: Prof. dr hab. Witold Abramowicz; auxiliary supervisor: dr Krzysztof Węcel. Table of contents (translated): 1. Introduction — motivation; research goal and thesis; sources of information and research methods; structure of the dissertation. 2. Data and information quality — introduction; data quality; information quality; summary. 3. Wiki services and semantic knowledge bases — introduction; wiki services; Wikipedia as an example of a wiki service; infoboxes; DBpedia; summary. 4. Methods for determining the quality of Wikipedia articles — introduction; quality dimensions of wiki services; quality problems of Wikipedia […]
- Librarians as Wikimedia Movement Organizers in Spain: An Interpretive Inquiry Exploring Activities and Motivations
By Laurie Bridges (Oregon State University, https://orcid.org/0000-0002-2765-5440) and Clara Llebot (Oregon State University, https://orcid.org/0000-0003-3211-7396). Citation: Bridges, L., & Llebot, C. (2021). Librarians as Wikimedia Movement Organizers in Spain: An interpretive inquiry exploring activities and motivations. First Monday, 26(6/7). https://doi.org/10.5210/fm.v26i3.11482. Abstract: How do librarians in Spain engage with Wikipedia (and Wikidata, Wikisource, and other Wikipedia sister projects) as Wikimedia Movement Organizers? And what motivates them to do so? This article reports on findings from 14 interviews with 18 librarians. The librarians interviewed were multilingual and contributed to Wikimedia projects in Castilian (commonly referred to as Spanish), Catalan, Basque, English, and other European languages. They reported planning and running Wikipedia events, developing partnerships with local Wikimedia chapters, motivating citizens to upload photos to Wikimedia Commons, identifying gaps in Wikipedia content and filling those gaps, transcribing historic documents and adding them to Wikisource, and contributing data to Wikidata. Most were motivated by their desire to preserve and promote regional languages and culture, and by a commitment to open access and open education. Introduction: This research started with an informal conversation in 2018 about the popularity of Catalan Wikipedia, Viquipèdia, between two library coworkers in the United States, the authors of this article. Our conversation began with a sense of wonder about Catalan Wikipedia, which is ranked twentieth by number of articles, out of 300 different language Wikipedias (Meta contributors, 2020).
- Topological Evolution of Networks: Case Studies in the US Airlines and Language Wikipedias
By Gergana Assenova Bounova — B.S., Theoretical Mathematics, Massachusetts Institute of Technology (2003); B.S., Aeronautics & Astronautics, Massachusetts Institute of Technology (2003); S.M., Aeronautics & Astronautics, Massachusetts Institute of Technology (2005). Submitted to the Department of Aeronautics and Astronautics in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology, February 2009. © 2009 Gergana A. Bounova, all rights reserved. The author hereby grants to MIT the permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole or in part. Thesis supervisor: Prof. Olivier L. de Weck, Associate Professor of Aeronautics and Astronautics and Engineering Systems. Committee: Prof. Christopher L. Magee, Professor of the Practice of Mechanical Engineering and Engineering Systems; Dr. Daniel E. Whitney, Senior Research Scientist, Center for Technology, Policy and Industrial Development, Senior Lecturer in […]
- Language-Agnostic Relation Extraction from Abstracts in Wikis
Article in the journal Information, by Nicolas Heist, Sven Hertling and Heiko Paulheim (Data and Web Science Group, University of Mannheim, Mannheim 68131, Germany). Received: 5 February 2018; accepted: 28 March 2018; published: 29 March 2018. Abstract: Large-scale knowledge graphs, such as DBpedia, Wikidata, or YAGO, can be enhanced by relation extraction from text, using the data in the knowledge graph as training data, i.e., using distant supervision. While most existing approaches use language-specific methods (usually for English), we present a language-agnostic approach that exploits background knowledge from the graph instead of language-specific techniques and builds machine learning models only from language-independent features. We demonstrate the extraction of relations from Wikipedia abstracts, using the twelve largest language editions of Wikipedia. From those, we can extract 1.6M new relations in DBpedia at a level of precision of 95%, using a RandomForest classifier trained only on language-independent features. We furthermore investigate the similarity of models for different languages and show an exemplary geographical breakdown of the information extracted. In a second series of experiments, we show how the approach can be transferred to DBkWik, a knowledge graph extracted from thousands of wikis. We discuss the challenges and first results of extracting relations from a larger set of wikis, using a less formalized knowledge graph. Keywords: relation extraction; knowledge graphs; Wikipedia; DBpedia; DBkWik; wiki farms. Introduction: Large-scale knowledge graphs, such as DBpedia [1], Freebase [2], Wikidata [3], or YAGO [4], are usually built using heuristic extraction methods, by exploiting crowd-sourcing processes, or both [5].
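The abstract above describes the recipe only at a high level. The sketch below illustrates the general shape of such a pipeline under stated assumptions: distant-supervision labels taken from an existing knowledge graph and a hand-picked set of language-independent features. The feature names and toy data are hypothetical and not the paper's actual feature set or training setup.

```python
# Minimal sketch of distant-supervision relation extraction with a
# RandomForest over language-independent features (illustrative only;
# features and data are hypothetical, not the paper's exact setup).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# Each row describes one (abstract, entity-mention) candidate with
# language-independent features, e.g.:
#   [relative position of the mention in the abstract (0..1),
#    number of linked entities in the sentence,
#    1 if the mention's KG type matches the relation's expected range, else 0]
X = np.array([
    [0.05, 3, 1],
    [0.80, 1, 0],
    [0.10, 4, 1],
    [0.60, 2, 0],
    [0.15, 5, 1],
    [0.90, 1, 0],
])
# Distant-supervision labels: 1 if the knowledge graph already contains the
# relation between the article's entity and the mentioned entity, else 0.
y = np.array([1, 0, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred, zero_division=0))
```

Because none of these features reference the words of any particular language, the same kind of model can in principle be applied to abstracts in any Wikipedia language edition, which is the point the authors emphasize.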
- Who Knows Where Esperanto Might Lead? (Language Magazine, July 2006)
Methodology: Robert L. Read and Steven D. Brewer explain how Esperanto acts as a springboard for the acquisition of other languages. In 1887, an obscure eye doctor in Poland self-published a little book in Russian. Over the next several years Lingvo Internacia1 appeared in English, French, German, Hebrew, and Polish. This book, written under the pen name Doctor Esperanto, laid the foundation for a new language that would achieve what no other language project had ever done: establish a living community that would go on to survive the death of its creator. Even conservative estimates place the number of active speakers in the tens of thousands, with the number who have learned Esperanto at some time in their lives […]ly attain a competency that eluded them in learning an ethnic language, or report that they reached a given level of competency in a fraction of the time required by a national language. Early success creates a virtuous cycle which encourages more study and often leads to genuine fluency. Achievement yields positive effects on student self-confidence, insight into the nature of languages in general, and the structure of their native language in particular. Barry Farber writes in his book How to Learn Any Language:2 "It's said that once you master one foreign language, all others come much more easily." […] Esperanto, or any language, provides a propaedeutic effect in learning a next language which is similar. Several factors may contribute to the Corder effect, including similarities in vocabulary, grammatical structure, and word order. Similarity of vocabulary has been shown to be an effective metric for predicting how much knowing one language will help with learning another.5 Since Esperanto was designed to have a widely recognized vocabulary and grammatical features broadly shared across language families, it takes […]
- Text Corpora and the Challenge of Newly Written Languages
Alice Millour (Sorbonne Université / STIH, Paris, France) and Karën Fort (Université de Lorraine, CNRS, Inria, LORIA, Nancy, France). Proceedings of the 1st Joint SLTU and CCURL Workshop (SLTU-CCURL 2020), pages 111–120, Language Resources and Evaluation Conference (LREC 2020), Marseille, 11–16 May 2020; © European Language Resources Association (ELRA), licensed under CC-BY-NC. Abstract: Text corpora represent the foundation on which most natural language processing systems rely. However, for many languages, collecting or building a text corpus of a sufficient size still remains a complex issue, especially for corpora that are accessible and distributed under a clear license allowing modification (such as annotation) and further resharing. In this paper, we review the sources of text corpora usually called upon to fill the gap in low-resource contexts, and how crowdsourcing has been used to build linguistic resources. Then, we present our own experiments with crowdsourcing text corpora and an analysis of the obstacles we encountered. Although the results obtained in terms of participation are still unsatisfactory, we advocate that the effort towards a greater involvement of the speakers should be pursued, especially when the language of interest is newly written. Keywords: text corpora, dialectal variants, spelling, crowdsourcing. Introduction: Speakers from various linguistic communities are increasingly writing their languages (Outinoff, 2012), and the In[…] increasing number of speakers (see for instance the studies on specific languages carried by Rivron (2012) or Soria et al. […]
- Reciprocal Enrichment Between Basque Wikipedia and Machine Translation
Iñaki Alegria, Unai Cabezon, Unai Fernandez de Betoño, Gorka Labaka, Aingeru Mayor, Kepa Sarasola and Arkaitz Zubiaga (Ixa Group, University of the Basque Country UPV/EHU; Basque Wikipedia; Queens College, CUNY, CS Department, Blender Lab, New York). Abstract: In this chapter, we define a collaboration framework that enables Wikipedia editors to generate new articles while they help the development of Machine Translation (MT) systems by providing post-edition logs. This collaboration framework was tested with editors of Basque Wikipedia. Their post-editing of Computer Science articles has been used to improve the output of a Spanish-to-Basque MT system called Matxin. For the collaboration between editors and researchers, we selected a set of 100 articles from the Spanish Wikipedia. These articles would then be used as the source texts to be translated into Basque using the MT engine. A group of volunteers from Basque Wikipedia reviewed and corrected the raw MT translations. This collaboration ultimately produced two main benefits: (i) the change logs that would potentially help improve the MT engine by using an automated statistical post-editing system, and (ii) the growth of Basque Wikipedia. The results show that this process can improve the accuracy of a rule-based MT (RBMT) system by nearly 10%, benefiting from the post-edition of 50,000 words in the Computer […]
- WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia
Under review as a conference paper at ICLR 2020; anonymous authors (double-blind review). Abstract: We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 85 languages, including several dialects or low-resource languages. We do not limit the extraction process to alignments with English, but systematically consider all possible language pairs. In total, we are able to extract 135M parallel sentences for 1620 different language pairs, out of which only 34M are aligned with English. This corpus of parallel sentences is freely available. To get an indication of the quality of the extracted bitexts, we train neural MT baseline systems on the mined data only for 1886 language pairs, and evaluate them on the TED corpus, achieving strong BLEU scores for many language pairs. The WikiMatrix bitexts seem to be particularly interesting for training MT systems between distant languages without the need to pivot through English. Introduction: Most of the current approaches in Natural Language Processing (NLP) are data-driven. The size of the resources used for training is often the primary concern, but the quality and a large variety of topics may be equally important. Monolingual texts are usually available in huge amounts for many topics and languages. However, multilingual resources, typically sentences in two languages which are mutual translations, are more limited, in particular when the two languages do not involve English. An important source of parallel texts are international organizations like the European Parliament (Koehn, 2005) or the United Nations (Ziemski et al., 2016).
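As a rough illustration of the mining strategy the abstract alludes to (scoring candidate sentence pairs with multilingual sentence embeddings), the snippet below computes a margin-style similarity between two small sets of sentence vectors and keeps the best-scoring pairs above a threshold. The vectors here are random stand-ins; in practice they would come from a multilingual encoder, and the actual WikiMatrix pipeline uses its own thresholds, neighborhood handling, and filtering, so this is only a sketch of the idea.

```python
# Sketch of margin-based bitext mining over sentence embeddings
# (toy vectors; a real pipeline would embed sentences with a
# multilingual encoder and use approximate nearest-neighbor search).
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 16))   # embeddings of sentences in language A
tgt = rng.normal(size=(7, 16))   # embeddings of sentences in language B

def normalize(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

src, tgt = normalize(src), normalize(tgt)
cos = src @ tgt.T                # cosine similarity matrix (5 x 7)

k = 3                            # neighborhood size for the margin
# Average similarity to the k most similar candidates, in both directions.
nn_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1, keepdims=True)
nn_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0, keepdims=True)

# Ratio margin: how much a candidate pair stands out from the
# "background" similarity of its neighborhoods.
margin = cos / ((nn_src + nn_tgt) / 2)

threshold = 1.05                 # hypothetical mining threshold
for i, j in zip(*np.where(margin > threshold)):
    print(f"candidate pair: src sentence {i} <-> tgt sentence {j}, "
          f"margin={margin[i, j]:.3f}")
```

The margin normalization is what lets a single threshold be applied across very different language pairs, which is why this family of methods scales to mining without pivoting through English.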
- EAB Update, No. 53, April–June 2011
EAB Update / Ĝisdate (www.esperanto-gb.org), a quarterly organ of the Esperanto Association of Britain (Kvaronjara organo de Esperanto-Asocio de Britio), No. 53, April–June 2011, ISSN 1741-4679. Masthead motto: international communication without discrimination, respect for all cultures. Highlights: a successful Twitter campaign celebrates Zamenhof Day (see page 7); a prestigious article in an Israeli magazine (see page 11). Trustees' Annual Report 2009–10: The trustees are pleased to present their report with the financial statements of the charity for the year ended 31 October 2010. A full copy of the report and financial statements may be obtained from Esperanto House. The Management Committee of the Association are the trustees of the charity. Those who served from 1 November 2009 to 31 October 2010 were: John Wells (president), Edmund Grimley Evans (vice-president), David Kelso (hon. secretary), Joyce Bunting (hon. treasurer), Geoffrey Greatrex and Clare Hunter; David Bisset and Jean Bisset served until 16 May 2010. Public benefit: in setting objectives and planning activities the trustees have given consideration to the Charity Commission's general guidelines on public benefit, and in particular to guidance on advancing education. EAB does not exclude anyone for any reason. Staff and appointees: Education & Development Co-ordinator: Angela Tellier; office administrator: Viv O'Dunne; hon. librarian: Geoffrey King; webmaster: Bill Walker; editor of EAB Update: Geoffrey Sutton; editor of La Brita Esperantisto: Paul Gubbins; accountant & independent examiner: N. Brooks, Kingston Smith, 60 Goswell Road, London, EC1M 7AD.
- The Future of the Past: A Case Study on the Representation of the Holocaust on Wikipedia, 2002–2014
Rudolf den Hartogh, master thesis, Global History and International Relations, Erasmus School of History, Culture and Communication (ESHCC), Erasmus University Rotterdam, July 2014. Supervisor: prof. dr. Kees Ribbens; co-reader: dr. Robbert-Jan Adriaansen. Cover page images: Wikipedia-logo_inverse.png (2007) by Wikipedia user Nohat (concept by Wikipedian Paulussmagnus), retrieved from http://commons.wikimedia.org (2-12-2013); Holocaust-Victim.jpg (2013), photographer unknown, retrieved from http://www.myinterestingfacts.com/ (2-12-2013). The thesis is dedicated to the memory of the author's grandmother, Maagje den Hartogh-Bos. Abstract: Since its creation in 2001, Wikipedia, 'the online free encyclopedia that anyone can edit', has increasingly become a subject of debate among historians due to its radical departure from traditional history scholarship. The medium democratized the field of knowledge production to an extent that has not been experienced before, and it was the incredible popularity of the medium that triggered historians to raise questions about the quality of the online reference work and its implications for the historian's craft. However, despite a vast body of research devoted to the digital encyclopedia, no consensus has yet been reached in these debates due to a general lack of empirical research.
- Analyzing Wikidata Transclusion on English Wikipedia
Isaac Johnson (Wikimedia Foundation). Abstract: Wikidata is steadily becoming more central to Wikipedia, not just in maintaining interlanguage links, but in automated population of content within the articles themselves. It is not well understood, however, how widespread this transclusion of Wikidata content is within Wikipedia. This work presents a taxonomy of Wikidata transclusion from the perspective of its potential impact on readers and an associated in-depth analysis of Wikidata transclusion within English Wikipedia. It finds that Wikidata transclusion that impacts the content of Wikipedia articles happens at a much lower rate (5%) than previous statistics had suggested (61%). Recommendations are made for how to adjust current tracking mechanisms of Wikidata transclusion to better support metrics and patrollers in their evaluation of Wikidata transclusion. Keywords: Wikidata, Wikipedia, patrolling. Introduction: This transclusion of Wikidata content within Wikipedia can help to reduce maintenance of certain facts and links by shifting the burden of maintaining up-to-date, referenced material from each individual Wikipedia to a single repository, Wikidata. Current best estimates suggest that, as of August 2020, 62% of Wikipedia articles across all languages transclude Wikidata content. This statistic ranges from Arabic Wikipedia (arwiki) and Basque Wikipedia (euwiki), where nearly 100% of articles transclude Wikidata content in some form, to Japanese Wikipedia (jawiki) at 38% of articles, and many small wikis that lack any Wikidata transclusion.
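To make the notion of "transclusion" concrete, here is a small, assumption-laden sketch of one way direct Wikidata usage can be spotted in an article's wikitext: scanning for the Wikibase parser functions {{#property:...}} and {{#statements:...}}. This only catches explicit calls in the page source; as the paper notes, much transclusion happens indirectly through templates and Lua modules, which a simple check like this would miss, and it is not the paper's actual measurement method.

```python
# Sketch: flag wikitext that directly transcludes Wikidata via the
# Wikibase parser functions. Indirect transclusion through templates
# or Lua modules is deliberately out of scope here.
import re

DIRECT_TRANSCLUSION = re.compile(r"\{\{\s*#(property|statements)\s*:", re.IGNORECASE)

def directly_transcludes_wikidata(wikitext: str) -> bool:
    """Return True if the page source itself calls #property: or #statements:."""
    return bool(DIRECT_TRANSCLUSION.search(wikitext))

# Hypothetical page sources for illustration
with_call = "'''Example''' is a town. Population: {{#property:P1082}}."
without_call = "'''Example''' is a town. {{Infobox settlement}}"

print(directly_transcludes_wikidata(with_call))     # True
print(directly_transcludes_wikidata(without_call))  # False (misses template-level use)
```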