Is Wikipedia Really Neutral? A Sentiment Perspective Study of War-Related Wikipedia Articles Since 1945


Yiwei Zhou, Alexandra I. Cristea and Zachary Roberts
Department of Computer Science, University of Warwick, Coventry, United Kingdom
{Yiwei.Zhou, A.I.Cristea, [email protected]

PACLIC 29: 29th Pacific Asia Conference on Language, Information and Computation, pages 160-168, Shanghai, China, October 30 - November 1, 2015. Copyright 2015 by Yiwei Zhou, Alexandra Cristea and Zachary Roberts.

Abstract

Wikipedia is supposed to be supporting the "Neutral Point of View". Instead of accepting this statement as a fact, the current paper analyses its veracity by specifically analysing a typically controversial (negative) topic, such as war, and answering questions such as "Are there sentiment differences in how Wikipedia articles in different languages describe the same war?". This paper tackles this challenge by proposing an automatic methodology based on article level and concept level sentiment analysis of multilingual Wikipedia articles. The results obtained so far show that reasons such as people's feelings of involvement and empathy can lead to sentiment expression differences across multilingual Wikipedia on war-related topics; the more people contribute to an article on a war-related topic, the more extreme sentiment the article will express; and different cultures focus on different concepts about the same war and present different sentiments towards them. Moreover, our research provides a framework for performing different levels of sentiment analysis on multilingual texts.

1 Introduction

Wikipedia is the largest and most widely used encyclopaedia in collaborative knowledge building (Medelyan et al., 2009). Since its start in 2001, it has grown to contain more than 33 million articles in more than 200 languages, of which only about 4 million are in English (http://meta.wikimedia.org/wiki/List_of_Wikipedias). Possible sources for the content include books, journal articles, newspapers, webpages, sound recordings, etc. (http://en.wikipedia.org/wiki/Wikipedia:Citing_sources). Although a "Neutral point of view" (NPOV) is Wikipedia's core content policy (http://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view), we believe sentiment expression is inevitable in this user-generated content. Already in (Greenstein and Zhu, 2012), researchers raised doubts about Wikipedia's neutrality, pointing out that "Wikipedia achieves something akin to a NPOV across articles, but not necessarily within them". Moreover, people of different language backgrounds have different cultures and sources of information. These differences are reflected in the style of contributions (Pfeil et al., 2006) and the type of information covered (Callahan and Herring, 2011). Furthermore, Wikipedia webpages are actually allowed to contain opinions, as long as these come from reliable authors (http://en.wikipedia.org/wiki/Wikipedia:Identifying_reliable_sources). Due to this openness to multiple forms of contribution, the articles on Wikipedia can be viewed as a summarisation of thoughts in multiple languages about specific topics. Automatically detecting and measuring the differences between them can be crucial in many applications: public relations departments can get useful suggestions from Wikipedia about topics close to their hearts; Wikipedia readers can gain insights into what people speaking other languages think about the same topic; and Wikipedia administrators can quickly locate the Wikipedia articles that express extreme sentiment, to better apply the NPOV policy by eliminating some edits.

To gain further insight into these matters, and especially into the degree of neutrality on given topics presented in different languages, we explore an approach that can perform multiple levels of sentiment analysis on multilingual Wikipedia articles. We generate graded sentiment analysis results for multilingual articles, and attribute sentiment to concepts, to analyse the sentiment that one specific named entity is involved in. For the sake of simplicity, we restrict our scenario to war-related topics, although our approach can easily be applied to other domains. Our results show that even though the overall sentiment polarities of multilingual Wikipedia articles on the same war-related topic are consistent, the strength of sentiment expression varies from language to language.

The remainder of the paper is structured as follows. In Section 2, we present an overview of different approaches to sentiment analysis. Section 3 describes the approach selected in this research to perform article level and concept level sentiment analysis on multilingual Wikipedia articles. In Section 4, experimental results are presented and analysed, and in Section 5, we conclude with the major findings and remarks for further research.

2 Related Research

Researchers have been addressing the problem of sentiment analysis of user-generated content mainly at three levels of granularity: sentence level, article level and concept level.

The most common level of sentiment analysis is the sentence level, which has laid the ground for the other two. Its basic assumption is that each sentence has only one target concept.

Article level (or document level) sentiment analysis is often used on product reviews, news and blogs, where it is believed there is only one target concept in the whole article. Our research performs article level sentiment analysis of Wikipedia articles. We believe this is applicable here, as Wikipedia webpages are structured with a topic as the title and the body of the webpage as the corresponding description of that topic. There are mainly two directions for article level sentiment analysis: analysis of the whole article, or analysis of the subjective parts only. By extracting the subjective sentences of an article, a classifier can not only achieve higher efficiency, because of the shorter length, but also higher accuracy, by leaving out the 'noise'. We thus choose to first extract the possibly subjective parts of the articles.

More recently, many researchers have realised that multiple sentences may express sentiment about the same concept, or that one sentence may contain sentiment towards different concepts. As a result, concept level (or aspect level) sentiment analysis has attracted more and more attention. Researchers have proposed two approaches to extracting concepts. The first is to manually create a list of interesting concepts before the analysis (Singh et al., 2013). The second is to extract candidate concepts from the object content automatically (Mudinas et al., 2012). As different Wikipedia articles mention different concepts, it is impossible to pre-create the concept list without reading all the articles. We thus choose to automatically extract the named entities in the subjective sentences as concepts.

3 Methodology

3.1 Overview

We employ war-related topics in Wikipedia as counter-examples to refute the statement that 'Wikipedia is neutral'. Based on our choice of approaches (see Section 2), we build a straightforward processing pipeline (Figure 1), briefly sketched below, with details in the subsequent sub-sections.

First, we retrieve the related topic name based on some input keywords. After that, the Wikipedia webpages in all available languages on this topic are downloaded. Because of the diversity of the content in Wikipedia webpages, some data pre-processing, as described in Section 3.2, is needed in order to acquire plain descriptive text. We further translate the plain descriptive text into English (see notes in Section 3.2 on accuracy and the estimated errors introduced) for further processing. To extract the subjective content from each translated article, we tokenise the article into sentences, and then perform subjectivity analysis on each sentence, as described in Section 3.3. As mentioned already, following prior research (Pang and Lee, 2004), only the subjective sentences are retained, while the rest are discarded. We then leverage English sentiment analysis resources to measure the sentiment score of each subjective sentence, and utilise named entity extraction tools to extract the named entities as the target concepts in that subjective sentence. We calculate the article level sentiment scores, as described in Section 4.1, as well as the concept level sentiment scores, as described in Section 3.5, with both being based on the sentence level sentiment scores.

[Figure 1: Processing Pipeline. Keywords -> webpage extraction -> data pre-processing -> machine translation -> subjectivity analysis -> article level and concept level sentiment analysis.]

While sentiment analysis resources for English are well implemented, the same cannot be said for other languages. To close the gap between other languages and the English sentiment analysis resources, we apply machine translation to the texts in other languages. To date, machine translation techniques are well developed; products such as Google Translate and Bing Translator are widely used in academic (Bautin et al., 2008; Wan, 2009) and business contexts. In (Balahur and Turchi, 2014), researchers pointed out that machine translation techniques have reached a reasonable level …
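To make the pipeline concrete, below is a minimal sketch of its per-article core: sentence splitting, subjectivity filtering, sentence level sentiment scoring, named entity extraction, and score aggregation. This is not the authors' implementation; the excerpt above leaves the exact tools unspecified, so the sketch assumes NLTK's VADER analyser as the English sentiment resource, spaCy's small English model as the named entity extractor, a simple non-neutrality threshold in place of the Pang and Lee (2004) subjectivity classifier, and plain averaging for the article level and concept level scores (Sections 4.1 and 3.5 are not shown in the excerpt).

# Minimal sketch of the article/concept level sentiment pipeline.
# Assumptions (not from the paper): VADER, spaCy NER and mean aggregation
# stand in for the paper's unspecified components.
from collections import defaultdict
from statistics import mean

import nltk
import spacy
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("punkt", quiet=True)
nltk.download("vader_lexicon", quiet=True)

nlp = spacy.load("en_core_web_sm")  # assumes: python -m spacy download en_core_web_sm
sia = SentimentIntensityAnalyzer()

def analyse_article(translated_text, subjectivity_threshold=0.3):
    """Score one machine-translated article.

    Returns (article_score, concept_scores): the article level score and a
    dict mapping each named entity to the mean score of the subjective
    sentences that mention it.
    """
    sentence_scores = []
    concept_scores = defaultdict(list)
    for sentence in nltk.sent_tokenize(translated_text):
        polarity = sia.polarity_scores(sentence)
        # Crude stand-in for the subjectivity analysis step: treat the
        # non-neutral mass of the sentence as its subjectivity and discard
        # sentences below the threshold (the paper instead follows
        # Pang and Lee, 2004).
        if 1.0 - polarity["neu"] < subjectivity_threshold:
            continue
        score = polarity["compound"]  # sentence level score in [-1, 1]
        sentence_scores.append(score)
        # Named entities in the subjective sentence become target concepts.
        for entity in nlp(sentence).ents:
            concept_scores[entity.text].append(score)
    article_score = mean(sentence_scores) if sentence_scores else 0.0
    return article_score, {c: mean(s) for c, s in concept_scores.items()}

article_score, concepts = analyse_article(
    "The invasion was widely condemned. Many civilians suffered terribly. "
    "The treaty was signed in Paris in 1947."
)
print(article_score, concepts)

Running the same function over each language edition of an article (after translation) then yields per-language article and concept scores that can be compared to surface the cross-language sentiment differences the paper studies.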
Recommended publications
• A Survey of Orthographic Information in Machine Translation
A Survey of Orthographic Information in Machine Translation. Bharathi Raja Chakravarthi, Priya Rani, Mihael Arcan and John P. McCrae. Abstract: Machine translation is one of the applications of natural language processing which has been explored in different languages. Recently, researchers have started paying attention to machine translation for resource-poor languages and closely related languages. A widespread and underlying problem for these machine translation systems is the variation in orthographic conventions, which causes many issues for traditional approaches. Two languages written in two different orthographies are not easily comparable, but orthographic information can also be used to improve the machine translation system. This article offers a survey of research regarding orthography's influence on machine translation of under-resourced languages. It introduces under-resourced languages in terms of machine translation and how orthographic information can be utilised to improve machine translation. We describe previous work in this area, discussing what underlying assumptions were made, and showing how orthographic knowledge improves the performance of machine translation of under-resourced languages. We discuss different types of machine translation and demonstrate a recent trend that seeks to link orthographic information with well-established machine translation methods. Considerable attention is given to current efforts on cognate information at different levels of machine translation and the lessons that can …
  • A Topic-Aligned Multilingual Corpus of Wikipedia Articles for Studying Information Asymmetry in Low Resource Languages
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 2373-2380, Marseille, 11-16 May 2020. © European Language Resources Association (ELRA), licensed under CC-BY-NC. A Topic-Aligned Multilingual Corpus of Wikipedia Articles for Studying Information Asymmetry in Low Resource Languages. Dwaipayan Roy, Sumit Bhatia, Prateek Jain. GESIS - Cologne, IBM Research - Delhi, IIIT - Delhi. [email protected], [email protected], [email protected]. Abstract: Wikipedia is the largest web-based open encyclopedia covering more than three hundred languages. However, different language editions of Wikipedia differ significantly in terms of their information coverage. We present a systematic comparison of information coverage in English Wikipedia (most exhaustive) and Wikipedias in eight other widely spoken languages (Arabic, German, Hindi, Korean, Portuguese, Russian, Spanish and Turkish). We analyze the content present in the respective Wikipedias in terms of the coverage of topics as well as the depth of coverage of topics included in these Wikipedias. Our analysis quantifies and provides useful insights about the information gap that exists between different language editions of Wikipedia and offers a roadmap for the Information Retrieval (IR) community to bridge this gap. Keywords: Wikipedia, Knowledge base, Information gap
  • Modeling Popularity and Reliability of Sources in Multilingual Wikipedia
Modeling Popularity and Reliability of Sources in Multilingual Wikipedia. Włodzimierz Lewoniewski *, Krzysztof Węcel and Witold Abramowicz. Department of Information Systems, Poznań University of Economics and Business, 61-875 Poznań, Poland; [email protected] (K.W.); [email protected] (W.A.). * Correspondence: [email protected]. Received: 31 March 2020; Accepted: 7 May 2020; Published: 13 May 2020. Abstract: One of the most important factors impacting the quality of content in Wikipedia is the presence of reliable sources. By following references, readers can verify facts or find more details about the described topic. A Wikipedia article can be edited independently in any of over 300 languages, even by anonymous users; therefore, information about the same topic may be inconsistent. This also applies to the use of references in different language versions of a particular article, so the same statement can have different sources. In this paper we analyzed over 40 million articles from the 55 most developed language versions of Wikipedia to extract information about over 200 million references and find the most popular and reliable sources. We present 10 models for the assessment of the popularity and reliability of the sources, based on analysis of meta information about the references in Wikipedia articles, page views and authors of the articles. Using DBpedia and Wikidata we automatically identified the alignment of the sources to a specific domain. Additionally, we analyzed the changes of popularity and reliability in time and identified growth leaders in each of the considered months. The results can be used for quality improvements of the content in different language versions of Wikipedia.
• Omnipedia: Bridging the Wikipedia Language Gap
Omnipedia: Bridging the Wikipedia Language Gap. Patti Bao*†, Brent Hecht†, Samuel Carton†, Mahmood Quaderi†, Michael Horn†§, Darren Gergle*†. *Communication Studies, †Electrical Engineering & Computer Science, §Learning Sciences, Northwestern University. {patti,brent,sam.carton,quaderi}@u.northwestern.edu, {michael-horn,dgergle}@northwestern.edu

ABSTRACT: We present Omnipedia, a system that allows Wikipedia readers to gain insight from up to 25 language editions of Wikipedia simultaneously. Omnipedia highlights the similarities and differences that exist among Wikipedia language editions, and makes salient information that is unique to each language as well as that which is shared more widely. We detail solutions to numerous front-end and algorithmic challenges inherent to providing users with a multilingual Wikipedia experience. These include visualizing content in a language-neutral way and aligning data in the face of diverse information organization strategies. We present a study of Omnipedia that characterizes how people interact with information using a multilingual lens.

Each language edition contains its own cultural viewpoints on a large number of topics [7, 14, 15, 27]. On the other hand, the language barrier serves to silo knowledge [2, 4, 33], slowing the transfer of less culturally imbued information between language editions and preventing Wikipedia's 422 million monthly visitors [12] from accessing most of the information on the site. In this paper, we present Omnipedia, a system that attempts to remedy this situation at a large scale. It reduces the silo effect by providing users with structured access in their native language to over 7.5 million concepts from up to 25 language editions of Wikipedia. At the same time, it highlights similarities and differences between each of the …
• Classifying Bias in Large Multilingual Corpora via Crowdsourcing and Topic Modeling
ABSTRACT. Title of Thesis: CLASSIFYING BIAS IN LARGE MULTILINGUAL CORPORA VIA CROWDSOURCING AND TOPIC MODELING. Team BIASES: Brianna Caljean, Katherine Calvert, Ashley Chang, Elliot Frank, Rosana Garay Jáuregui, Geoffrey Palo, Ryan Rinker, Gareth Weakly, Nicolette Wolfrey, William Zhang. Thesis Directed By: Dr. David Zajic, Ph.D. Our project extends previous algorithmic approaches to finding bias in large text corpora. We used multilingual topic modeling to examine language-specific bias in the English, Spanish, and Russian versions of Wikipedia. In particular, we placed Spanish articles discussing the Cold War on a Russian-English viewpoint spectrum based on similarity in topic distribution. We then crowdsourced human annotations of Spanish Wikipedia articles for comparison to the topic model. Our hypothesis was that human annotators and topic modeling algorithms would provide correlated results for bias. However, that was not the case. Our annotators indicated that humans were more perceptive of sentiment in article text than topic distribution, which suggests that our classifier provides a different perspective on a text's bias. Thesis submitted in partial fulfillment of the requirements of the Gemstone Honors Program, University of Maryland, 2018. Advisory Committee: Dr. David Zajic (Chair), Dr. Brian Butler, Dr. Marine Carpuat, Dr. Melanie Kill, Dr. Philip Resnik, Mr. Ed Summers.
  • Multilingual Ranking of Wikipedia Articles with Quality and Popularity Assessment in Different Topics
Multilingual Ranking of Wikipedia Articles with Quality and Popularity Assessment in Different Topics. Włodzimierz Lewoniewski *, Krzysztof Węcel and Witold Abramowicz. Department of Information Systems, Poznań University of Economics and Business, 61-875 Poznań, Poland. * Correspondence: [email protected]; Tel.: +48-(61)-639-27-93. Received: 10 May 2019; Accepted: 13 August 2019; Published: 14 August 2019. Abstract: On Wikipedia, articles about various topics can be created and edited independently in each language version. Therefore, the quality of information about the same topic depends on the language. Any interested user can improve an article, and that improvement may depend on the popularity of the article. The goal of this study is to show which topics are best represented in different language versions of Wikipedia, using results of quality assessment for over 39 million articles in 55 languages. In this paper, we also analyze how popular selected topics are among readers and authors in various languages. We used two approaches to assign articles to various topics. First, we selected 27 main multilingual categories and analyzed all their connections with sub-categories based on information extracted from over 10 million categories in 55 language versions. To classify the articles into one of the 27 main categories, we took into account over 400 million links from articles to over 10 million categories and over 26 million links between categories. In the second approach, we used data from DBpedia and Wikidata. We also showed how the results of the study can be used to build local and global rankings of the Wikipedia content.
  • The Culture of Wikipedia
Good Faith Collaboration: The Culture of Wikipedia. Joseph Michael Reagle Jr. Foreword by Lawrence Lessig. The MIT Press, Cambridge, MA. Web edition, Copyright © 2011 by Joseph Michael Reagle Jr., CC-NC-SA 3.0.

Wikipedia's style of collaborative production has been lauded, lambasted, and satirized. Despite unease over its implications for the character (and quality) of knowledge, Wikipedia has brought us closer than ever to a realization of the centuries-old pursuit of a universal encyclopedia. Good Faith Collaboration: The Culture of Wikipedia is a rich ethnographic portrayal of Wikipedia's historical roots, collaborative culture, and much debated legacy.

Contents: Foreword; Preface to the Web Edition; Praise for Good Faith Collaboration; Preface; Extended Table of Contents; 1. Nazis and Norms; 2. The Pursuit of the Universal Encyclopedia; 3. Good Faith Collaboration; 4. The Puzzle of Openness.

"Reagle offers a compelling case that Wikipedia's most fascinating and unprecedented aspect isn't the encyclopedia itself — rather, it's the collaborative culture that underpins it: brawling, self-reflexive, funny, serious, and full-tilt committed to the project, even if it means setting aside personal differences. Reagle's position as a scholar and a member of the community makes him uniquely situated to describe this culture." —Cory Doctorow, Boing Boing

"Reagle provides ample data regarding the everyday practices and cultural norms of the community which collaborates to produce Wikipedia. His rich research and nuanced appreciation of the complexities of cultural digital media research are well presented."
  • An Analysis of Contributions to Wikipedia from Tor
Are anonymity-seekers just like everybody else? An analysis of contributions to Wikipedia from Tor. Chau Tran (Department of Computer Science & Engineering, New York University, New York, USA), Kaylea Champion (Department of Communication, University of Washington, Seattle, USA), Andrea Forte (College of Computing & Informatics, Drexel University, Philadelphia, USA), Benjamin Mako Hill (Department of Communication, University of Washington, Seattle, USA), Rachel Greenstadt (Department of Computer Science & Engineering, New York University, New York, USA). [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract—User-generated content sites routinely block contributions from users of privacy-enhancing proxies like Tor because of a perception that proxies are a source of vandalism, spam, and abuse. Although these blocks might be effective, collateral damage in the form of unrealized valuable contributions from anonymity seekers is invisible. One of the largest and most important user-generated content sites, Wikipedia, has attempted to block contributions from Tor users since as early as 2005. We demonstrate that these blocks have been imperfect and that thousands of attempts to edit on Wikipedia through Tor have been successful. We draw upon several data sources and analytical techniques to measure and describe the history of Tor editing on Wikipedia over time and to compare contributions from Tor users to those from other groups of Wikipedia users. Our analysis suggests that although Tor users who slip through Wikipedia's ban contribute content that is more likely to be reverted and to revert others, their contributions are otherwise similar in quality to those from other unregistered participants and to the initial contributions of registered users.

[Fig. 1: Screenshot of the page a user is shown when they attempt to edit the Wikipedia article on "Privacy" while using Tor.]
• Cross-Lingual Entity Extraction and Linking for 300 Languages
CROSS-LINGUAL ENTITY EXTRACTION AND LINKING FOR 300 LANGUAGES. By Xiaoman Pan. © 2020 Xiaoman Pan. Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate College of the University of Illinois at Urbana-Champaign, 2020. Urbana, Illinois. Doctoral Committee: Dr. Heng Ji (Chair), Dr. Jiawei Han, Dr. Hanghang Tong, Dr. Kevin Knight.

ABSTRACT: Information provided in languages which people can understand saves lives in crises. For example, the language barrier was one of the main difficulties faced by humanitarian workers responding to the Ebola crisis in 2014. We propose to break language barriers by extracting information (e.g., entities) from a massive variety of languages and grounding the information into an existing Knowledge Base (KB) which is accessible to a user in their own language (e.g., a reporter from the World Health Organization who speaks English only). The ambitious goal of this thesis is to develop a Cross-lingual Entity Extraction and Linking framework for 1,000 fine-grained entity types and 300 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify entity name mentions, assign a fine-grained type to each mention, and link it to an English KB if it is linkable. Traditional entity linking methods rely on costly human-annotated data to train supervised learning-to-rank models to select the best candidate entity for each mention. In contrast, we propose a novel unsupervised represent-and-compare approach that can accurately capture the semantic meaning representation of each mention, and directly compare its representation with the representation of each candidate entity in the target KB.
• How to Contribute Climate Change Information to Wikipedia: A Guide
HOW TO CONTRIBUTE CLIMATE CHANGE INFORMATION TO WIKIPEDIA. Emma Baker, Lisa McNamara, Beth Mackay, Katharine Vincent. © 2021, CDKN. This work is licensed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/legalcode), which permits unrestricted use, distribution, and reproduction, provided the original work is properly credited. IDRC Grant / Subvention du CRDI: 108754-001 (CDKN knowledge accelerator for climate compatible development).

How to contribute climate change information to Wikipedia: a guide for researchers, practitioners and communicators.

Contents: About this guide; 1. Why Wikipedia is an important tool to communicate climate change information; 1.1 Enhancing the quality of online climate change information; 1.2 Sharing your work more widely; 1.3 Why researchers should …
  • Semantically Annotated Snapshot of the English Wikipedia
Semantically Annotated Snapshot of the English Wikipedia. Jordi Atserias, Hugo Zaragoza, Massimiliano Ciaramita, Giuseppe Attardi. Yahoo! Research Barcelona, C/Ocata 1, Barcelona 08003, Spain; U. Pisa, on sabbatical at Yahoo! Research. {jordi, hugoz, massi}@yahoo-inc.com, [email protected]

Abstract: This paper describes SW1, the first version of a semantically annotated snapshot of the English Wikipedia. In recent years Wikipedia has become a valuable resource for both the Natural Language Processing (NLP) community and the Information Retrieval (IR) community. Although NLP technology for processing Wikipedia already exists, not all researchers and developers have the computational resources to process such a volume of information. Moreover, the use of different versions of Wikipedia processed differently might make it difficult to compare results. The aim of this work is to provide easy access to syntactic and semantic annotations for researchers of both the NLP and IR communities by building a reference corpus to homogenize experiments and make results comparable. These resources, a semantically annotated corpus and an "entity containment" derived graph, are licensed under the GNU Free Documentation License and available from http://www.yr-bcn.es/semanticWikipedia.

1. Introduction: Wikipedia, the largest electronic encyclopedia, has become a widely used resource for different Natural Language Processing tasks, e.g. Word Sense Disambiguation (Mihalcea, 2007), Semantic Relatedness (Gabrilovich and Markovitch, 2007) or the Multilingual Question Answering task at the Cross-Language Evaluation Forum (CLEF).

2. Processing: Starting from the XML Wikipedia source we carried out a number of data processing steps:
• Basic preprocessing: stripping the text from the XML tags and dividing the obtained text into sentences.
• The Case of Croatian Wikipedia: Encyclopaedia of Knowledge or Encyclopaedia for the Nation?
The Case of Croatian Wikipedia: Encyclopaedia of Knowledge or Encyclopaedia for the Nation?

Authorial statement: This report represents the evaluation of the Croatian disinformation case by an external expert on the subject matter, who, after conducting a thorough analysis of the Croatian community setting, provides three recommendations to address the ongoing challenges. The views and opinions expressed in this report are those of the author and do not necessarily reflect the official policy or position of the Wikimedia Foundation. The Wikimedia Foundation is publishing the report for transparency.

Executive Summary: Croatian Wikipedia (Hr.WP) has been struggling with content and conduct-related challenges, causing repeated concerns in the global volunteer community for more than a decade. With support of the Wikimedia Foundation Board of Trustees, the Foundation retained an external expert to evaluate the challenges faced by the project. The evaluation, conducted between February and May 2021, sought to assess whether there have been organized attempts to introduce disinformation into Croatian Wikipedia and whether the project has been captured by ideologically driven users who are structurally misaligned with Wikipedia's five pillars guiding the traditional editorial project setup of the Wikipedia projects. Croatian Wikipedia represents the Croatian standard variant of the Serbo-Croatian language. Unlike other pluricentric Wikipedia language projects, such as English, French, German, and Spanish, Serbo-Croatian Wikipedia's community was split up into Croatian, Bosnian, Serbian, and the original Serbo-Croatian wikis starting in 2003. The report concludes that this structure enabled local language communities to sort by points of view on each project, often falling along political party lines in the respective regions.