Cultural Neighbourhoods, Or Approaches to Quantifying Cultural Contextualisation in Multilingual Knowledge Repository Wikipedia


by Anna Samoilenko

Approved dissertation thesis for the partial fulfillment of the requirements for a Doctor of Natural Sciences (Dr. rer. nat.), Fachbereich 4: Informatik, Universität Koblenz-Landau.

Chair of PhD Board: Prof. Dr. Ralf Lämmel
Chair of PhD Commission: Prof. Dr. Stefan Müller
Examiner and Supervisor: Prof. Dr. Steffen Staab
Further Examiners: Prof. Dr. Brent Hecht, Jun.-Prof. Dr. Tobias Krämer
Date of the doctoral viva: 16 June 2021

Abstract

As a multilingual system, Wikipedia poses many challenges for academics and engineers alike. One such challenge is the cultural contextualisation of Wikipedia content, and the lack of approaches to quantify it effectively. In addition, there is little effort to establish sound computational practices and frameworks for measuring cultural variations in the data. Current approaches are mostly dictated by data availability, which makes them difficult to apply in other contexts. Another common drawback is that they rarely scale, owing to the significant qualitative or translation effort they require.

To address these limitations, this thesis develops and tests two modular quantitative approaches. They are aimed at quantifying culture-related phenomena in systems which rely on multilingual user-generated content. In particular, they make it possible to: (1) operationalise a custom concept of culture in a system; (2) quantify and compare culture-specific content or coverage biases in such a system; and (3) map a large-scale landscape of shared cultural interests and focal points.

Empirical validation of these approaches is split into two parts. First, an approach to mapping Wikipedia communities of shared co-editing interests is validated on two large Wikipedia datasets comprising multilateral geopolitical and linguistic editor communities. Both datasets reveal measurable clusters of consistent co-editing interest, and computationally confirm that these clusters correspond to existing colonial, religious, socio-economic, and geographical ties. Second, an approach to quantifying content differences is validated on a multilingual Wikipedia dataset and a multi-platform (Wikipedia and Encyclopedia Britannica) dataset. Both are limited to a selected knowledge domain: national history. This analysis makes it possible, for the first time at large scale, to quantify and visualise the distribution of historical focal points in articles on national histories. All results are cross-validated either by domain experts or by external datasets.

Main thesis contributions. This thesis: (1) presents an effort to formalise the process of measuring cultural variations in user-generated data; (2) introduces and tests two novel approaches to quantifying cultural contextualisation in multilingual data; (3) synthesises a valuable overview of the literature on defining and quantifying culture; (4) provides important empirical insights into the effect of culture on Wikipedia content and coverage, demonstrating that Wikipedia is not context-free and that these differences should not be treated as noise but rather as an important feature of the data;
(5) makes practical service contributions through sharing data and visualisations.

Zusammenfassung

As a multilingual system, Wikipedia poses many challenges for academics and engineers alike. One of these challenges is the cultural contextualisation of Wikipedia content and the lack of approaches to quantify it effectively. Moreover, there appears to be little intent to establish sound computational practices and frameworks for measuring cultural variations in the data. Current approaches seem to be dictated mainly by data availability, which makes them difficult to apply in other contexts. Another common drawback is that they rarely scale because of the considerable qualitative or translation effort involved.

To address these limitations, this thesis develops and tests two modular quantitative approaches. They aim to quantify culture-related phenomena in systems that rely on multilingual, user-generated content. In particular, they make it possible to: (1) operationalise a custom concept of culture in a system; (2) quantify and compare culture-specific content or coverage biases in such a system; and (3) map a large-scale landscape of shared cultural interests and focal points.

The empirical validation of these approaches is divided into two parts. First, an approach to mapping Wikipedia communities with shared editing interests is validated on two large Wikipedia datasets comprising multilateral geopolitical and linguistic editor communities. Both datasets show measurable clusters of consistent co-editing interests and confirm computationally that these clusters correspond to existing colonial, religious, socio-economic, and geographical ties. Second, an approach to quantifying content differences is validated on a multilingual Wikipedia dataset and a multi-platform dataset (Wikipedia and Encyclopedia Britannica). Both are restricted to a selected knowledge domain: national history. This analysis makes it possible, for the first time at large scale, to quantify and visualise the distribution of historical focal points in articles on national history. All results are cross-validated either by domain experts or by external datasets.

Main contributions of the dissertation. This dissertation: (1) represents an attempt to formalise the process of measuring cultural variations in user-generated data; (2) introduces and tests two new approaches to quantifying cultural contextualisation in multilingual data; (3) provides a valuable overview of the literature on defining and quantifying culture; (4) delivers important empirical insights into the effect of culture on the content and coverage of Wikipedia, showing that Wikipedia is not context-free and that these differences should not be treated as noise but as an important feature of the data; (5) makes a practical contribution by sharing data and visualisations.
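As a rough illustration of the first approach, the sketch below clusters a handful of hypothetical editor communities by the similarity of their co-editing interest profiles. The edit-count matrix, the cosine distance, and the average-linkage clustering are simplifying assumptions for illustration only; the thesis itself formalises shared interest statistically (see Chapter 3).

```python
# Minimal sketch: clustering editor communities by shared co-editing interests.
# The edit counts, cosine distance, and average-linkage clustering are illustrative
# placeholders, not the statistical shared-interest model used in the thesis.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

communities = ["de", "fr", "pl", "sv"]          # hypothetical language editions
edits = np.array([                               # edits per article (toy data)
    [120.0, 30.0,  0.0,  5.0],
    [100.0, 25.0,  2.0, 10.0],
    [  3.0,  1.0, 80.0, 60.0],
    [  5.0,  0.0, 70.0, 90.0],
])

# Normalise each community's edits into an interest distribution over articles.
interest = edits / edits.sum(axis=1, keepdims=True)

# Pairwise cosine distances between interest profiles, then hierarchical clustering.
distances = pdist(interest, metric="cosine")
labels = fcluster(linkage(distances, method="average"), t=2, criterion="maxclust")

for name, label in zip(communities, labels):
    print(f"{name}: cluster {label}")
```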
Contents

1 Introduction
  1.1 Research questions, proposed approaches, results
    1.1.1 Mapping communities of shared information interest
    1.1.2 Quantifying content differences in a specific knowledge domain
  1.2 Thesis contributions
  1.3 Publications related to this thesis and author contributions
  1.4 Thesis structure
2 Related work
  2.1 User-generated content (UGC)
  2.2 Motivation for studying Wikipedia
    2.2.1 Wikipedia in education and academic research
    2.2.2 Wikipedia in Computer Science and modern algorithms
    2.2.3 Wikipedia in Business and Economy
    2.2.4 Wikipedia in Public policy
  2.3 Cultural contextualisation of UGC
  2.4 Defining and operationalising culture
  2.5 Approaches to quantifying culture
  2.6 Conclusion
3 Mapping communities of shared information interest
  3.1 Approach: Extracting and understanding ties of shared interests
    3.1.1 Step I: Statistical formalisation of the shared interest model
    3.1.2 Step II: Clustering editor communities of similar interests
    3.1.3 Step III: Understanding shared interests
  3.2 Validation I: Mapping bilateral information interests
    3.2.1 Data collection
    3.2.2 Approach and results
      The world map of information interests
      Interconnections between the clusters
      Bilateral ties within the clusters
      Regression analysis of hypotheses related to the strengths of shared interests
  3.3 Validation II: Linguistic neighbourhoods
    3.3.1 Data collection
    3.3.2 Approach and results
      Testing for non-randomness of co-editing patterns
      The network of shared interests among the language communities
      Explaining the clusters of co-editing interests: Hypothesis formulation
      Bayesian inference – HypTrails
      Frequentist approach – MRQAP
  3.4 Discussion of empirical results
  3.5 Limitations
  3.6 Chapter summary
  3.7 Conclusions and implications
4 Quantifying content differences
Recommended publications
  • Cultural Anthropology Through the Lens of Wikipedia: Historical Leader Networks, Gender Bias, and News-Based Sentiment
    Peter A. Gloor, Joao Marcos, Patrick M. de Boer, Hauke Fuehres, Wei Lo, Keiichi Nemoto (MIT Center for Collective Intelligence), [email protected]

    Abstract: In this paper we study the differences in historical world view between Western and Eastern cultures, represented through the English, Chinese, Japanese, and German Wikipedia. In particular, we analyze the historical networks of the world's leaders since the beginning of written history, comparing them across the different Wikipedias and assessing cultural chauvinism. We also identify the most influential female leaders of all time in the English, German, Spanish, and Portuguese Wikipedia. As an additional lens into the soul of a culture, we compare top terms, sentiment, emotionality, and complexity of the English, Portuguese, Spanish, and German Wikinews.

    1 Introduction: Over the last ten years the Web has become a mirror of the real world (Gloor et al. 2009). More recently, the Web has also begun to influence the real world: societal events such as the Arab Spring and the Chilean student unrest have drawn a large part of their impetus from the Internet and online social networks. In the meantime, Wikipedia has become one of the top ten Web sites, occasionally beating daily newspapers in the timeliness of the most recent news. Be it the resignation of German national soccer team captain Philipp Lahm, or the downing of Malaysian Airlines flight 17 over Ukraine by a guided missile, the corresponding Wikipedia page is updated as soon as the actual event happens (Becker 2012).
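A minimal sketch of the kind of cross-edition comparison described above, assuming each language edition has already been reduced to a directed network of links between leaders' articles; the toy edge lists and the Jaccard overlap are illustrative stand-ins for the paper's actual extraction and analysis.

```python
# Minimal sketch: comparing leader networks extracted from two Wikipedia editions.
# The edge lists are hypothetical; the Jaccard overlap of edges is only a simple
# stand-in for the cross-cultural network comparison described in the abstract.
import networkx as nx

edition_edges = {
    "en": [("Napoleon", "Wellington"), ("Caesar", "Augustus"), ("Qin Shi Huang", "Liu Bang")],
    "zh": [("Qin Shi Huang", "Liu Bang"), ("Caesar", "Augustus")],
}
graphs = {lang: nx.DiGraph(edges) for lang, edges in edition_edges.items()}

def edge_overlap(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    """Share of leader-to-leader links that appear in both editions (Jaccard)."""
    e1, e2 = set(g1.edges), set(g2.edges)
    return len(e1 & e2) / len(e1 | e2)

print(f"en/zh edge overlap: {edge_overlap(graphs['en'], graphs['zh']):.2f}")
```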
  • Compilation of a Swiss German Dialect Corpus and Its Application to PoS Tagging
    Nora Hollenstein (University of Zurich), Noëmi Aepli (University of Zurich), [email protected], [email protected]

    Abstract: Swiss German is a dialect continuum whose dialects are very different from Standard German, the official language of the German-speaking part of Switzerland. However, when dealing with Swiss German in natural language processing, the detour through Standard German is usually taken. As writing in Swiss German has become more and more popular in recent years, we would like to provide data to serve as a stepping stone towards automatically processing the dialects. We compiled NOAH's Corpus of Swiss German Dialects, consisting of various text genres manually annotated with Part-of-Speech tags. Furthermore, we applied this corpus as a training set for a statistical Part-of-Speech tagger and achieved an accuracy of 90.62%.

    1 Introduction: Swiss German is not an official language of Switzerland; rather, it comprises dialects of Standard German, which is one of the four official languages. However, it differs from Standard German in terms of phonetics, lexicon, morphology and syntax. Swiss German is not dividable into a few dialects; in fact, it is a dialect continuum with a huge variety. Swiss German is not only a spoken dialect but is increasingly used in written form, especially in less formal text types. Often, Swiss German speakers write text messages, emails and blogs in Swiss German. In recent years it has become more and more popular and authors are publishing in their own dialect. Nonetheless, there is neither a writing standard nor an official orthography, which increases the variation dramatically, since people write as they please, in their own style.
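A minimal sketch of the training-and-evaluation step described above, using NLTK's n-gram taggers; the two toy hand-tagged Swiss German sentences, the tag choices, and the 80/20 split are placeholders, not the actual NOAH's Corpus setup or the tagger used in the paper.

```python
# Minimal sketch: training a statistical PoS tagger on a small annotated corpus
# and measuring its accuracy. The two toy sentences and the backoff n-gram tagger
# stand in for the NOAH's Corpus experiment described in the abstract.
import nltk

tagged_sentences = [
    [("Ich", "PPER"), ("gah", "VVFIN"), ("hei", "ADV")],   # toy annotations
    [("D", "ART"), ("Chatz", "NN"), ("schlaft", "VVFIN")],
]

split = int(len(tagged_sentences) * 0.8)
train, test = tagged_sentences[:split], tagged_sentences[split:]

# Bigram tagger backing off to a unigram tagger: a common statistical baseline.
unigram = nltk.UnigramTagger(train)
bigram = nltk.BigramTagger(train, backoff=unigram)

print(f"accuracy on held-out sentences: {bigram.accuracy(test):.2%}")
```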
  • Universality, Similarity, and Translation in the Wikipedia Inter-Language Link Network
    In Search of the Ur-Wikipedia: Universality, Similarity, and Translation in the Wikipedia Inter-language Link Network. Morten Warncke-Wang, Anuradha Uduwage, John Riedl (GroupLens Research, Dept. of Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota), Zhenhua Dong (Dept. of Information Technical Science, Nankai University, Tianjin, China); {morten,uduwage,riedl}@cs.umn.edu, [email protected]

    Abstract: Wikipedia has become one of the primary encyclopaedic information repositories on the World Wide Web. It started in 2001 with a single edition in the English language and has since expanded to more than 20 million articles in 283 languages. Criss-crossing between the Wikipedias is an inter-language link network, connecting the articles of one edition of Wikipedia to another. We describe characteristics of articles covered by nearly all Wikipedias and those covered by only a single language edition, we use the network to understand how we can judge the similarity between Wikipedias based on concept coverage, and we investigate the flow of translation between a selection of the larger Wikipedias.

    1. Introduction: The world: seven seas separating seven continents, seven billion people in 193 nations. The world's knowledge: 283 Wikipedias totalling more than 20 million articles. Some of the content that is contained within these Wikipedias is probably shared between them; for instance, it is likely that they will all have an article about Wikipedia itself. This leads us to ask whether there exists some ur-Wikipedia, a set of universal knowledge that any human encyclopaedia will contain, regardless of language, culture, etc.? With such a large number of Wikipedia editions, what can we learn about the knowledge in the ur-Wikipedia?
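A minimal sketch of the coverage-based similarity judgement mentioned above, assuming each edition has already been mapped, via the inter-language link network, to the set of language-independent concepts it covers; the toy concept sets and the Jaccard measure are illustrative assumptions.

```python
# Minimal sketch: judging similarity between Wikipedia editions by concept coverage.
# Each edition is represented by the set of concepts it covers (toy data); Jaccard
# similarity over these sets is one simple way to compare coverage.
from itertools import combinations

coverage = {
    "en": {"Wikipedia", "Zurich", "Jazz", "Banana", "Thermodynamics"},
    "de": {"Wikipedia", "Zurich", "Jazz", "Thermodynamics"},
    "sw": {"Wikipedia", "Banana"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two editions' concept sets."""
    return len(a & b) / len(a | b)

for (lang1, set1), (lang2, set2) in combinations(coverage.items(), 2):
    print(f"{lang1}-{lang2} coverage similarity: {jaccard(set1, set2):.2f}")
```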
  • State of Wikimedia Communities of India
    State of Assamese Wikipedia (http://as.wikipedia.org)
    Rise of Assamese Wikipedia: charts of edits per month and internal-links growth.
    Growth of Assamese Wikipedia (number of good articles):
      January 2010: 263
      December 2010: 301 (around 3 articles per month)
      November 2011: 742 (around 40 articles per month)
    Future plans: awareness sessions and Wiki Academy workshops in universities of Assam; Assamese editing workshops to groom writers to write in Assamese.

    Bengali Wikipedia (বাংলা উইকিপিডিয়া, http://bn.wikipedia.org/), by the Bengali Wikipedia community.
    Bengali language: 6th most spoken language, 230 million speakers; national language of Bangladesh, official language of India, official language in Sierra Leone.
    Bengali Wikipedia: started in 2004; 22,000 articles; 2,500 page views per month; 150 active editors; monthly meet-ups; W10 anniversary; women's Wikipedia workshop. The Wikimedia Bangladesh local chapter was approved in 2011 by the Wikimedia Foundation.

    State of WikiProject India on English Wikipedia: one of the largest Indian Wikipedias. The WikiProject was started on 11 July 2006 by GaneshK, an NRI. Number of articles: 89,874 (excludes those that are not tagged with the WikiProject banner). Editors: 465 (active). Featured content: FAs 55, FLs 20, A-class 2, GAs 163.
    Basic statistics: B-class 1,188; C-class 801; Start 10,931; Stub 43,666; unassessed for quality 20,875; unknown importance 61,061; cleanup tags on 43,080 articles (71,415 tags).
    Challenges: diversity of opinion; lack of reliable sources; Indic sources "lost in translation"; editor skills need to be upgraded; lack of leadership; lack of coordinated activities; ...
  • Modeling Popularity and Reliability of Sources in Multilingual Wikipedia
    Włodzimierz Lewoniewski *, Krzysztof Węcel and Witold Abramowicz (Department of Information Systems, Poznań University of Economics and Business, 61-875 Poznań, Poland); [email protected] (K.W.); [email protected] (W.A.); * Correspondence: [email protected]
    Received: 31 March 2020; Accepted: 7 May 2020; Published: 13 May 2020

    Abstract: One of the most important factors impacting the quality of content in Wikipedia is the presence of reliable sources. By following references, readers can verify facts or find more details about the described topic. A Wikipedia article can be edited independently in any of over 300 languages, even by anonymous users; therefore, information about the same topic may be inconsistent. This also applies to the use of references in different language versions of a particular article, so the same statement can have different sources. In this paper we analyzed over 40 million articles from the 55 most developed language versions of Wikipedia to extract information about over 200 million references and find the most popular and reliable sources. We presented 10 models for the assessment of the popularity and reliability of the sources, based on analysis of meta information about the references in Wikipedia articles, page views and authors of the articles. Using DBpedia and Wikidata we automatically identified the alignment of the sources to a specific domain. Additionally, we analyzed the changes of popularity and reliability over time and identified growth leaders in each of the considered months. The results can be used for quality improvements of the content in different language versions of Wikipedia.
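A minimal sketch in the spirit of the models described above: two toy popularity scores for cited source domains, one counting references and one weighting them by article page views. The reference records and the scoring choices are illustrative assumptions, not the paper's actual ten models.

```python
# Minimal sketch: scoring the popularity of sources cited in Wikipedia articles.
# The reference records are toy data; raw reference counts and page-view-weighted
# counts are simple stand-ins for the models discussed in the paper.
from collections import defaultdict
from urllib.parse import urlparse

references = [  # (citing article, cited URL, monthly page views of the article)
    ("Climate change", "https://www.nature.com/articles/x", 250_000),
    ("Climate change", "https://example-blog.net/post", 250_000),
    ("Photosynthesis", "https://www.nature.com/articles/y", 40_000),
]

raw_count = defaultdict(int)      # popularity as number of references to the domain
view_weighted = defaultdict(int)  # popularity weighted by the citing article's views

for _article, url, views in references:
    domain = urlparse(url).netloc
    raw_count[domain] += 1
    view_weighted[domain] += views

for domain in sorted(raw_count):
    print(f"{domain}: count={raw_count[domain]}, view-weighted={view_weighted[domain]}")
```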
  • Omnipedia: Bridging the Wikipedia Language Gap
    Patti Bao*†, Brent Hecht†, Samuel Carton†, Mahmood Quaderi†, Michael Horn†§, Darren Gergle*† (*Communication Studies, †Electrical Engineering & Computer Science, §Learning Sciences, Northwestern University); {patti,brent,sam.carton,quaderi}@u.northwestern.edu, {michael-horn,dgergle}@northwestern.edu

    Abstract: We present Omnipedia, a system that allows Wikipedia readers to gain insight from up to 25 language editions of Wikipedia simultaneously. Omnipedia highlights the similarities and differences that exist among Wikipedia language editions, and makes salient information that is unique to each language as well as that which is shared more widely. We detail solutions to numerous front-end and algorithmic challenges inherent to providing users with a multilingual Wikipedia experience. These include visualizing content in a language-neutral way and aligning data in the face of diverse information organization strategies. We present a study of Omnipedia that characterizes how people interact with information using a multilingual lens.

    ...language edition contains its own cultural viewpoints on a large number of topics [7, 14, 15, 27]. On the other hand, the language barrier serves to silo knowledge [2, 4, 33], slowing the transfer of less culturally imbued information between language editions and preventing Wikipedia's 422 million monthly visitors [12] from accessing most of the information on the site. In this paper, we present Omnipedia, a system that attempts to remedy this situation at a large scale. It reduces the silo effect by providing users with structured access in their native language to over 7.5 million concepts from up to 25 language editions of Wikipedia. At the same time, it highlights similarities and differences between each of the...
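A minimal sketch of the core idea: for a single concept, separate the linked topics that all editions share from those unique to one edition. The per-edition topic sets are hypothetical; Omnipedia itself aligns millions of concepts across up to 25 editions.

```python
# Minimal sketch: for one concept, find linked topics shared by all language editions
# and topics unique to a single edition. The topic sets are hypothetical examples.
linked_topics = {
    "en": {"Camellia sinensis", "Caffeine", "British tea culture"},
    "zh": {"Camellia sinensis", "Caffeine", "Gongfu tea ceremony"},
    "de": {"Camellia sinensis", "East Frisian tea culture"},
}

shared = set.intersection(*linked_topics.values())
unique = {
    lang: topics - set().union(*(t for other, t in linked_topics.items() if other != lang))
    for lang, topics in linked_topics.items()
}

print("shared across all editions:", shared)
for lang, topics in unique.items():
    print(f"unique to {lang}:", topics)
```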
  • Collaboration: Interoperability Between People in the Creation of Language Resources for Less-Resourced Languages (A SALTMIL Workshop, May 27, 2008): Workshop Programme
    Workshop programme:
    14:30-14:35  Welcome
    14:35-15:05  Icelandic Language Technology Ten Years Later (Eiríkur Rögnvaldsson)
    15:05-15:35  Human Language Technology Resources for Less Commonly Taught Languages: Lessons Learned Toward Creation of Basic Language Resources (Heather Simpson, Christopher Cieri, Kazuaki Maeda, Kathryn Baker, and Boyan Onyshkevych)
    15:35-16:05  Building resources for African languages (Karel Pala, Sonja Bosch, and Christiane Fellbaum)
    16:05-16:45  Coffee break
    16:45-17:15  Extracting bilingual word pairs from Wikipedia (Francis M. Tyers and Jacques A. Pienaar)
    17:15-17:45  Building a Basque/Spanish bilingual database for speaker verification (Iker Luengo, Eva Navas, Iñaki Sainz, Ibon Saratxaga, Jon Sanchez, Igor Odriozola, Juan J. Igarza and Inma Hernaez)
    17:45-18:15  Language resources for Uralic minority languages (Attila Novák)
    18:15-18:45  Eslema. Towards a Corpus for Asturian (Xulio Viejo, Roser Saurí and Angel Neira)
    18:45-19:00  Annual General Meeting of the SALTMIL SIG

    Workshop organisers: Briony Williams (Language Technologies Unit, Bangor University, Wales, UK); Mikel L. Forcada (Departament de Llenguatges i Sistemes Informàtics, Universitat d'Alacant, Spain); Kepa Sarasola (Lengoaia eta Sistema Informatikoak Saila, Euskal Herriko Unibertsitatea / University of the Basque Country). SALTMIL: Speech and Language Technologies for Minority Languages, a SIG of the International Speech Communication Association.
  • The Culture of Wikipedia
    Good Faith Collaboration: The Culture of Wikipedia, by Joseph Michael Reagle Jr., foreword by Lawrence Lessig. The MIT Press, Cambridge, MA. Web edition, Copyright © 2011 by Joseph Michael Reagle Jr., CC-NC-SA 3.0.

    Wikipedia's style of collaborative production has been lauded, lambasted, and satirized. Despite unease over its implications for the character (and quality) of knowledge, Wikipedia has brought us closer than ever to a realization of the centuries-old pursuit of a universal encyclopedia. Good Faith Collaboration: The Culture of Wikipedia is a rich ethnographic portrayal of Wikipedia's historical roots, collaborative culture, and much debated legacy.

    Contents include: 1. Nazis and Norms; 2. The Pursuit of the Universal Encyclopedia; 3. Good Faith Collaboration; 4. The Puzzle of Openness.

    "Reagle offers a compelling case that Wikipedia's most fascinating and unprecedented aspect isn't the encyclopedia itself — rather, it's the collaborative culture that underpins it: brawling, self-reflexive, funny, serious, and full-tilt committed to the project, even if it means setting aside personal differences. Reagle's position as a scholar and a member of the community makes him uniquely situated to describe this culture." (Cory Doctorow, Boing Boing)

    "Reagle provides ample data regarding the everyday practices and cultural norms of the community which collaborates to produce Wikipedia. His rich research and nuanced appreciation of the complexities of cultural digital media research are well presented."
  • Mapping the Nature of Content and Processes on the English Wikipedia
    ‘Creating Knowledge’: Mapping the nature of content and processes on the English Wikipedia. Report submitted by Sohnee Harshey, M.Phil Scholar, Advanced Centre for Women's Studies, Tata Institute of Social Sciences, Mumbai, April 2014. Study commissioned by the Higher Education Innovation and Research Applications (HEIRA) Programme, Centre for the Study of Culture and Society, Bangalore, as part of an initiative on ‘Mapping Digital Humanities in India’, in collaboration with the Centre for Internet and Society, Bangalore. Supported by the Ford Foundation's ‘Pathways to Higher Education Programme’ (2009-13).

    Introduction: Run a search on Google and one of the first results to show up would be a Wikipedia entry. So much so that, from ‘googled it’, the phrase ‘wikied it’ is catching up with students across university campuses. Wikipedia, which is a ‘collaboratively edited, multilingual, free Internet encyclopedia’, is hugely popular simply because of the range and extent of topics covered in a format that is now familiar to most people using the internet. It is not unknown that the ‘quick ready reference’ nature of Wikipedia makes it a popular source even for those in the higher education system, for quick information and even as a starting point for academic writing. Since there is no other source which is freely available on the internet, both in terms of access and information, content from Wikipedia is thrown up when one runs searches on Google, Yahoo or other search engines. With Wikipedia now accessible on phones, the rate of distribution of information as well as the rate of access have gone up; such use necessitates that the content on this platform must be neutral and at the same time sensitive to the concerns of caste, gender, ethnicity, race, etc.
  • Semantic Approaches in Natural Language Processing
    Semantic Approaches in Natural Language Processing: Proceedings of the 10th Conference on Natural Language Processing (KONVENS 2010). Edited by Manfred Pinkal, Ines Rehbein, Sabine Schulte im Walde and Angelika Storrer.

    This book contains state-of-the-art contributions to the 10th Conference on Natural Language Processing, KONVENS 2010 (Konferenz zur Verarbeitung natürlicher Sprache), with a focus on semantic processing. The KONVENS in general aims at offering a broad perspective on current research and developments within the interdisciplinary field of natural language processing. The central theme draws specific attention towards addressing linguistic aspects of meaning, covering deep as well as shallow approaches to semantic processing. The contributions address both knowledge-based and data-driven methods for modelling and acquiring semantic information, and discuss the role of semantic information in applications of language technology. The articles demonstrate the importance of semantic processing, and present novel and creative approaches to natural language processing in general. Some contributions put their focus on developing and improving NLP systems for tasks like Named Entity Recognition or Word Sense Disambiguation, or focus on semantic knowledge acquisition and exploitation with respect to collaboratively built resources, or harvesting semantic information in virtual games. Others are set within the context of real-world applications, such as Authoring Aids, Text Summarisation and Information Retrieval.
  • Text Corpora and the Challenge of Newly Written Languages
    Proceedings of the 1st Joint SLTU and CCURL Workshop (SLTU-CCURL 2020), pages 111–120, Language Resources and Evaluation Conference (LREC 2020), Marseille, 11–16 May 2020. © European Language Resources Association (ELRA), licensed under CC-BY-NC.

    Alice Millour* and Karën Fort*† (*Sorbonne Université / STIH, 28, rue Serpente, 75006 Paris, France; †Université de Lorraine, CNRS, Inria, LORIA, 54000 Nancy, France); [email protected], [email protected]

    Abstract: Text corpora represent the foundation on which most natural language processing systems rely. However, for many languages, collecting or building a text corpus of a sufficient size still remains a complex issue, especially for corpora that are accessible and distributed under a clear license allowing modification (such as annotation) and further resharing. In this paper, we review the sources of text corpora usually called upon to fill the gap in low-resource contexts, and how crowdsourcing has been used to build linguistic resources. Then, we present our own experiments with crowdsourcing text corpora and an analysis of the obstacles we encountered. Although the results obtained in terms of participation are still unsatisfactory, we advocate that the effort towards a greater involvement of the speakers should be pursued, especially when the language of interest is newly written.

    Keywords: text corpora, dialectal variants, spelling, crowdsourcing

    1. Introduction: Speakers from various linguistic communities are increasingly writing their languages (Outinoff, 2012), and the In... ...increasing number of speakers (see for instance, the studies on specific languages carried by Rivron (2012) or Soria et al.
  • Same Gap, Different Experiences
    SAME GAP, DIFFERENT EXPERIENCES: An Exploration of the Similarities and Differences Between the Gender Gap in the Indian Wikipedia Editor Community and the Gap in the General Editor Community. By Eva Jadine Lannon (997233970). An undergraduate thesis presented to Professor Leslie Chan, PhD. & Professor Kenneth MacDonald, PhD. in partial fulfillment of the requirements for the degree of Honours Bachelor of Arts in International Development Studies (Co-op), University of Toronto at Scarborough, Ontario, Canada, June 2014.

    Acknowledgements: I would like to express my deepest gratitude to all of those people who provided support and guidance at various stages of my undergraduate research, and particularly to those individuals who took the time to talk me down off the ledge when I was certain I was going to quit. To my friends, and especially Jennifer Naidoo, who listened to my grievances at all hours of the day and night, no matter how spurious. To my partner, Karim Zidan El-Sayed, who edited every word (multiple times) with spectacular patience. And finally, to my research supervisors: Prof. Leslie Chan, you opened all the doors; Prof. Kenneth MacDonald, you laid the foundations. Without the support, patience, and understanding that both of you provided, it would have never been completed. Thank you.

    Executive summary: According to the second official Wikipedia Editor Survey, conducted in December of 2011, female-identified editors comprise only 8.5% of Wikipedia's contributor population (Glott & Ghosh, 2010). This significant lack of women and women's voices in the Wikipedia community has led to systemic bias towards male histories and culturally "masculine" knowledge (Lam et al., 2011; Gardner, 2011; Reagle & Rhue, 2011; Wikimedia Meta-Wiki, 2013), and an editing environment that is often hostile and unwelcoming to women editors (Gardner, 2011; Lam et al., 2011; Wikimedia Meta-Wiki, 2013).