Graph-Based Methods for Large-Scale Multilingual Knowledge Integration


Cover: Dissertationen aus der Naturwissenschaftlich-Technischen Fakultät I der Universität des Saarlandes. Gerard de Melo, Graph-based Methods for Large-Scale Multilingual Knowledge Integration.

Given that much of our knowledge is expressed in textual form, information systems increasingly depend on knowledge about words and the entities they represent. This book investigates novel methods for automatically building large repositories of knowledge that capture semantic relationships between words, names, and entities, in many different languages. Three major new contributions are presented, each involving graph algorithms and statistical techniques that combine evidence from multiple sources of information.

The lexical integration method involves learning models that disambiguate word meanings based on contextual information in a graph, thereby providing a means to connect words to the entities that they denote. The entity integration method combines semantic items from different sources into a single unified registry of entities by reconciling equivalence and distinctness information and solving a combinatorial optimization problem. Finally, the taxonomic integration method adds a comprehensive and coherent taxonomic hierarchy on top of this registry, capturing how different entities relate to each other.

Together, these methods can be used to produce a large-scale multilingual knowledge base semantically describing over 5 million entities and over 16 million natural language words and names in more than 200 different languages.
universaar, Universitätsverlag des Saarlandes / Saarland University Press / Presses Universitaires de la Sarre.

D 291. © 2012 universaar, Postfach 151150, 66041 Saarbrücken. ISBN 978-3-86223-028-0 (printed edition), ISBN 978-3-86223-029-7 (online edition), URN urn:nbn:de:bsz:291-universaar-278.

Doctoral dissertation, Saarland University, submitted for the degree of Doctor of Engineering (Dr.-Ing.) of the Faculties of Natural Sciences and Technology of Saarland University. Defense: Saarbrücken, 15 December 2010. Committee: Gert Smolka, Gerhard Weikum, Hans Uszkoreit, Hinrich Schütze, Sebastian Michel. Dean: Holger Hermanns. Project supervision at universaar: Isolde Teufel. Typesetting: Gerard de Melo. Cover design: Julian Wichert.

Bibliographic information published by the Deutsche Nationalbibliothek: the Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at <http://dnb.d-nb.de>.

Abstract

Given that much of our knowledge is expressed in textual form, information systems are increasingly dependent on knowledge about words and the entities they represent. This thesis investigates novel methods for automatically building large repositories of knowledge that capture semantic relationships between words, names, and entities, in many different languages. Three major contributions are made, each involving graph algorithms and statistical techniques that combine evidence from multiple sources of information.
The lexical integration method involves learning models that disambiguate word meanings based on contextual information in a graph, thereby providing a means to connect words to the entities that they denote. The entity integration method combines semantic items from different sources into a single unified registry of entities by reconciling equivalence and distinctness information and solving a combinatorial optimization problem. Finally, the taxonomic integration method adds a comprehensive and coherent taxonomic hierarchy on top of this registry, capturing how different entities relate to each other. Together, these methods can be used to produce a large-scale multilingual knowledge base semantically describing over 5 million entities and over 16 million natural language words and names in more than 200 different languages.

Kurzfassung (German abstract)

Since a large part of our knowledge exists in textual form, information systems increasingly depend on knowledge about words and the entities they represent. This thesis presents new methods for the automatic construction of large multilingual knowledge bases that formally capture semantic relationships between words, names, and entities. In three main contributions, evidence from multiple knowledge sources is combined by means of graph-theoretic and statistical techniques.

In the lexical integration step, statistical disambiguation models are learned in order to link words to the entities they represent. In the entity integration step, semantic items from different sources are merged into a coherent registry of entities, taking equivalence and distinctness information into account by solving a combinatorial optimization problem. Finally, in the taxonomic integration step, this registry is complemented by a comprehensive taxonomic hierarchy in which entities are related to one another. Together, these methods can be used to induce a large multilingual knowledge base that semantically describes over 5 million entities and over 16 million words and names in more than 200 languages.

Summary

Much of our knowledge is expressed in textual form, and it is by keywords that humans most commonly search for information. Information systems are increasingly facing the challenge of making sense of words or names of objects, and often rely on background knowledge about them. While such knowledge can be encoded manually, this thesis examines to what extent existing knowledge sources on the Web and elsewhere can be used to automatically derive much larger knowledge bases that capture explicit semantic relationships between words, names, and entities, in many different languages. At an abstract level, such knowledge bases correspond to labelled graphs with nodes representing arbitrary entities and arcs representing typed relationships between them. The problem is approached from three complementary angles, leading to three novel methods to produce large-scale multilingual knowledge bases. In each case, graph algorithms and statistical techniques are used to combine and integrate evidence from multiple existing sources of information.

The lexical integration method considers the task of connecting words in different languages to the entities that they denote. Translations and synonyms in a graph are used to determine potential entities corresponding to the meanings of a word. The main challenge is assessing which ones are likely to be correct, which is tackled by learning statistical disambiguation models. These models operate in a feature space that reflects local contextual information about a word in the graph.
This strategy allows us to turn an essentially monolingual resource like the commonly used WordNet database into a much larger multilingual lexical knowledge base.

The entity integration method addresses the problem of extending the range of potential entities that words can refer to by adding large numbers of further entities from separate sources. Given some prior knowledge or heuristics that reveal equivalence as well as distinctness information between entities from one or more knowledge sources, the aim is to combine the different repositories into a single unified registry of entities. Semantic duplicates should be unified, while distinct items should be kept as separate entities. Reconciling conflicting equivalence and distinctness information can be modelled as a combinatorial optimization task. An algorithm with a logarithmic approximation guarantee is developed that uses linear programming and region growing to obtain a consistent registry of entities from over 200 language-specific editions of Wikipedia.

Finally, the taxonomic integration method adds another layer of organization to this registry of entities, based on taxonomic relationships that connect instances to their classes and classes to parent classes. The central challenge is to combine unreliable and incomplete taxonomic links into a single comprehensive taxonomic hierarchy, which captures how entities in the knowledge base relate to each other. We achieve this by relying on a new Markov chain algorithm.

Together, these methods can be used to produce a large-scale multilingual knowledge base that substantially goes beyond previous resources by semantically describing over 5 million entities and over 16 million natural language words and names in more than 200 different languages.
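The reconciliation step can be pictured with a deliberately simplified sketch. The thesis develops a linear-programming and region-growing algorithm with a logarithmic approximation guarantee; the greedy union-find heuristic below (entity names and helpers are my own illustration, not the actual algorithm) only captures the core constraint: equivalence edges are merged unless doing so would unify a pair marked as distinct.

```python
def reconcile(equiv_pairs, distinct_pairs):
    """Greedily merge equivalence edges, skipping any merge that would
    unify a pair marked as distinct. Returns clusters of unified entities."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def would_conflict(ra, rb):
        # Merging roots ra and rb is forbidden if it would collapse
        # some pair asserted to be distinct.
        return any({find(u), find(v)} == {ra, rb} for u, v in distinct_pairs)

    for a, b in equiv_pairs:
        ra, rb = find(a), find(b)
        if ra != rb and not would_conflict(ra, rb):
            parent[ra] = rb

    clusters = {}
    for node in list(parent):
        clusters.setdefault(find(node), []).append(node)
    return list(clusters.values())
```

For instance, given an erroneous equivalence link between the city of Berlin and Berlin, New Hampshire, plus a distinctness assertion for that same pair, the distinctness constraint keeps the two clusters apart while the remaining Berlin articles are unified.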
Zusammenfassung (German summary)

Since a large part of human knowledge exists in textual form, and since information search is primarily carried out via search terms, information systems increasingly depend on being able to interpret words and other expressions semantically, often with the aid of background knowledge. This thesis addresses the question of the extent to which large multilingual knowledge bases can be automatically …
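The summary describes such knowledge bases abstractly as labelled graphs with typed arcs between entities. A minimal sketch of that data model (the relation names are illustrative, not the thesis's actual vocabulary):

```python
from collections import defaultdict

class KnowledgeGraph:
    """A labelled graph: nodes denote entities or words, and each arc
    carries a relation type (e.g. hasSubclass, hasInstance)."""

    def __init__(self):
        # (source node, relation type) -> set of target nodes
        self.arcs = defaultdict(set)

    def add(self, source, relation, target):
        self.arcs[(source, relation)].add(target)

    def targets(self, source, relation):
        return self.arcs[(source, relation)]

kg = KnowledgeGraph()
kg.add("body of water", "hasSubclass", "lake")
kg.add("lake", "hasInstance", "Lago di Garda")
```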
Recommended publications
  • Universality, Similarity, and Translation in the Wikipedia Inter-Language Link Network
    In Search of the Ur-Wikipedia: Universality, Similarity, and Translation in the Wikipedia Inter-language Link Network. Morten Warncke-Wang (1), Anuradha Uduwage (1), Zhenhua Dong (2), John Riedl (1). (1) GroupLens Research, Dept. of Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota ({morten,uduwage,riedl}@cs.umn.edu); (2) Dept. of Information Technical Science, Nankai University, Tianjin, China ([email protected]).

    ABSTRACT: Wikipedia has become one of the primary encyclopaedic information repositories on the World Wide Web. It started in 2001 with a single edition in the English language and has since expanded to more than 20 million articles in 283 languages. Criss-crossing between the Wikipedias is an inter-language link network, connecting the articles of one edition of Wikipedia to another. We describe characteristics of articles covered by nearly all Wikipedias and those covered by only a single language edition, we use the network to understand how we can judge the similarity between Wikipedias based on concept coverage, and we investigate the flow of translation between a selection of the larger Wikipedias.

    1. INTRODUCTION: The world: seven seas separating seven continents, seven billion people in 193 nations. The world's knowledge: 283 Wikipedias totalling more than 20 million articles. Some of the content that is contained within these Wikipedias is probably shared between them; for instance it is likely that they will all have an article about Wikipedia itself. This leads us to ask whether there exists some ur-Wikipedia, a set of universal knowledge that any human encyclopaedia will contain, regardless of language, culture, etc. With such a large number of Wikipedia editions, what can we learn about the knowledge in the ur-Wikipedia?
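The "similarity between Wikipedias based on concept coverage" idea can be illustrated with a set-overlap measure; the Jaccard sketch below is my own illustration of the idea, not necessarily the paper's exact metric.

```python
def concept_similarity(articles_a, articles_b):
    """Jaccard similarity between two Wikipedia editions, each viewed as
    the set of concepts its articles cover (aligned, e.g., via
    inter-language links)."""
    a, b = set(articles_a), set(articles_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0
```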
  • Improving Wikimedia Projects Content Through Collaborations Waray Wikipedia Experience, 2014 - 2017
    Improving Wikimedia Projects Content through Collaborations: Waray Wikipedia Experience, 2014–2017. Harvey Fiji, Michael Glen U. Ong, Jojit F. Ballesteros, Bel B. Ballesteros. ESEAP Conference 2018, 5–6 May 2018, Bali, Indonesia.

    On 8 November 2013, Typhoon Haiyan devastated the Eastern Visayas region of the Philippines when it made landfall in Guiuan, Eastern Samar and Tolosa, Leyte. The typhoon affected about 16 million individuals in the Philippines, and Tacloban City in Leyte was one of the worst affected areas.[1] Eastern Visayas, specifically the provinces of Biliran, Leyte, Northern Samar, Samar and Eastern Samar, is home to the Waray speakers in the Philippines.[2][3]

    Outline of the presentation: I. Background of Waray Wikipedia; II. Collaborations made by Sinirangan Bisaya Wikimedia Community; III. Problems encountered; IV. Lessons learned; V. Future plans; References; Photo credits.

    I. Background of Waray Wikipedia (https://war.wikipedia.org): Proposed on or about June 23, 2005 and created on or about September 24, 2005. Lsjbot was deployed from February 2013 to November 2015, creating 1,143,071 Waray articles about flora and fauna. As of 24 April 2018, the edition has a total of 1,262,945 articles, created by lsjbot (90.5%) and by humans (9.5%). As of 31 March 2018, it receives 401 views per hour. Sinirangan Bisaya (Eastern Visayas) Wikimedia Community is the (offline) community that continuously improves Waray Wikipedia and related Wikimedia projects.

    II. Collaborations made by Sinirangan Bisaya Wikimedia Community: A. Collaborations with private and national government organizations (introductory letter; series of meetings and communications); B.
  • Feedforward Approach to Sequential Morphological Analysis in the Tagalog Language
    IALP 2020, Kuala Lumpur, Dec 4–6, 2020. Feedforward Approach to Sequential Morphological Analysis in the Tagalog Language. Arian N. Yambao (1,2) and Charibeth K. Cheng (1,2). (1) College of Computer Studies, De La Salle University, Manila, Philippines; (2) Senti Techlabs Inc., Manila, Philippines. {arian_yambao, charibeth.cheng}@dlsu.edu.ph, {arian.yambao, chari.cheng}@senti.com.ph

    Abstract: Morphological analysis is a preprocessing task used to normalize and get the grammatical information of a word, which may also be referred to as lemmatization. Several NLP tasks use this preprocessing technique to improve their implementations in different languages; however, languages like Tagalog are considered to be morphosyntactically rich for having a diverse number of morphological phenomena. This paper presents a Tagalog morphological analyzer implemented using a Feed Forward Neural Network model (FFNN). We transformed our data to its character-level representation and used the model to identify its capability of learning the language's inflections. We compared our MA to previous rule-based Tagalog MA works and saw an improved accuracy of 0.93.

    … the affix class of a given verb. This can be shown with root words starting with a consonant like 'baba' ('down' in English): a prefix ba- added to the word makes it 'bababa' ('going down' in English). And for words that start with a vowel, like 'asa' ('hope' or 'wish' in English), an infix -um- is added and the initial letter of the word is reduplicated in between to form the word 'umaasa' ('hoping' in English) [3][4]. There have been efforts in MA that are mainly focused on Tagalog, including the works of De Guzman [5], Nelson
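The "character-level representation" the abstract mentions can be pictured as a fixed-width one-hot encoding; this is a generic sketch, and the character inventory and padding length here are my assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative assumptions (not from the paper): a small character
# inventory and a fixed padding length.
ALPHABET = "abcdefghijklmnopqrstuvwxyz-"
MAX_LEN = 12

def encode(word):
    """One-hot encode a word character by character into a flat vector of
    size MAX_LEN * len(ALPHABET), the kind of fixed-width input a
    feedforward network consumes."""
    mat = np.zeros((MAX_LEN, len(ALPHABET)), dtype=np.float32)
    for i, ch in enumerate(word[:MAX_LEN]):
        if ch in ALPHABET:
            mat[i, ALPHABET.index(ch)] = 1.0
    return mat.ravel()

x = encode("bababa")  # reduplicated form built on the root 'baba'
```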
  • Disseminating Information Using an Indigenous Language – the Waray Wikipedia Experience
    Reflecting on Ourselves through Others / Quotable Experiences. Disseminating Information Using an Indigenous Language – the Waray Wikipedia Experience. Text and photos: Joseph F. Ballesteros, Belinda T. Bernas-Ballesteros, Michael Glen U. Ong. Translator: 陳穎柔.

    Photo caption: A forum discussing Waray Wikipedia with students and professors at University of the Philippines Visayas Tacloban College, Tacloban City, Leyte, Philippines, on 21 November 2014. [Photo by JinJian, own work, CC BY-SA 4.0]

    As of October 2019, there are 175 living indigenous languages in the Philippines. One of these is the Waray-waray, or simply Waray, which is spoken by 2.6 million Filipinos, mostly in the Samar and Leyte islands in the Philippines. This language is used in one of the Philippine-language based editions of Wikipedia – the Waray Wikipedia. Wikipedia is a free and online encyclopedia that anyone can use, distribute and edit. In the Waray Wikipedia, information is written in the Waray language by volunteers, from a neutral point of view, and obtained from independent, reliable and verifiable sources. Just like any language
  • Taxonomic Data Integration from Multilingual Wikipedia Editions
    Taxonomic Data Integration from Multilingual Wikipedia Editions. Gerard de Melo · Gerhard Weikum.

    Abstract: Information systems are increasingly making use of taxonomic knowledge about words and entities. A taxonomic knowledge base may reveal that the Lago di Garda is a lake, and that lakes as well as ponds, reservoirs, and marshes are all bodies of water. As the number of available taxonomic knowledge sources grows, there is a need for techniques to integrate such data into combined, unified taxonomies. In particular, the Wikipedia encyclopedia has been used by a number of projects, but its multilingual nature has largely been neglected. This paper investigates how entities from all editions of Wikipedia as well as WordNet can be integrated into a single coherent taxonomic class hierarchy. We rely on linking heuristics to discover potential taxonomic relationships, graph partitioning to form consistent equivalence classes of entities, and a Markov chain-based ranking approach to construct the final taxonomy. This results in MENTA (Multilingual Entity Taxonomy), a resource that describes 5.4 million entities and is one of the largest multilingual lexical knowledge bases currently available.

    Keywords: Taxonomy induction · Multilingual · Graph · Ranking

    1 Introduction. Motivation: Capturing knowledge in the form of machine-readable semantic knowledge bases has been a long-standing goal in computer science, information science, and knowledge management. Such resources have facilitated tasks like query expansion [34], semantic search [48], faceted search [8], question answering [69], semantic document clustering [19], clinical decision support systems [14], and many more.
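The Markov chain-based ranking mentioned in the abstract belongs to the family of random-walk scores computable by power iteration. The sketch below is a generic random walk with restart, not MENTA's exact algorithm; the restart constant and iteration count are illustrative.

```python
import numpy as np

def rank_nodes(adj, restart=0.15, iters=50):
    """Score nodes by a random walk with restart (power iteration over a
    column-stochastic matrix). adj[i, j] = 1 means node j links to node i."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=0).astype(float)
    out_degree[out_degree == 0] = 1.0  # avoid division by zero for sinks
    walk = adj / out_degree
    score = np.full(n, 1.0 / n)
    for _ in range(iters):
        score = restart / n + (1.0 - restart) * walk @ score
    return score
```

In a taxonomy-construction setting, the nodes would be candidate classes and the highest-scoring candidates would be kept as parents.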
  • Wikipedia Revolutionizes the Web Changing the World, One Edit at A
    May Kalagitnaan Ba ang Wika? Isang Pagsusuri sa mga Patakarang Pangwika ng Wikipediang Tagalog (Is There a Middle Ground to Language? An Analysis of the Tagalog Wikipedia's Language Policies). Presented at Wikimania 2011, the International Wikimedia Conference, Haifa Cinematheque, Haifa Auditorium, Haifa, Israel, 5 August 2011. Presented by Wikimedia Philippines: James Joshua G. Lim, Joseph F. Ballesteros, Eugene Alvin S. Villar and Frederick A. Calica.

    A Background on the Tagalog Wikipedia. Fast facts: The Tagalog Wikipedia was founded on December 1, 2003; it is the oldest of the Philippine-language Wikipedias. It currently has around 53,000 articles. Per the decision of the Wikimedia Foundation Language Committee on several occasions, as well as the Tagalog Wikipedia community, Tagalog and Filipino (the national language of the Philippines) are considered to be the same, although they are legally different. Although it has only the second-largest number of articles, the Tagalog Wikipedia plays a de facto role as the largest Philippine Wikimedia project: most page views (almost 11,000/hour), biggest reach of speakers (90 million), and highest level of depth (18). Language on the Tagalog Wikipedia has not had an easy history.

    Tagalog: The Basis of the New National Language. "The national language of the Philippines is Filipino. As it evolves, it shall be further developed and enriched on the basis of existing Philippine and other languages. Subject to provisions of law and as the Congress may deem appropriate, the Government shall take steps to initiate and sustain the use of Filipino as a medium of official communication and as language of instruction in the educational system."
  • Wikipedia Depicting Mohammed Different Approaches in Different Environments
    Wikipedia Depicting Mohammed: Different Approaches in Different Environments. Man77, Wikimania 2016, Esino Lario. أهلً وسهلً (Welcome). Benvenuti (Welcome).

    About the speaker: from Vienna, Austria; Wikipedian since 2006; mainly active in de.wikipedia.org; student of anthropology; speaker at WikiCon 2015; first-time speaker at Wikimania. Topic: the illustration of the article on Mohammed in de.wikipedia.org, and the debate on other Wikipedias.

    Why is this a topic of interest? Wikipedistically: the diversity of visual representations available to Wikipedia; Wikipedias with different communities and different style guidelines; principles, in particular NPOV. Islamically: aniconism in Islam; depictions of Mohammed.

    In the tiniest of nutshells: It's not a Qur'an thing, but a hadith thing. There are verbal descriptions of Mohammed's appearance. Islamic schools of thought are divided. Calligraphy is the classic Islamic art. There are even Muslim paintings of Mohammed. In recent years, the "Westerners" experimented a little... [Image: Editor at Large, Pistachio shells, CC BY-SA 2.5]

    What kinds of depictions do we have? Figurative depictions (Islamic tradition; "Western" depictions; image: chris cosco, Everybody Draw Mohammed Day – Mohammed at night by Ysterius, CC BY-SA 2.0), calligraphic representations, and associative illustrations. So there are options to choose from. [Image: Thomas Rosenau, Gummy bears, CC BY-SA 2.5]

    Arabic Wikipedia: Mohammed with face; Mohammed without face; calligraphies
  • UNIVERSITY of CALIFORNIA SANTA CRUZ PRINTED TEXTS and DIGITAL DOPPELGANGERS: READING LITERATURE in the 21 CENTURY a Dissertation
    University of California Santa Cruz. Printed Texts and Digital Doppelgangers: Reading Literature in the 21st Century. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Literature by Jeremy Throne, December 2018. The Dissertation of Jeremy Throne is approved: Professor Susan Gillman (chair), Professor Kirsten Silva Gruesz, Professor Johanna Drucker; Lori Kletzer, Vice Provost and Dean of Graduate Studies.

    Table of Contents: Abstract; Dedications; Introduction; Chapter 1: Mark Twain's Literary Legacies: A Digital Perspective; Chapter 2: "Mark Twain" in Chronicling America: Contextualizing Mark Twain's Autobiography; Chapter 3: From Mining to Modeling: An Agent-Based Approach to Mark Twain's Autobiography; Conclusion; Appendices for Chapters 1–3; List of Supplemental Files; Bibliography.

    Abstract: Much ink has been spilt worrying over the death of the book. It may be, however, that we find ourselves
  • Infixer: a Method for Segmenting Non-Concatenative Morphology in Tagalog
    City University of New York (CUNY), CUNY Academic Works, All Dissertations, Theses, and Capstone Projects, 6-2016. Infixer: A Method for Segmenting Non-Concatenative Morphology in Tagalog. Steven R. Butler, Graduate Center, City University of New York. More information about this work at: https://academicworks.cuny.edu/gc_etds/1308

    A master's thesis submitted to the Graduate Faculty in Linguistics in partial fulfillment of the requirements for the degree of Master of Arts, The City University of New York, 2016. © 2016 Steven Butler, some rights reserved; licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (https://creativecommons.org/licenses/by-nc-sa/4.0/). This manuscript has been read and accepted by the Graduate Faculty in Linguistics in satisfaction of the thesis requirement for the degree of Master of Arts. Professor William Sakas, Thesis Advisor; Professor Gita Martohardjono, Executive Officer.

    Abstract: In this paper, I present a method for coercing a widely-used morphological segmentation algorithm, Morfessor (Creutz and Lagus 2005a), into accurately segmenting non-concatenative morphological patterns. The non-concatenative patterns targeted, infixation and partial reduplication, present problems for many segmentation algorithms, and tools that can successfully identify and segment those patterns can improve a number of downstream natural language processing tasks, including keyword search and machine translation.
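For intuition about the infixation pattern targeted here, simple cases can be captured by a hand-written rule. The sketch below is my own illustration, not the thesis's Morfessor-based method: it strips a -um- infix after an initial consonant, or a -um- plus reduplicated initial vowel.

```python
import re

def strip_um(word):
    """Return a candidate root for C-um-... forms (tumakbo -> takbo) and
    um-V-V... forms with a reduplicated initial vowel (umaasa -> asa)."""
    m = re.match(r"^([bcdfghjklmnpqrstvwxyz])um(.+)$", word)
    if m:
        return m.group(1) + m.group(2)
    m = re.match(r"^um([aeiou])\1(.+)$", word)
    if m:
        return m.group(1) + m.group(2)
    return word
```

Rules like this break down quickly on real data, which is precisely why a learned segmenter such as Morfessor is attractive.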
  • Wikimedia Foundation Board of Trustees Meeting July 2018
    Wikimedia Foundation Board of Trustees meeting, July 2018.

    Day One agenda: Welcome; Operations; Movement strategy; Look back/forward; Chair's year-in-review; Future of the Board; Executive session.

    Revenue & Fundraising: $104 million raised in FY 17-18. (Note: The Advancement Department releases a detailed fundraising report every September. Stay tuned.) Revenue by quarter, FY17-18 (chart): quarterly figures of $60.1m, $22.4m, $10.3m and $7.6m across Q1–Q4, totalling $100.4m.

    FY2018-19 Q1 revenue projections:
    Program / Target / Probability / Projection
    Country Campaigns in Spain, South Africa, Malaysia, and Japan / $5.3m / 90% / $4.8m
    Low-Level Multi-Country Campaigns / $1.7m / 90% / $1.5m
    English Testing / $2.5m / 90% / $2.2m
    Recurring Donations / $2.2m / 95% / $2.1m
    Major Gifts Pipeline (that may come by September 30) / $3m / 25% / $750,000
    TOTAL: $11.35m

    Financials, FY17-18 year-end overview: +$23.4M (+30%) revenue over plan; -$0.4M (-1%) spending under budget; +$9.4M (+10%) YoY growth in revenue; +$10.6M (+16%) YoY growth in spending; maintained programmatic ratio at 74%. Repurposed over $2.7M in underspend to Wikidata, Grants, combatting the Wikipedia block in Turkey, trademark filings in countries with key emerging communities, strategic partnerships, the Singapore data center, and other programmatic investments. (Please note that all FY17-18 amounts in this deck are preliminary, pending completion of the full financial closing process and audit.)

    FY17-18 actual vs. target (chart): revenue of $100.4M actual against a $76.8M target; spending of $78.4M actual against a $78.8M budget. Revenue exceeded spending by $22M.

    Drivers of the YoY spending increase from $67.6M (FY16-17) to $78.4M (FY17-18): additional staffing, including CDPs, $5.4M (+16% YoY); increasing grants to communities, $2.3M (+37%); funds available for a specific purpose (including Movement Strategy), $1.4M; building capacity in Technology and our data centers, $1.2M (+36%); Wikimania, which was not held in the prior fiscal year, $0.8M; donation processing fees related to increased revenue, $0.7M (+17%); legal fees related to community defense, supporting privacy, and combating state censorship, $0.2M (+17%). In FY16-17, Movement Strategy was $1.5M.
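The projection column in the Q1 table is simply target times probability, and the $11.35m total follows from rounding each line before summing. A quick check of that arithmetic (figures copied from the deck):

```python
# Figures from the deck: (target in $m, probability).
programs = {
    "Country Campaigns": (5.3, 0.90),
    "Low-Level Multi-Country Campaigns": (1.7, 0.90),
    "English Testing": (2.5, 0.90),
    "Recurring Donations": (2.2, 0.95),
    "Major Gifts Pipeline": (3.0, 0.25),
}

# Expected value per program: target x probability.
projections = {name: t * p for name, (t, p) in programs.items()}
total = sum(projections.values())  # unrounded sum, roughly $11.39m

# The deck rounds each line first: 4.8 + 1.5 + 2.2 + 2.1 + 0.75 = 11.35.
deck_total = 4.8 + 1.5 + 2.2 + 2.1 + 0.75
```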
  • Imagine a World in Which Every Single Filipino Is Given Free Access to the Sum of All Human Knowledge
    Wikimedia Philippines Annual Report 2011. Imagine a world in which every single Filipino is given free access to the sum of all human knowledge. Wikimedia Philippines Inc., G/F Gervasia Center, 152 Amorsolo Street, Legaspi Village, Makati City, Philippines 1229. +63 (2) 8127177. [email protected]. http://www.wikimedia.org.ph/ http://www.facebook.com/pinoywiki Twitter: @pinoywikipedia

    Copyright information: Wikipedia and Wikimedia logos are copyrighted and trademarked by the Wikimedia Foundation. Use of the Wikimedia logos and trademarks is subject to the Wikimedia trademark policy and visual identity guidelines, and may require permission. The Wikimedia Philippines Annual Report 2011 is released under the Creative Commons Attribution-Share Alike 3.0 Unported license (http://creativecommons.org/license/by-sa/3.0/). Cover: Rizal Monument, CC-BY-SA, by Shutters. Layout and design by Jojit Ballesteros and Bel Ballesteros.

    Contents: Message from the President; Educational Outreach, the Wikimedia Way; Advocating Freedom in Cyberspace; Partnerships with Local Organizations; International Cooperation; Wikimedia Foundation's Outreach to the Philippines; Profile of the Organization (History, Objectives, Membership, Meetings & Resolutions, Financial Statement, Board of Trustees 2010-2013, Officers 2010-2011, Officers 2011-2012).

    Message from the President: The year 2011 was full of opportunities and tough challenges. Wikimedia Philippines (WMPH) started the year with the celebration of the 10th anniversary of Wikipedia through a photo scavenger hunt called Wikipedia Takes the City (WTC), held in Manila on January 15. It was one of the biggest WTC events all over the world at that time, with more than 100 participants.
  • The Global Language Network and Its Association with Global Fame
    Links that speak: The global language network and its association with global fame. (Openly available from the Harvard DASH repository under its Open Access Policy terms of use.)

    Citation: Ronen, Shahar, Bruno Gonçalves, Kevin Z. Hu, Alessandro Vespignani, Steven Pinker, and César A. Hidalgo. 2014. "Links That Speak: The Global Language Network and Its Association with Global Fame." Proc Natl Acad Sci USA 111 (52) (December 15): E5616–E5622. doi:10.1073/pnas.1410931111. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:23680428

    Links that speak: The global language networks of Twitter, Wikipedia and Book Translations. Shahar Ronen (1), Bruno Gonçalves (2,3,4), Kevin Z. Hu (1), Alessandro Vespignani (2), Steven A. Pinker (5), César A. Hidalgo (1). (1) Macro Connections, The MIT Media Lab, Cambridge, MA 02139, USA; (2) Department of Physics, Northeastern University, Boston, MA 02115, USA; (3) Aix-Marseille Université, CNRS, CPT, UMR 7332, 13288 Marseille, France; (4) Université de Toulon, CNRS, CPT, UMR 7332, 83957 La Garde, France; (5) Department of Psychology, Harvard University, Cambridge, MA 02138, USA.

    Abstract: Languages vary enormously in global importance because of historical, demographic, political, and technological forces, and there has been much speculation about the current and future status of English as a global language. Yet there has been no rigorous way to define or quantify the relative global influence of languages.