Introducing New Features to Wikipedia

Introducing new features to Wikipedia – Case studies for Web Science –

Mathias Schindler
Wikimedia Deutschland e.V. / Deutsche Nationalbibliothek
Germany
[email protected]

Denny Vrandečić
Institute AIFB, Universität Karlsruhe (TH)
Germany
[email protected]

ABSTRACT

Wikipedia is a free web-based encyclopedia, written collaboratively by hundreds of thousands of contributors [2]. It runs on the open source MediaWiki wiki engine. Introducing new features to Wikipedia is not just a technical question, but a complex socio-technical process. Previous introductions of new features (the category system in 2004, parser functions in 2006, and flagged revisions in 2008) are described and analyzed. Based on these experiences, the design of a new feature (creating semantic annotations) is presented and discussed. The interaction between the technical features and the community is shown to be an instantiation of the Web Science research cycle, thus testing the cycle as a methodological tool for web science.

1. INTRODUCTION

Wikipedia is a free web-based encyclopedia, written collaboratively by hundreds of thousands of contributors [2]. Today, Wikipedia is available in more than 250 languages, offering more than 10 million articles. For some of these languages, Wikipedia is not only a free encyclopedia, but the only encyclopedia available in that language. It is currently among the ten most visited web sites in the world, serving more than 50 thousand hits per second at peak times.

Figure 1: The web science process as presented by Tim Berners-Lee in his keynote "The two Magics of Web Science" at WWW2007, Banff, Canada. The two magics are creativity and complexity, since neither is well understood yet. Source: http://www.w3.org/2007/Talks/0509-www-keynote-tbl/#(11)

The introduction of new technical features to Wikipedia has always turned out to be a complex socio-technical process. Since both the technical implementation and the content development of Wikipedia are done in free and open projects that both offer their complete change logs¹, the co-development of the technical and the social aspects can be traced and analyzed.

Section 2 offers a short overview of the web science process, before using it to describe the introduction of new features to Wikipedia in four case studies: the category system, introduced in 2004 (Section 3.1); parser functions, introduced in 2006 (Section 3.2); flagged revisions, introduced in 2008 (Section 3.3); and semantic annotation, currently being proposed (Section 4). We close the paper in Section 5 with conclusions drawn from the case studies.

¹ MediaWiki's SVN repository is accessible at http://svn.wikimedia.org; the database dumps of Wikipedia's complete log history can be found at http://download.wikipedia.org

2. WEB SCIENCE PROCESS

The web science process [4] is a model that describes the evolution of the web and of systems on the web. In order to resolve issues, engineers creatively come up with new ideas. These are then implemented. Since the web is an inherently social infrastructure, the technical implementation of an idea will necessarily involve social aspects. If done well, the socio-technical implementation of the idea may resolve the original issue on a micro-level. But due to the size and the complexity of the web, the solution will have novel and unexpected implications on the macro-level. These in turn have to be analyzed in order to identify new issues (i.e. situations that do not conform to certain values), thus starting the web science process anew. A detailed description of the steps of the process is given in [4]. Figure 1 gives a graphical overview of the model [3].

The authors already identify a number of examples of the web science process. In this paper we describe further examples from the history of Wikipedia that can be fully traced and analyzed thanks to the availability of the data about the Wikipedia project and its history.

3. PREVIOUS EXPERIENCES

This section describes three case studies of features previously introduced into the Wikipedia system. Section 3.4 notes some other examples that may warrant further study.

3.1 Category system

Following the initial growth of Wikipedia, features to improve exploring and navigating the content became necessary. Early versions of the MediaWiki software offered basically only manually created links, a backlinks feature, and a full-text search for discovering the content of the encyclopedia. In many other wikis the backlinks feature (which displays all pages that link to a specific page) was used to implement a rudimentary tagging system: by creating a link to a page like "Greece-related topic" on all pages that discuss topics related to Greece, the list of backlinks on the page "Greece-related topic" becomes a list of all pages on that topic. This approach has a number of disadvantages: the links to the topic page behave like any other link (i.e. they are displayed in the text), the linked page itself behaves like a normal article (i.e. it is a normal page, and displaying the backlinks requires a second click on the appropriate link in the toolbar), and normal articles like "Greece" cannot be used as category pages (since all normal links would also appear in the list, not just those introduced for categorizing: Albania would be in the list of pages linking to Greece, since it would link to it as a bordering country). Even though this is the usual procedure in many other wikis, Wikipedia has often avoided such wiki-specific idiosyncrasies, e.g. by omitting camel-case syntax in favor of free links (leading to Wikipedia often being described as a rather atypical wiki).

The category system was introduced into MediaWiki in 2004 in order to solve that problem. Each article could be put into an arbitrary number of categories, identified by freely chosen names. Adding a page to the category "Greece" is done by adding [[Category:Greece]] to the page. Category links themselves are not displayed in the article text, but rather in separate visual elements in the rendering of a page. The category page is a page of its own in the newly introduced category namespace, thus separating it cleanly from the article space. Category pages were programmed to display the list of all pages categorized with that category. The new category system resolved all the identified technical issues of using backlinks for categorizing articles.

The category system was activated in all language editions at the same time, without consulting the Wikipedia communities of the different languages. In most language editions it was quickly applied to categorize a majority of the existing pages. Soon new issues were discovered, namely how to organize the categories themselves. This led to further new features, such as the category tree extension, which renders a tree of subcategories on the page of a category. The introduction and application of categories were heatedly debated in some language communities. Restrictions on the usage of categories were introduced and enforced solely by the community, with manual effort. On the German Wikipedia, the community decided on a moratorium on category usage in order to first define guidelines for their application. The moratorium, and the enforcement of the guidelines, were carried out entirely manually by the community members.
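To make the mechanics concrete, the following is a minimal wikitext sketch contrasting the old backlink convention with the category system; apart from "Greece", the article and category names are illustrative assumptions, not examples taken from the paper.

    <!-- Old convention: every Greece-related article simply links to a shared tag page;
         the "What links here" backlink listing of that page then serves as the index. -->
    Athens is the capital of Greece ([[Greece-related topic]]).

    <!-- Category system (2004): membership is declared in the wikitext of the article
         "Athens", but rendered in a separate category listing, not in the article body. -->
    [[Category:Greece]]

    <!-- Categories can themselves be organized: categorizing the page "Category:Greece"
         makes "Greece" a subcategory of a broader (hypothetical) category. -->
    [[Category:Countries in Europe]]

The page "Category:Greece" then automatically lists all member articles and subcategories, which is exactly the index that the backlink workaround tried to approximate.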
3.2 Parser functions

Templates are used to include the text of another page (usually in the separate Template namespace) at the place where the template is called. This allows for higher consistency within the wiki, since some elements have to be written only once and can be reused in several articles. Templates may also take parameters, i.e. parametrized template calls insert the parameters' values into the replacing text. See Figure 2 for an example. Templates are widely used in Wikipedia; the English Wikipedia alone offers more than 200,000 templates.

    (a) Greece is {{Country | Continent=Europe
        | Capital=Athens}}

    (b) a country in {{{Continent}}}. Its
        capital is {{{Capital}}}.
        [[Category:Country]]

    (c) Greece is a country in Europe. Its capital is Athens.
        [[Category:Country]]

Figure 2: Usage of templates. (a) Source of a page about Greece calling the country template; (b) source of the country template; (c) text of the page about Greece after template expansion.

In 2006 some Wikipedians discovered that, through an intricate and complicated interplay of templating features and CSS, they could create conditional wiki text, i.e. text that is displayed only if a template parameter has a specific value. This included repeated calls of templates within templates, which bogged down the performance of the whole system. The developers faced a choice: either suppress the spread of an obviously desired feature by detecting such usage and explicitly disallowing it within the software, or offer an efficient alternative. The latter was done by Tim Starling, who in April 2006 announced the introduction of parser functions, i.e. wiki text that calls functions implemented in the underlying software.
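As a rough illustration of what such parser functions look like in wiki text, here is a minimal sketch using {{#if:}} and {{#expr:}} from the ParserFunctions extension; the template name, its parameters, and the figures in the call are hypothetical, invented for illustration and not taken from the paper.

    <!-- Inside a hypothetical "Template:Country": show the capital only if the
         parameter was supplied, and derive a value from a numeric parameter. -->
    {{#if: {{{Capital|}}} | Its capital is {{{Capital}}}. | Its capital is not listed. }}
    Area: approximately {{#expr: {{{AreaKm2|0}}} / 2.59 round 0 }} square miles

    <!-- A call from an article supplying both parameters: -->
    {{Country | Capital = Athens | AreaKm2 = 131957 }}

Before these functions existed, roughly the same conditional effect required the fragile template-and-CSS workarounds described above; parser functions moved that logic into the software itself.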
At first, only conditional text and the computation of simple mathematical expressions were implemented, but this already increased the possibilities for wiki editors enormously. Over time, further parser functions were introduced, finally leading to a framework that allowed extension functions to be written easily in order to add arbitrary functionality, such as geo-coding services or widgets. This time the developers were clearly reacting to the demand of the community, being forced either to fight the solution that the community had found for its issue (i.e. conditional text), or to offer an improved technical implementation to replace the previous practice and achieve an