robots.txt: An Ethnographic Investigation of Automated Software Agents in User-Generated Content Platforms


By Richard Stuart Geiger II. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Information Management and Systems and the Designated Emphasis in New Media in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Jenna Burrell (Chair), Professor Coye Cheshire, Professor Paul Duguid, Professor Eric Paulos. Fall 2015.

© 2015 by Richard Stuart Geiger II. Freely licensed under the Creative Commons Attribution-ShareAlike 4.0 License (license text at https://creativecommons.org/licenses/by-sa/4.0/).

Abstract

This dissertation investigates the roles of automated software agents in two user-generated content platforms: Wikipedia and Twitter. I analyze 'bots' as an emergent form of sociotechnical governance, raising many issues about how code intersects with community. My research took an ethnographic approach to understanding how participation and governance operate in these two sites, including participant-observation in everyday use of the sites and in developing 'bots' that were delegated work. I also took a historical and case studies approach, exploring the development of bots in Wikipedia and Twitter.
This dissertation represents an approach I term algorithms-in-the-making, which extends the lessons of scholars in the field of science and technology studies to this novel domain. Instead of just focusing on the impacts and effects of automated software agents, I look at how they are designed, developed, and deployed, much in the same way that ethnographers and historians of science tackle the construction of scientific facts. In this view, algorithmic agents come on the scene as ways for people to individually and collectively articulate the kind of governance structures they want to see in these platforms. Each bot that is delegated governance work stands in for a wide set of assumptions and practices about what Wikipedia or Twitter is and how it ought to operate. I argue that these bots are most important for the activities of collective sensemaking that they facilitate, as developers and non-developers work to articulate a common understanding of what kind of work they want a bot to do. Ultimately, these cases have strong implications and lessons for those who are increasingly concerned with 'the politics of algorithms,' as they touch on issues of gatekeeping, socialization, governance, and the construction of community through algorithmic agents.

Acknowledgements

There are too many people for me to thank in completing this dissertation and my Ph.D. This work is dedicated to my personal infrastructure of knowing and knowledge production: the vast assemblage of individuals who I have relied on for guidance and support throughout my time as a graduate student. There could be a whole dissertation of material documenting all the people who have helped me get to this point, but these few pages will have to suffice for now. The School of Information at UC-Berkeley has given me a wealth of resources to help me get to this moment, but the people have been the greatest part of that.
My fellow members of my Ph.D. cohort, Galen Panger and Meena Natarajan, have been on the same path with me for years, helping each other through all the twists and turns of this journey. I also want to thank many members of my Ph.D. program, who have helped me refine and shape my ideas and myself. I've presented much of the work in this dissertation at both formal colloquia and informal talks over drinks in the Ph.D. lounge, so thanks to Andy Brooks, Ashwin Matthew, Christo Sims, Dan Perkel, Daniela Rosner, Elaine Sedenberg, Elizabeth Goodman, Ishita Ghosh, Jen King, Megan Finn, Nick Doty, Nick Merrill, Noura Howell, Richmond Wong, Sarah Van Wart, and Sebastian Benthall for their ideas and support. The School of Information is also home to three members of my committee, without whom this dissertation could not have been written. I want to thank my chair and advisor Jenna Burrell for her time and energy in supporting me as a Ph.D. student, as well as Paul Duguid and Coye Cheshire, for their years of service and support. They have helped me not only with research for this dissertation, but also in various other aspects of becoming an academic. I'd also like to thank Eric Paulos, my outside committee member, for his advising around my dissertation. And there are other professors at Berkeley who have helped shape this work and my thinking through classes and conversations: Greg Niemeyer, Ken Goldberg, Abigail De Kosnik, Cathryn Carson, Massimo Mazzotti, and Geoffrey Nunberg. I'd also like to thank the South Hall administrative staff for all of their work in getting me to this point, including Meg St. John, Lety Sanchez, Roberta Epstein, Siu Yung Wong, Alexis Scott, and Daniel Sebescen, as well as Lara Wolfe at the Berkeley Center for New Media and Stacey Dornton at the Berkeley Institute for Data Science. I've had the opportunity to collaborate with amazing researchers, who have shaped and expanded my thought.
Aaron Halfaker, my long-time collaborator, has been a constant source of support and inspiration, and I know that we have both expanded our ways of thinking in our collective work over the years. I also want to thank Heather Ford, my fellow ethnographer of Wikipedia, for her collaboration, friendship, and support. I'm thankful to have met the members of the Wikimedia Summer of Research 2011, who early on in my Ph.D. helped me understand the full breadth of Wikipedia research: Fabian Kaelin, Giovanni Luca Ciampaglia, Jonathan Morgan, Melanie Kill, Shawn Walker, and Yusuke Matsubara. I also want to thank many people at the Wikimedia Foundation, who have been incredibly helpful throughout this research, including Dario Taraborelli, Diederik van Liere, Oliver Keyes, Maryana Pinchuk, Siko Bouterse, Steven Walling, and Zack Exley. I'd also like to thank the Wikipedians and Twitter blockbot developers who have assisted with this research, both in the time they took to talk with me and in the incredibly important work that they do in these spaces. I am also grateful to have an abundant wealth of friends and colleagues in academia, who have helped shape both me and my work. Many of these people I met when we were both graduate students, although some are now faculty! I've met many of these people at conferences where I've presented my work or heard them present their work, and our conversations have helped me get to the place where I am today.
These people include: Aaron Shaw, Airi Lampinen, Alex Leavitt, Alyson Young, Amanda Menking, Amelia Acker, Amy Johnson, Andrea Wiggins, Andrew Schrock, Anne Helmond, Brian Keegan, Caroline Jack, Casey Fiesler, Charlotte Cabasse, Chris Peterson, Dave Lester, Heather Ford, Jed Brubaker, Jen Schradie, Jodi Schneider, Jonathan Morgan, Jordan Kraemer, Kate Miltner, Katie Derthick, Kevin Driscoll, Lana Swartz, Lilly Irani, Mako Hill, Matt Burton, Melanie Kill, Meryl Alper, Molly Sauter, Nate Tkacz, Nathan Matias, Nick Seaver, Norah Abokhodair, Paloma Checa-Gismero, Rachel Magee, Ryan Milner, Sam Wolley, Shawn Walker, Stacy Blasiola, Stephanie Steinhardt, Tanja Aitamurto, Tim Highfield, Whitney Philips, and Whitney Erin Boesel. I’m sure I’m also leaving some people out of this long list, but this only goes to show you that so much of what I have accomplished is because of all the people with whom I’ve been able to trade ideas and insights. There have also been many faculty members who have generously helped me think about my work and my career, as well as given me many opportunities during my time as a graduate student. I want to particularly thank David Ribes, my advisor in my M.A. program at Georgetown, for helping me first dive into these academic fields, and for continuing to help support me. I also want to thank those scholars who have helped me think through my work and my career in academia: Amy Bruckman, Andrea Tapia, Andrea Forte, Andrew Lih, Annette Markham, Chris Kelty, Cliff Lampe, David McDonald, Geert Lovink, Gina Neff, Janet Vertesi, Kate Crawford, Katie Shilton, Loren Terveen, Malte Ziewitz, Mark Graham, Mary Gray, Nancy Baym, Nicole Elison, Paul Dourish, Phil Howard, Sean Goggins, Steve Jackson, Steve Sawyer, T.L. Taylor, Tarleton Gillespie, Zeynep Tufecki, and Zizi Papacharissi. Institutionally, I’d like to thank many departments, centers, and institutes for supporting me and my work, both at Berkeley and beyond. 
Locally, I'm thankful for the support of the UC-Berkeley School of Information, the Department of Sociology, the Center for Science, Technology, Medicine, and Society (CSTMS), the Center for New Media, and the Berkeley Institute for Data Science. Further out, I'm thankful for the support of the Center for Media, Data, and Society at Central European University, the Oxford Internet Institute, the MIT Center for Civic Media, the Berkman Center at Harvard Law School, the Consortium for the Science of the Sociotechnical, and the Wikimedia Foundation. People and programs at these institutions have graciously hosted me and other like-minded academics throughout the years, giving an invaluable opportunity for me to grow as a scholar. Last but certainly not least, I'd like to thank the people in my family, who have supported me for longer than anybody else mentioned so far.
Recommended publications
  • Using New Technologies for Library Instruction in Science and Engineering: Web 2.0 Applications
    University of Nebraska-Lincoln, DigitalCommons@University of Nebraska-Lincoln. Faculty Publications, UNL Libraries, Libraries at University of Nebraska-Lincoln, October 2006. Virginia A. Baldwin, University of Nebraska-Lincoln, [email protected]. Follow this and additional works at: https://digitalcommons.unl.edu/libraryscience. Part of the Library and Information Science Commons. Baldwin, Virginia A., "Using New Technologies for Library Instruction in Science and Engineering: Web 2.0 Applications" (2006). Faculty Publications, UNL Libraries. 56. https://digitalcommons.unl.edu/libraryscience/56. This Article is brought to you for free and open access by the Libraries at University of Nebraska-Lincoln at DigitalCommons@University of Nebraska-Lincoln, and has been accepted for inclusion in Faculty Publications, UNL Libraries by an authorized administrator. "Quantum computation is... a distinctively new way of harnessing nature... It will be the first technology that allows useful tasks to be performed in collaboration between parallel universes." – David Deutsch, The Fabric of Reality: The Science of Parallel Universes and Its Implications (http://en.wikiquote.org/wiki/David_Deutsch). Introduction: The transformational concept of Web 2.0 for libraries was a hot topic at three major conferences in June of 2006. The American Library Association (ALA), Special Libraries Association (SLA), and American Society for Engineering Education (ASEE) conferences all had sessions on the subject. Not all of the focus was on sci-tech libraries. An exploration of the use of these technologies for library instruction in science and engineering fields is the emphasis for this column.
  • DETECTING BOTS in INTERNET CHAT by SRITI KUMAR Under The
    DETECTING BOTS IN INTERNET CHAT, by SRITI KUMAR (under the direction of Kang Li). Abstract: Internet chat is a real-time communication tool that allows on-line users to communicate via text in virtual spaces, called chat rooms or channels. The abuse of Internet chat by bots, also known as chat bots or chatterbots, poses a serious threat to users and quality of service. Chat bots target popular chat networks to distribute spam and malware. We first collect data from a large commercial chat network and then conduct a series of analyses. While analyzing the data, different patterns were detected which represented different bot behaviors. Based on the analysis of the dataset, we proposed a classification system with three main components: (1) content-based classifiers, (2) a machine-learning classifier, and (3) a communicator. All three components of the system complement each other in detecting bots. Evaluation of the system has shown some measured success in detecting bots both in a log-based dataset and in live chat rooms. Index words: Yahoo! Chat room, Chat Bots, ChatterBots, SPAM, YMSG. A thesis submitted to the Graduate Faculty of The University of Georgia in partial fulfillment of the requirements for the degree Master of Science, Athens, Georgia, 2010. B.E., Visveswariah Technological University, India, 2006. © 2010 Sriti Kumar. All Rights Reserved. Major Professor: Kang Li; Committee: Lakshmish Ramaxwamy, Prashant Doshi. Electronic version approved: Maureen Grasso, Dean of the Graduate School, The University of Georgia, December 2010. Dedication: I would like to dedicate my work to my mother for being patient with me, my father for never questioning me, and my brother for his constant guidance, and above all for their unconditional love.
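The content-based component described in the abstract can be illustrated with a toy sketch. This is not the thesis's actual classifier: the function, thresholds, and message format below are invented for illustration. It flags senders who mostly repeat identical lines, one simple behavioral pattern of spam bots in chat logs:

```python
from collections import defaultdict

def flag_suspected_bots(messages, min_msgs=5, dup_ratio=0.8):
    """messages: list of (sender, text) pairs from a chat log.
    Flags senders with at least min_msgs messages whose single most
    repeated line makes up dup_ratio or more of everything they sent."""
    by_sender = defaultdict(list)
    for sender, text in messages:
        by_sender[sender].append(text.strip().lower())
    flagged = set()
    for sender, texts in by_sender.items():
        if len(texts) < min_msgs:
            continue  # too little data to judge this sender
        most_common = max(texts.count(t) for t in set(texts))
        if most_common / len(texts) >= dup_ratio:
            flagged.add(sender)
    return flagged

log = [("spambot", "buy now")] * 6 + [("alice", f"msg {i}") for i in range(6)]
print(flag_suspected_bots(log))  # {'spambot'}
```

In the thesis's three-part design, a machine-learning classifier and a communicator (which probes suspected bots interactively) would complement content checks like this one.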
  • Wikipedia and Intermediary Immunity: Supporting Sturdy Crowd Systems for Producing Reliable Information Jacob Rogers Abstract
    THE YALE LAW JOURNAL FORUM, October 9, 2017. Abstract: The problem of fake news impacts a massive online ecosystem of individuals and organizations creating, sharing, and disseminating content around the world. One effective approach to addressing false information lies in monitoring such information through an active, engaged volunteer community. Wikipedia, as one of the largest online volunteer contributor communities, presents one example of this approach. This Essay argues that the existing legal framework protecting intermediary companies in the United States empowers the Wikipedia community to ensure that information is accurate and well-sourced. The Essay further argues that current legal efforts to weaken these protections, in response to the "fake news" problem, are likely to create perverse incentives that will harm volunteer engagement and confuse the public. Finally, the Essay offers suggestions for other intermediaries beyond Wikipedia to help monitor their content through user community engagement. Introduction: Wikipedia is well-known as a free online encyclopedia that covers nearly any topic, including both the popular and the incredibly obscure. It is also an encyclopedia that anyone can edit, an example of one of the largest crowd-sourced, user-generated content websites in the world. This user-generated model is supported by the Wikimedia Foundation, which relies on the robust intermediary liability immunity framework of U.S. law to allow the volunteer editor community to work independently. Volunteer engagement on Wikipedia provides an effective framework for combating fake news and false information. It is perhaps surprising that a project open to public editing could be highly reliable.
  • Improving Wikimedia Projects Content Through Collaborations Waray Wikipedia Experience, 2014 - 2017
    Improving Wikimedia Projects Content through Collaborations: Waray Wikipedia Experience, 2014-2017. Harvey Fiji, Michael Glen U. Ong, Jojit F. Ballesteros, Bel B. Ballesteros. ESEAP Conference 2018, 5-6 May 2018, Bali, Indonesia. On 8 November 2013, typhoon Haiyan devastated the Eastern Visayas region of the Philippines when it made landfall in Guiuan, Eastern Samar and Tolosa, Leyte. The typhoon affected about 16 million individuals in the Philippines, and Tacloban City in Leyte was one of the worst affected areas.[1] Eastern Visayas, specifically the provinces of Biliran, Leyte, Northern Samar, Samar and Eastern Samar, is home to the Waray speakers in the Philippines.[2][3] Outline of the presentation: I. Background of Waray Wikipedia; II. Collaborations made by Sinirangan Bisaya Wikimedia Community; III. Problems encountered; IV. Lessons learned; V. Future plans; References; Photo credits. I. Background of Waray Wikipedia (https://war.wikipedia.org): Proposed on or about June 23, 2005 and created on or about September 24, 2005. Deployed lsjbot from Feb 2013 to Nov 2015, creating 1,143,071 Waray articles about flora and fauna. As of 24 April 2018, it has a total of 1,262,945 articles, created by lsjbot (90.5%) and by humans (9.5%). As of 31 March 2018, it has 401 views per hour. Sinirangan Bisaya (Eastern Visayas) Wikimedia Community is the (offline) community that continuously improves Waray Wikipedia and related Wikimedia projects. II. Collaborations made by Sinirangan Bisaya Wikimedia Community: A. Collaborations with private* and national government** organizations; introductory letter; series of meetings and communications. B.
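The bot-to-human split reported in the presentation follows directly from the article counts it gives; a quick arithmetic check:

```python
# Figures from the presentation: Waray Wikipedia as of 24 April 2018.
total = 1_262_945      # total articles
by_lsjbot = 1_143_071  # articles created by the lsjbot script

bot_share = 100 * by_lsjbot / total
human_share = 100 - bot_share
print(f"lsjbot: {bot_share:.1f}%, humans: {human_share:.1f}%")  # lsjbot: 90.5%, humans: 9.5%
```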
  • Report on Requirements for Usage and Reuse Statistics for GLAM Content
    Report on Requirements for Usage and Reuse Statistics for GLAM Content. Author: Maarten Zeinstra, Kennisland. Contents: GLAMTools project; Executive Summary; Introduction; Problem definition; Current status; Research overview (Questionnaire; Qualitative research on the technical possibilities of GLAM stats; Research conclusions); Requirements for usage and reuse statistics for GLAM content (High-level requirements; High-level technical requirements; Medium-level technical requirements; Non-functional requirements; Non-requirements / out of scope; Data model); Next steps. The "Report on requirements for usage and reuse statistics for GLAM content" is part of the Europeana GLAMTools project, a joint Wikimedia Chapters and Europeana project. The purpose of the project is to develop a scalable and maintainable system for mass uploading (open) content from galleries, libraries, archives and museums (henceforth GLAMs) to Wikimedia Commons and to create GLAM-specific requirements for usage statistics. The GLAMTools project is initiated and made possible by financial support from Wikimedia France, Wikimedia UK, Wikimedia Netherlands, and Wikimedia Switzerland. Maarten Zeinstra from Kennisland wrote the research behind this report and the report itself. Kennisland is a Dutch think tank active in the cultural sector, working especially with open content, dissemination of cultural heritage and copyright law, and is also a partner in the project.
  • Włodzimierz Lewoniewski Metoda Porównywania I Wzbogacania
    Włodzimierz Lewoniewski, Metoda porównywania i wzbogacania informacji w wielojęzycznych serwisach wiki na podstawie analizy ich jakości (The method of comparing and enriching information in multilingual wikis based on the analysis of their quality). Doctoral dissertation, Poznań 2018. Supervisor: Prof. dr hab. Witold Abramowicz; assistant supervisor: dr Krzysztof Węcel. Contents (translated from Polish): 1. Introduction (1.1 Motivation; 1.2 Research goal and thesis; 1.3 Information sources and research methods; 1.4 Structure of the dissertation); 2. Data and information quality (2.1 Introduction; 2.2 Data quality; 2.3 Information quality; 2.4 Summary); 3. Wiki services and semantic knowledge bases (3.1 Introduction; 3.2 Wiki services; 3.3 Wikipedia as an example of a wiki service; 3.4 Infoboxes; 3.5 DBpedia; 3.6 Summary); 4. Methods of determining the quality of Wikipedia articles (4.1 Introduction; 4.2 Quality dimensions of wiki services; 4.3 Quality problems of Wikipedia; 4.4
  • Are Encyclopedias Dead? Evaluating the Usefulness of a Traditional Reference Resource Rachel S
    St. Cloud State University, theRepository at St. Cloud State. Library Faculty Publications, Library Services, 2012. Rachel S. Wexelbaum, St. Cloud State University, [email protected]. Follow this and additional works at: https://repository.stcloudstate.edu/lrs_facpubs. Part of the Library and Information Science Commons. Recommended citation: Wexelbaum, Rachel S., "Are Encyclopedias Dead? Evaluating the Usefulness of a Traditional Reference Resource" (2012). Library Faculty Publications. 26. https://repository.stcloudstate.edu/lrs_facpubs/26. This Article is brought to you for free and open access by the Library Services at theRepository at St. Cloud State. It has been accepted for inclusion in Library Faculty Publications by an authorized administrator of theRepository at St. Cloud State. For more information, please contact [email protected]. Author: Rachel Wexelbaum is Collection Management Librarian and Assistant Professor at Saint Cloud State University, Saint Cloud, Minnesota. Contact details: Rachel Wexelbaum, Collection Management Librarian, MC135D Collections, Saint Cloud State University, 720 4th Avenue South, Saint Cloud, MN 56301. Email: [email protected]. Abstract: Purpose – To examine past, current, and future usage of encyclopedias. Design/methodology/approach – Review the history of encyclopedias, their composition, and usage by focusing on select publications covering different subject areas. Findings – Due to their static nature, traditionally published encyclopedias are not always accurate, objective information resources. Intentions of editors and authors also come into question. A researcher may find more value in using encyclopedias as historical documents rather than resources for quick facts.
  • Attacker Chatbots for Randomised and Interactive Security Labs, Using Secgen and Ovirt
    Hackerbot: Attacker Chatbots for Randomised and Interactive Security Labs, Using SecGen and oVirt Z. Cliffe Schreuders, Thomas Shaw, Aimée Mac Muireadhaigh, Paul Staniforth, Leeds Beckett University Abstract challenges, rewarding correct solutions with flags. We deployed an oVirt infrastructure to host the VMs, and Capture the flag (CTF) has been applied with success in leveraged the SecGen framework [6] to generate lab cybersecurity education, and works particularly well sheets, provision VMs, and provide randomisation when learning offensive techniques. However, between students. defensive security and incident response do not always naturally fit the existing approaches to CTF. We present 2. Related Literature Hackerbot, a unique approach for teaching computer Capture the flag (CTF) is a type of cyber security game security: students interact with a malicious attacker which involves collecting flags by solving security chatbot, who challenges them to complete a variety of challenges. CTF events give professionals, students, security tasks, including defensive and investigatory and enthusiasts an opportunity to test their security challenges. Challenges are randomised using SecGen, skills in competition. CTFs emerged out of the and deployed onto an oVirt infrastructure. DEFCON hacker conference [7] and remain common Evaluation data included system performance, mixed activities at cybersecurity conferences and online [8]. methods questionnaires (including the Instructional Some events target students with the goal of Materials Motivation Survey (IMMS) and the System encouraging interest in the field: for example, PicoCTF Usability Scale (SUS)), and group interviews/focus is an annual high school competition [9], and CSAW groups. Results were encouraging, finding the approach CTF is an annual competition for students in Higher convenient, engaging, fun, and interactive; while Education (HE) [10].
  • Brion Vibber 2011-08-05 Haifa Part I: Editing So Pretty!
    Editing 2.0: MediaWiki's upcoming visual editor and the future of templates. Brion Vibber, 2011-08-05, Haifa. Part I: Editing. So pretty! Wikipedia articles can include rich formatting, beyond simple links and images to complex templates to generate tables, pronunciation guides, and all sorts of details. So icky! But when you push that "edit" button, you often come face to face with a giant blob of markup that's very hard to read. Here we can't even see the first paragraph of the article until we scroll past several pages of infobox setup. Even fairly straightforward paragraphs start to blur together when you break out the source. The markup is often non-obvious; features that are clearly visible in the rendered view, like links and images, don't stand out in the source view, and long inline citations and such can make it harder to find the actual body text you wanted to edit. RTL WTF? As many of you here today will be well aware, the way our markup displays in a raw text editor can also be really problematic for right-to-left scripts like Hebrew and Arabic. It's very easy to get lost about whether you've opened or closed some tag, or whether your list item is starting at the beginning of a line. Without control over the individual pieces, we can't give any hints to the text editor. RTL A-OK: The same text rendered as structured HTML doesn't have these problems; bullets stay on the right side of their text, and reference citations are distinct entities.
  • The Culture of Wikipedia
    Good Faith Collaboration: The Culture of Wikipedia. Joseph Michael Reagle Jr. Foreword by Lawrence Lessig. The MIT Press, Cambridge, MA. Web edition, Copyright © 2011 by Joseph Michael Reagle Jr., CC-NC-SA 3.0. Purchase at Amazon.com | Barnes and Noble | IndieBound | MIT Press. Wikipedia's style of collaborative production has been lauded, lambasted, and satirized. Despite unease over its implications for the character (and quality) of knowledge, Wikipedia has brought us closer than ever to a realization of the centuries-old pursuit of a universal encyclopedia. Good Faith Collaboration: The Culture of Wikipedia is a rich ethnographic portrayal of Wikipedia's historical roots, collaborative culture, and much debated legacy. Contents: Foreword; Preface to the Web Edition; Praise for Good Faith Collaboration; Preface; Extended Table of Contents; 1. Nazis and Norms; 2. The Pursuit of the Universal Encyclopedia; 3. Good Faith Collaboration; 4. The Puzzle of Openness. "Reagle offers a compelling case that Wikipedia's most fascinating and unprecedented aspect isn't the encyclopedia itself — rather, it's the collaborative culture that underpins it: brawling, self-reflexive, funny, serious, and full-tilt committed to the project, even if it means setting aside personal differences. Reagle's position as a scholar and a member of the community makes him uniquely situated to describe this culture." —Cory Doctorow, Boing Boing. "Reagle provides ample data regarding the everyday practices and cultural norms of the community which collaborates to produce Wikipedia. His rich research and nuanced appreciation of the complexities of cultural digital media research are well presented.
  • Tomenet-Guide.Pdf
    TomeNET Guide. Latest update: 17 September 2021, written by C. Blue ([email protected]) for TomeNET version v4.7.4b. Official websites: https://www.tomenet.eu/ (official main site, formerly www.tomenet.net) and https://muuttuja.org/tomenet/ (Mikael's TomeNET site). Runes & Runemastery sections by Kurzel ([email protected]). You should always keep this guide up to date: either go to www.tomenet.eu to obtain the latest copy or simply run the TomeNET-Updater.exe in your TomeNET installation folder (a desktop shortcut should also be available) to update it. If your text editor cannot display the guide properly (it needs a fixed-width font, for example Courier), simply open it in any web browser instead. Welcome to this guide! Although I'm trying, I give no guarantee that this guide a) contains really every detail/issue about TomeNET and b) is all the time 100% accurate on every occasion. Don't blame me if something differs or is missing; it shouldn't though. If you have any suggestions about the guide or the game, please use the /rfe command in the game or write to the official forum on www.tomenet.eu. Contents: (0) Quickstart (if you don't like to read much): (0.1) Start & play, character validation, character timeout; (0.1a) Colours and colour blindness; (0.1b) Photosensitivity / Epilepsy issues; (0.2) Command reference
  • Genedb and Wikidata[Version 1; Peer Review: 1 Approved, 1 Approved with Reservations]
    Wellcome Open Research 2019, 4:114. Last updated: 20 OCT 2020. SOFTWARE TOOL ARTICLE. GeneDB and Wikidata [version 1; peer review: 1 approved, 1 approved with reservations]. Magnus Manske, Ulrike Böhme, Christoph Püthe, Matt Berriman. Parasites and Microbes, Wellcome Trust Sanger Institute, Cambridge, CB10 1SA, UK. First published: 01 Aug 2019 (https://doi.org/10.12688/wellcomeopenres.15355.1); latest published: 14 Oct 2019 (https://doi.org/10.12688/wellcomeopenres.15355.2). Abstract: Publishing authoritative genomic annotation data, keeping it up to date, linking it to related information, and allowing community annotation is difficult and hard to support with limited resources. Here, we show how importing GeneDB annotation data into Wikidata allows for leveraging existing resources, integrating volunteer and scientific communities, and enriching the original information. Keywords: GeneDB, Wikidata, MediaWiki, Wikibase, genome, reference, annotation, curation. Invited reviewers: 1. Sebastian Burgstaller-Muehlbacher, Medical University of Vienna, Vienna, Austria; 2. Andra Waagmeester, Micelio, Antwerp, Belgium. This article is included in the Wellcome Sanger Institute gateway. Any reports and responses or comments on the article can be found at the end of the article. Corresponding author: Magnus Manske ([email protected]). Author roles: Manske M: Conceptualization, Methodology, Software, Writing – Original Draft Preparation; Böhme U: Data Curation, Validation, Writing – Review & Editing; Püthe C: Project Administration, Writing – Review & Editing; Berriman M: Resources, Writing – Review & Editing. Competing interests: No competing interests were disclosed. Grant information: This work was supported by the Wellcome Trust through a Biomedical Resources Grant to MB [108443].