STANFORD INTERNET OBSERVATORY 2019-2021
The first two years.
Two Year Review: June 6, 2019 — June 6, 2021
Internet Observatory | Cyber Policy Center | io.stanford.edu

CONTENTS
LETTER FROM THE DIRECTORS
SIO BY THE NUMBERS
RESEARCH
SPOTLIGHT: POTEMKIN PAGES AND PERSONAS
SPOTLIGHT: PLATFORM TAKEDOWNS
TEACHING
SPOTLIGHT: TELLING CHINA'S STORY
POLICY
SPOTLIGHT: THE ELECTION INTEGRITY PARTNERSHIP
ACKNOWLEDGEMENTS

LETTER FROM THE DIRECTORS

Two years ago, we launched the Stanford Internet Observatory as a cross-disciplinary laboratory for the study of abuse in current information technologies, with a focus on the misuse of social media. The Observatory was created to learn about these abuses in real time and to translate our research discoveries into education for the next generation of engineers and entrepreneurs and into policy innovations for the public good. The term “Observatory” was not an accident: for centuries, physicists and astronomers have coordinated resources to build the massive technological infrastructure necessary to research the universe. The internet is similarly an ecosystem constantly in flux as new apps, emerging technologies, and new communities of users transform the space; researchers need innovative capabilities to research this new information frontier.

When we launched, we knew our work would be important because of the extent to which online activity increasingly shapes public perception of our society’s most important issues. We did not anticipate some of the specific forms this activity would take. The global pandemic moved human interaction from substantively online to near-completely online. As our team adapted to working from home, the spread of online information intensified: an organized marketing campaign to launch the conspiratorial “Plandemic” video; manipulation of livestreams to push fear during Black Lives Matter protests; global superpowers using health diplomacy as concerted soft power moves in the global south; and the 2020 US election, culminating in the unprecedented—although perhaps not unanticipated—Capitol insurrection on January 6, 2021.

We launched on June 6, 2019, with an initial team of three and have since grown to a full-time team of 10, working with 76 student research assistants over the past two years. SIO’s success relies on the tireless efforts of the students and staff whose work is highlighted in this report.

As we embark on our third year, we reflect deeply on our research and refine our path forward as a research center. In addition to highlighting the output of our team, this report details our focus areas and goals for the coming year.

We would like to extend our gratitude to our faculty leads Nate Persily and Dan Boneh at the Stanford Cyber Policy Center; Michael McFaul, the director of the Freeman Spogli Institute; and our generous supporters including Craig Newmark Philanthropies, the William and Flora Hewlett Foundation, the Omidyar Network, the Charles Koch Foundation and Felicis Ventures.
Alex Stamos, Director
Elena Cryst, Associate Director

Building the infrastructure to study the internet

SIO BY THE NUMBERS

76 STUDENT RESEARCH ASSISTANTS
479 STUDENTS ENROLLED IN SIO COURSES
9 COURSES TAUGHT:
  TRUST & SAFETY ENGINEERING x3
  ONLINE OPEN-SOURCE INVESTIGATIONS x3
  HACK LAB x2
  CURRENT TOPICS IN TECHNOLOGY PLATFORM POLICY x1
3 COLLABORATIVE RESEARCH INITIATIVES:
  ATTRIBUTION.NEWS
  ELECTION INTEGRITY PARTNERSHIP
  VIRALITY PROJECT
5 END-TO-END ENCRYPTION WORKSHOPS
200+ WORKSHOP PARTICIPANTS

PUBLICATIONS
BLOG POSTS: 58
REPORTS: 33
ARTICLES AND OP-EDS: 20
WORKING PAPERS UNDER REVIEW: 2
PUBLISHED PEER-REVIEWED JOURNAL ARTICLES: 1

RESEARCH

The Stanford Internet Observatory (SIO) started with intentionally broad research interests—including disinformation, trust and safety, and encryption policy—to see where the inquiry areas would grow. Our focus was on creating short-term impact through policy briefs, blog posts, and media engagement that would allow our research to inform decision makers in real time. That flexibility enabled us to adapt our research and embrace new projects. Our technical team built tools to ingest, process, and analyze data from a dozen different social media platforms. These shared research tools have greatly reduced the manual effort necessary to understand and analyze online conversation trends, and have empowered individual social science researchers to pursue their own lines of inquiry without the need for specialized data science skills.

Our researchers—from students to postdocs, young professionals to established experts—leveraged this capacity to understand such topics as social media manipulation around the 2020 Taiwan election, the influence of Russian cyber mercenaries in Libya, and the dynamics and behavior of Parler’s first 13 million users. In addition, we spearheaded two inter-institutional collaborations to study misinformation around specific domestic US events, the Election Integrity Partnership and the Virality Project, which are detailed in this report’s Research Spotlights. The spotlights also highlight our takedown analysis and three of our pivotal reports published in the past two years.

The pathologies of information ecosystems extend beyond mis- and disinformation, so we continue to reevaluate the scope of our investigation. In looking at our work from the past two years—over 100 blog posts, op-eds, reports, and papers—we have refined our research into four distinct but interrelated areas: Trust and Safety, Platform Policy, Information Interference, and Emerging Technology.

Trust and Safety

At the core of the Stanford Internet Observatory’s vision is the desire for the future of the consumer internet to be informed by the mistakes of its past. The Trust and Safety research area focuses on the ways consumer internet services are abused to cause real human harm and on the potential operational, design, and engineering responses. Our research will continue to investigate technical weaknesses on platforms such as Clubhouse, share innovative open-source research techniques such as our published work about Wikipedia, and examine the accessibility of self-harm prevention content on platforms across the internet. Our key outputs in the next year will be our open-access textbook on Trust and Safety Engineering, and a new cross-disciplinary journal incentivizing research collaboration in this space.
Platform Policy

How well do technology companies’ published policies address user and content concerns? Is government regulation grounded in technical reality? An unexpected outcome of our Election Integrity Partnership was the power of our comprehensive platform policy analysis for creating a rubric to assess policy. Since publishing that analysis, we have continued pressure-testing platform policies in light of the ever-evolving geopolitical climate. This research area works in tandem with the Trust and Safety track but focuses strictly on policy as a way to strengthen trust and safety. Projects examine ways to limit abuse on end-to-end encrypted platforms, the impact of platform policy on access to self-harm content, the role of consent in online intimate imagery, and others.

Information Interference

Information manipulation and disinformation research to date has primarily focused on finding and moderating away networks of inauthentic actors on social media platforms. We believe this scopes the problem too narrowly. Campaigns that deliberately spread false and misleading information no longer rely on inauthentic actors; they originate from domestic as well as foreign actors, who blend traditional media as well as social media manipulation into their strategic efforts. In our assessments of campaigns we increasingly examine full-spectrum activity: both overt/attributable and covert/inauthentic activity spanning broadcast, print, and social media. Information interference, more so than mis- and disinformation, encompasses the variety of operations to manipulate public consensus within the information environment writ large. This area of our work increasingly intersects with the others, as cryptocurrencies, emerging technologies, and new platforms are incorporated into manipulation efforts. At times, these efforts incorporate hacked and leaked material as well, which serves as an intersection point with more traditional cybersecurity research.

Emerging Technology

We identified our fourth research area with an eye to the future, which will include new generative machine learning technologies such as deep fakes or the Generative Pre-trained Transformer (GPT) autoregressive language models capable of emulating near-perfect human writing. Our research will look at both the potential and the harm inherent in these technologies, and how they will shape our online world. We also will explore the ways new machine learning techniques can be deployed in protective situations, and look at policy frameworks that can appropriately regulate AI advancements. Our technical infrastructure will play an integral role in creating toolkits for researchers interested in further exploring this new technology.

RESEARCH SPOTLIGHT

POTEMKIN PAGES AND PERSONAS: ASSESSING GRU ONLINE OPERATIONS, 2014-2019
Renée DiResta and Shelby Grossman • November 12, 2019

The Stanford Internet Observatory published its first detailed whitepaper in 2019 at the request of the United States Senate Select Committee on Intelligence.
