
INTERNET, BIG DATA, AND ALGORITHMS: THREATS TO PRIVACY AND FREEDOM OR GATEWAY TO A NEW FUTURE

The Aspen Institute Congressional Program
May 10-13, 2019 | Cambridge, Massachusetts

TABLE OF CONTENTS

Rapporteur’s Summary
    Grace Abuhamad ............................................... 3
Opening Remarks by MIT President
    L. Rafael Reif ............................................... 9
Artificial Intelligence & Public Policy: The Beginning of a Conversation
    R. David Edelman ............................................ 13
Algorithms are Replacing Nearly All Bureaucratic Processes
    Cathy O’Neil ................................................ 17
How to Exercise the Power You Didn’t Ask For
    Jonathan Zittrain ........................................... 21
Beyond the Vast Wasteland
    Ethan Zuckerman ............................................. 27
Privacy and Consumer Control
    J. Howard Beales III and Timothy J. Muris ................... 37
Privacy and Human Behavior in the Age of Misinformation
    Alessandro Acquisti ......................................... 43
The Summer of Hate Speech
    Larry Downes ................................................ 49
Is the Tech Backlash Going Askew?
    Larry Downes and Blair Levin ................................ 53
How More Regulation for U.S. Tech Could Backfire
    Larry Downes ................................................ 57
Fixing Social Media’s Grand Bargain
    Jack Balkin ................................................. 61
Conference Agenda ............................................... 79
Conference Participants ......................................... 83

RAPPORTEUR’S SUMMARY

Grace Abuhamad
Graduate student, Technology and Policy Program, MIT

Under the auspices of the Aspen Institute Congressional Program, a bipartisan group of twelve members of Congress convened from May 10-13, 2019, at the Massachusetts Institute of Technology to discuss implications and policy options regarding the Internet, big data, and algorithms. The members of Congress deliberated with scholars and practitioners to acquire a better understanding of artificial intelligence technologies, their current and future applications, and possible threats to consumer privacy and freedom.

The participants were mindful that artificial intelligence is a new source of wealth, but also a new source of inequality among and within nations. Today’s “arms race” is one in which countries such as China have directed national strategies and aim to claim technological supremacy within a decade. Given the scope and scale of artificial intelligence, the nation that shapes the future of these technologies will shape the future of the world. Whether the United States can leverage its resources and win such a race remains to be seen.

Defining Success in Artificial Intelligence

Artificial intelligence is the ability of machines to learn without being explicitly programmed. Like humans, these machines learn from past data to predict future outcomes. When the input data is limited, machines produce biased and harmful results that tend to have a disparate impact on disempowered groups.

Algorithm designers can mitigate these results by recognizing limitations and changing their definition of success. Currently, success is measured by an algorithm’s overall or aggregate performance at a defined task, such as matching faces to names. Research indicates that algorithms can have high aggregate accuracy and yet, when results are disaggregated by racial or ethnic group, show significant disparities among those groups. Applications of such algorithms can automate the inequality and discrimination that existed in the past data on which these algorithms are trained.

In most cases, designers are not aware of the data limitations and their unintended consequences in artificial intelligence applications. This challenge is not unique to artificial intelligence. For example, there used to be more female than male fatalities in automobile accidents, since automobiles were designed and tested according to male-form crash test dummies. Once this limitation was recognized and corrected, fatalities equalized across genders: the definition of successful design and testing expanded to include gender equality. As awareness of algorithmic bias increases, there may also be an expansion of the definition of success for these algorithms.

Awareness, context, and transparency are three ways to expand the definition of success. Given that artificial intelligence has the potential to impact every sector of the economy and every aspect of American life, there needs to be more widespread training to increase awareness of both the benefits and the risks of artificial intelligence. Once aware, Americans can democratize artificial intelligence by bringing diverse experiences to bear in recognizing and addressing its limitations.

Context plays an important role in artificial intelligence, since some applications have more limitations than others. Participants recognized, for example, that export controls need to be more precise: instead of limiting artificial intelligence as a whole, limits could be applied specifically to kinetic applications. Contexts that are highly regulated today, such as healthcare and national security, will have higher thresholds for the safety and accuracy of artificial intelligence applications.

Where determining precise regulations or thresholds may not yet be possible, increasing transparency is another way to expand the discourse around success in artificial intelligence applications. Transparency can help align artificial intelligence with public trust. Artificial intelligence offers autonomy, speed, and endurance that exceed human capacities and could serve as a powerful system for national security. Even as these applications may deliver lethal capacity in a more targeted way, there is a need for legal checks, and perhaps a “human-in-the-loop” process, in order for these systems to be trustworthy.

Explanations are a form of transparency that develops this human-machine collaboration. These, too, are context-specific, and each context may require different gradients of explanation, raising questions such as: For whom is the explanation intended? What is its purpose? Can the explanation be contested? Is the explanation technically and financially feasible? In the medical context, for example, a machine learning system developed to reduce the instances of sepsis was not explainable, yet it brought down the incidence of sepsis by 60%. Participants agreed that this example and others make the debate about explanation requirements more nuanced.

Threats to Privacy and Democratic Freedoms

Algorithmic harms are perhaps less noticeable, though no less threatening, to civil liberties. In the context of facial recognition technology, an estimated 130 million people in the United States already have images of their faces in government databases and can be subject to unwarranted searches. While these images may have been lawfully collected through driver’s license registries and other government services, it is not clear that such searches respect the context in which the images were collected.

Private companies engage in even more pervasive data collection, since individuals voluntarily upload and identify personal photographs without full awareness of how these data will be used. During the Black Lives Matter protests, law enforcement officials identified some individuals using data sourced from both government databases and social media platforms. Such uses of data could have chilling effects on civil liberties. There are no federal laws regulating the use of facial recognition technology and individual face prints.

Like the facial recognition example above, certain uses of data are “lawful but awful,” in the sense that they do not promote democratic values. At scale, these uses can undermine democracy and election integrity through surveillance, misinformation, disinformation, and manipulation. About 70% of American

democratic values, in addition to, or along with, a role in protecting individual privacy.

Negotiating the Boundaries of Privacy and Control

The legal and social boundaries of privacy have changed over time and are based on different assumptions in different cultures and societies. In our modern world, data is key. But who actually owns the data, and when or how one consents to having one’s data collected, are disputed topics. For example, once an individual’s data has been harvested and processed, through