INTERNET, BIG DATA, AND ALGORITHMS: THREATS TO PRIVACY AND FREEDOM OR GATEWAY TO A NEW FUTURE

The Aspen Institute Congressional Program
May 10–13, 2019 | Cambridge, Massachusetts

TABLE OF CONTENTS

Rapporteur's Summary ...... Grace Abuhamad
Opening Remarks by MIT President ...... L. Rafael Reif
Artificial Intelligence & Public Policy: The Beginning of a Conversation ...... R. David Edelman
Algorithms are Replacing Nearly All Bureaucratic Processes ...... Cathy O'Neil
How to Exercise the Power You Didn't Ask For ...... Jonathan Zittrain
Beyond the Vast Wasteland ...... Ethan Zuckerman
Privacy and Consumer Control ...... J. Howard Beales III and Timothy J. Muris
Privacy and Human Behavior in the Age of Misinformation ...... Alessandro Acquisti
The Summer of Hate Speech ...... Larry Downes
Is the Tech Backlash Going Askew? ...... Larry Downes and Blair Levin
How More Regulation for U.S. Tech Could Backfire ...... Larry Downes
Fixing Social Media's Grand Bargain ...... Jack Balkin
Conference Agenda
Conference Participants


RAPPORTEUR'S SUMMARY

Grace Abuhamad

Graduate student, Technology and Policy Program, MIT

Under the auspices of the Aspen Institute Congressional Program, a bipartisan group of twelve members of Congress convened from May 10–13, 2019, at the Massachusetts Institute of Technology to discuss implications and policy options regarding the Internet, big data, and algorithms. The members of Congress deliberated with scholars and practitioners to acquire a better understanding of artificial intelligence technologies, their current and future applications, and possible threats to consumer privacy and freedom.

The participants were mindful that artificial intelligence is a new source of wealth, but also a new source of inequality among nations and within nations. Today's "arms race" is one where countries such as China have directed national strategies and aim to claim technological supremacy within a decade. Given the scope and scale of artificial intelligence, the nation that will shape the future of these technologies will shape the future of the world. Whether or not the United States may be the only nation able to leverage its resources and win such a race remains to be seen.

Defining Success in Artificial Intelligence

Artificial intelligence is the ability for machines to learn without being explicitly programmed. Like humans, these machines learn from past data to predict future outcomes. When the input data is limited, machines produce biased and harmful results that tend to have disparate impact on disempowered groups.

Algorithm designers can mitigate these results by recognizing limitations and changing their definition of success. Currently, success is measured by an algorithm's overall or aggregate performance at a defined task, such as matching faces to names. Research indicates that algorithms can have high aggregate accuracy, and yet, when results are disaggregated by racial or ethnic groups, can show significant disparities among these groups. Applications of such algorithms can automate inequality and discrimination that existed in the past data on which these algorithms are trained.

In most cases, designers are not aware of the data limitations and their unintended consequences in artificial intelligence applications. This challenge is not unique to artificial intelligence. For example, there used to be more female than male fatalities in automobile accidents, since automobiles were designed and tested according to male-form crash test dummies. Once this limitation was recognized and corrected, fatalities equalized across genders. The definition of successful design and testing expanded to include gender equality.
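The disaggregation point above is easy to make concrete. Below is a minimal sketch in Python; the task, the group labels, the counts, and the error rates are all hypothetical, chosen only to show how an evaluation that reports a single aggregate number can hide a large per-group gap:

```python
# Hypothetical illustration: aggregate accuracy can mask per-group disparity.
# The groups, counts, and error rates below are invented for this example.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(predicted == actual for predicted, actual in pairs) / len(pairs)

# (predicted, actual) outcomes for a hypothetical face-matching task
results = {
    "group_a": [(1, 1)] * 95 + [(0, 1)] * 5,    # 95% correct
    "group_b": [(1, 1)] * 65 + [(0, 1)] * 35,   # 65% correct
}

all_pairs = [pair for pairs in results.values() for pair in pairs]
print(f"aggregate accuracy: {accuracy(all_pairs):.0%}")   # 80%

for group, pairs in results.items():
    print(f"{group} accuracy: {accuracy(pairs):.0%}")     # 95% vs. 65%
```

A success metric computed only over `all_pairs` would report 80% and never surface the 30-point gap; disaggregating the same results is what exposes it.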

As awareness around algorithmic bias increases, there may also be an expansion of the definition of success for these algorithms.

Awareness, context, and transparency are three ways by which to expand the definition of success. Given that artificial intelligence has the potential to impact every sector of the economy and aspect of American lives, there needs to be more widespread training to increase awareness of both the benefits and risks of artificial intelligence. Once aware, Americans can democratize artificial intelligence by bringing diverse experiences to recognize and address limitations.

Context plays an important role in artificial intelligence, since some applications have more limitations than others. Participants recognized, for example, that export controls needed to be more precise: instead of limiting artificial intelligence as a whole, limits could be applied specifically to kinetic applications. Contexts that are highly regulated today, such as healthcare and national security, will have higher thresholds for safety and accuracy of artificial intelligence applications. Where determining precise regulations or thresholds may not yet be possible, increasing transparency is another way to expand discourse around success in artificial intelligence applications.

Transparency can help align artificial intelligence with public trust. Artificial intelligence presents a number of opportunities, from autonomy, speed, and endurance that exceed human capacities, and could serve as a powerful system for national security. Even as these applications may deliver lethal capacity in a more targeted way, there is a need for legal checks, and perhaps a "human-in-the-loop" process, in order for these systems to be trustworthy.

Explanations are a form of transparency that develop this human-machine collaboration. These, too, are context specific, and each context may require different gradients of explanation, raising questions such as: For whom is the explanation intended? What is its purpose? Can the explanation be contested? Is the explanation feasible technically and financially? In the medical context, for example, a machine learning system developed to reduce the instances of sepsis was not explainable, but brought down the instances of sepsis by 60%. Participants agreed that this example and others make the debate about explanation requirements more nuanced.

Threats to Privacy and Democratic Freedoms

Algorithmic harms are perhaps less noticeable, though no less threatening to civil liberties. In the context of facial recognition technology, an estimated 130 million people in the United States already have images of their faces in government databases and can be subject to unwarranted searches. While these images may have been lawfully collected through driver's license registries and other government services, it is not clear that any searches respect the context in which the image was collected.

Private companies engage in even more pervasive data collection, since individuals voluntarily upload and identify personal photographs without full awareness as to how these data will be used. During the Black Lives Matter protests, law enforcement officials identified some individuals using data sourced from both government databases and social media platforms.

Such uses of data could have chilling effects on civil liberties. There are no federal laws regulating the use of facial recognition technology and individual face prints.

Like the facial recognition example above, certain uses of data are "lawful but awful," in the sense that they do not promote democratic values. At scale, these uses can undermine democracy and election integrity through surveillance, misinformation, disinformation, and manipulation. About 70% of American adults use social media today, yet only about 5% did in 2005. Social media networks have evolved to play a role in trust and civic engagement, though perhaps not always as a positive force. A recent study indicated that, following mass shootings, notoriety can be propagated by and within social media networks: after a request not to name the perpetrator of the attacks in Christchurch, New Zealand, only 15% of the American press did so, yet those articles represented 45% of the articles shared on social media.

Social media platforms were perhaps not designed to protect democratic values, though this does not necessarily prevent a change of purpose. Participants discussed Newt Minow's leadership as Chairman of the Federal Communications Commission in the early 1960s to evolve television programming beyond what he viewed as a "vast wasteland," launching the Public Broadcasting Service and creating a role for television in informing the public. These actions did not eliminate television, but instead created an additional form of television aligned with public purpose. Similar to the "4th Estate" role of the press, today's influence of social media networks suggests perhaps a public role in protecting democratic values, in addition to, or along with, a role in protecting individual privacy.

Negotiating the Boundaries of Privacy and Control

The legal and social boundaries of privacy have changed over time, and are based on different assumptions in different cultures and societies. In our modern world, data is key. But who actually owns the data, and when or how one consents to having their data collected, are disputable topics. For example, once an individual's data has been harvested and processed, through either voluntary or involuntary online interactions, it can be put to use in targeted consumer marketing campaigns, campaign advertisements, and individualized sales pitches.

In debating how and whether to protect privacy, policymakers face a revealed-preferences contrast: individuals claim privacy is important to them, yet behave as though it is not. One may therefore suggest that protecting privacy is not necessary. However, there are other ways to understand consumer behavior that lead to the opposite conclusion. Consumers have less information than collectors about how their data can be used and, at the same time, they are overloaded with more information than they can easily process. Researchers suggest that the opportunity cost of reading privacy terms of service amounts to over $700 billion in a consumer's lifetime. As a result, consumers tend to select default options, which, depending on how they are set, influence the outcomes. For example, with organ donation, countries that presume donation have higher donation rates, and vice versa. Overall, this suggests that consumers exert less of a choice than is assumed with regard to protecting their information and privacy.

Most privacy policies today are developed on a flawed "notice and consent" model. For one, privacy policies change often, yet they do not require new consent for every change; they assume consent. Second, this model depends on the construct of information as property that consumers can control, yet it is increasingly difficult for consumers to exert such control. Often, information is used for purposes other than the one for which it was collected. Another issue with control is that consumers cannot always control others' sharing of information about them, such as through a social network. Privacy is viewed as an individual choice, but it is increasingly communal. It may be that privacy needs to be defined not by information control, but rather by a negotiation of boundaries between public and private spaces.

Privacy protection has put too much burden on consumers to manage their information. There are social norms about privacy in public that have been broken by social networks. Consumers may feel coerced into handing over information in order to use a service. Participants agreed that privacy should not be a privilege: consumers need options and cannot be cut out of processes simply because they do not agree to terms, especially as services are increasingly online. There was interest in privacy-enhancing tools and technologies, such as encryption, differential privacy, and secure multi-party computation, though there was concern about whether consumers would bear the cost of deployment. Participants also considered how to operationalize the principle of data minimization. Another suggestion was a do-not-track registry for consumers to opt out of data sharing, though there was concern that offering opt-out options would create a selection bias among consumers.
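Of the privacy-enhancing technologies mentioned, differential privacy is the easiest to sketch in a few lines. The following is a minimal illustration of the standard Laplace mechanism for a counting query; the records and the privacy budget `epsilon` are hypothetical choices for the example, not a deployment-ready design:

```python
# Minimal sketch of differential privacy's Laplace mechanism: answer a
# counting query with noise calibrated so that adding or removing any one
# person's record changes the answer's distribution only slightly.
import math
import random

def noisy_count(records, predicate, epsilon):
    """Differentially private count of records satisfying predicate.

    A counting query has sensitivity 1 (one person changes the true count
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    true_count = sum(1 for record in records if predicate(record))
    # Sample Laplace(0, 1/epsilon) noise by inverse transform sampling.
    u = random.uniform(-0.5, 0.5)
    scale = 1.0 / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

ages = [34, 61, 29, 45, 52, 38, 70, 26]              # hypothetical records
print(noisy_count(ages, lambda age: age >= 40, epsilon=0.5))
```

Real deployments compose many such queries and track a total privacy budget, but the core mechanism is this small: useful aggregate statistics are released without any exact, individual-revealing answer ever being published.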

One of the challenges is assessing the value of personal data and the impact of stronger protections. In discussing whether there should be some form of insurance policy against personal data breaches, participants agreed that it is difficult to quantify the consumer costs of violations. While individuals' comfort with these techniques varies, one thing is certain: marketing will never be the same. The explosive power of artificial intelligence is being harnessed for commercial advantage, which can be either advantageous or disadvantageous to the consumer depending on what perspective is held.

Platform Regulation as Information Fiduciaries

The major digital companies spent over $60 million on lobbying in 2018. Consolidation in the digital industry has raised questions about the power of dominant major players. Participants discussed whether the economies of scale serve consumer interest, or are to the detriment of consumer choices and costs. The digital market is dominated by advertising technology, with consumer data as the value driver. Large platforms dominate the space since their products are designed to collect user information as soon as the user is signed in and identified. Third parties are struggling to compete by collecting data through other online activity.

To hold companies accountable for their power, regulators need to identify harms to consumers. Data breach harms have galvanized some action, and this action might be best focused on increasing consumer confidence. While there are many criticisms of the General Data Protection Regulation in Europe, the law gave Europeans a sense of confidence and control over their privacy. At the same time, some view this law as a "digital trade war" and a way for Europe to extract rents from American companies, since Europe has yet to develop its own innovative environment. Of the 20 most valuable companies in the world, 11 are in the United States, 9 are in China, and none are in Europe. Despite very different approaches, both China and the United States have designed markets that allow companies to prosper and create value. Participants agreed that regulation in the United States needs to better balance consumer protection with innovation, keeping in mind smaller companies and startups that could bear a heavier compliance burden than large incumbents.

One suggestion for platform regulation emerges in the common law tradition of fiduciaries. For example, medical and legal professionals need sensitive information about their patients and clients in order to perform a service, yet these professionals also have a responsibility not to disclose this information. Platforms could be considered "information fiduciaries." This means that companies would be responsible for acting in the best interest of consumers when it comes to their data. These companies would have a duty of care, a duty of confidentiality, and a duty of loyalty. Such duties would foster an environment of trust and increase consumer confidence.

These duties would still allow companies to continue advertising in ways that provide information to consumers, but would hold them responsible when actions are discriminatory, manipulative, or predatory. Some companies have already started to take on a fiduciary role with regard to "news," by defining the term and choosing to boost rankings for more reliable sources of information. In developing the terms of their moderation role, some may argue that companies lose the intermediary liability protections awarded to them in Section 230 of the Communications Decency Act (also known as Title V of the 1996 Telecommunications Act). Participants discussed the intent of Section 230, suggesting that its goal was to protect companies when they acted as editors, assuming that companies would engage rather than not. Participants also discussed possible First Amendment challenges, citing, as an example, the 2018 repeal of the Department of Labor's financial fiduciary rule.

Takeaways

Though not specified explicitly in the Constitution, privacy has emerged as an individual right. Consumers have lost control over their personal information and the ability to think about privacy as consumers rather than products. Companies have taken some action, but need further direction and clarity as to their roles and responsibilities, especially as the European Union's General Data Protection Regulation, and now the state of California, have imposed greater privacy protections for online behavior than previously required. From an innovation perspective, countries such as China are investing heavily in research and development of technologies to shape our future. The time to act is now. The public interest is at stake, perhaps even beyond the borders of the United States.

Participants suggested shifting the burden of responsibility off consumers, focusing remedies on measurable harms, being aware of the balance between incumbents and entrants, and encouraging experimentation in a "sandbox" type of regulatory environment. While the Federal Trade Commission has the ability to police bad behavior, participants did not settle the question of whether extending rulemaking authority would be beneficial.

Some of the proposals require more research and impact assessments prior to decisions, which policymakers could facilitate by mandating more transparency and access to data. Overall, participants agreed that threats to privacy and democratic values should be viewed through a lens of continuous and evolving risk mitigation, as opposed to total eradication.

OPENING REMARKS BY MIT PRESIDENT

L. Rafael Reif

President, Massachusetts Institute of Technology

As you may have heard, MIT recently launched the MIT Stephen A. Schwarzman College of Computing. It's a $1.1 billion initiative that will add 50 new faculty—and it represents the largest restructuring of MIT in 70 years.

As a university president, let me tell you: orchestrating that amount of change is really difficult! You definitely would not try it without some very good reasons!

So, what inspired us to start the new College?

Artificial intelligence is an enabling technology. You may even have heard it called "the new electricity." That means it has the power to decisively reshape how all of us live and work together. It will help humanity learn more, waste less, work smarter, live longer – and better understand, predict and make decisions about almost anything that can be measured.

As a result, in the not-too-distant future, AI will be a dominant source of new wealth—for those nations equipped to make the right commitments now. There are not many of those countries! So we should not be surprised if this new source of wealth also becomes a new source of inequality, both between nations and within them. At the same time, however, the promises and benefits of AI and related technologies clearly come with risks, the kind you have all come here to learn about: threats to privacy, public safety, jobs, the security of nations – and more.

All of this is to say: the opportunities are immense, very few people are prepared to seize them—and society is simply not armed with practical strategies for making sure these technologies are responsibly deployed for the common good.

Now let me frame this in terms of what we're doing with the MIT Schwarzman College of Computing.

Given the power and pervasiveness of AI and related technologies, and given their potent mix of opportunity and threats, I believe that those of us in higher education share a pressing responsibility: we need to reshape our institutions, so we can equip our students to shape the future.

At MIT, our students have been quietly leading this revolution for some time. Computer science has long been our most popular major, and in the last few years, the interest has been explosive. Roughly 40% of our students now major in computer science alone or paired with another subject. On their own, our students are making sure they graduate ready to bring computing-intensive approaches to fields from molecular biology to economics to urban planning.

In effect, our students are telling us that educational institutions like MIT must deliberately equip tomorrow's leaders to be "bilingual" in computing and whatever else interests them. Graduates entering every field will need to be fluent in AI strategies to advance their own work. And technologists will also need to be fluent in the cultural values and ethical principles that should ground and govern the use of these tools, for the good of all.

We aim to equip our students to be leaders in making sure that AI can flourish in, and support, a society that values individual rights and freedoms. And we want them to be actively engaged with how to help American workers compete and succeed, as AI transforms the very nature of work.

In short, we have come to believe that it's time to educate a new generation of technologists in the public interest.

Fortunately, MIT is not alone. Other institutions across the country are responding to this new reality too, in various ways—and that is good news for the nation.

But no matter how well we teach our students, and no matter how "bilingual" they become, these efforts must be matched by a national effort to sustain our nation's technology leadership. So let me close on that note.

Because you signed up for this remarkable program, I do not need to tell anyone here that our nation is globally engaged in a technological race to the horizon. Other nations, such as China, have been advancing aggressively to assert technological supremacy in critical fields of science and technology. And they are doing this by pursuing a systematic, long-term, highly funded national strategy.

I believe that America needs to respond urgently and deliberately to the scale and intensity of this challenge. We may not do so, however. In which case, we should expect that, in fields from personal communications to business, health and security, China is likely to become the world's most advanced technological nation, and the source of the most cutting-edge technological products, in not much more than a decade.

Fortunately, this scenario is not inevitable.

The United States has tremendous assets, including the immense global strength of our technology sector today. This is the result, in part, of a unique formula that no other country has been able to copy: the large number of first-rate American universities, pursuing advanced research—with long-term federal support.

This relationship is rooted in a national culture of opportunity and entrepreneurship. It is inspired by an atmosphere of intellectual freedom. It is supported by the rule of law. And, most importantly, it enables new creative heights by uniting brilliant talent from every sector of our society and every corner of the world.

For decades, these factors have helped make our nation the most powerful scientific and technological engine on Earth. Every American can take pride in this distinctive system.

If we want to secure our nation's future, and its technological pre-eminence in this technology race, we need a highly visible, focused, sustained federal effort to fund research and development in key science and technology areas. And we need incentives to make sure universities, industry and government are working together to capitalize on them.

In the coming decades, it will feel as though the opportunities, disruptions and progress of the Industrial Revolution are playing out at time-lapse speed. Responding to the magnitude of this challenge will require a strategic effort across society. Whichever nation acts now to shape the future of AI will shape the future for us all. I hope and believe it will be ours.

I am impressed and grateful that all of you made this journey in search of greater understanding. As the nation confronts the challenges of the technological future, I hope you will continue to see MIT as a resource. And I wish you many wonderful discussions over the next two days!


ARTIFICIAL INTELLIGENCE & PUBLIC POLICY: THE BEGINNING OF A CONVERSATION

R. David Edelman

Director, Project on Technology, Economy and National Security, MIT

Whether in the headlines or our news feeds, Board Rooms or the Situation Room, discussion about artificial intelligence (AI) seems to have reached a fever pitch — and with good reason. Like cybersecurity a decade ago, a combination of optimism, intrigue, concern, and technical complexity is catapulting this once-obscure set of technological tools to the center of dialogues far beyond computer science.

The United States is blessed with exceptional expertise in computing, and continues to be at the forefront of many technological advances in AI algorithms, applications, and the hardware to run them. What is perhaps newer is a growing sense of discussion, urgency, and for many profound purpose associated with ensuring that the benefits of these advances in computing are widely distributed and that the substantial (and increasingly visible) harms they might exacerbate are addressed. The challenges are myriad but varied across applications of AI; the benefits are potentially substantial but not automatic; and the government's role in any of these areas is deeply unsettled. This discussion will hopefully provide a foundation for both what AI and its constituent technology of machine learning (ML) truly is — and, crucially, what it is not yet capable of — and a sense of how it is and will continue to be applied. The goal is singular: to inform and help guide sound public policy amidst a context of continual technological change.

As we have seen so spectacularly, perhaps even tragically, over the last couple of years, gone are the days when technologists had their world, governments had another, completely separate world, and ne'er the two shall meet. The conversations we will have over these next few days, between those from the technology world and those in the policy world, are proof that the nation's top policymakers and engineers are beginning to understand two key insights: that technology policy is now increasingly just policy, and data ethics are increasingly just ethics. No part of the business of government is unaffected by the changes ushered in by the digital world, and ethical decisions are increasingly going to be data-driven.

Our task is to work to align the benefits of artificial intelligence with the obligations of public trust.

Now what does that effort require? First, it takes technical rigor: insights from the people who are in the labs, in academia and in industry, pioneering these innovations in machine learning and robotics. Second, it takes policy awareness: the perspective of those who know how to turn hard problems that have inevitable tradeoffs (there are always tradeoffs) into public policy that communities and countries can get behind.

And third, it takes cross-pollination: a recognition that neither side, technical, policy or anyone in between, has a monopoly on good ideas, nor all the answers — and, frankly, a recognition that none of us in a single community has the luxury of deference to the other completely anymore.

The days in which engineers can simply ship a tool and assume that someone else will handle the consequences are gone. When it comes to AI in the area of public trust, the era of moving fast and breaking everything is coming to a close. There is simply too much at stake for us not to, collectively, have a say.

The topics that we will cover in our discussion are intentionally broad. We will discuss how AI is in use across a range of industries, in a range of applications, creating a range of cross-functional issues. We are taking that approach to help illustrate both the interconnections among them and, crucially, the differences. The use of AI to serve better ads on social media might be technologically similar to AI in aviation or autonomous vehicles, but arguably only one has the real if not immediate potential to save hundreds, thousands, or even tens of thousands of lives every year — if we can build and sustain enough trust in these systems to get inside of them ourselves.

In that context of transportation, for instance, we might rightly ask: what is the proper threshold of safety? Why is it that we have, for instance, almost zero tolerance for safety risk as airline passengers, but getting in my car for the commute home is probably the most dangerous thing I will do all day, if not all year? How specifically should engineers be building and validating systems to make them worthy of public confidence, the sort of confidence we have when we get in our cars or airplanes today? Who gets to decide what the proper threshold for that trust is? And how do we do all of that while, as you will hear, we are still perfecting these machines? To use the crudest metaphor: how can we build the plane while we're flying it?

These implications differ from those at the intersection of AI and manufacturing, with its dramatic implications for the future of work. Perhaps more so than many are aware, AI is already changing manufacturing, and along with it, potentially bringing profound shifts in the U.S. labor market. But this has also been an area rife with speculation and anxiety, with wild prognostications about the "end of work" striking fear in the hearts of both sides of the political spectrum. If there is one benefit to this uncertainty, though, it has been to motivate leading research — including here at MIT, from scholars like David Autor — to develop real data about what the future of work might look like, and what is working to manage these transitions where they present themselves.

AI is also changing the face of healthcare, an area with immense potential to improve lives and to save them, but also one of the most heavily regulated industries in the United States and around the world. And so here, too, there are no easy answers. The argument, for instance, that AI systems should be able to explain themselves — as some European jurisdictions are now demanding of these tools wherever they manifest themselves — sounds like a very appealing concept on its face. But if patients are given the choice between a perfect cancer diagnosis system and another that is very imperfect, but better at explaining itself, it is not hard to imagine which most will prefer.

In reality the American medical system accommodates innovation that works in ways that elude us — permitting drugs and devices that are proven effective and safe, even if their exact operation remains partially mysterious. By analogy, we might know that patching a tire prevents it from deflating, even if we lack a profound understanding of the physics of rubber. Might we want to build our regulations of these AI-enabled medical systems to accommodate that sort of ambiguity? If we do, it would make "regulation of AI transparency" writ large much more difficult — it would instead be context-specific. Might it be, then, that the concept of "AI regulation" is no more meaningful than "regulation of C++" the programming language, or plastic, or steel? If AI is today simply a series of machine learning techniques, does it make sense to treat them as a distinct technology at all? The answer to this question is core to understanding how policymakers, and Congress in particular, might approach these concerns.

Time permitting, we will also discuss the use of AI for national security and defense. The notion of autonomous killer robots is certainly evocative. But more discussion can help us separate the fact from the fiction here, and unpack some proposed ideas of how to deal with these issues. For instance, the U.S. government alone has proposed the notion of export controls or arms control regimes for machine learning and computer vision — in other words, core AI technologies. But there are deep questions there. Can you have controls of any kind on systems that are, at their core in many cases, open source? How does that work? How can you even control it? Is the genie out of the bottle? And what role, importantly as you've seen in the front pages, should engineers play, if any, in keeping their innovations from being used in the conduct of war?

The one thing that we do know is that we cannot wait. As policymakers, we cannot wait for the engineers to design the perfectly fair, perfectly accountable, perfectly transparent system. We have to help them understand how. And we have to help them understand what they need to do to meet the burden of public trust. As engineers, we also cannot wait for policymakers to have a single, clear, uniform, perfect instruction manual for how to design systems that are better, fairer, more accountable and consistent with law. In other words, we have to do both of those together.

This discussion will be for some a continuation, for many a start, but for none the final word on issues that will no doubt occupy us all for the duration of our careers, whether in public service, research, or in private industry. Our hope is that you will leave with a sense of information, engagement, and empowerment; to know where AI is and where it's headed, to understand the opportunities, and to be on the watch for the warning signs of it going awry.

Elected officials, policymakers, engineers, lawyers, teachers, advocates — and all of us, as citizens — have a role to play in shaping a future suffused with AI. Policy prescriptions may differ, but foundational understanding of the technology should not. That foundation, which we hope to build here together, is the only way that we can ensure that this technology is used for us — and not against us. We look forward to the conversation.


ALGORITHMS ARE REPLACING NEARLY ALL BUREAUCRATIC PROCESSES

Cathy O’Neil

CEO, O’Neil Risk Consulting and Algorithmic Auditing

Wherever there was once a complex, sticky human question, we now have a slick black box called artificial intelligence (AI) that is presented as a foolproof scientific instrument. We should not trust these AI tools blindly.

For example, algorithms have taken over what was once the work of corporate Human Resources departments. Who gets interviewed, who gets hired or fired, who gets a bonus—these decisions were once made by humans, fairly or not. Now they are increasingly being made by machines, or at least supplemented by data.

Algorithms have been picked up by the justice system. Where to send the police, how long to sentence, whether to incarcerate pre-trial, or whether to grant parole are decisions that used to place humans at the center. Now we have scoring systems that do that for us.

In financial matters, algorithms are wholeheartedly embraced by companies competing with each other for the best customers. Who gets a credit card, what their offered interest rate is, and for that matter which credit card ads they see when they go to the company's website are all determined by AI scoring systems that evaluate a person by their social media presence, browsing history, location, and the make and model of the device they're using. Similarly, life insurance, car insurance, and to some extent health insurance companies are using all kinds of "big data" and AI techniques to decide on someone's risk profile and premiums.

There's more: college admissions algorithms, child abuse risk scores, automated drone strikes, facial recognition in cities and stadiums, and suicide risk scores on social media, to name a few. They're proliferating as the data about us makes those algorithms cheap to build and profitable or efficient to use.

That's a lot of power. The question is, how well made are these tools? Can we trust them to work?

To give two examples where the answer is no, think of the Volkswagen emissions scandal – an algorithm in automobiles trained to game emissions tests – and the recent Boeing disaster, which looks to be an algorithm that takes control of the airplane under certain conditions, and which erred tragically in the presence of a malfunctioning sensor.

If you ask the owner of one of these algorithms whether it works, they'll undoubtedly say "yes." And if you ask them, "what do you mean by that?", they'll suggest that it's either more efficient than the old system or that it's more accurate than humans, who are notoriously biased. In other words, they'll explain how the algorithms work for them, the owners.

They will also probably suggest that the algorithm is too scientifically sophisticated to really explain or understand, and that we should trust them that they've got things under control.

What they probably will not have measured, however, is for whom the algorithms fail, and what is meant by failure for those folks.

Are the job application algorithms filtering out perfectly good candidates? Are they filtering out more qualified candidates from protected classes? They probably haven't tested that question.

In other words, the makers and the owners of these algorithms have overly narrow views of success, and insufficiently thoughtful approaches to what could go wrong. They often avoid thinking about worst-case scenarios, first because it's against their commercial interest to do so, and second because they honestly haven't been forced to. So far people have trusted them.

Algorithms are often unaccountable. This is a threat to our constitutional rights.

In the context of livelihood or the justice system, people are not typically allowed to appeal their scores, and often don't even know what their scores are.

In a recent court case in Houston, six teachers who had been fired solely on the basis of a "teacher accountability" scoring algorithm, called the teacher value-added model, sued and won. The judge ruled that, although their scores were low, their due process rights had been violated because nobody could explain to them what the scores actually meant.

That's a single example of how scoring systems can be flagged for accountability failures. We're seeing proposed legislation for cities and states to conduct "audits" of algorithms used by government. More generally, we can expect class action lawsuits to determine what exactly our constitutional rights are for accountability as consumers, workers, and citizens.

Powerful algorithms are sometimes invisible to the target. Especially in the context of online advertising, people often don't even know they're being scored.

And while that's fine when we're being evaluated online for our propensity to buy a sofa versus a loveseat, it becomes less clear when we're being evaluated for whether we'd like to be shown an ad for a STEM job in Silicon Valley or a daycare provider job in Sacramento.

We've recently seen a spate of lawsuits against Facebook for these very problems, both in employment and housing advertising.

But what's even messier about this particular problem is that, in general and outside of Facebook, online advertising ecosystems often do not collect information on protected class status like gender, race, age, or disability status. So it's unclear how to clean up the situation, even if the intention was to do so, without entirely revamping the advertising ecosystem.

Regulators are not trained to solve these problems.

We are no longer in the age where a regulator could find a "smoking gun" email of a boss telling a hiring manager not to hire people based on their race. Instead, we have opaque, black box algorithms that might have a disparate impact, but it could well be unintentional. How do we even discover that problem?
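One long-standing screen from the pre-algorithmic employment context suggests a starting point: the "four-fifths rule" used in U.S. disparate-impact analysis, under which a group's selection rate falling below 80% of the highest group's rate is flagged for scrutiny. Below is a minimal sketch of that test applied to a hiring algorithm's outputs; the group labels and counts are hypothetical, and a real audit would add significance testing and qualified-applicant baselines:

```python
# Hypothetical sketch: the "four-fifths rule" as a disparate-impact screen
# over a hiring algorithm's outputs. Counts are invented for illustration.

def selection_rates(outcomes):
    """Map each group to its selection rate (selected / applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return groups whose selection rate falls below `threshold`
    (by default 80%) of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best
            for group, rate in rates.items() if rate / best < threshold}

# (selected, applicants) per group, as an auditor might tabulate them
audit = {"group_a": (120, 400), "group_b": (45, 300)}
print(four_fifths_flags(audit))   # {'group_b': 0.5} -> flagged for review
```

Note that such a test needs only the algorithm's decisions, not access to the model itself, which is part of what makes it plausible for regulators as well as owners.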

It's not impossible. Reuters recently reported that Amazon built a hiring algorithm for engineers, but decided not to use it after discovering it was sexist. That's good news, both because Amazon bothered to test that question and because they acted on the result. It also means that, if Amazon can do that, then so can regulators, at least once they've learned how.

Problems of algorithmic accountability are not going away. They're just proliferating, because algorithms are indeed quite good at making things more profitable, more efficient, and less accountable for their owners.

The risks are too high to ignore the potential for algorithms to do harm. The government should figure out ways to prioritize public good, fairness, and accountability for these tools, whether that means training regulators to enforce current laws, passing new laws that enlarge the concept of algorithmic accountability, or forming a new regulator that acts like an "FDA" for algorithms.


HOW TO EXERCISE THE POWER YOU DIDN'T ASK FOR*

Jonathan Zittrain

Professor of International Law, Harvard Law School

I used to be largely indifferent to claims about the use of private data for targeted advertising, even as I worried about privacy more generally. How much of an intrusion was it, really, for a merchant to hit me with a banner ad for dog food instead of cat food, since it had reason to believe I owned a dog? And any users who were sensitive about their personal information could just click on a menu and simply opt out of that kind of tracking.

But times have changed.

The digital surveillance economy has ballooned in size and sophistication, while keeping most of its day-to-day tracking apparatus out of view. Public reaction has ranged from muted to deeply concerned, with a good portion of those in the concerned camp feeling so overwhelmed by the pervasiveness of their privacy loss that they're more or less reconciled to it. It's long past time not only to worry but to act.

Advertising dog food to dog owners remains innocuous, but pushing payday loans to people identified as being emotionally and financially vulnerable is not. Neither is targeted advertising that is used to exclude people. Julia Angwin, Ariana Tobin, and Madeleine Varner found that on Facebook, targeting could be used to show housing ads only to white consumers. Narrow targeting can also render long-standing mechanisms for detecting market failure and abuse ineffective: state attorneys general or consumer advocates can't respond to a deceitful ad campaign, for instance, when they don't see it themselves. Uber took this predicament to cartoon villain extremes when, to avoid sting operations by local regulators, it used data collected from the Uber app to figure out who the officials were and then sent fake information about cars in service to their phones.

These are relatively new problems. Originally, our use of information platforms, whether search engines or social media, wasn't tailored much to anything about us, except through our own direct choices. Your search results for the query "Are vaccinations safe?" would be the same as mine or, for a term like "pizza," varied in a straightforward way, such as by location, offering up nearby restaurants. If you didn't like what you got, the absence of tailoring suggested that the search platform wasn't to blame; you simply were seeing a window on the web at large. For a long time that was a credible, even desirable, position for content aggregators to take.

* This article first appeared in the Harvard Business Review, September 19, 2018

And for the most part they themselves weren't always good at predicting what their own platforms would offer up. It was a roulette wheel, removed from any human agent's shaping.

Today that's not true. The digital world has gone from pull to push: Instead of actively searching for specific things, people read whatever content is in the feeds they see on sites like Facebook and Twitter. And more and more, people get not a range of search results but a single answer from a virtual concierge like Amazon's Alexa. And it may not be long before such concierges rouse themselves to suggest it's time to buy a gift for a friend's birthday (perhaps from a sponsor) or persistently recommend Uber over Lyft when asked to procure a ride (again, thanks to sponsorship).

Is it still fair for search platforms to say, "Don't blame me, blame the web!" if a concierge provides the wrong directions to a location or the wrong drug interaction precautions? While we tend not to hold Google and Bing responsible for the accuracy of every link they return on a search, the case may be different when platforms actively pluck out only one answer to a question — or answer a question that wasn't even asked.

We've also moved to a world where online news feeds — and in some cases concierges' answers to questions — are aggressively manipulated by third parties trying to gain exposure for their messages. There's great concern about what happens when those messages are propaganda — that is, false and offered in bad faith, often obscuring their origins. Elections can be swayed, and people physically hurt, by lies. Should the platforms be in the business of deciding what's true or not, the way that newspapers are? Or does that open the doors to content control by a handful of corporate parties — after all, Facebook has access to far more eyeballs than a single newspaper has ever had — or by the governments that regulate them?

Companies can no longer sit this out, much as they'd like to. As platforms provide highly curated and often single responses to consumers' queries, they're likely to face heated questions — and perhaps regulatory scrutiny — about whom they're favoring or disfavoring. They can't just shrug and point to a "neutral" algorithm when asked why their results are the way they are. That abdication of responsibility has led to abuse by sophisticated and well-funded propagandists, who often build Astroturf campaigns that are meant to look as if they're grassroots.

So what should mediating platforms do?

An answer lies in recognizing that today's issues with surveillance and targeting stem from habit and misplaced trust. People share information about themselves without realizing it and are unaware of how it gets used, passed on, and sold. But the remedy of allowing them to opt out of data collection leads to decision fatigue for users, who can articulate few specific preferences about data practices and simply wish not to be taken advantage of.

Restaurants must meet minimum standards for cleanliness, or (ideally) they'll be shut down. We don't ask the public to research food safety before grabbing a bite and then to "opt out" of the dubious dining establishments. No one would rue being deprived of the choice to eat food contaminated with salmonella. Similar intervention is needed in the digital universe.

Of course, best practices for the use of personal information online aren't nearly as clear cut as those for restaurant cleanliness. After all, much of the personalization that results from online surveillance is truly valued by customers. That's why we should turn to a different kind of relationship for inspiration: one in which the person gathering and using information is a skilled hired professional helping the person whose data is in play. That is the context of interactions between doctors and patients, lawyers and clients, and certified financial planners and investors.

Yale Law School's Jack Balkin has invoked these examples and proposed that today's online platforms become "information fiduciaries." We are among a number of academics who have been working with policymakers and internet companies to map out what sorts of duties a responsible platform could embrace. We've found that our proposal has bipartisan appeal in Congress, because it protects consumers and corrects a clear market failure without the need for heavy-handed government intervention.

"Fiduciary" has a legalese ring to it, but it's a long-standing, commonsense notion. The key characteristic of fiduciaries is loyalty: They must act in their charges' best interests, and when conflicts arise, must put their charges' interests above their own. That makes them trustworthy. Like doctors, lawyers, and financial advisers, social media platforms and their concierges are given sensitive information by their users, and those users expect a fair shake — whether they're trying to find out what's going on in the world or how to get somewhere or do something.

A fiduciary duty wouldn't broadly rule out targeted advertising — dog owners would still get dog food ads — but it would preclude predatory advertising, like promotions for payday loans. It would also prevent data from being used for purposes unrelated to the expectations of the people who shared it, as happened with the "personality quiz" survey results that were later used to psychometrically profile voters and then attempt to sway their political opinions.

This approach would eliminate the need to judge good from bad content, because it would let platforms make decisions based on what their users want, rather than on what society wants for them. Most users want the truth and should be offered it; others may not value accuracy and may prefer colorful and highly opinionated content instead — and when they do, they should get it, perhaps labeled as such. Aggregators like Google News and Facebook are already starting to make such determinations about what to include as "news" and what counts as "everything else." It may well be that an already-skeptical public only digs in further when these giants offer their judgments, but well-grounded tools could also inform journalists and help prevent propaganda posted on Facebook from spreading into news outlets.

More generally, the fiduciary approach would bring some coherence to the piecemeal privacy protections that have emerged over the years. The right to know what data has been collected about you, the right to ask that it be corrected or purged, and the right to withhold certain data entirely all jibe with the idea that a powerful company has an obligation to behave in an open, fair way toward consumers and put their interests above its own.

While restaurant cleanliness can be managed with readily learned best practices (keep the raw chicken on a separate plate), doctors and lawyers face more complicated questions about what their duty to their patients and clients entails (should a patient with a contagious and dangerous disease be allowed to walk out of the office without treatment or follow-up?). But the quandaries of online platforms are even less easy to address. Indeed, one of the few touchstones of data privacy — the concept of "personally identifiable information," or PII — has become completely blurry, as identifying information can now be gleaned from previously innocuous sources, making nearly every piece of data drawn from someone sensitive.

Nevertheless, many online practices will always be black-and-white breaches of an information fiduciary's duty. If Waze told me that the "best route" somewhere just so happened to pass by a particular Burger King, and it gave that answer to get a commission if I ate there, then Waze would be putting its own interests ahead of mine. So would Mark Zuckerberg if hypothetically he tried to orchestrate Facebook feeds so that Election Day alerts went only to people who would reliably vote for his preferred candidate. It would be helpful to take such possibilities entirely off the table now, at the point when no one is earning money from them or prepared to go to bat for them. As for the practices that fall into a grayer area, the information fiduciary approach can be tailored to account for newness and uncertainty as the internet ecosystem continues to evolve.

Ideally, companies would become fiduciaries by choice, instead of by legal mandate. Balkin and I have proposed how this might come about — with, say, U.S. federal law offering relief from the existing requirements of individual states if companies opt in to fiduciary status. That way, fiduciary duties wouldn't be imposed on companies that don't want them; they could take their chances, as they already do, with state-level regulation.

In addition, firms would need to structure themselves so that new practices that raise ethical issues are surfaced, discussed internally, and disclosed externally. This is not as easy as establishing a standard compliance framework, because in a compliance framework the assumption is that what's right and wrong is known, and managers need only to ensure that employees stay within the lines. Instead the idea should be to encourage employees working on new projects to flag when something could be "lawful but awful" and congratulate — rather than retaliate against — them for calling attention to it. This is a principle of what in medical and some other fields is known as a "just culture," and it's supported by the management concept of "psychological safety," wherein a group is set up in a way that allows people to feel comfortable expressing reservations about what they're doing. Further, information fiduciary law as it develops could provide some immunity not just to individuals but to firms that in good faith alert the public or regulators to iffy practices.

Instead of having investigations into problems by attorneys general or plaintiffs' lawyers, we should seek to create incentives for bringing problems to light and addressing them industrywide. That suggests a third touchstone for an initial implementation of information fiduciary law: Any public body chartered with offering judgments on new issues should be able to make them prospectively rather than retroactively. For example, the IRS can give taxpayers a "private letter ruling" before they commit to one tax strategy or another.

On truly novel issues, companies ought to be able to ask public authorities — whether the Federal Trade Commission or a new body chartered specifically to deal with information privacy — for guidance, rather than having to make a call in unclear circumstances and then potentially face damages if it turns out to be the wrong one.

Any approach that prioritizes duty to customers over profit risks trimming margins. That's why we need to encourage a level playing field, where all major competitors have to show a baseline of respect. But the status quo is simply not acceptable. Though cleaning up their data practices will increase the expenses of the companies who abuse consumers' privacy, that's no reason to allow it to continue, any more than we should heed polluters who complain that their margins will suffer if they're forced to stop dumping contaminants in rivers.

The problems arising from a surveillance-heavy digital ecosystem are getting more difficult and more ingrained. It's time to try a comprehensive solution that's sensitive to complexities, geared toward addressing them as they unfold, and based on duty to the individual consumers whose data might otherwise be used against them.


BEYOND THE VAST WASTELAND*

Ethan Zuckerman

Director, Center for Civic Media, MIT Media Lab

In 1961, the newly appointed chairman of the Federal Communications Commission, Newt Minow, addressed the National Association of Broadcasters in Washington, D.C. Minow's speech demanded that broadcasters take seriously the idea of serving the public interest – and distinguished the public interest from simply what interests the public. And Minow coined an unforgettable phrase to explain what a poor job broadcasters were doing. Challenging executives to watch a day of their own programming without anything to distract or divert them, Minow declared, "I can assure you that what you will observe is a vast wasteland."1

There have been hundreds of articles written over the past two years about social media that might have been better titled "a vast wasteland". This flood of articles argues that social media often doesn't work the way we think it should, that partisan manipulation of Facebook may be swaying elections, and that extremism on YouTube may be contributing to a wave of ethnonationalist violence. It's a thoroughly appropriate moment to evaluate whether social media is making our society and our democracy stronger, or pulling it apart. From Cambridge Analytica to Comet Ping Pong to the massacre in New Zealand, alarm bells are sounding that not all is well in our online public spaces.

But Minow's speech didn't end with a condemnation of the sorry state of broadcasting in 1961. Instead, Minow articulated a vision for television to inform, enlighten and entertain, a future he hoped to achieve without censorship, without replacing private companies with government entities, and mostly through voluntary compliance. And, with 1967's Public Broadcasting Act, the founding of the Public Broadcasting Service in 1969 and National Public Radio in 1970, a surprising amount of Minow's vision came to pass.

It's important that we consider the real and potential harms linked to the rise of social media, from increasing political polarization and the spread of mis-, dis- and malinformation2 to trolling, bullying and online abuse.

* Adapted from “Six or Seven Things Social Media Can Do for Democracy”, May 30, 2018, https://www.media.mit.edu/articles/six-or-seven-things-social-media-can-do-for-democracy/
1 Newt Minow, “Television and the Public Interest”, address delivered 9 May 1961, National Association of Broadcasters, Washington, DC.
2 Claire Wardle and Hossein Derakhshan, “Information disorder: definitions”, in “Understanding and Addressing the Disinformation Ecosystem”, conference publication. https://firstdraftnews.org/wp-content/uploads/2018/03/The-Disinformation-Ecosystem-20180207-v4.pdf

its teenage years in the early 1960s, social media isn’t going away any time soon. It’s essential that we have a positive vision for what social media can be as well as a critical take on mitigating its harms.

I’m interested in what social media should do for us as citizens in a democracy. We talk about social media as a digital public sphere, invoking Habermas and coffeehouses frequented by the bourgeoisie. Before we ask whether the internet succeeds as a public sphere, we ought to ask whether that’s actually what we want it to be.

I take my lead here from journalism scholar Michael Schudson, who took issue with a hyperbolic statement made by media critic James Carey: “journalism as a practice is unthinkable except in the context of democracy; in fact, journalism is usefully understood as another name for democracy.” For Schudson, this was a step too far. Journalism may be necessary for democracy to function well, but journalism by itself is not democracy and cannot produce democracy. Instead, we should work to understand the “Six or Seven Things News Can Do for Democracy”, the title of an incisive essay Schudson wrote to anchor his book, Why Democracies Need an Unlovable Press3.

The six things Schudson sees news currently doing for democracy are presented in order of their frequency – as a result, the first three functions Schudson sees are straightforward and unsurprising. The news informs us about events, locally and globally, that we need to know about as citizens. The news investigates issues that are not immediately obvious, doing the hard work of excavating truths that someone did not want told. News provides analysis, knitting reported facts into complex possible narratives of significance and direction.

Schudson wades into deeper waters with the next three functions. News can serve as a public forum, allowing citizens to raise their voices through letters to the editor, op-eds and (when they’re still permitted) through comments. The news can serve as a tool for social empathy, helping us feel the importance of social issues through careful storytelling, appealing to our hearts as well as our heads. Controversially, Schudson argues, news can be a force for mobilization, urging readers to take action, voting, marching, protesting, boycotting, or using any of the other tools we have access to as citizens.

His essay closes with a seventh role that Schudson believes the news should fill, even if it has yet to embrace it. The news can be a force for the promotion of representative democracy. For Schudson, this includes the idea of protecting minority rights against the excesses of populism, and he sees a possible role for journalists in ensuring that these key protections remain in force.

This is perhaps not an exhaustive list, nor is the news required to do all that Schudson believes it can do. Neither does the list include things that the news tries to do that aren’t necessarily connected to democracy, like providing an advertising platform for local businesses, providing revenue for publishers, or entertaining audiences. And Schudson acknowledges that these functions can come into conflict – the more a news organization engages in mobilization, the more likely it is that it will compromise its ability to inform impartially.

3 Schudson, Michael. Why Democracies Need an Unlovable Press. Polity, 2008.

In this same spirit, I’d like to suggest six or seven things social media can do for democracy. As with Schudson’s list, these functions are not exhaustive – obviously, social media entertains us, connects us with family, friends and any advertiser willing to pay for the privilege, in addition to the civic functions I outline here. Furthermore, as with news media, these civic purposes are not always mutually reinforcing and can easily come into conflict. (And because I’m much less learned than Schudson, my list may be incomplete or just plain wrong.)

Social media can inform us.

Many of us have heard the statistic that a majority of young people see Facebook as a primary source for news4, and virtually every newsroom now considers Facebook an important distributor of their content (sometimes to their peril). But that’s not what’s most important in considering social media as a tool for democracy. Because social media is participatory, it is a tool people use to create and share information with friends and family, and potentially the wider world. Usually this information is of interest only to a few people – it’s what you had for lunch, or the antics of the squirrel in your backyard. But sometimes the news you see is of intense importance to the rest of the world.

When protesters took to the streets of Sidi Bouzid, Tunisia, they were visible to the world through Facebook even though the Tunisian government had prevented journalists from coming to the town. Videos from Facebook made their way to Al Jazeera through Tunisian activists in the diaspora, and Al Jazeera rebroadcast footage, helping spread the protests to Tunis and beyond. The importance of social media in informing us is that it provides a channel for those excluded by the news – whether through censorship, as in Tunisia, or through disinterest or ignorance – to have their voices and issues heard.

Places don’t need to be as far away as Tunisia for social media to be a conduit for information – when Michael Brown was killed in Ferguson, Missouri, many people learned of his death, the protests that unfolded in the wake, and the militarized response to those protests, via Twitter. (And as news reporters were arrested for covering events in Ferguson, they turned to Twitter to share news of their own detention.) Social media is critically important in giving voice to communities who’ve been systemically excluded from media – people of color, women, LGBTQIA people, poor people. By giving people a chance to share their under-covered perspectives with broadcast media, social media has a possible role in making the media ecosystem more inclusive and fair.

Finally, social media may be helping replace or augment local information, as people connect directly with their children’s schools or with community organizations. This function is increasingly important as local newspapers shed staff or close altogether, as social media may become the primary conduit for local information.

Social media can amplify important voices and issues.

In traditional (broadcast or newspaper) media, editors decide what topics are worth the readers’ attention. This

4 Amy Mitchell, Katerina Eva Matsa and Jeffrey Gottfried, “Facebook Top Source for Political News Among Millennials”, June 1, 2015. https://www.journalism.org/2015/06/01/facebook-top-source-for-political-news-among-millennials/

“agenda setting” function has enormous political importance – as Max McCombs and Donald Shaw observed in 1972,5 the news doesn’t tell us what to think, but it’s very good at telling us what to think about.

That agenda-setting power takes a different shape in the era of social media. Instead of a linear process from an editor’s desk through a reporter to the paper on your front porch, social media works with news media through a set of feedback loops6. Readers make stories more visible by sharing them on social media (and help ensure invisibility by failing to share stories). Editors and writers respond to sharing as a signal of popularity and interest, and will often write more stories to capitalize on this interest. Readers may respond to stories by becoming authors, injecting their stories into the mix and competing with professional stories for attention and amplification.

Amplification has become a new form of exercising political power. In 2012, we watched Invisible Children use a carefully crafted campaign, built around a manipulative video and a strategy of sharing the video with online influencers. Within a few days, roughly half of American young people had seen the video, and U.S. funding for the Ugandan military – the goal of the campaign – was being supported by powerful people in the U.S. Congress and military7. (That the organization’s director had a nervous breakdown, leading to the group’s implosion, was not a coincidence – Invisible Children managed to amplify an issue to a level of visibility where powerful backlash was inevitable.)

Amplification works within much smaller circles than those surrounding U.S. foreign policy. By sharing content with small personal networks on social media, individuals signal the issues they see as most important and engage in a constant process of self-definition. In the process, they advocate for friends to pay attention to these issues as well. Essentially, social media provides an efficient mechanism for the two-step flow of communication, documented by Paul Lazarsfeld and Elihu Katz8, to unfold online. We are less influenced by mass media than we are by opinion leaders, who share their opinions about mass media. Social media invites all of us to become opinion leaders, at least for our circles of friends, and makes the process entertaining, gamifying our role as influencers by rewarding us with up-to-the-second numbers on how our tweets and posts have been liked and shared by our friends.

Social media can be a tool for connection and solidarity.

The pre-web internet of the 1980s and 1990s was organized around topics of interest, rather than offline friendships, as social networks like Facebook are organized. Some of the most long-lasting communities that emerged from the Usenet era of the internet were communities of interest that connected people who had a hard time

5 McCombs, Maxwell E., and Donald L. Shaw. "The agenda-setting function of mass media." Public Opinion Quarterly 36.2 (1972): 176-187.
6 Ethan Zuckerman, “Four Problems for Media and Democracy”. April 1, 2018. https://medium.com/trust-media-and-democracy/we-know-the-news-is-in-crisis-5d1c4fbf7691
7 Josh Kron and J. David Goodman, “Online, a Distant Conflict Soars to Topic No. 1”, New York Times, March 8, 2012.
8 Katz, Elihu, Paul F. Lazarsfeld, and Elmo Roper. Personal Influence: The Part Played by People in the Flow of Mass Communications. Routledge, 2017.

finding each other offline: young people questioning their sexuality, religious and ethnic minorities, people with esoteric or specialized interests. The spirit of the community of interest and identity continued through Scott Heiferman’s meetup.com, which helped poodle owners or Bernie Sanders supporters in Des Moines find each other, and now surfaces again in Facebook Groups, semi-private spaces designed to allow people to connect with likeminded individuals in safe, restricted spaces.

Social critics, notably Robert Putnam9, have worried that the internet is undermining our sense of community and lessening people’s abilities to engage in civic behavior. Another possibility is that we’re forming new bonds of solidarity based more on shared interests than on shared geographies. I think of Jen Brea, whose academic career at Harvard was cut short by myalgic encephalomyelitis10, who used the internet to build an online community of fellow disease sufferers, a powerful documentary film that premiered at Sundance, and a powerful campaign calling attention to the ways diseases that disproportionately affect women are systemically misdiagnosed. Brea’s disease makes it difficult for her to connect with her local, physical community, but social media has made it possible to build a powerful community of interest that is working on helping people live with their disease.

One of the major worries voiced about social media is the ways in which it can increase political polarization. Communities of solidarity can both exacerbate and combat that problem. We may end up more firmly rooted in our existing opinions, or we may create a new set of weak ties to people who we may disagree with in terms of traditional political categories, but with whom we share powerful bonds around shared interests, identities and struggles.

Social media can be a space for mobilization.

The power of social media to raise money for candidates, recruit people to participate in marches and rallies, to organize boycotts of products or the overthrow of governments is one of the best-documented – and most debated – powers of social media. From Clay Shirky’s examination of group formation and mobilization in Here Comes Everybody to endless analyses of the power of Facebook and Twitter in mobilizing youth in Tahrir Square or Gezi Park, including Zeynep Tufekçi’s Twitter and Tear Gas, the power of social media to both recruit people to social movements and to organize actions offline has been well documented. It’s also been heartily critiqued, by Malcolm Gladwell, who believes that online connections can never be as powerful as real-world strong ties for leading people to protest, and by thinkers like Tufekçi, who readily admit that the ease of mobilizing people online is an Achilles heel, teaching leaders like Erdogan to discount the importance of citizens protesting in the streets.

9 Putnam, Robert D. "Bowling Alone: America’s Declining Social Capital." Culture and Politics. Palgrave Macmillan, New York, 2000. 223-234.
10 Kaitlyn Tiffany, “Jen Brea documented her Chronic Fatigue Syndrome on an iPhone so that doctors would believe other women”, September 21, 2017. https://www.theverge.com/2017/9/21/16163950/unrest-documentary-sundance-creative-distribution-fellowship-interview

It’s worth noting that mobilization online does not have to lead to offline action to be effective. A wave of campaigns like Sleeping Giants, which has urged advertisers to pull support from Breitbart, or #metoo, where tens of thousands of women have demonstrated that sexual harassment is a pervasive condition, not just the product of a few Harvey Weinsteins, have connected primarily online action to real-world change. What’s increasingly clear is that online mobilization – like amplification – is simply a tool in the contemporary civic toolkit, alongside more traditional forms of organizing.

Social media can be a space for deliberation and debate.

Perhaps no promise of social media has been more disappointing than the hope that social media would provide us with an inclusive public forum. Newspapers began experimenting with participatory media through open comments fora, and quickly discovered that online discourse was often mean, petty, superficial and worth ignoring. Moving debate from often anonymous comment sections onto real-name social networks like Facebook had less of a mediating effect than many hoped. While conversations less often devolve into insults and shouting, everyone who’s shared political news online has had the experience of a friend or family member ending an online friendship over controversial content. It’s likely that the increasing popularity of closed online spaces, like Facebook groups, has to do with the unwillingness of people to engage in civil deliberation and debate, and the hope that people can find affirmation and support for their views rather than experiencing conflict and tension.

Yet it is possible to create spaces for deliberation and debate within social media. Wael Ghonim was the organizer of the We Are All Khaled Said Facebook page, one of the major groups that mobilized “Tahrir youth” to stand up to the Mubarak regime in Egypt, leading to the most dramatic changes to come out of the Arab Spring. After the revolution, Ghonim was deeply involved with democratic organizing in Egypt. He became frustrated with Facebook, which was an excellent platform for rallying people and harnessing anger, but far less effective in enabling nuanced debate about political futures. Ghonim went on to build his own social network, Parlio, which focused on civility and respectful debate, featuring dialogs with intellectuals and political leaders rather than updates on what participants were eating for lunch or watching on TV. The network had difficulty scaling, but was acquired by Quora, the question-answering social network, which was attracted to Parlio’s work in building high-value conversations that went beyond questions and answers11.

Parlio suggests that the dynamics of social networks as we understand them have to do with the choices made by their founders and governing team. Facebook and Twitter can be such unpleasant places because strong emotions lead to high engagement, and engagement sells ads. Engineer a different social network around different principles, and it’s possible that the deliberation and debate we might hope for from a digital public sphere could happen within a platform.

11 Josh Constine, “Quora’s first acquisition is Arab Spring instigator’s Q&A site Parlio”, https://techcrunch.com/2016/03/30/quora-parlio/

Social media can be a tool for showing us a diversity of views and perspectives.

The hope that social media could serve as a tool for introducing us to people we don’t already know – and particularly to people we don’t agree with – may seem impossibly cyberutopian. Indeed, I wrote a book, Rewire, that argues that social media tends to reinforce homophily, the tendency of birds of a feather to flock together. Given the apparent track record of social media as a space where ethnonationalism and racism thrive, skepticism that social media can introduce us to new perspectives seems eminently reasonable.

Contemporary social networks have an enormous amount of potential diversity, but very little manifest diversity. In theory, you can connect with 2 billion people from virtually every country in the world on Facebook. In practice, you connect with a few hundred people you know offline, who tend to share your national origin, race, religion and politics. But a social network that focused explicitly on broadening your perspectives would have a tremendous foundation to build upon: networks like Facebook know a great deal about who you already pay attention to, and have a deep well of alternative content to draw from. Projects like FlipFeed from MIT’s Laboratory for Social Machines and gobo.social from my group at the MIT Media Lab explicitly re-engineer your social media feeds to encourage encounters with a more diverse set of perspectives. If a network like Twitter or Facebook concluded that increased diversity was a worthy metric to manage to, there are dozens of ways to accomplish the goal, and rich questions to be solved in combining increased diversity with a user’s interests to accomplish serendipity, rather than increased randomness.

Social media can be a model for democratically governed spaces.

Users in social networks like Twitter and Facebook have little control over how those networks are governed, despite the great value they collectively create for platform owners. This disparity has led Rebecca MacKinnon to call for platform owners to seek Consent of the Networked, and Trebor Scholz to call us to recognize participation in social networks as Digital Labor. But some platforms have done more than others to engage their communities in governance.

Reddit is the fourth most popular site on the U.S. internet and sixth most popular site worldwide, as measured by Alexa Internet, and is a daily destination for at least 250 million users. The site is organized into thousands of “subreddits”, each managed by a team of uncompensated, volunteer moderators, who determine what content is allowable in each community. The result is a wildly diverse set of conversations, ranging from insightful conversations about science and politics in some communities, to ugly, racist, misogynistic, hateful speech in others. The difference in outcomes in those communities comes down in large part to differences in governance and to the participants each community attracts.

Some Reddit communities have begun working with scholars to examine scientifically how they could govern their communities more effectively. /r/science, a community of 18 million subscribers and over a thousand volunteer moderators, has worked with communications scholar Nathan Matias to experiment with ways of enforcing their rules to maximize positive

discussions and throw out fewer rulebreakers12. The ability to experiment with different rules in different parts of a site and to study what rulesets best enable what kinds of conversations could have benefits for supporters of participatory democracy offline as well as online.

Beyond the vast wasteland

It’s fair to point out that the social media platforms we use today don’t fulfill all these functions. Few have taken steps to increase the diversity of opinions users are exposed to, and though many have tried to encourage civil discourse, very few have succeeded. It’s likely that some of these goals are incompatible with current ad-supported business models. Political polarization and name-calling may well generate more pageviews than diversity and civil deliberation.

Some of these proposed functions are likely incompatible. Communities that favor solidarity and subgroup identity, or turn that identity into mobilization, aren’t the best ones to support efforts for diversity or for dialog.

Finally, it’s also fair to note that there’s a dark side to every democratic function I’ve listed. The tools that allow marginalized people to report their news and influence media are the same ones that allow fake news to be injected into the media ecosystem. Amplification is a technique used by everyone from Black Lives Matter to neo-Nazis, as is mobilization, and the spaces for solidarity that allow Jen Brea to manage her disease allow “incels” to push each other towards violence. While I feel comfortable advocating for respectful dialog and diverse points of view, someone will see my advocacy as an attempt to push politically correct multiculturalism down their throat, or to silence the exclusive truth of their perspectives through dialogue. The bad news is that making social media work better for democracy likely means making it work better for the Nazis as well. The good news is that there are a lot more participatory democrats than there are Nazis.

My aim in putting forward seven things social media could do for democracy is two-fold. As we demand that Facebook, Twitter and others do better – and we should – we need to know what we’re asking for. I want Facebook to be more respectful of my personal information, more dedicated to helping me connect with my friends than marketing me to advertisers, but I also want them to be thinking about which of these democratic goals they hope to achieve.

The most profound changes Newt Minow inspired in television happened outside of commercial broadcasting, in the new space of public broadcasting. I believe we face a similar public media moment for social media. Achieving the democratic aims for social media outlined here requires a vision of social media that is plural in purpose, public in spirit and participatory in governance. Rather than one social network that fills all our needs, we need thousands of different social networks that serve different communities, meeting their needs for conversation with different rules, norms and purposes. We need tools that break the silos of contemporary social media, allowing a citizen to follow conversations in dozens of different spaces with a single tool. Some of these spaces will be ad or subscription supported, while some might be run by local governments with taxpayer funds, but some subset of social media needs to consciously serve the public interest as its

12 See civilservant.io

primary goal. Finally, farming out the management of online spaces to invisible workers half a world away from the conversations they’re moderating isn’t a viable model for maintaining public discussions. Many of these new spaces will be experiments in participatory governance, where participants will be responsible for determining and enforcing the local rules of the road.

We accept the importance of a free and vibrant press to the health of our democracy. It’s time to consider the importance of the spaces where we deliberate and debate that news, where we form coalitions and alliances, launch plans and provide support to each other. The free press had defenders like Thomas Jefferson, who declared that if he had to choose between “a government without newspapers or newspapers without a government, I should not hesitate a moment to prefer the latter”. The health of our digital public spheres is arguably as important, and worth our creative engagement as we imagine and build spaces that help us become better citizens. Social media as a vast wasteland is not inevitable, and it should not be acceptable. Envisioning a better way in which we interact with each other online is one of the signature problems of modern democracy and one that demands the attention of anyone concerned with democracy’s health in the 21st century.


PRIVACY AND CONSUMER CONTROL

J. Howard Beales III and Timothy J. Muris

Professor of Strategic Management and Public Policy, George Washington University
George Mason University Foundation Professor of Law at Antonin Scalia Law School, Senior Counsel at Sidley Austin LLP

We address the role of consumer control in protecting privacy. Our focus is commercial interactions, such as secondary uses of information originally collected for another purpose or the widespread practice of tracking consumers across websites. Social media, which attempt to mirror a non-commercial setting, raise different issues. In particular, when information is shared with friends on (or off of) social media, the friend also knows the information and can share it with others or use it for a different purpose.

We start by discussing policy approaches that in fact do little to protect most consumers, based in part on defining privacy in commercial transactions as the property of one side of the transaction. We then turn to a more practical policy approach based on the adverse consequences of information use.

Is Personal Information Property?

Many discussions of consumer control begin by asserting that information about a consumer is the property of the consumer. Because commercial information is in fact the joint product of an interaction between an individual consumer and another entity, property is of limited utility. Consider, for example, the online payment service Venmo, which lets consumers make and share payments with friends. By default, transactions on Venmo are public, but either the sender or the recipient of funds can change those settings. Which of us should control the information that you and I engaged in a transaction? The only sensible answer is that we both have access to that information, and can use it or reveal it as we see fit. Or consider genetic information. It is undoubtedly an individual’s genetic profile, but it also belongs to children, parents, and siblings. Yet it makes no sense to say that we need the permission of all our relatives to obtain our own genetic profiles and use them as we desire.

Consistent with the notion that information is property, the traditional (and European) approach to privacy has long been based on the so-called Fair Information Practices (“FIPs”). The quintessential feature of FIPs is the seemingly attractive idea of notice and choice—tell consumers about information practices and let them choose whether to allow that use of information or not.

Unfortunately, however, FIPs is fatally flawed for protecting consumer privacy.

Consider first the problems of notice. Privacy policies are everywhere, but they are seldom read and even less likely to be seriously considered in deciding whether to interact with a website. The reason is obvious: One study estimated that the opportunity cost of actually reading online privacy notices would be $781 billion.1 And, of course, the cost of reading the myriad of other privacy policies that surround us, from HIPAA to Gramm-Leach-Bliley notices and many others in between, is not even a part of this substantial cost estimate.

The costs of simply reading online notices greatly exceed what is at stake. The entire online advertising market in 2017 was $88 billion2, just over a tenth of the cost of reading the notices. Moreover, many consumers see the mere existence of a privacy policy as meaning that their privacy is protected, when the policy itself may offer no protection at all.3 Simpler privacy notices could help, but even if we could reduce the cost of reading privacy policies by half—a herculean undertaking—the costs would be far disproportionate to the stakes.

The principle of allowing consumer choice fares no better. Critical information systems function only because consumers lack choice about including their information. Credit reporting, for example, is critical for lenders to assess risks and avoid loans to people who will likely not repay. If consumers could choose whether their information is reported, however, consumers who are bad risks would likely opt out of having their payment history reported. The system would be much less able to distinguish good credit risks from bad, because many high risk consumers would simply not be reported. Responsible consumers with thin credit histories would be less able to get credit. As another example, consider the property recordation system, which records property ownership and any liens against the property. A creditor has a perfectly legitimate interest in knowing whether a consumer willing to pledge a house as collateral has already made that same promise to other lenders, but if consumers could opt out of having their information reported, they would be far less able to do so.

One could, of course, argue that these are exceptions to a general rule. But a general rule that applies only in certain unspecified circumstances is not a general rule at all, and thus it is hardly a guide for sound regulatory policy.

A privacy protection regime that relies on consumers deciphering elaborate privacy policies to determine with which service providers they are willing to interact is not consumer protection at all. It places a burden on consumers that is entirely unreasonable. Instead, it is, as the Europeans admit, about data protection. Data protection, however, is not an end in itself. Using data in ways that harm consumers may protect the data if consumers have consented, but it is hardly

1 McDonald, Aleecia M., and Lorrie Faith Cranor. "The Cost of Reading Privacy Policies." ISJLP 4 (2008): 543.
2 https://www.iab.com/news/digital-ad-spend-reaches-all-time-high-88-billion-2017-mobile-upswing-unabated-accounting-57-revenue/
3 Martin, K. (2015). Privacy Notices as Tabula Rasa: An Empirical Investigation into how Complying with a Privacy Notice is Related to Meeting Privacy Expectations Online. Journal of Public Policy & Marketing, 34(2), 210–227. https://doi.org/10.1509/jppm.14.139

consumer protection. It is the privacy equivalent of giving consumers a long list of carcinogenic food additives they should carefully avoid, without a practical guide to avoidance.

Property rights are an attractive way to organize society when transactions costs are relatively low, because they can be reassigned. If the property is more valuable to someone other than its current owner, it can easily be sold, or the parties can bargain about how best to deploy the asset. When transactions costs are high, however, negotiations to rearrange rights can cost more than the potential gain. In a simple world, product liability cases would involve contract law and parties could bargain about the desired degree of safety precautions. As the complexity of both manufacturing and distribution arrangements increases, however, product liability involves tort law: The manufacturer has a duty to take reasonable safety precautions, without the need (or practical ability) to negotiate the details.

Although transactions costs may not seem high between the consumer and the website collecting information, as noted above they are significant. And, unlike many other examples in the legal and economic literatures, benefits are small. For most consumers, there is very little at stake in considering how a website or another commercial entity might use information about a visit or a transaction. Because the probability of some adverse event occurring from secondary information use is remote, the question of how the information might be used is simply not worth much attention. Although a relatively small number of consumers are extremely concerned about protecting their privacy, most are not so concerned, and are therefore unwilling to incur the costs of even thinking about the issue.

In these circumstances, default rules about whether information can, or cannot, be shared are therefore likely to determine the outcome. As Richard Posner observed about the clear importance of default rules for organ donations, “the probability that one’s organs will be harvested for use in transplantation must be very slight—so slight that it doesn’t pay to think much about whether one wants to participate in such a program. When the consequences of making a ‘correct’ decision are slight, ignorance is rational, and therefore one expects default rules to have their greatest effect on behavior …”4 And that is what experimental studies of opting in versus opting out have consistently shown – the default rule controls for most consumers.

Of course, those who care more are more likely to be willing to think about the issue, whatever the default rule. Experimental evidence, although limited, indicates that those who care most about privacy make more consistent choices when the default rule changes.5 That finding argues for an “opt out” rule, if we must choose between opt in and opt out, because the people who care about the issue are willing to take the time necessary to consider it. Those who are not concerned do not have to face the costs of thinking about it. That is an appropriate allocation of effort, because those who are

4 Richard Posner, Organ Sales – Posner’s Comment, The Becker-Posner Blog (Jan. 1, 2006), available at http://www.becker-posner-blog.com/2006/01/page/2/.
5 Yee-Lin Lai & Kai-Lung Hui, Internet Opt-in and Opt-out: Investigating the Roles of Frames, Defaults and Privacy Concerns, Proceedings of the 2006 ACM SIGMIS CPR Conference on Computer Personnel Research, 253 (2006).

not concerned are happy to defer to the default rule. As discussed below, however, this default rule is crucial to the support of online advertising markets, and in turn to the principal funding mechanism for the internet content we all enjoy.

Are the Consequences of Information Use a Better Focus for Privacy Regulation?

Rather than property rights and default rules, a far superior way to develop privacy policy is to consider the consequences of information use and misuse. The reason we care about commercial information use or sharing is that something bad might happen to consumers, and the goal should be to avoid those adverse consequences. There is little reason for concern when information is used to benefit a consumer, such as when information is exchanged to facilitate a transaction, or when a vendor uses information that was originally collected for a completely different purpose to reduce the risk of fraud. There is reason for concern, however, when information is used in harmful ways. The harm, however, not the information, should provide the focal point for regulation.

The consequences of inappropriate information use may be physical, if use enables stalking, or locating children online. They may be economic, in the form of identity theft. They may be annoyances, in the form of unwanted telemarketing calls or irritating robocalls. And they may be the kinds of more subjective harms that have long been actionable as privacy torts: Intrusion upon seclusion, putting someone in a false light, or publicizing private information in a manner highly offensive to a reasonable person.

Since we implemented the harm-based approach to privacy regulation at the Federal Trade Commission in 2001, it has been extremely productive. It led directly to the National Do Not Call Registry, which worked well until it was overwhelmed by developments in robocall technology. (Like spam, robocalls will likely be solved by technology, and the regulators and phone companies are actively pursuing solutions.) It led as well to a series of cases to protect the security of information from thieves who would use it to do harm to consumers.

Focusing regulation on harm is particularly important because the internet has enabled substantial benefits from the information sharing economy. Fraud control tools that look for consistency in how an identity is used rely on information originally collected for far different purposes, including even magazine subscriptions, to help assess the risk that a particular transaction is fraudulent. Without such tools, identity theft would likely be an even more serious problem. Location data has enabled real-time navigation aids that can help avoid traffic problems and ease the daily commute. With the continued rapid growth of internet connected devices, more information will likely be available, and entrepreneurs will find new ways to use this information to enhance our lives.

Targeted advertising is a crucial use of online commercial information sharing. Advertising is the predominant mechanism for financing the internet content we all enjoy. This is not surprising: For centuries, advertising has been vital to financing news and entertainment, whether it is newspapers, magazines, radio, or television. Although pure subscription models exist, where consumers pay directly for content without advertising, most such content is advertiser supported. Consumer behavior

has made clear that most consumers most of the time will not pay enough to avoid the commercials that support much of our favorite programming.

In the digital advertising economy, what advertisers will pay for an advertisement depends on what they know about the person who will see that advertisement. Just as some audiences are more valuable than others in conventional media, the characteristics of the viewer are an important determinant of the price of online advertising. In turn, the price advertisers will pay determines the revenue available to support online content. Anonymity may be attractive to individual viewers, but it reduces the value of advertising and the revenue available to support the content that the viewer enjoys for “free.” It is, in effect, a subtle form of free riding on the contributions of others.

The primary source of information about viewers in online advertising markets is data derived from tracking cookies, from which advertisers develop a profile of the user’s browsing behavior and the kind of sites likely to be of interest. That information in turn helps predict a viewer’s likely interest in a particular advertisement.

The effect of information on advertising prices is relatively large. In two separate studies, one of us has examined the impact of better information on the price of digital advertising. A 2010 study of advertising networks examined prices for behaviorally targeted advertising on a cost per thousand basis. (Behavioral targeting uses browsing history to make better predictions about likely interests.) The study found that such advertising sold for nearly three times the price of “run of network” advertising that might appear anywhere in the advertising network.6

A second study, in 2013, examined the impact of additional information on the price of advertising exposures in two real-time advertising auction markets, finding that more information led to a significant price premium. In particular, if a cookie was available with the impression, the price was roughly three times higher than without a cookie. The longer the cookie had been in place, the higher the price of the advertisement. Moreover, the study found that advertising revenue derived from such third party sales was particularly important to smaller web publishers. Even the largest websites sold almost half of their advertising through these channels, while the smaller websites sold more than two thirds of advertising through third parties.7

Third party advertising intermediaries are an important part of the online marketplace, and the most likely competition for the companies that today dominate online advertising, Google and Facebook. These smaller firms, however, likely are more vulnerable to adverse effects of regulatory intervention, particularly if privacy legislation follows the GDPR model of enhanced consent. Well-known consumer facing companies have an inherent advantage in obtaining consent – not because they are necessarily more trustworthy, but simply because they are known. Consumers are less likely to grant consent to companies they have never heard of—for example, 33across, Accuen,

6 Howard Beales, “The Value of Behavioral Targeting,” published online by Network Advertising Initiative, available at http://www.networkadvertising.org/pdfs/Beales_NAI_Study.pdf, March 2010.
7 J. Howard Beales & Jeffrey A. Eisenach, “An Empirical Analysis of the Value of Information Sharing in the Market for Online Content,” published online by Digital Advertising Alliance, available at http://www.aboutads.info/resource/fullvalueinfostudy.pdf, January 2014.

Acuity, or Adara—which happen to be the first four members listed for the Network Advertising Initiative, a self-regulatory organization for third party advertising providers. As with other areas of regulation, large and well known companies have incentives to support privacy rules that insulate them from competitive pressures. That outcome harms consumers.

Current privacy discussions sometimes promote the value of “transparency.” Why? Most of us have no idea what programs or files are pre-loaded on our newly purchased computer, and we should not need to care. We do not know how the anti-lock brakes or the anti-skid features on our cars actually operate, and we should not need to care. Most of us have no idea of the number of intermediaries that handle a simple credit card transaction; we certainly have no idea who those intermediaries might be, and they may well be different in the transactions in which we engage. We should not have to care.

A more sensible goal of privacy protection, however, is not pursuing transparency; rather, it is making transparency unnecessary. We should be able to rely on the fact that our computer does not include malicious software that will destroy our data, that our automotive safety features will not kill us, and that our credit card transactions will not lead to identity theft. That should be the goal of privacy regulation as well.

Transparency is a means to an end, not an end in itself. When the government is an actor, transparency is often critical, because numerous potentially affected parties can, and likely will, scrutinize the information the government is seeking and how it intends to use it. That scrutiny is an important protector of our liberties. How a commercial enterprise uses data is far less susceptible to influence via transparency. Tellingly, if policing of commercial practice by those who understand the details of information technology is the goal, we need to reverse public policy toward privacy policies. To enable “advocate overseers” of commercial privacy practices, we need privacy policies that provide more detail, not less, and that are likely far more difficult for those who are not technologically sophisticated to understand.

Conclusion

Relying on consumer control to protect the privacy of commercial information is a chimera. Information about commercial interactions necessarily, and appropriately, belongs to both parties to the transaction. Either party may need the information, and may need to use it (or benefit from using it) in ways that are difficult to anticipate at the time of the transaction. There is no basis for assigning sole ownership to one party or the other. Information often has significant value when it is used for purposes different than those for which it was originally collected. Attempts to limit use to “agreed upon” uses will inevitably preclude valuable secondary uses of data that have yet to be developed. As noted above, fraud control is a clear example, because models are often built on data that were collected for entirely different purposes.

Privacy protection should focus on consumers, not data. We should seek to identify, and prevent, uses of data that likely harm consumers, rather than relying on consumers to understand the nuances of information sharing and information use to protect themselves. Notice and choice, or its close cousin transparency, is a distraction, not a solution.

REVIEW

Privacy and human behavior in the age of information

Alessandro Acquisti,1* Laura Brandimarte,1 George Loewenstein2

1H. John Heinz III College, Carnegie Mellon University, Pittsburgh, PA, USA. 2Dietrich College, Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, PA, USA.
*Corresponding author. E-mail: [email protected]

Science, 30 January 2015 • Vol. 347, Issue 6221

This Review summarizes and draws connections between diverse streams of empirical research on privacy behavior. We use three themes to connect insights from social and behavioral sciences: people’s uncertainty about the consequences of privacy-related behaviors and their own preferences over those consequences; the context-dependence of people’s concern, or lack thereof, about privacy; and the degree to which privacy concerns are malleable—manipulable by commercial and governmental interests. Organizing our discussion by these themes, we offer observations concerning the role of public policy in the protection of privacy in the information age.

If this is the age of information, then privacy is the issue of our times. Activities that were once private or shared with the few now leave trails of data that expose our interests, traits, beliefs, and intentions. We communicate using e-mails, texts, and social media; find partners on dating sites; learn via online courses; seek responses to mundane and sensitive questions using search engines; read news and books in the cloud; navigate streets with geotracking systems; and celebrate our newborns, and mourn our dead, on social media profiles. Through these and other activities, we reveal information—both knowingly and unwittingly—to one another, to commercial entities, and to our governments. The monitoring of personal information is ubiquitous; its storage is so durable as to render one’s past undeletable (1)—a modern digital skeleton in the closet. Accompanying the acceleration in data collection are steady advancements in the ability to aggregate, analyze, and draw sensitive inferences from individuals’ data (2).

Both firms and individuals can benefit from the sharing of once hidden data and from the application of increasingly sophisticated analytics to larger and more interconnected databases (3). So too can society as a whole—for instance, when electronic medical records are combined to observe novel drug interactions (4). On the other hand, the potential for personal data to be abused—for economic and social discrimination, hidden influence and manipulation, coercion, or censorship—is alarming. The erosion of privacy can threaten our autonomy, not merely as consumers but as citizens (5). Sharing more personal data does not necessarily always translate into more progress, efficiency, or equality (6).

Because of the seismic nature of these developments, there has been considerable debate about individuals’ ability to navigate a rapidly evolving privacy landscape, and about what, if anything, should be done about privacy at a policy level. Some trust people’s ability to make self-interested decisions about information disclosing and withholding. Those holding this view tend to see regulatory protection of privacy as interfering with the fundamentally benign trajectory of information technologies and the benefits such technologies may unlock (7). Others are concerned about the ability of individuals to manage privacy amid increasingly complex trade-offs. Traditional tools for privacy decision-making such as choice and consent, according to this perspective, no longer provide adequate protection (8). Instead of individual responsibility, regulatory intervention may be needed to balance the interests of the subjects of data against the power of commercial entities and governments holding that data.

Are individuals up to the challenge of navigating privacy in the information age? To address this question, we review diverse streams of empirical privacy research from the social and behavioral sciences. We highlight factors that influence decisions to protect or surrender privacy and how, in turn, privacy protections or violations affect people’s behavior. Information technologies have progressively encroached on every aspect of our personal and professional lives. Thus, the problem of control over personal data has become inextricably linked to problems of personal choice, autonomy, and socioeconomic power. Accordingly, this Review focuses on the concept of, and literature around, informational privacy (that is, privacy of personal data) but also touches on other conceptions of privacy, such as anonymity or seclusion. Such notions all ultimately relate to the permeable yet pivotal boundaries between public and private (9).

We use three themes to organize and draw connections between streams of privacy research that, in many cases, have unfolded independently. The first theme is people’s uncertainty about the nature of privacy trade-offs, and their own preferences over them. The second is the powerful context-dependence of privacy preferences: The same person can in some situations be oblivious to, but in other situations be acutely concerned about, issues of privacy. The third theme is the malleability of privacy preferences, by which we mean that privacy preferences are subject to influence by those possessing greater insight into their determinants. Although most individuals are probably unaware of the diverse influences on their concern about privacy, entities whose interests depend on information revelation by others are not. The manipulation of subtle factors that activate or suppress privacy concern can be seen in myriad realms—such as the choice of sharing defaults on social networks, or the provision of greater control on social media—which creates an illusion of safety and encourages greater sharing.

Uncertainty, context-dependence, and malleability are closely connected. Context-dependence is amplified by uncertainty. Because people are often “at sea” when it comes to the consequences of, and their feelings about, privacy, they cast around for cues to guide their behavior. Privacy preferences and behaviors are, in turn, malleable and subject to influence in large part because they are context-dependent and because those with an interest in information divulgence are able to manipulate context to their advantage.

Uncertainty

Individuals manage the boundaries between their private and public spheres in numerous ways: via separateness, reserve, or anonymity (10); by protecting personal information; but also through deception and dissimulation (11). People establish such boundaries for many reasons, including the need for intimacy and psychological respite and the desire for protection from social influence and control (12). Sometimes, these motivations are so visceral and primal that privacy-seeking behavior emerges swiftly and naturally. This is often the case when physical privacy is intruded—such as when a stranger encroaches in one’s personal space (13–15) or demonstratively eavesdrops on a conversation. However, at other times (often including when informational privacy is at stake) people experience considerable uncertainty about whether, and to what degree, they should be concerned about privacy.

A first and most obvious source of privacy uncertainty arises from incomplete and asymmetric information. Advancements in information technology have made the collection and usage of personal data often invisible. As a result, individuals rarely have clear knowledge of what information other people, firms, and governments have about them or how that information is used and with what consequences. To the extent that people lack such information, or are aware of their ignorance, they are likely to be uncertain about how much information to share.

Two factors exacerbate the difficulty of ascertaining the potential consequences of privacy behavior. First, whereas some privacy harms are tangible, such as the financial costs associated with identity theft, many others, such as having strangers become aware of one’s life history, are intangible. Second, privacy is rarely an unalloyed good; it typically involves trade-offs (16).

For example, ensuring the privacy of a consumer’s purchases may protect her from price discrimination but also deny her the potential benefits of targeted offers and advertisements.

Elements that mitigate one or both of these exacerbating factors, by either increasing the tangibility of privacy harms or making trade-offs explicit and simple to understand, will generally affect privacy-related decisions. This is illustrated by one laboratory experiment in which participants were asked to use a specially designed search engine to find online merchants and purchase from them, with their own credit cards, either a set of batteries or a sex toy (17). When the search engine only provided links to the merchants’ sites and a comparison of the products’ prices from the different sellers, a majority of participants did not pay any attention to the merchants’ privacy policies; they purchased from those offering the lowest price. However, when the search engine also provided participants with salient, easily accessible information about the differences in privacy protection afforded by the various merchants, a majority of participants paid a roughly 5% premium to buy products from (and share their credit card information with) more privacy-protecting merchants.

A second source of privacy uncertainty relates to preferences. Even when aware of the consequences of privacy decisions, people are still likely to be uncertain about their own privacy preferences. Research on preference uncertainty (18) shows that individuals often have little sense of how much they like goods, services, or other people. Privacy does not seem to be an exception. This can be illustrated by research in which people were asked sensitive and potentially incriminating questions either point-blank, or followed by credible assurances of confidentiality (19). Although logically such assurances should lead to greater divulgence, they often had the opposite effect because they elevated respondents’ privacy concerns, which without assurances would have remained dormant.

The remarkable uncertainty of privacy preferences comes into play in efforts to measure individual and group differences in preference for privacy (20). For example, Westin (21) famously used broad (that is, not contextually specific) privacy questions in surveys to cluster individuals into privacy segments: privacy fundamentalists, pragmatists, and unconcerned. When asked directly, many people fall in the first segment: They profess to care a lot about privacy and express particular concern over losing control of their personal information or others gaining unauthorized access to it (22, 23). However, doubts about the power of attitudinal scales to predict actual privacy behavior arose early in the literature (24). This discrepancy between attitudes and behaviors has become known as the “privacy paradox.”

In one early study illustrating the paradox, participants were first classified into categories of privacy concern inspired by Westin’s categorization based on their responses to a survey dealing with attitudes toward sharing data (25). Next, they were presented with products to purchase at a discount with the assistance of an anthropomorphic shopping agent. Few, regardless of the group they were categorized in, exhibited much reluctance to answering the increasingly sensitive questions the agent plied them with.

Why do people who claim to care about privacy often show little concern about it in their daily behavior? One possibility is that the paradox is illusory—that privacy attitudes, which are defined broadly, and intentions and behaviors, which are defined narrowly, should not be expected to be closely related (26, 27). Thus, one might care deeply about privacy in general but, depending on the costs and benefits prevailing in a specific situation, seek or not seek privacy protection (28).

This explanation for the privacy paradox, however, is not entirely satisfactory for two reasons. The first is that it fails to account for situations in which attitude-behavior dichotomies arise under high correspondence between expressed concerns and behavioral actions. For example, one study compared attitudinal survey answers to actual social media behavior (29). Even within the subset of participants who expressed the highest degree of concern over strangers being able to easily find out their sexual orientation, political views, and partners’ names, 48% did in fact publicly reveal their sexual orientation online, 47% revealed their political orientation, and 21% revealed their current partner’s name. The second reason is that privacy decision-making is only in part the result of a rational “calculus” of costs and benefits (16, 28); it is also affected by misperceptions of those costs and benefits, as well as social norms, emotions, and heuristics. Any of these factors may affect behavior differently from how they affect attitudes. For instance, present-bias can cause even the privacy-conscious to engage in risky revelations of information, if the immediate gratification from disclosure trumps the delayed, and hence discounted, future consequences (30).

Preference uncertainty is evident not only in studies that compare stated attitudes with behaviors, but also in those that estimate monetary valuations of privacy. “Explicit” investigations ask people to make direct trade-offs, typically between privacy of data and money. For instance, in a study conducted both in Singapore and the United States, students made a series of hypothetical choices about sharing information with websites that differed in protection of personal information and prices for accessing services (31). Using conjoint analysis, the authors concluded that subjects valued protection against errors, improper access, and secondary use of personal information between $30.49 and $44.62. Similar to direct questions about attitudes and intentions, such explicit investigations of privacy valuation spotlight privacy as an issue that respondents should take account of and, as a result, increase the weight they place on privacy in their responses.

Implicit investigations, in contrast, infer valuations of privacy from day-to-day decisions in which privacy is only one of many considerations and is typically not highlighted. Individuals engage in privacy-related transactions all the time, even when the privacy trade-offs may be intangible or when the exchange of personal data may not be a visible or primary component of a transaction. For instance, completing a query on a search engine is akin to selling personal data (one’s preferences and contextual interests) to the engine in exchange for a service (search results). “Revealed preference” economic arguments would then conclude that because technologies for information sharing have been enormously successful, whereas technologies for information protection have not, individuals hold overall low valuations of privacy. However, that is not always the case: Although individuals at times give up personal data for small benefits or discounts, at other times they voluntarily incur substantial costs to protect their privacy. Context, as further discussed in the next section, matters.

In fact, attempts to pinpoint exact valuations that people assign to privacy may be misguided, as suggested by research calling into question the stability, and hence validity, of privacy estimates. In one field experiment inspired by the literature on endowment effects (32), shoppers at a mall were offered gift cards for participating in a nonsensitive survey. The cards could be used online or in stores, just like debit cards. Participants were given either a $10 “anonymous” gift card (transactions done with that card would not be traceable to the subject) or a $12 trackable card (transactions done with that card would be linked to the name of the subject). Initially, half of the participants were given one type of card, and half of the other. Then, they were all offered the opportunity to switch. Some shoppers, for example, were given the anonymous $10 card and were asked whether they would accept $2 to “allow my name to be linked to transactions done with the card”; other subjects were asked whether they would accept a card with $2 less value to “prevent my name from being linked to transactions done with the card.” Of the subjects who originally held the less valuable but anonymous card, five times as many (52.1%) chose it and kept it over the other card than did those who originally held the more valuable card (9.7%). This suggests that people value privacy more when they have it than when they do not.

The consistency of preferences for privacy is also complicated by the existence of a powerful countervailing motivation: the desire to be public, share, and disclose. Humans are social animals, and information sharing is a central feature of human connection. Social penetration theory (33) suggests that progressively increasing levels of self-disclosure are an essential feature of the natural and desirable evolution of interpersonal relationships from superficial to intimate. Such a progression is only possible when people begin social interactions with a baseline level of privacy. Paradoxically, therefore, privacy provides an essential foundation for intimate disclosure. Similar to privacy, self-disclosure confers numerous objective and subjective benefits, including psychological and physical health (34, 35). The desire for interaction, socialization, disclosure, and recognition or fame (and, conversely, the fear of anonymous unimportance) are human motives no less fundamental than the need for privacy. The electronic media of the current age provide unprecedented opportunities for acting on them. Through so-

The way we construe and negotiate public and private spheres is context-dependent because the boundaries between the two are murky (41): The rules people follow for managing privacy vary by situation, are learned over time, and are based on cultural, motivational, and purely situational criteria.

to normative bases of decision-making. For example, in one online experiment (47) individuals were more likely to reveal personal and even incriminating information on a website with an unprofessional and casual design with the banner “How Bad R U” than on a site with a formal
For instance, usually we may be interface—even though the site with the formal cial media, disclosures can build social capital, more comfortable sharing secrets with friends, interface was judged by other respondents to be increase self-esteem (36), and fulfill ego needs but at times we may reveal surprisingly personal much safer (Fig. 2). Yet in other situations, it is (37). In a series of functional magnetic reso- information to a stranger on a plane (42). The the physical environment that influences privacy nance imaging experiments, self-disclosure was theory of contextual “integrity” posits that social concern and associated behavior (48), sometimes even found to engage neural mechanisms as- expectations affect our beliefs regarding what is even unconsciously. For instance, all else being sociated with reward; people highly value the private and what is public, and that such expec- equal, intimacy of self-disclosure is higher in ability to share thoughts and feelings with others. tations vary with specific contexts (43). Thus, seeking warm, comfortable rooms, with soft lighting, than Indeed, subjects in one of the experiments were privacy in public is not a contradiction; individuals in cold rooms with bare cement and overhead willing to forgo money in order to disclose about can manage privacy even while sharing informa- fluorescent lighting (49). themselves (38). tion, and even on social media (44). For instance, Some of the cues that influence perceptions a longitudinal study of actual disclosure behavior of privacy are one’scultureandthebehaviorof Context-dependence of online social network users highlighted that other people, either through the mechanism of Much evidence suggests that privacy is a uni- over time, many users increased the amount of per- descriptive norms (imitation) or via reciprocity versal human need (Box 1) (39). However, when sonal information revealed to their friends (those (50). Observing other people reveal information people are uncertain about their preferences they connected to them on the network) while simul- increases the likelihood that one will reveal it often search for cues in their environment to taneously decreasing the amounts revealed to oneself (51). In one study, survey-takers were asked provide guidance. And because cues are a func- strangers (those unconnected to them) (Fig. 1) (45). a series of sensitive personal questions regarding tion of context, behavior is as well. Applied to The cues that people use to judge the impor- their engagement in illegal or ethically question- privacy, context-dependence means that individ- tance of privacy sometimes result in sensible be- able behaviors. After answering each question, uals can, depending on the situation, exhibit any- havior. For instance, the presence of government participants were provided with information, ma- thing ranging from extreme concern to apathy regulation has been shown to reduce consumer nipulated unbeknownst to them, about the per- about privacy. Adopting the terminology of Westin, concern and increase trust; it is a cue that people centage of other participants who in the same we are all privacy pragmatists, privacy funda- use to infer the existence of some degree of pri- survey had admitted to having engaged in a given mentalists, or privacy unconcerned, depending vacy protection (46). In other situations, however, behavior. Being provided with information that on time and place (40). 
cues can be unrelated, or even negatively related, suggested that a majority of survey takers had admitted a certain questionable behavior increased participants’ willingness to disclose their engage- Fig. 1. Endogenous Disclosure behavior in online social media ment in other, also sensitive, behaviors. Other privacy behavior and studies have found that the tendency to recip- Percentage of profiles publicly revealing information over time exogenous shocks. rocate information disclosure is so ingrained that (2005-2011) Privacy behavior is people will reveal more information even to a affected both by 100 percent computer agent that provides information about endogenous motiva- itself (52). Findings such as this may help to tions (for instance, 80 explain the escalating amounts of self-disclosure subjective preferen- we witness online: If others are doing it, people ces) and exogenous 60 seem to reason unconsciously, doing so oneself Shares birthday factors (for instance, must be desirable or safe. publicly on profile changes in user inter- 40 Other people’s behavior affects privacy con- faces). Over time, the cerns in other ways, too. Sharing personal infor- percentage of mem- 20 mation with others makes them “co-owners” of bers in the Carnegie that information (53) and, as such, responsible Mellon University 0 for its protection. Mismanagement of shared Facebook network who information by one or more co-owners causes chose to publicly “turbulence” of the privacy boundaries and, con- reveal personal 100 percent sequently, negative reactions, including anger or information decreased Shares high school mistrust. In a study of undergraduate Facebook dramatically. For 80 publicly on profile users (54), for instance, turbulence of privacy instance, over 80% of boundaries, as a result of having one’sprofile profiles publicly 60 exposed to unintended audiences, dramatically revealed their birthday increased the odds that a user would restrict pro- in 2005, but less than 40 file visibility to friends-only. 20% in 2011. The Likewise, privacy concerns are often a function decreasing trend is not 20 of past experiences. When something in an en- uniform, however. vironment changes, such as the introduction of a After decreasing for 0 camera or other monitoring devices, privacy con- several years, the 2005 2006 2007 2008 2009 2010 2011 cern is likely to be activated. For instance, surveil- percentage of profiles lancecanproducediscomfort(55) and negatively that publicly revealed their high school roughly doubled between 2009 and 2010—after Facebook affect worker productivity (56). However, privacy changed the default visibility settings for various fields on its profiles, including high school (bottom), concern, like other motivations, is adaptive; peo- but not birthday (top) (45). ple get used to levels of intrusion that do not 45 SCIENCE sciencemag.org 30 JANUARY 2015 • VOL 347 ISSUE 6221 THE END OF PRIVACY

change over time. In an experiment conducted in when it comes to data collection. When companies cerns can be muted by the very interventions Helsinki (57), the installation of sensing and mon- do ring them—for example, by using overly fine- intended to protect privacy. Perversely, 62% of itoring technology in households led family mem- tuned personalized advertisements—consumers respondents to a survey believed (incorrectly) that bers initially to change their behavior, particularly are alerted (68) and can respond with negative the existence of a privacy policy implied that a site in relation to conversations, nudity, and sex. And “reactance” (69). could not share their personal information with- yet, if they accidentally performed an activity, such Various so-called “antecedents” (70)affectpri- out permission (40), which suggests that simply as walking naked into the kitchen in front of the vacy concerns and can be used to influence pri- posting a policy that consumers do not read may sensors, it seemed to have the effect of “breaking vacy behavior. For instance, trust in the entity lead to misplaced feelings of being protected. the ice”; participants then showed less concern receiving one’s personal data soothes concerns. Control is another feature that can inculcate about repeating the behavior. More generally, par- Moreover, because some interventions that are in- trust and produce paradoxical effects. Perhaps be- ticipants became inured to the presence of the tended to protect privacy can establish trust, con- cause of its lack of controversiality, control has technology over time. The context-dependence of privacy concern has major implications for the risks associated with modern information and communication technol- Box 1. Privacy: A modern invention? ogy (58). With online interactions, we no longer have a clear sense of the spatial boundaries of our Is privacy a modern, bourgeois, and distinctly Western invention? Or are privacy needs a listeners. Who is reading our blog post? Who is universal feature of human societies? Although access to privacy is certainly affected by looking at our photos online? Adding complexity socioeconomic factors (87) [some have referred to privacy as a “luxury good” (15)], and to privacy decision-making, boundaries between privacy norms greatly differ across cultures (65, 85), the need for privacy seems to be a public and private become even less defined in the universal human trait. Scholars have uncovered evidence of privacy-seeking behaviors across online world (59) where we become social media peoples and cultures separated by time and space: from ancient Rome and Greece (39, 88)to friends with our coworkers and post pictures to preindustrialized Javanese, Balinese, and Tuareg societies (89, 90). Privacy, as Altman (91) an indistinct flock of followers. With different so- noted, appears to be simultaneously culturally specific and culturally universal. Cues of a cial groups mixing on the Internet, separating common human quest for privacy are also found in the texts of ancient religions: The Quran online and offline identities and meeting our and (49:12) instructs against spying on one another (92); the Talmud (Bava Batra 60a) advises others’ expectations regarding privacy becomes home-builders to position windows so that they do not directly face those of one’s neighbors more difficult and consequential (60). 
(93); the Bible (Genesis, 3:7) relates how Adam and Eve discovered their nakedness after eating the fruit of knowledge and covered themselves in shame from the prying eyes of God Malleability and influence (94) [a discussion of privacy in Confucian and Taoist cultures is available in (95)]. Implicit in Whereas individuals are often unaware of the di- this heterogeneous selection of historical examples is the observation that there exist verse factors that determine their concern about multiple notions of privacy. Although contemporary attention focuses on informational privacy in a particular situation, entities whose privacy, privacy has been also construed as territorial and physical, and linked to concepts as prosperity depends on information revelation by diverse as surveillance, exposure, intrusion, insecurity, appropriation, as well as secrecy, others are much more sophisticated. With the protection, anonymity, dignity, or even freedom [a taxonomy is provided in (9)]. emergence of the information age, growing insti- tutional and economic interests have developed around disclosure of personal information, from online social networks to behavioral advertising. Fig. 2. The impact of cues on It is not surprising, therefore, that some entities disclosure behavior. Ameasure have an interest in, and have developed expertise of privacy behavior often used in in, exploiting behavioral and psychological proces- empirical studies is a subject’s ses to promote disclosure (61). Such efforts play on willigness to answer personal, the malleability of privacy preferences, a term we sometimes sensitive questions— use to refer to the observation that various, some- for instance, by admitting or times subtle, factors can be used to activate or denying having engaged in suppress privacy concerns, which in turn affect questionable behaviors. In an behavior. online experiment (47), individ- Default settings are an important tool used by uals were asked a series of different entities to affect information disclo- intrusive questions about their sure. A large body of research has shown that behaviors, such as “Have you defaultsettingsmatterfordecisionsasimportant ever tried to peek at someone as organ donation and retirement saving (62). else’s e-mail without them Sticking to default settings is convenient, and knowing?” Across conditions, people often interpret default settings as implicit the interface of the question- recommendations (63). Thus, it is not surprising naire was manipulated to look that default settings for one’sprofile’s visibility on more or less professional. The social networks (64), or the existence of opt-in or y axis captures the mean affir- opt-out privacy policies on websites (65), affect mative admission rates (AARs) individuals’ privacy behavior (Fig. 3). to questions that were rated as In addition to default settings, websites can intrusive (the proportion of also use design features that frustrate or even con- questions answered affirma- fuse users into disclosing personal information tively) normed, question by ques- (66), a practice that has been referred to as “ma- tion, on the overall average AAR for the question. Subjects revealed more personal and even incriminating licious interface design” (67). Another obvious information on the website with a more casual design, even though the site with the formal interface was strategy that commercial entities can use to avoid judged by other respondents to be much safer. 
The study illustrates how cues can influence privacy behavior raising privacy concerns is not to “ring alarm bells” in a fashion that is unrelated, or even negatively related, to normative bases of decision-making. 46 30 JANUARY 2015 • VOL 347 ISSUE 6221 sciencemag.org SCIENCE simpleasthreedotsinastylizedfaceconfigura- Fig. 3. Changes in Facebook Default visibility settings in social media tion (76).Bythesametoken,thedepersonalization default profile visibility set- over time induced by computer-mediated interaction (77), tings over time (2005–2014). either in the form of lack of identifiability or of Over time, Facebook profiles Visible (default setting) Not visible visual anonymity (78), can have beneficial effects, included an increasing amount of such as increasing truthful responses to sensitive fields and, therefore, types of surveys (79, 80). Whether elevating or suppressing data. In addition, default visibility 2005 Networks Wall privacyconcernsissocially beneficial critically de- settings became more revelatory pends, yet again, on context [a meta-analysis of the between 2005 (top) and 2014 Contact Entire Internet information impact of de-identification on behavior is provided (bottom), disclosing more per- in (81)]. For example, perceptions of anonymity sonal information to larger audi- Facebook can alternatively lead to dishonest or prosocial ences, unless the user manually behavior. Illusory anonymity induced by darkness overrode the defaults (fields such Friends caused participants in an experiment (82)tocheat as “Likes” and “Extended Profile Friends in order to gain more money. This can be inter- Data” did not exist in 2005). User preted as a form of disinhibition effect (83), by “Basic profile data” includes which perceived anonymity licenses people to act hometown, current city, high Names in ways that they would otherwise not even con- school, school (status, concen- Birthday sider. In other circumstances, though, anonymity tration, secondary concentration), leads to prosocial behavior—for instance, higher interested in, relationship, willingness to share money in a dictator game, workplace, about you, and quotes. Basic Photos when coupled with priming of religiosity (84). Examples of “Extended profile profile ” data data include life events such as Conclusions new job, new school, engagement, Gender Picture Norms and behaviors regarding private and pub- expecting a baby, moved, bought licrealmsgreatlydifferacrosscultures(85). Amer- a home, and so forth. “Picture” icans,forexample,arereputedtobemoreopen refers to the main profile image. about sexual matters than are the Chinese, whereas “Photos” refers to the additional 2014 Networks Wall the latter are more open about financial matters images that users might have Extended Contact Entire Internet (such as income, cost of home, and possessions). shared in their account. “Names” profile information data And even within cultures, people differ substan- refers to the real name, the user- tially in how much they care about privacy and name, and the user ID. This figure Facebook what information they treat as private. And as we is based on the authors’ data and have sought to highlight in this Review, privacy the original visualization created Friends Likes Friends concerns can vary dramatically for the same in- by M. McKeon, available at dividual, and for societies, over time. http://mattmckeon.com/ User If privacy behaviors are culture- and context- facebook-privacy. 
dependent, however, the dilemma of what to share Names Birthday and what to keep private is universal across so- cieties and over human history. The task of nav- igating those boundaries, and the consequences of mismanaging them, have grown increasingly Basic Photos profile complex and fateful in the information age, to data the point that our natural instincts seem not Gender Picture nearly adequate. In this Review, we used three themes to organize and draw connections between the social and be- been one of the capstones of the focus of both ineffective. Research has highlighted not only that havioral science literatures on privacy and behav- industry and policy-makers in attempts to balance an overwhelming majority of Internet users do ior. We end the Review with a brief discussion of privacy needs against the value of sharing. Control not read privacy policies (72), but also that few the reviewed literature’s relevance to privacy policy. over personal information is often perceived as a users would benefit from doing so; nearly half of a Uncertainty and context-dependence imply that critical feature of privacy protection (39). In prin- sample of online privacy policies were found to be people cannot always be counted on to navigate ciple, it does provide users with the means to writteninlanguagebeyondthegraspofmost the complex trade-offs involving privacy in a self- manage access to their personal information. Re- Internet users (73). Indeed, and somewhat amus- interested fashion. People are often unaware of search, however, shows that control can reduce ingly, it has been estimated that the aggregate the information they are sharing, unaware of how privacy concern (46), which in turn can have un- opportunity cost if U.S. consumers actually read it can be used, and even in the rare situations intended effects. For instance, one study found the privacy policies of the sites they visit would when they have full knowledge of the conse- that participants who were provided with greater be $781 billion/year (74). quences of sharing, uncertain about their own explicit control over whether and how much of Although uncertainty and context-dependence preferences. Malleability, in turn, implies that peo- their personal information researchers could lead naturally to malleability and manipulation, ple are easily influenced in what and how much publish ended up sharing more sensitive informa- not all malleability is necessarily sinister. Consid- they disclose. Moreover, what they share can be tion with a broader audience—the opposite of the er monitoring. Although monitoring can cause used to influence their emotions, thoughts, and ostensible purpose of providing such control (71). discomfort and reduce productivity, the feeling of behaviors in many aspects of their lives, as in- Similar to the normative perspective on control, being observed and accountable can induce peo- dividuals, consumers, and citizens. Although such increasing the transparency of firms’ data prac- pletoengageinprosocialbehaviorsor(forbetter influenceisnotalwaysornecessarilymalevolent tices would seem to be desirable. However, trans- or for worse) adhere to social norms (75). Prosocial or dangerous, relinquishing control over one’s parency mechanisms can be easily rendered behavior can be heightened by monitoring cues as personal data and over one’s privacy alters the 47 SCIENCE sciencemag.org 30 JANUARY 2015 • VOL 347 ISSUE 6221 THE END OF PRIVACY

Insights from the social and behavioral empirical research on privacy reviewed here suggest that policy approaches that rely exclusively on informing or "empowering" the individual are unlikely to provide adequate protection against the risks posed by recent information technologies. Consider transparency and control, two principles conceived as necessary conditions for privacy protection. The research we highlighted shows that they may provide insufficient protections and even backfire when used apart from other principles of privacy protection.

The research reviewed here suggests that if the goal of policy is to adequately protect privacy (as we believe it should be), then we need policies that protect individuals with minimal requirement of informed and rational decision-making—policies that include a baseline framework of protection, such as the principles embedded in the so-called fair information practices (86). People need assistance and even protection to aid in navigating what is otherwise a very uneven playing field. As highlighted by our discussion, a goal of public policy should be to achieve a more even equity of power between individuals, consumers, and citizens on the one hand and, on the other, the data holders such as governments and corporations that currently have the upper hand. To be effective, privacy policy should protect real people—who are naïve, uncertain, and vulnerable—and should be sufficiently flexible to evolve with the emerging unpredictable complexities of the information age.

REFERENCES AND NOTES

1. V. Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton Univ. Press, Princeton, 2011).
2. L. Sweeney, Int. J. Uncert. Fuzziness Knowl. Based Syst. 10, 557–570 (2002).
3. A. McAfee, E. Brynjolfsson, Harv. Bus. Rev. 90, 60–66, 68, 128 (2012).
4. N. P. Tatonetti, P. P. Ye, R. Daneshjou, R. B. Altman, Sci. Transl. Med. 4, 125ra31 (2012).
5. J. E. Cohen, Stanford Law Rev. 52, 1373–1438 (2000).
6. K. Crawford, K. Miltner, M. L. Gray, Int. J. Commun. 8, 1663–1672 (2014).
7. R. A. Posner, Am. Econ. Rev. 71, 405–409 (1981).
8. D. J. Solove, Harv. Law Rev. 126, 1880–1903 (2013).
9. D. J. Solove, Univ. Penn. L. Rev. 154, 477–564 (2006).
10. F. Schoeman, Ed., Philosophical Dimensions of Privacy—An Anthology (Cambridge Univ. Press, New York, 1984).
11. B. M. DePaulo, C. Wetzel, R. Weylin Sternglanz, M. J. W. Wilson, J. Soc. Issues 59, 391–410 (2003).
12. S. T. Margulis, J. Soc. Issues 59, 243–261 (2003).
13. E. Goffman, Relations in Public: Microstudies of the Public Order (Harper & Row, New York, 1971).
14. E. Sundstrom, I. Altman, Hum. Ecol. 4, 47–67 (1976).
15. B. Schwartz, Am. J. Sociol. 73, 741–752 (1968).
16. R. S. Laufer, M. Wolfe, J. Soc. Issues 33, 22–42 (1977).
17. J. Y. Tsai, S. Egelman, L. Cranor, A. Acquisti, Inf. Syst. Res. 22, 254–268 (2011).
18. P. Slovic, Am. Psychol. 50, 364–371 (1995).
19. E. Singer, H. Hippler, N. Schwarz, Int. J. Public Opin. Res. 4, 256–268 (1992).
20. V. P. Skotko, D. Langmeyer, Sociometry 40, 178–182 (1977).
21. A. Westin, Harris Louis & Associates, Harris-Equifax Consumer Privacy Survey (tech. rep., 1991).
22. M. J. Culnan, P. K. Armstrong, Organ. Sci. 10, 104–115 (1999).
23. H. J. Smith, S. J. Milberg, S. J. Burke, Manage. Inf. Syst. Q. 20, 167–196 (1996).
24. B. Lubin, R. L. Harrison, Psychol. Rep. 15, 77–78 (1964).
25. S. Spiekermann, J. Grossklags, B. Berendt, E-Privacy in 2nd Generation E-Commerce: Privacy Preferences versus Actual Behavior (Third ACM Conference on Electronic Commerce, Tampa, 2001), pp. 38–47.
26. P. A. Norberg, D. R. Horne, D. A. Horne, J. Consum. Aff. 41, 100–126 (2007).
27. I. Ajzen, M. Fishbein, Psychol. Bull. 84, 888–918 (1977).
28. P. H. Klopfer, D. I. Rubenstein, J. Soc. Issues 33, 52–65 (1977).
29. A. Acquisti, R. Gross, in Privacy Enhancing Technologies, G. Danezis, P. Golle, Eds. (Springer, New York, 2006), pp. 36–58.
30. A. Acquisti, Privacy in Electronic Commerce and the Economics of Immediate Gratification (Fifth ACM Conference on Electronic Commerce, New York, 2004), pp. 21–29.
31. I. Hann, K. Hui, S. T. Lee, I. P. L. Png, J. Manage. Inf. Syst. 24, 13–42 (2007).
32. A. Acquisti, L. K. John, G. Loewenstein, J. Legal Stud. 42, 249–274 (2013).
33. I. Altman, D. Taylor, Social Penetration: The Development of Interpersonal Relationships (Holt, Rinehart & Winston, New York, 1973).
34. J. Frattaroli, Psychol. Bull. 132, 823–865 (2006).
35. J. W. Pennebaker, Behav. Res. Ther. 31, 539–548 (1993).
36. C. Steinfield, N. B. Ellison, C. Lampe, J. Appl. Dev. Psychol. 29, 434–445 (2008).
37. C. L. Toma, J. T. Hancock, Pers. Soc. Psychol. Bull. 39, 321–331 (2013).
38. D. I. Tamir, J. P. Mitchell, Proc. Natl. Acad. Sci. U.S.A. 109, 8038–8043 (2012).
39. A. Westin, Privacy and Freedom (Athenäum, New York, 1967).
40. C. J. Hoofnagle, J. M. Urban, Wake Forest Law Rev. 49, 261–321 (2014).
41. G. Marx, Ethics Inf. Technol. 3, 157–169 (2001).
42. J. W. Thibaut, H. H. Kelley, The Social Psychology of Groups (Wiley, Oxford, 1959).
43. H. Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford Univ. Press, Redwood City, 2009).
44. d. boyd, It's Complicated: The Social Lives of Networked Teens (Yale Univ. Press, New Haven, 2014).
45. F. Stutzman, R. Gross, A. Acquisti, J. Priv. Confidential. 4, 7–41 (2013).
46. H. Xu, H. H. Teo, B. C. Tan, R. Agarwal, J. Manage. Inf. Syst. 26, 135–174 (2009).
47. L. K. John, A. Acquisti, G. Loewenstein, J. Consum. Res. 37, 858–873 (2011).
48. I. Altman, The Environment and Social Behavior: Privacy, Personal Space, Territory, and Crowding (Cole, Monterey, 1975).
49. A. L. Chaikin, V. J. Derlega, S. J. Miller, J. Couns. Psychol. 23, 479–481 (1976).
50. V. J. Derlega, A. L. Chaikin, J. Soc. Issues 33, 102–115 (1977).
51. A. Acquisti, L. K. John, G. Loewenstein, J. Mark. Res. 49, 160–174 (2012).
52. Y. Moon, J. Consum. Res. 26, 323–339 (2000).
53. S. Petronio, Boundaries of Privacy: Dialectics of Disclosure (SUNY Press, Albany, 2002).
54. F. Stutzman, J. Kramer-Duffield, Friends Only: Examining a Privacy-Enhancing Behavior in Facebook (SIGCHI Conference on Human Factors in Computing Systems, ACM, Atlanta, 2010), pp. 1553–1562.
55. T. Honess, E. Charman, Closed Circuit Television in Public Places: Its Acceptability and Perceived Effectiveness (Police Research Group, London, 1992).
56. M. Gagné, E. L. Deci, J. Organ. Behav. 26, 331–362 (2005).
57. A. Oulasvirta et al., Long-Term Effects of Ubiquitous Surveillance in the Home (ACM Conference on Ubiquitous Computing, Pittsburgh, 2012), pp. 41–50.
58. L. Palen, P. Dourish, Unpacking "Privacy" for a Networked World (SIGCHI Conference on Human Factors in Computing Systems, ACM, Fort Lauderdale, 2003), pp. 129–136.
59. Z. Tufekci, Bull. Sci. Technol. Soc. 28, 20–36 (2008).
60. J. A. Bargh, K. Y. A. McKenna, G. M. Fitzsimons, J. Soc. Issues 58, 33–48 (2002).
61. R. Calo, Geo. Wash. L. Rev. 82, 995–1304 (2014).
62. E. J. Johnson, D. Goldstein, Science 302, 1338–1339 (2003).
63. C. R. McKenzie, M. J. Liersch, S. R. Finkelstein, Psychol. Sci. 17, 414–420 (2006).
64. R. Gross, A. Acquisti, Information Revelation and Privacy in Online Social Networks (ACM Workshop on Privacy in the Electronic Society, New York, 2005), pp. 71–80.
65. E. J. Johnson, S. Bellman, G. L. Lohse, Mark. Lett. 13, 5–15 (2002).
66. W. Hartzog, Am. Univ. L. Rev. 60, 1635–1671 (2010).
67. G. Conti, E. Sobiesk, Malicious Interface Design: Exploiting the User (19th International Conference on World Wide Web, ACM, Raleigh, 2010), pp. 271–280.
68. A. Goldfarb, C. Tucker, Mark. Sci. 30, 389–404 (2011).
69. T. B. White, D. L. Zahay, H. Thorbjørnsen, S. Shavitt, Mark. Lett. 19, 39–50 (2008).
70. H. J. Smith, T. Dinev, H. Xu, Manage. Inf. Syst. Q. 35, 989–1016 (2011).
71. L. Brandimarte, A. Acquisti, G. Loewenstein, Soc. Psychol. Personal. Sci. 4, 340–347 (2013).
72. C. Jensen, C. Potts, C. Jensen, Int. J. Hum. Comput. Stud. 63, 203–227 (2005).
73. C. Jensen, C. Potts, Privacy Policies as Decision-Making Tools: An Evaluation of Online Privacy Notices (SIGCHI Conference on Human Factors in Computing Systems, ACM, Vienna, 2004), pp. 471–478.
74. A. M. McDonald, L. F. Cranor, I/S: J. L. Policy Inf. Society 4, 540–565 (2008).
75. C. Wedekind, M. Milinski, Science 288, 850–852 (2000).
76. M. Rigdon, K. Ishii, M. Watabe, S. Kitayama, J. Econ. Psychol. 30, 358–367 (2009).
77. S. Kiesler, J. Siegel, T. W. McGuire, Am. Psychol. 39, 1123–1134 (1984).
78. A. N. Joinson, Eur. J. Soc. Psychol. 31, 177–192 (2001).
79. S. Weisband, S. Kiesler, Self Disclosure on Computer Forms: Meta-Analysis and Implications (SIGCHI Conference on Human Factors in Computing Systems, ACM, Vancouver, 1996), pp. 3–10.
80. R. Tourangeau, T. Yan, Psychol. Bull. 133, 859–883 (2007).
81. T. Postmes, R. Spears, Psychol. Bull. 123, 238–259 (1998).
82. C. B. Zhong, V. K. Bohns, F. Gino, Psychol. Sci. 21, 311–314 (2010).
83. J. Suler, Cyberpsychol. Behav. 7, 321–326 (2004).
84. A. F. Shariff, A. Norenzayan, Psychol. Sci. 18, 803–809 (2007).
85. B. Moore, Privacy: Studies in Social and Cultural History (Armonk, New York, 1984).
86. Records, Computers and the Rights of Citizens (Secretary's Advisory Committee, U.S. Dept. of Health, Education and Welfare, Washington, DC, 1973).
87. E. Hargittai, in Social Stratification, D. Grusky, Ed. (Westview, Boulder, 2008), pp. 936–944.
88. P. Ariès, G. Duby, Eds., A History of Private Life: From Pagan Rome to Byzantium (Harvard Univ. Press, Cambridge, 1992).
89. R. F. Murphy, Am. Anthropol. 66, 1257–1274 (1964).
90. A. Westin, in Philosophical Dimensions of Privacy: An Anthology, F. D. Schoeman, Ed. (Cambridge Univ. Press, Cambridge, UK, 1984), pp. 56–74.
91. I. Altman, J. Soc. Issues 33, 66–84 (1977).
92. M. A. Hayat, Inf. Comm. Tech. L. 16, 137–148 (2007).
93. A. Enkin, "Privacy," www.torahmusings.com/2012/07/privacy (2014).
94. J. Rykwert, Soc. Res. (New York) 68, 29–40 (2001).
95. C. B. Whitman, in Individualism and Holism: Studies in Confucian and Taoist Values, D. J. Munro, Ed. (Center for Chinese Studies, Univ. Michigan, Ann Arbor, 1985), pp. 85–100.

ACKNOWLEDGMENTS

We are deeply grateful to the following individuals: R. Gross and F. Stutzman for data analysis; V. Marotta, V. Radhakrishnan, and S. Samat for research; W. Harsch for graphic design; and A. Adams, I. Adjerid, R. Anderson, E. Barr, C. Bennett, R. Boehme, R. Calo, J. Camp, F. Cate, J. Cohen, D. Cole, M. Culnan, R. De Wolf, J. Donath, S. Egelman, N. Ellison, S. Fienberg, A. Forget, U. Gasser, B. Gellman, J. Graves, J. Grimmelmann, J. Grossklags, S. Guerses, J. Hancock, E. Hargittai, W. Hartzog, J. Hong, C. Hoofnagle, J. P. Hubaux, K. L. Hui, A. Joinson, S. Kiesler, J. King, B. Knijnenburg, A. Kobsa, B. Kraut, P. Leon, M. Madden, I. Meeker, D. Mulligan, C. Olivola, E. Peer, S. Petronio, S. Preibusch, J. Reidenberg, S. Romanosky, M. Rotenberg, I. Rubinstein, N. Sadeh, A. Sasse, F. Schaub, P. Shah, R. E. Smith, S. Spiekermann, J. Staddon, L. Strahilevitz, P. Swire, O. Tene, E. VanEpps, J. Vitak, R. Wash, A. Woodruff, H. Xu, and E. Zeide for enormously valuable comments and suggestions.

10.1126/science.aaa1465

THE SUMMER OF HATE SPEECH*

Larry Downes

Project Director, Georgetown Center for Business and Public Policy

Over the past few years, pressure has been building on online platforms to do something — anything — about increasingly hostile, misleading and distasteful Internet content. The 2016 election, Brexit and other polarizing events have brought out some of the worst in human nature, much of it amplified and rapidly disseminated on free digital services.

Growing conflict over who gets to say what to whom and where, of course, is not limited to the Internet. Even here in Berkeley, Calif., the free-speech capital of the world, we have been engulfed in fights — some violent — over which viewpoints should be heard on the UC campus. Berkeley Free Speech Movement founder Mario Savio would not be proud.

But this year, largely unregulated Internet companies have fallen into a black hole of disgruntled users, hyperventilating activists and an angry Congress. Facebook, Twitter, Instagram, YouTube and other social media companies are innovating wildly, implementing increasingly Rube Goldberg-like fixes to adjust their content policies and the technologies that enforce them.

"Users are calling on online platforms to provide a moral code," says Daphne Keller, director of the intermediary liability project at Stanford's Center for Internet and Society. "But we'll never agree on what should come down. Whatever the rules, they'll fail." Humans and technical filters alike, according to Keller, will continue to make "grievous errors."

Do not look to the Constitution to solve the problem. Contrary to popular belief, the First Amendment plays no role in determining when content hosts have gone too far, or not far enough. That is because, as I regularly explain to incredulous students, free-speech protections limit only censorship by governments and then only in the United States.

Some restrictions on foreign nationals — e.g., electioneering — are permitted. With very limited exceptions, private actors can press mute on whomever and whatever they want. Indeed, the Constitution protects the sites from government efforts to impose speech codes — moral or otherwise.

But while the First Amendment does not apply to the practices of Internet companies, the inevitable failure of platform providers to find the "Goldilocks zone" of just-right content moderation underscores the wisdom of the Founding Fathers. Picking and choosing among good and bad speech is a no-win proposition, no matter how good your intentions.

* First published in The Washington Post, August 30, 2018

So here is my advice to tech CEOs: Don't try. Don't moderate, don't filter, don't judge. Allow opinions informed and ignorant alike to circulate freely in what Supreme Court Justice William O. Douglas famously called "the marketplace of ideas." Trust that, sooner or later, truth will prevail over lies and good over evil. Deny yourself the power to interfere, especially at those excruciating moments when the temptation is most irresistible — when the most detestable content is flowering malodorously.

Today, that solution may seem even more unpalatable than it was when the Bill of Rights was being debated nearly 250 years ago. But every day brings new evidence that the alternative of unaccountable private gatekeepers appointing themselves the task of deciding what violates virtual moral codes, especially in the chaos of messy and often ugly political and social disruption, is worse. Much worse.

A sobering report last week on Motherboard, for example, details the "impossible" effort of a beleaguered Facebook to reinvent its "community standards" — a daunting task given the billions of posts a week originating in over a hundred countries. Acceptable content rules are developed unilaterally by a policy team "made up of lawyers, public relations professionals, ex-public policy wonks and crisis management experts."

Enforcement, according to the report, is now the job of about 7,500 low-wage "moderators," deciding case by case whether to remove posts flagged by artificial intelligence software or by complaining users — with the latter assigned a "trustworthiness score." Flowcharts guide the review, asking, for example, whether the challenged post encourages violence, curses or uses slurs against a protected class or is guilty of "comparing them to animals, disease or filth."

National laws and local customs also have to be taken into consideration. The process and the rules are constantly and opaquely updated, often in response to the latest public relations crisis. No surprise few moderators last a year in the job, according to the report.

As one indication of just how fraught the complex system has become, moderators removed a July Fourth post quoting the Declaration of Independence. Why? A reference to "merciless Indian savages" was deemed hate speech.

Yet Facebook's face-plants seem almost trivial compared with the free-speech barbarism of other Internet giants. Consider the social news site Reddit, which three years ago announced a confusing set of changes to its "Content Policy" in an improvised effort to curb sexist posts. Forums dominated by such content were simply erased.

The deleted groups, said then-chief executive Ellen Pao, "break our Reddit rules based on their harassment of individuals," a determination made solely by the company. (Due process is also a government-only requirement.)

After users and volunteer editors revolted over both the policy change and its ham-handed implementation, Reddit's board of directors dismissed Pao and revised yet again the amendments to its policy. But Reddit founder and returning chief executive Steve Huffman still defended the changes.

Neither he nor co-founder Alexis Ohanian, Huffman said, had "created Reddit to be a bastion of free speech, but rather as a place where open and honest discussion can happen."

Except that Ohanian, in an earlier interview, said precisely the opposite, down to the same archaic phrasing. When asked what he thought the Founding Fathers would have made of the site's unregulated free-for-all of opinions, Ohanian boasted, "A bastion of free speech on the World Wide Web? I bet they would like it."

Even worse, consider the approach of website security provider Cloudflare, whose CEO, Matthew Prince, personally terminated the account of the neo-Nazi Daily Stormer after previously defending his company's decision to manage the site's traffic. Prince's reasoned explanation for the change of heart? "I woke up in a bad mood and decided someone shouldn't be allowed on the Internet," he wrote in an internal memo to employees.

In a supreme gesture of having his cake and censoring it too, Prince then condemned his own action, fretting "no one should have that power." But he does. (Activists for "net neutrality," which would prohibit blocking access to any website, notably want restrictions solely for ISPs.)

Refusing to moderate at all would certainly be easier. But could Internet users stomach it? The First Amendment, after all, is nearly absolute. The U.S. Supreme Court has carved out a few narrow exceptions, most of them irrelevant to the current debate over online speech. Discussions of current events and politics, for example, are considered the most protected category of all.

Even the most repulsive opinions are protected from government suppression. As First Amendment scholar Eugene Volokh reminds politicians, "There is no hate speech exception to the First Amendment."


IS THE TECH BACKLASH GOING ASKEW?*

Larry Downes and Blair Levin

Project Director, Georgetown Center for Business and Public Policy

As winter sets in, the dark days for technology companies have been getting longer.

We sympathize with the increased anxiety over the poor data hygiene practices of leading tech platforms. And we agree that legislation clarifying the duties of those who collect and use personal information is important, as is delineating enforcement responsibilities among agencies and jurisdictions.

We're concerned, however, by the tendency of some to shoehorn pet theories into the debate — notably the passionate but incomplete argument that it's time to jettison decades of antitrust policy that limits the government to intervening only when market concentration has, or could, cause higher prices for consumers.

The vague alternative, proposed by critics on the left and right, is a return to a failed framework that boils down to, at best, a general belief that "big is bad" and, at worst, to politically-based payback for companies on the wrong side of an election.

Writing recently in the New York Times, law professor Tim Wu urged antitrust enforcers to launch sweeping lawsuits against Facebook and other "Big Tech" platforms that would likely last a decade or more. Anything less, Wu says, amounts to giving "these companies a pass when it comes to antitrust enforcement, allowing them to dominate their markets and buy up their competitors."

The goal of Wu's approach is not to actually win so much as it is to distract the companies' leaders. The litigation is not a means but the end in itself. Paraphrasing Thomas Jefferson, Wu advocates spilling the corporate equivalent of the "blood of patriots," attacking relentlessly regardless of whether there's actually, you know, a sustainable case.

That logic is oddly aligned with the views of some, including President Trump and his former attorney general, Jeff Sessions, who believe they are justified in threatening companies they view as politically hostile on the fuzzy grounds that there is a "very antitrust situation."

Wu's best example of how this abuse of legal process works was the U.S. government's 13-year crusade in the 1970s to break up IBM. At the time, IBM was the undisputed leader of the computer business.

Though the government was never able to prove the company had, as accused, "undertaken exclusionary and predatory conduct with the aim and effect of eliminating competition," Wu believes the cost and uncertainty for the company of the extended legal fight saved the U.S. economy, giving personal computers an opening to proliferate and unseat IBM's mainframe computer "monopoly."

* First published in The Washington Post, January 16, 2019

Never mind that IBM was the most successful seller of PCs and, through its relationship with Lenovo, still is. The company was certainly hurt by the ultimately abandoned case, as were, in later examples, Microsoft, Intel, Qualcomm and others.

But do antitrust jihads really help consumers more than they hurt them? Probably not. While well-founded prosecutions, such as those leading to the 1982 breakup of AT&T, did open critical markets, that success may not be duplicated elsewhere. In fact, Philip Verveer, a Visiting Fellow at the Harvard Kennedy School and the government's lead counsel in the AT&T case, recently concluded that unleashing antitrust against today's platform companies would amount to little more than "an act of faith that a successful prosecution would bring about benefits."

There's no need to gamble. The more effective regulator of digital markets has always been the happy confluence of engineering and business innovations in hardware, software and communications driving exponential improvements in speed, quality, size, energy usage and, of course, cost.

As computing continues to improve, markets become unsettled, innovation flourishes, and new leaders emerge. It's not the arbitrary release of the "blood of patriots" that best corrects market imbalances. It's the normal cycles of capitalism sped up by disruptive innovation, or what economist Joseph Schumpeter in 1942 famously called "creative destruction."

If the tech sector was immune to that process, as some allege, we would expect stagnant productivity and wage growth, with profits protected and funneled to shareholders.

But that view doesn't square with recent findings from Michael Mandel, chief economist at the Progressive Policy Institute. According to Mandel, who has been measuring the digital economy for decades, the technology sector broadly "accounted for almost half of non-health private sector growth between 2007 and 2017." Technology prices, at the same time, "fell by 15%, compared to a 21 percent increase in the rest of the non-health private sector."

Annual pay for tech workers (including hourly workers at e-commerce fulfillment centers) rose at more than twice the rate of other industries. Job growth in tech was four times faster.

Lower prices, higher pay and growing productivity: That doesn't suggest a problem, or at least not one requiring radical restructuring of the companies driving the gains.

Consider the alternative approach taken in Europe, which has ramped up an aggressive attack on U.S. technology companies, applying the kind of expansive view of competition law urged by Wu and others. European businesses are still largely no-shows in the digital revolution, the result not of monopolies but of the micromanagement of employment, investment and infrastructure by regulators. Rather than freeing up local innovators to benefit European consumers, the European Union seems content simply to fine successful U.S. businesses.

The European approach highlights another problem with calls for U.S. antitrust enforcers to punish platform companies just for their size. Looking ahead to the technology drivers of the near future, such as artificial intelligence and autonomous vehicles, any hopes for the United States to lead internationally depend on heavy investment today in research and development. Many of the highest-risk bets are being placed by, you guessed it, today's "monopoly" companies.

So what should U.S. regulators do? The starting point is vigilance in applying tried-and-true tools to new harms. The Federal Trade Commission, for example, has already brought over 150 enforcement actions against tech companies in the last decade for violations of consumer protection laws, reaching settlements that in many cases include decades of ongoing oversight and reporting.

The trade agency is gearing up a broad review of Facebook to see whether the company's many embarrassing failures of the past few years amount to violations of a 2011 consent decree or, indeed, new violations. And the commissioners recently told Congress that they want additional resources and authority to better enforce existing law, joining a bipartisan call for targeted legislation, particularly on consumer data collection and use.

Tech's loudest critics argue that the gears of government are turning too slowly. But that's actually another reason why calls to simply throw out measured approaches to regulating competition are dangerous, despite their populist appeal. Even assuming new standards could be developed that wouldn't stall the innovation engine driving the U.S. economy, rewriting federal competition law, realistically, would take Congress and the courts decades to hammer out.

Fortunately, the next wave of disruptive technology is always coming. It won't fix everything. But if history is any guide, it will fix an awful lot — and do so without breaking everything that's actually working.


HOW MORE REGULATION FOR U.S. TECH COULD BACKFIRE*

Larry Downes

Project Director, Georgetown Center for Business and Public Policy

If 2017 was the year that tech became a lightning rod for dissatisfaction over everything from the last U.S. presidential election to the possibility of a smartphone-driven dystopia, 2018 already looks to be that much worse.

Innovation and its discontents are nothing new, of course, going back at least to the 18th century, when Luddites physically attacked industrial looms. Hostility to the internet appeared the moment the Web became a commercial technology, threatening from the outset to upend traditional businesses and maybe even our deeply embedded beliefs about family, society, and government. George Mason University's Adam Thierer, reviewing a resurgence of books about the "existential threat" of disruptive innovation, has detailed what he calls a "techno-panic template" in how we react to disruptive innovations that don't fit into familiar categories.

But with the proliferation of new products and their reach ever-deeper into our work, home, and personal lives, the relentless tech revolt of the last year shouldn't really have come as any surprise, especially to those of us in Silicon Valley.

Still, the only solution critics can propose for our growing tech malaise is government intervention — usually expressed vaguely as "regulating tech" or "breaking up" the biggest and most successful Internet companies. Break-ups, which require a legal finding that the structure of a company is enabling anti-competitive behavior, seem now to have become a synonym for somehow crippling a successful enterprise.

Of course, nobody thinks technology companies should be left unregulated. Tech companies, like any other enterprise, are already subject to a complex tangle of laws, varying based on industry and local authority. They all pay taxes, report their finances, disclose significant shareholders, and comply with the full range of employment, health and safety, advertising, intellectual property, consumer protection and anti-competition laws, to name just a few.

There are also specialized laws for tech, including limits on how Internet companies can engage with children. In the U.S., commercial drones must be registered with the Federal Aviation Administration. Genetic testing and other health-related devices must pass muster with the Food and Drug Administration.

* First published in the Harvard Business Review, February 9, 2018

Increasingly, ride-sharing and casual rental services must meet some of the same standards and inspections as long-time transportation and hospitality incumbents.

There are growing calls, likewise, to regulate social media and video platforms as if they were traditional print or broadcast news sources, even though doing so would almost certainly run afoul of the very free speech protections proponents are hoping to preserve.

But perhaps what tech critics really want are more innovative rules. Traditional regulations, after all, were designed in response to earlier technologies and the market failures they generated. They don't cover largely speculative and mostly future-looking concerns.

What if, for example, artificial intelligence puts an entire generation out of work? What if genetic manipulations accidentally unravel the fabric of DNA, reversing evolution in one fell swoop? What if social media companies learn so much about us that they undermine — intentionally or otherwise — democratic institutions, creating a tyranny of "unregulated" big data controlled by a few unelected young CEOs?

The problem with such speculation is that it is just that. In deliberative government, legislators and regulatory agencies must weigh the often-substantial costs of proposed limits against their likely benefit, balanced against the harm of simply leaving in place the current legal status quo.

But there's no scientific method for estimating the risk of prematurely shutting down experiments that could yield important discoveries. There's no framework for pre-emptively regulating nascent industries and potential new technologies. By definition, they've caused no measurable harm.

In particular, breaking up the most successful Internet and cloud-based companies only looks like a solution. It isn't. Antitrust is meant to punish dominant companies that use their leverage to raise costs for consumers. Yet the services provided by technology companies are often widely available at little or no cost. Many of the products and services of Amazon, Apple, Google, Facebook and Microsoft — the internet giants referred to by the New York Times as "the frightful five" — are free for consumers.

More to the point, break-ups almost always backfire. Think of the former AT&T, which was regulated as a monopoly utility until 1982, when the government changed its mind and split the company into component long-distance and regional phone companies. The sum of the parts actually increased in value — except for the long-distance company, which faded in the face of unregulated new competitors.

Then, over the next 20 years, the regional companies put themselves back together, and, with economies of scale, reemerged as a mobile internet network and Pay TV provider, competing with cable companies and fast-growing internet-based video services including YouTube, Amazon and Netflix. What started as a regulatory punishment for AT&T led to an even bigger network of companies.

On the other hand, the constant threat of a forced divestiture can be disastrous for consumers and enterprise alike. IBM prevailed against multiple efforts to break it up along product lines, but was so shaken by the decades-long experience that the company became dangerously timid about future innovations, missing the shifts first to client-server and then to Internet-based computing architectures, nearly bankrupting the business.

IBM prevailed against multiple efforts to break it up along product lines, but was so shaken by the decades-long experience that the company became dangerously timid about future innovations, missing the shifts first to client-server and then to Internet-based computing architectures, nearly bankrupting the business.

Microsoft, similarly, was so distracted by its multi-year fight to avoid break-up both by U.S. and European regulators that it lost essential momentum. It mostly missed out on the mobile revolution, and hesitated in responding to open-source alternatives to operating systems, desktop applications, and other software apps that seriously eroded the company's once-formidable competitive advantage. (The company is now growing a cloud services business, but is still far behind Google and Amazon.)

These examples hint at an alternative to random and unproven new forms of regulation for emerging technologies: simply waiting for the next generation of innovations and the entrepreneurs who wield them to disrupt the supposed monopolists right out of their disagreeable behaviors, sometimes fatally.

Today, it might seem that the companies in the frightful five have unbeatable leads in retailing and cloud services, social media, search, advertising, desktop operating systems and mobile devices. But the landscape of business history is littered with the corpses of supposedly invulnerable giants. In our research on wildly successful enterprises that fail to find a second act, Paul Nunes and I note that the average life span of companies on the Standard & Poor's 500 has fallen from 67 years in the 1920s to just 15 years today.

In the early years of the internet age, a half-dozen companies were serially crowned the victor in search, only to be unseated by more innovative technology soon after. Yahoo and others gave way to Google, just as Blackberry faded in response to the iPhone. MySpace (remember them?) collapsed at the introduction of Facebook, which, at the time, was little more than a bit of software from a college student. Napster lost in court (no new laws were needed for that), leaving Apple to define a working market for digital music. And who remembers the alarm bells rung in 2000 when then-dominant ISP America Online merged with content behemoth Time Warner?

The best regulator of technology, it seems, is simply more technology. And despite fears that channels are blocked, markets are locked up, and gatekeepers have closed networks that the next generation of entrepreneurs need to reach their audience, somehow they do it anyway — often embarrassingly fast, whether the presumed tyrant being deposed is a long-time incumbent or last year's startup darling.

That, in any case, is the theory on which U.S. policymakers across the political spectrum have nurtured technology-based innovation since the founding of the Republic. Taking the long view, it's clearly been a winning strategy, especially when compared to the more invasive, command-and-control approach taken by the European Union, which continues to lag on every measure of the Internet economy. (Europe's strategy now seems to be little more than to hobble U.S. tech companies and hope for the best.)

Or compared to China, which has built tech giants of its own, but only by limiting outside access to its singularly enormous local market.

And always with the risk that too much success by Chinese entrepreneurs may one day crash headfirst into a political culture that is deeply uncomfortable with the internet's openness.

That solution — to stay the course, to continue leaving tech largely to its own correctives — is cold comfort to those who believe tomorrow's problems, coming up fast in the rear-view mirror, are both unprecedented and catastrophic.

Yet, so far there's no evidence supporting shrill predictions of a technology-driven apocalypse. Or that existing safeguards — both market and legal — won't save us from our worst selves.

Nor have tech's growing list of critics proposed anything more specific than simply calling for "regulation" to save us. Perhaps that's because effective remedies are incredibly hard to design.

FIXING SOCIAL MEDIA'S GRAND BARGAIN*

Jack Balkin

Professor of Constitutional Law, Yale Law School

To regulate social media in the twenty-first century, we should focus on its political economy: the nature of digital capitalism and how we pay for the digital public sphere we have. Our digital public sphere is premised on a grand bargain: free communications services in exchange for pervasive data collection and analysis. This allows companies to sell access to end users to the companies' advertisers and other businesses.

The political economy of digital capitalism creates perverse incentives for social media companies. It encourages companies to surveil, addict, and manipulate their end users and to strike deals with third parties who will further manipulate them.

Treating social media companies as public forums or public utilities is not the proper cure. It may actually make things worse. Even so, social media companies, whether they like it or not, have public obligations. They play important roles in organizing and curating public discussion and they have moral and professional responsibilities both to their end users and to the general public.

A reinvigorated competition law is one important way of dealing with the problems of social media, as I will describe later on. This essay, however, focuses on another approach: new fiduciary obligations that protect end-user privacy and counteract social media companies' bad incentives.

How Do We Pay for the Digital Public Sphere?

How does the political and economic system pay for the digital public sphere in our Second Gilded Age?1

* Balkin, "Fixing Social Media's Grand Bargain," Hoover Working Group on National Security, Technology, and Law, Aegis Series Paper No. 1814 (October 16, 2018), available at https://www.lawfareblog.com/advanced-persistent-manipulators-and-social-media-nationalism-national-security-world-audiences.
1 During the First Gilded Age, which ran from the end of Reconstruction to the beginning of the twentieth century, technological innovation created huge fortunes in the hands of a small number of entrepreneurs and produced increasing inequalities of wealth and deep political corruption. Waves of immigration and increasing racial tensions led to the emergence of populist demagogues. American government was increasingly for sale, and many people despaired for the future of American democracy. The corruption and inequality of the First Gilded Age led to the reforms of the Progressive Era and, eventually, the New Deal. For a general history of the period, see H. W. Brands, American Colossus: The Triumph of Capitalism, 1865–1900 (New York: Anchor, 2010).

In large part, it pays for it through digital surveillance and through finding ever new ways to make money out of personal data.

Twenty-first-century social media like Facebook or YouTube differ from twentieth-century mass media like broadcast radio and television in two important respects. First, they are participatory, many-to-many media. Twentieth-century broadcast media are few-to-many: they publish and broadcast the content of a relatively small number of people to large audiences. In the twentieth century, most people would never get to use these facilities of mass communication to speak themselves. They were largely relegated to the role of audiences.

Twenty-first-century social media, by contrast, are many-to-many: they depend on mass participation as well as mass audiences. They make their money by encouraging enormous numbers of people to spend as much time as possible on their platforms and produce enormous amounts of content, even if that contribution is something as basic as commenting on, liking, or repeating somebody else's contribution. Facebook and Twitter would quickly collapse if people didn't constantly produce fresh content. Search engines, which are key parts of the digital infrastructure, also depend on people creating fresh links and fresh content that they can collect and organize.

Second, twenty-first-century social media like Facebook, YouTube, and Instagram rely on far more advanced and individualized targeted advertising than was available to twentieth-century broadcast media. Television and radio attempted to match advertisers with viewers, but there were limits to how finely grained they could target their audiences. (And newspapers, of course, relied on very broad audiences to sell classified advertisements.)

What makes targeted advertising possible is the collection, analysis, and collation of personal data from end users. Digital communication leaves collectible traces of interactions, choices, and activities. Hence digital companies can collect, analyze, and develop rich dossiers of data about end users. These include not only the information end users voluntarily share with others, but their contacts, friends, time spent on various pages, links visited, even keystrokes. The more that companies know about their end users, the more they know about other people who bear any similarity to them, even if the latter spend less time on the site or are not even clients. In the digital age, we are all constantly informing, not only on ourselves, but on our friends and relatives and, indeed, on everyone else in society.

This is not only true of social media, but of a wide range of digital services. The publisher of a paperback book in the 1960s could tell little about the reading habits of the people who purchased it, while Amazon can tell a great deal about the reading habits of the people who use their Kindle service, down to the length of time spent, the pages covered, the text highlighted and shared, and so on. As the Internet of things connects more and more devices and appliances to digital networks, surveillance spreads to ever more features of daily interaction. In general, the more interactive and the more social the service, the greater the opportunities for data collection, data analysis, and individualized treatment.

Data collection and analysis allow targeted advertising, which allows more efficient advertising campaigns, which allow greater revenues.

But data collection and analysis offer another advantage: in theory, they give social media opportunities to structure and curate content for end users that they will find most engaging and interesting. That is important because advertising revenues depend on the amount of time and attention spent on the site. More engaging content means more time spent and more attention gained.

Social media companies have economic incentives to develop algorithms that will promote content that engages people. That is because companies' central goal is to gain attention share. This leads them to collect ever more data about their end users so that they can tailor content to individual end users to maximize their emotional engagement.2

This creates a problem. Often what engages people the most is material that produces strong emotional reactions—even if it is polarizing, false, or demagogic. Companies have economic incentives to expose people to this material. And unscrupulous actors, both domestic and foreign, have learned to take advantage of this feature of social media. As a result, the same business model that allows companies to maximize advertising revenues also makes them conduits and amplifiers for propaganda, conspiracy theories, and fake news.3

The Digital Grand Bargain and its Problems

Social media business models are a special case of the grand bargain that has made the digital public sphere possible in our Second Gilded Age. The bargain goes something like this: We will give you miraculous abilities. We will give you social media that allow you to connect with anyone, anywhere, anytime, in a fraction of a second. We will give you search engines that find anything you are looking for instantaneously. We will give you new forms of entertainment that are absorbing, engaging, outrageous, and amusing. We will give you ever more ways to measure yourself and express yourself to others.

We will give all of this to you, for free! And in return, you will let us surveil you. You will let us collect and analyze your habits, your locations, your links, your contacts with your friends, your mouse clicks, your keystrokes, anything we can measure. We will gladly take all of that and study it, and draw inferences from it, and monetize it, so that we can give you all these miraculous things.

2 See Zeynep Tufekci, "Facebook's Surveillance Machine," New York Times, March 19, 2018, accessed September 27, 2018, https://www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html ("Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook's true customers, whom it works hard to please."). These business models and the incentives they create are examples of what Shoshana Zuboff calls "surveillance capitalism." Shoshana Zuboff, "Big Other: Surveillance Capitalism and the Prospects of an Information Civilization," Journal of Information Technology 30 (April 2015): 75 (defining "surveillance capitalism" as a "new logic of accumulation" and a "new form of information capitalism [that] aims to predict and modify human behavior as a means to produce revenue and market control").
3 See, e.g., Paul Lewis, "'Fiction Is Outperforming Reality': How YouTube's Algorithm Distorts Truth," Guardian, February 2, 2018, accessed September 27, 2018, https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth (explaining how YouTube's algorithm to engage viewers promotes conspiracy theories).

And we will use that data to perform experiments on you to figure out how to keep you even more focused on our sites and our products, so that you can produce even more data for us, which we can monetize.

This is the grand bargain of the Second Gilded Age. This is twenty-first-century data capitalism. And this is also the irony of the digital age: an era that promised unbounded opportunities for freedom of expression is also an era of increasing digital surveillance and control. The same technological advances allow both results. The infrastructure of digital free expression is also the infrastructure of digital surveillance.

What is objectionable about this grand bargain? The most obvious objection is that we must surrender individual privacy in order to speak. We must make ever more detailed portraits of our lives available to social media companies and their business partners. Beyond this, however, lies a deeper concern: the potential for abuse of power. In particular, the digital grand bargain creates an increased danger of manipulation—both by social media companies and by those who use social media companies—that is of a different degree and kind than that which existed in the pre-digital era. By "manipulation" I mean techniques of persuasion and influence that (1) prey on another person's emotional vulnerabilities and lack of knowledge (2) to benefit oneself or one's allies and (3) reduce the welfare of the other person.4 (Successful manipulation can also have ripple effects on third parties, such as family members and friends, or even fellow citizens.)

The problem with the current business models for social media companies such as Facebook, Twitter, and YouTube is that they give companies perverse incentives to manipulate end users—or to allow third parties to manipulate end users—if this might increase advertising revenues, profits, or both.

Manipulation is not a new problem. In the past, businesses have often appealed to people's emotions, desires, and weaknesses and have taken advantage of their relative ignorance. So have demagogues and political con artists. But the digital world of social media amplifies the opportunities for manipulation, both by social media companies and by those who use social media to reach end users.

The digital age exacerbates the twentieth-century problem of manipulation in several important respects. First, there is the issue of individual targeting. Twentieth-century influence campaigns were usually aimed at broad groups of individuals, with effects that were often hit-or-miss. With digital technologies it is now possible to tailor influence campaigns to individuals or to very small groups.

4 This definition of manipulation focuses on leveraging another’s lack of knowledge and emotional vulnerability to benefit oneself at the expense of another’s welfare. This is not the only way to define the concept. See, e.g., Cass R. Sunstein, The Ethics of Influence: Government in the Age of Behavioral Science (New York: Cambridge University Press, 2016), 82 (a technique of influence is “manipulative to the extent that it does not sufficiently engage or appeal to [a person’s] capacity for reflection and deliberation”). That definition, however, raises the problem of how to distinguish manipulation from a wide range of ordinary techniques of marketing. A third approach would focus on real or objective interests: manipulation is persuasion that leverages lack of knowledge and emotional vulnerability to cause people to act against their real interests, however those are defined. This approach raises the question of how we know what people’s real or objective interests are.

Instead of appealing to the general emotional vulnerabilities of the public or the vulnerabilities of large demographic groups, digital companies can increasingly target the specific vulnerabilities and emotional hot buttons of individuals who may not be aware of precisely how they have been singled out.

Second, there are differences in scale, speed, and interactivity. Digital technologies allow individualized messages to be targeted to vast numbers of people simultaneously, something that was not possible with twentieth-century media. Moreover, end users' responses can be collected instantaneously, allowing companies to continually fine-tune their approaches, speeding up the Darwinian evolution of the most successful influence strategies. On top of this, digital companies now have the ability to perform interactive social science experiments on us to perfect their abilities to leverage and control our emotions. Facebook, for example, performed experiments to manipulate the emotional moods of 700,000 end users without their knowledge.5 It has also experimented with ways of encouraging people to vote. But such techniques might also be used to discourage people from voting.6 Moreover, these experiments can affect the behavior of not only end users but also those they come into contact with.7

Third, there is the problem of addiction. The more digital companies know about people's emotional vulnerabilities and predispositions, the more easily they can structure individual end-user experience to addict end users to the site.8 Social media leverage the data they collect about end users to offer periodic stimulation that keeps users connected and constantly checking and responding to social media. Media have always been designed to draw people's attention, but the digital experience can be especially immersive and pervasive, and thus a more powerful lure than a billboard or magazine advertisement. Here once again, the digital age far outstrips the powers of twentieth-century media.

One might object that, despite all this, the digital grand bargain remains freely chosen and welfare-enhancing. End users are free to use or not to use social media, and thus they are free to decide whether they will subject themselves to experimentation and emotional manipulation. If the free service is sufficiently valuable to them, they will accept the bargain.

5 "Facebook Admits Failings over Emotion Manipulation Study," BBC News, October 3, 2014, accessed September 27, 2018, https://www.bbc.com/news/technology-29475019 ("the company was widely criticised for manipulating material from people's personal lives in order to play with user emotions or make them sad").
6 Jonathan Zittrain, "Engineering an Election," Harvard Law Review Forum 127 (June 20, 2014): 335–36 (noting that experiment caused an additional 340,000 votes to be cast).
7 Ibid., 336 (describing the "ripple effects" of experiments).
8 See Mike Allen, "Sean Parker Unloads on Facebook: 'God Only Knows What It's Doing to Our Children's Brains,'" Axios, November 9, 2017, accessed September 27, 2018, https://www.axios.com/sean-parker-unloads-on-facebook-god-only-knows-what-its-doing-to-our-childrens-brains-1513306792-f855e7b4-4e99-4d60-8d51-2775559c2671.html (quoting statement by former president of Facebook that social media applications are designed to "exploit a vulnerability in human psychology" using psychological methods to "consume as much of your time and conscious attention as possible" and keep users locked into the site); Paul Lewis, "'Our Minds Can Be Hijacked': The Tech Insiders who Fear a Smartphone Dystopia," Guardian, October 6, 2017, accessed September 27, 2018, https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia (interviewing former employees at Google and Facebook who report that technologies are designed to addict users and monopolize their attention).

But this overlooks three important features of the emerging system of digital surveillance that make the assumption of a mutually beneficial arm's-length bargain highly implausible.

First, we cannot assume that transactions benefit both parties when there is extreme asymmetry of knowledge, in which one party's behaviors, beliefs, and activities are known to the other party while the other party is essentially a black box.

Second, individuals suffer from privacy myopia, a characteristic feature of digital interactions.9 Individuals constantly generate a broad range of information about themselves through digital interactions, of much of which (for example, location, social connections, timing of responses, and rate of keystrokes) they may be only dimly aware. Individuals have no way of valuing or assessing the risks produced by the collection of particular kinds of information about them or how that information might be employed in the future. That is because the value of such information is cumulative and connective. Information that seems entirely irrelevant or innocuous can, in conjunction with other information, yield surprisingly powerful insights about individual values, behavior, desires, weaknesses, and predispositions. Because individuals cannot assess the value of what they are giving up, one cannot assume that their decisions enhance their welfare. In this environment, the idea of relying on informed consumer choice to discipline social media companies is a fantasy.

Third, as noted above, information gathered from end users has significant external effects on third parties who are not parties to the bargain. As digital companies know more about you, they also can learn more about other people who are similar to you or connected to you in some respect.10 In the digital age, we do not simply inform on ourselves; we inform on other people as well. And when a social media company experiments with social moods or engineers an election, it affects not only its end users but many other people as well.

For all these reasons, it is fatuous to compare the digital grand bargain to a mutually beneficial arm's-length economic transaction.

9 See, e.g., Ryan Calo, "The Boundaries of Privacy Harm," Indiana Law Journal 86, no. 3 (October 4, 2011): 1131, 1149 ("Many consumers have little idea how much of their information they are giving up or how it will be used"); A. Michael Froomkin, "The Death of Privacy?" Stanford Law Review 52 (2000): 1461, 1502 ("Consumers suffer from privacy myopia: they will sell their data too often and too cheaply"); Daniel J. Solove, "Privacy and Power: Computer Databases and Metaphors for Information Privacy," Stanford Law Review 53 (2001): 1393, 1452 ("It is difficult for the individual to adequately value specific pieces of personal information").
10 See Tufekci, "Facebook's Surveillance Machine," explaining that Facebook collects "shadow profiles" on nonusers: "even if you are not on Facebook, the company may well have compiled a profile of you, inferred from data provided by your friends or from other data. This is an involuntary dossier from which you cannot opt out in the United States." Social media users may unwittingly imperil each other's privacy. The Cambridge Analytica scandal revealed that when Facebook users logged in to a third-party app using their Facebook credentials, they shared the social graphs of all of their Facebook friends without the latter's consent. See Alexandra Samuel, "The Shady Data-Gathering Tactics Used by Cambridge Analytica Were an Open Secret to Online Marketers. I Know, Because I Was One," The Verge, March 25, 2018, accessed September 27, 2018, https://www.theverge.com/2018/3/25/17161726/facebook-cambridge-analytica-data-online-marketers ("The tactic of collecting friend data, which has been featured prominently in the Cambridge Analytica coverage, was a well-known way of turning a handful of app users into a goldmine.").

If we can pay for digital freedom of expression while reducing the dangers of digital manipulation, it is worth exploring alternatives.

Public Options

Proposals for reform of social media abound these days. One kind of proposal argues that we should counter the power of social media and search engines by treating them as state actors. Courts should apply standard First Amendment doctrine to them and treat them as public forums, which require complete content and viewpoint neutrality. If social media cannot choose what we see, they cannot manipulate us.

This solution fails to grapple with the central problems of the grand bargain. First, treating social media as public forums would only affect the ability of social media themselves to manipulate end users. It would do nothing to prevent third parties from using social media to manipulate end users, stoke hatred, fear, and prejudice, or spread fake news. And because social media would be required to serve as neutral public forums, they could do little to stop this. Second, even if social media do not curate feeds, they still collect end-user data. That end-user data, in turn, can be harvested and sold to third parties, who can use it on the site or elsewhere. (That is why, for example, requiring social media companies to offer a tiered service in which people pay not to receive commercial advertisements does not really deal with the underlying problem of surveillance, data collection, and manipulation.)

Perhaps equally important, the proposal is unworkable. Social media—and search engines—must make all sorts of editorial and curational judgments that the First Amendment forbids government entities to make.

For example, social media sites might want to require that end users use their real names or easily identifiable pseudonyms in order to limit trolling and abuse. They might decide to ban hate speech or dehumanizing speech, especially if they operate around the world. They might choose to ban graphic violence, nudity, or pornography. They might choose to ban advocacy of violence or illegal conduct, or the promotion of suicide. They might decide to ban certain types of harassment or incitement even if that harassment or incitement does not immediately lead to a breach of the peace.11 They might ban certain forms of advertising. All of these regulations would be unconstitutional if a government imposed them in a public forum. More generally, we should accept that social media will have to make sometimes quite complicated decisions to discipline abusive trolls, maintain civility norms, demote the ranking of postings by conspiracy theorists and hate mongers, and, in cases of serial abuse, terminate accounts. Many of these policies would be unconstitutional if we applied the same standards to social media that the First Amendment applies to municipal streets and parks.

11 For examples of what social media sites regulate, see Facebook, Community Standards, https://www.facebook.com/communitystandards; and Twitter, The Twitter Rules, https://help.twitter.com/en/rules-and-policies/twitter-rules (both accessed September 27, 2018).

At a more basic level, it is impossible to manage a search engine or a social media site without curation, which involves a wide range of content-based judgments about what content to promote and what to demote.12 It is also impractical and self-defeating to manage a social media site without moderation, which requires the imposition of a wide range of civility rules that the First Amendment forbids governments from imposing in public discourse. Moreover, creating individualized social media feeds and search engine results inevitably requires content-based judgments. As described below, social media and search engines sometimes make bad decisions about these matters, but the solution is not to impose a set of doctrinal rules crafted for municipal streets and parks.

A second, related proposal argues that we should treat social media sites and search engines as public utilities because they perform what are clearly public functions. But public utility regulation—for example, of water and power utilities—generally focuses on two issues: access to essential services and fair pricing. Neither of these is particularly relevant. Social media and search engines want everyone to participate and they offer their services for free. If the goal of the public utility metaphor is to prevent content discrimination, it faces the same problems as treating digital media as state actors.

A third and quite different approach is public provisioning. Instead of treating existing private companies as arms of the state, governments could provide their own public options: government-run social media and search engines. For reasons stated above, these would not really work very well if they had to be organized as public forums and moderation was forbidden. There are potential solutions, however. The government could provide only a basic telecommunications system for social media messages and then allow various groups and businesses to create their own private moderation systems on top, from which individuals could choose. The government might also create an open system in which third parties could develop applications that allow people to design their own personalized feeds.

A government-provided search engine that is as efficient and effective as Google's is a somewhat harder lift and the cost of public provisioning for social media and search engines might be prohibitive. But public provisioning poses a far larger problem: state surveillance. Instead of Facebook and Google scooping up your personal data, the government would. The Fourth Amendment might not prohibit this under existing doctrines, because people willingly give the information to the public entity. Therefore any public provisioning system would have to be accompanied by very strict self-imposed restrictions on collection, analysis, and use. I am deeply skeptical that law enforcement and national security officials would willingly forgo access to all of this information.

Professional and Public-regarding Norms

We should not treat social media companies and search engines as state actors subject to the First Amendment. Yet we can still criticize them for arbitrariness and censorship. How is that possible if, as I have just explained, these companies must engage in content- and viewpoint-based judgments to do their jobs?

We can criticize social media companies in three ways, none of which requires us to treat them as state actors.

12 Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (New Haven, CT: Yale University Press, 2018).

First, we can criticize them for being opaque and non-transparent and for denying basic norms of fair process. This happens when social media do not state their criteria for governance clearly in advance and do not offer reasoned explanations for their decisions.

Second, we can criticize them for being arbitrary—for not living up to their own community guidelines and terms of service. They should apply their own rules without fear or favor to the rich and to the poor, the high and low alike. Twitter and Facebook, to name two examples, have often been lax with violations of their terms of service by famous or well-known people and strict with violations by people who are not famous or well known.13 This allows the more powerful and famous to abuse the less powerful with impunity and it creates blind spots in enforcement.

Third, and perhaps more important, we can criticize social media companies for failing to live up to norms of professionalism and expertise—that is, for failing to live up to the norms of the kind of entity they purport to be.

Here is an analogy. People criticize major newspapers and media outlets all the time. They criticize them for biased coverage, they criticize them for giving a platform to people who make stupid or evil arguments, and they criticize them for failing to educate the public about the issues of the day.

In most cases, people understand that these criticisms aren't supposed to lead to government regulation of newspapers and mass media. People understand that these companies have a First Amendment right to exercise editorial discretion as they see fit, even if they exercise it badly. Nevertheless, they hold these companies to a higher standard than ordinary individuals expressing their opinions. The public rightly assumes that media companies should live up to certain professional standards that are both public-regarding and connected to democratic life. These include, among other things, providing the public with important information necessary to self-government, striving to cover the news accurately and fairly, engaging in professional fact-checking, adhering to professional standards of journalistic ethics, and so on.

Many media organizations fail to live up to these standards, often spectacularly so. And some media organizations have essentially given up on professionalism, fairness, and accuracy. But people generally understand that this is a valid reason to criticize them, not to exculpate them. Media companies hold themselves out as adhering to professional and public-regarding norms. Therefore people in a democracy feel that they have a right to criticize them when, in their estimation, media fail to live up to those norms. Perhaps equally important, because the norms are public-regarding, citizens in a democracy feel that they have a right to debate what those professional norms should be, whether or not the media companies assume them or live up to them.

Social media companies and search engine companies are not newspapers. Even so, they are more than just run-of-the-mill companies. They do more than just serve ads or sell widgets. They perform a public service—three connected services, in fact. First, they facilitate public participation in art, politics, and culture.

13 See Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review 131 (April 10, 2018): 1598, 1654–55 (2018) (noting that social media companies may disproportionately favor people with power over other end users).

Second, they organize public conversation so that people can easily find and communicate with each other. Third, they curate public opinion through individualized results and feeds and through enforcing terms-of-service obligations and community guidelines.

These digital companies are the twenty-first-century successors of twentieth-century mass media companies, even though their functions are somewhat different. The public, not surprisingly, has come to view them as having a public-oriented mission.

In fact, these companies encourage this understanding through the ways they talk about themselves. The Twitter Rules, for example, begin with the statement, "We believe that everyone should have the power to create and share ideas and information instantly, without barriers. In order to protect the experience and safety of people who use Twitter, there are some limitations on the type of content and behavior that we allow."14 This is a statement of public-regarding, professional norms for facilitating public participation, organizing public discussion, and curating public opinion. Facebook and YouTube have made similar statements of purpose and justifications for their community guidelines, although their policies differ in some respects.15

Whether they imagined it or not at the outset, these companies have taken on a public function. People may therefore criticize them—and should criticize them—if they feel that these companies are acting contrary to appropriate professional norms.

Moreover, because these companies have taken on these three tasks—facilitating public participation, organizing public conversation, and curating public opinion—they may also impose basic civility norms against abuse, threats, and harassment. They may also ban hate speech or speech that denigrates people if they think that this kind of speech will undermine the public-regarding purposes of the site. Social media companies may do this even if the First Amendment would prevent the federal government from imposing the same civility norms on a government-operated social media site.

But if social media companies decide to govern their sites through imposing civility norms and regulating harassment and abuse, they should abide by the two other basic norms stated above. First, they should be transparent about what they are doing and why they are doing it. Second, they should not be arbitrary in their governance.

Social media companies have been only fitfully successful at meeting these obligations. Understood charitably, we might say that they are at the very beginning of a long process of learning how to be responsible professionals.

14 Twitter, the Twitter Rules.
15 Facebook, Community Standards, "We recognize how important it is for Facebook to be a place where people feel empowered to communicate, and we take our role in keeping abuse off our service seriously. That's why we have developed a set of Community Standards that outline what is and is not allowed on Facebook. . . . The goal of our Community Standards is to encourage expression and create a safe environment"; YouTube, Policies and Safety, accessed September 27, 2018, https://www.youtube.com/yt/about/policies/#community-guidelines, "When you use YouTube, you join a community of people from all over the world. . . . Following the guidelines below helps to keep YouTube fun and enjoyable for everyone."

They have been wildly successful as technology companies, but professionalism is more than technological expertise. Professional judgments may require the application of norms that do not scale well. Sometimes applying these norms will require judgment and individualized assessment as well as algorithmic sorting and bright-line rules. Doing this costs more in human resources and attention than purely technological solutions. To the extent that this is the case, social media companies should absorb the extra costs of being professionals and living up to professional norms. Although their efforts have been halting and often inadequate, social media companies are slowly beginning that arduous process. In the meantime, civil society can play an important role by continuing to criticize social media companies and by encouraging them to live up to their public responsibilities.

Reforming Social Media

I have already said that we should not use the law to force these companies to behave as public-regarding professionals any more than we can force major newspapers to adhere to proper journalistic standards. Does this mean that law has no role to play? No. The law may encourage these public-regarding norms in certain limited ways consistent with the First Amendment.

Instead of directly aiming at the editorial policies of social media companies, reform proposals should focus instead on the grand bargain that has turned the infrastructure of digital free expression into the infrastructure of digital surveillance and control. Social media companies will continue to cause a host of social problems as long as their business models cause them not to care about these problems.

There are two central ways to change their behavior. The first is to reshape the organization of social media companies. This is the task of antitrust and pro-competition law, which have grown moribund in the Second Gilded Age and need a serious rethinking.

Social media companies' perverse incentives derive from their business models—selling end users' information to advertisers and manipulating and addicting end users so that they spend more time on social media and are thus more accessible to advertisers. Because a small number of social media dominate end users' attention, they also have a stranglehold over digital advertising. People who wish to advertise online must operate primarily through Facebook's and Google's advertising networks. This reduces revenues for many news and media sites that are crucial to the health and vibrancy of the digital public sphere.

Increased enforcement of existing antitrust laws and a series of new pro-competition policies might have two salutary effects. First, these reforms might restructure how digital advertising operates, ameliorating the current bottleneck and freeing up revenues for a wider range of media companies. Second, reform of competition policy and stricter antitrust enforcement might break up the largest companies into smaller companies that can compete with each other or create a space for new competitors to emerge. (Facebook and Google have often bought up potential competitors before they could grow large enough to threaten them.)

More social media companies mean more platforms for innovation and more different software features and affordances. More companies might also make it more difficult for foreign hackers to disrupt the digital public sphere.

All other things being equal, it may be harder to hack twelve Facebooks than only one.16 Finally, more different kinds of companies might also provide more models for social spaces and communities and a wider variety of speech policies.

This last point is especially important. I have just argued that social media companies must be allowed to enforce civility norms and regulate or even ban a wide range of speech that state actors may not touch. But modern democracies increasingly rely on social media to perform the public functions of organizing public opinion and facilitating public discussion. Therefore it is very important to ensure that there are many social media applications and businesses in order to prevent a small number of powerful for-profit companies from dominating how public opinion is organized and governed.

Moreover, social media companies often enforce their terms of service imperfectly and arbitrarily and they may make many questionable judgments. Some, like Facebook, attempt to impose the same standards around the world.17 Finally, civil society organizations, mass media, politicians, and governments have put, and will continue to put, increasing pressure on social media to ban speech that they do not like and expel speakers who offend them. All of them, in various ways, will try to coax social media into serving their political or ideological agendas. These are all reasons for using pro-competition laws to ensure a healthy number of competing firms organizing public discourse. Precisely because people will demand that huge multinational corporations ban speech they do not like, it is important to have many Facebooks, not just one. If we expect social media sites to enforce civility norms, we also need multiple social media sites serving different values and different publics.

Information Fiduciaries

A second approach to reform is to make social media companies internalize the costs they impose on society through surveillance, addiction, and manipulation by giving them new social responsibilities. The short-term goal is to counteract the most egregious examples of bad behavior. The long-term goal is to create legal incentives for social media companies to develop professional cultures and public-oriented norms for organizing and curating public discussion. To do this, I propose reaching back to some very old ideas in the law that governs the professions: namely, the idea of fiduciary obligation.

We should treat social media companies—and many other digital media companies as well—as information fiduciaries toward their clients and end users.18

16 Sally Hubbard, "Fake News is a Real Antitrust Problem," CPI Antitrust Chronicle, December 2017: 5, accessed September 27, 2018, https://www.competitionpolicyinternational.com/wp-content/uploads/2017/12/CPI-Hubbard.pdf.
17 Klonick, "New Governors," 1642, describing Facebook's goal of applying its norms worldwide and the resulting compromises; Julia Angwin and Hannes Grassegger, "Facebook's Secret Censorship Rules Protect White Men from Hate Speech but Not Black Children," ProPublica, June 28, 2017, accessed September 27, 2018, https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms (describing Facebook's attempts to enforce its hate speech rules worldwide and the arbitrariness of its categories).
18 See Jack M. Balkin, "Information Fiduciaries and the First Amendment," UC Davis Law Review 49, no. 4 (April 2016): 1183; Jack M. Balkin, "The Three Laws of Robotics in the Age of Big Data," Ohio State Law Journal 78 (2017): 1217.

As information fiduciaries, digital companies should have duties of care, confidentiality, and loyalty toward the people whose data they collect, store, and use. This reform is a natural outgrowth of the grand bargain that has enabled free expression in the digital age.

Because of digital companies' increasing capacities for surveillance and control, they must take on new legal responsibilities. Put simply, digital companies know a lot about us, and they can use that knowledge in many ways—but we don't know a lot about them. Moreover, people increasingly depend on a wide range of digital services that observe them and collect data about them. That makes people increasingly vulnerable to these companies. Because the companies' operations are not transparent, people have to trust that these services will not betray them or manipulate them for their own ends. Digital companies that create and maintain this dependence and vulnerability should be considered information fiduciaries toward their end users.

There is plenty of precedent for this idea. For centuries, the law has recognized that certain people hold power over others who are vulnerable to them, dependent on them, and have to trust them. It created the idea of fiduciary obligations for just these situations.19 For example, the law has long maintained that the clients or patients of doctors and lawyers are in special relationships of dependence and vulnerability. We need to trust these professionals with sensitive personal information about ourselves, but the people we trust could use this same information to harm us and enrich themselves in many different ways. Therefore the law treats professionals like doctors, lawyers, accountants, and estate managers as fiduciaries. Fiduciary relationships require good faith and loyalty toward people whom the relationships place in special positions of vulnerability. Accordingly, fiduciaries have special duties of care, confidentiality, and loyalty toward their clients and beneficiaries.

Because social media companies collect so much data about their end users, use that data to predict and control what end users will do, and match them with third parties who may take advantage of end users, they are among the most important examples of the new information fiduciaries of the digital age. We should apply these traditional obligations to the changed conditions of a new technological era.

Facebook is not your doctor or lawyer. YouTube is not your accountant or estate manager. We should be careful to tailor the fiduciary obligations to the nature of the business and to the reasonable expectations of consumers. That means that social media companies' fiduciary duties will be more limited.

Social media companies and search engines provide free services in exchange for the right to collect and analyze personal data and serve targeted ads. This by itself does not violate fiduciary obligations. Nevertheless, it creates a perpetual conflict of interest between end users and social media companies. Companies will always be tempted to use the data they collect in ways that increase their profits to their end users' disadvantage.

19 See generally Tamar Frankel, Fiduciary Law (New York: Oxford University Press, 2011).

Unless we are to ban targeted advertising altogether (which I would oppose and which raises serious First Amendment problems) the goal should be to ameliorate or forestall conflicts of interest and impose duties of good faith and non-manipulation. That means that the law should limit how social media companies can make money off their end users, just as the law limits how other fiduciaries can make money off their clients and beneficiaries.

As information fiduciaries, social media companies have three major duties: duties of care, duties of confidentiality, and duties of loyalty. The duties of care and confidentiality require fiduciaries to secure customer data and not disclose it to anyone who does not agree to assume similar fiduciary obligations. In other words, fiduciary obligations must run with the data. The duty of loyalty means that fiduciaries must not seek to advantage themselves at their end users' expense and they must work to avoid creating conflicts of interest that will tempt them to do so. At base, the obligations of loyalty mean that digital fiduciaries may not act like con artists. They may not induce trust on the part of their user base and then turn around and betray that trust in order to benefit themselves.

To see what these obligations would mean in practice, we can use the Cambridge Analytica scandal that propelled the issue of social media regulation to public attention in the spring of 2018. Although the facts are complicated, they essentially involved Facebook's decision to allow third parties to access its end users' data.20 Facebook allowed researchers to do this for free and took a cut of the profits for business entities. This allowed it to leverage its central resource—consumer data—to increase profits.

Aleksandr Kogan, a data scientist, used a personality quiz to gain access to Facebook's end-user data. He thereby obtained not only the data of the 300,000 people who logged in using their Facebook credentials, but also all of their Facebook friends, an estimated 87 million people.21 In fact, Kogan was actually working for Cambridge Analytica, a for-profit political consulting company. Cambridge Analytica used the end-user data to produce psychological profiles that, in turn, it would use to target political advertisements to unsuspecting Facebook users. In fact, these practices were only the tip of a far larger iceberg.

20 See Carole Cadwalladr and Emma Graham-Harrison, "How Cambridge Analytica Turned Facebook 'Likes' into a Lucrative Political Tool," Guardian, March 17, 2018, accessed September 27, 2018, https://www.theguardian.com/technology/2018/mar/17/facebook-cambridge-analytica-kogan-data-algorithm; Carole Cadwalladr and Emma Graham-Harrison, "Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach," Guardian, March 17, 2018, accessed September 27, 2018, https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election; Paul Lewis, "'Utterly Horrifying': Ex-Facebook Insider Says Covert Data Harvesting Was Routine," Guardian, March 20, 2018, https://www.theguardian.com/news/2018/mar/20/facebook-data-cambridge-analytica-sandy-parakilas.
21 See Michael Riley, Sarah Frier, and Stephanie Baker, "Understanding the Facebook-Cambridge Analytica Story: QuickTake," Washington Post, April 9, 2018, accessed September 27, 2018, https://www.washingtonpost.com/business/understanding-the-facebook-cambridge-analytica-story-quicktake/2018/04/09/0f18d91c-3c1c-11e8-955b-7d2e19b79966_story.html (estimating that 300,000 people participated and that 87 million users had their data harvested).

Facebook made a series of unwise decisions to allow a range of business partners access to its end users' social graphs and thus make them vulnerable to various kinds of manipulation.22

As an information fiduciary, Facebook violated all three of its duties of care, confidentiality, and loyalty. It did not take sufficient care to vet its academic and business partners. It did not ensure that it only gave access to data to entities that would promise to maintain the same duties of care, confidentiality, and loyalty as Facebook. It did not take sufficient steps to audit and oversee the operations of these third parties to ensure that they did not violate the interests of its end users. It allowed third parties to manipulate its end users for profit. And when it discovered what had happened, many years later, it did not take sufficient steps to claw back its end users' data and protect them from further breaches of confidentiality and misuse.

Fiduciary obligations matter most in situations in which social media companies have powerful market incentives not to protect their end users: for example, when social media companies give access to data to third-party companies without adequate safeguards to prevent these third parties from manipulating end users. Fiduciary obligations also matter when social media companies perform social science experiments on their end-user base. Social media companies are not part of universities and therefore are not bound by human-subjects research obligations. As information fiduciaries, however, they would have legal duties not to create an unreasonable risk of harm to their end users or to the public for their own advantage. They would have duties, just as university scientists do, to minimize harm and prevent overreaching and manipulation by their employees and contractors.

Finally, if social media companies are information fiduciaries, they should also have a duty not to use end-user data to addict end users and psychologically manipulate them. Social media companies engage in manipulation when end users must provide information in order to use the service and when companies use this information to induce end-user decision making that benefits the company at the expense of the end user and causes harm to the end user. Because this creates a conflict of interest between the company and its end users, it violates the duty of loyalty.

It may be useful to compare the fiduciary approach with the privacy obligations of the European Union's General Data Protection Regulation (GDPR). There is considerable overlap between the two approaches. But the most important difference is that the GDPR relies heavily on securing privacy by obtaining end-user consent to individual transactions. In many respects, it is still based on a contractual model of privacy protection. Contractual models will prove insufficient if end users are unable to assess the cumulative risk of granting permission and therefore must depend on the good will of data processors. The fiduciary approach to obligation does not turn on consent to particular transactions, nor is it bounded by the precise terms of a company's written privacy policy or terms of service, which are easy for companies to modify.

22 Lewis, “Covert Data Harvesting Was Routine” (quoting a former Facebook employee who explained that under the company’s policies, “a majority of Facebook users” could have had their data harvested by app developers without their knowledge).

Rather, the fiduciary approach holds digital fiduciaries to obligations of good faith and non-manipulation regardless of what their privacy policies say.

The fiduciary approach is also consistent with the First Amendment. That is because it aims at regulating the relationships of vulnerability and trust between information fiduciaries and those who must trust them.23

The First Amendment treats information gained in the course of a fiduciary relationship differently from other kinds of information. Tell a secret to a person in the street and he or she can publish it tomorrow and even use it against your interests. But when you reveal information to a fiduciary—a doctor, nurse, or lawyer—he or she has to keep it confidential and cannot use it against you. Information gained in the course of a fiduciary relationship—and that includes the information that social media companies collect about us—is not part of the public discourse that receives standard First Amendment protection. Instead, the First Amendment allows governments to regulate fiduciaries' collection, collation, use, and distribution of personal information in order to prevent overreaching and to preserve trust and confidentiality.24 The same principle should apply to the new information fiduciaries of the digital age.

There may be close cases in which we cannot be sure whether a company really is acting as an information fiduciary. To deal with these situations, Jonathan Zittrain and I have proposed that Congress offer digital companies a different grand bargain to protect end users' privacy.25 It would create a safe harbor provision for companies that agree to assume fiduciary obligations. The federal government would preempt state regulation if digital media companies accept the obligations of information fiduciaries toward their end users. Offering this exchange does not violate the First Amendment.

For the most part, the fiduciary approach leaves social media companies free to decide how they want to curate and organize public discussion, focusing instead on protecting privacy and preventing incentives for betrayal and manipulation. It affects companies' curation and organization of public discourse only to the extent that companies violate their duties of care, confidentiality, and loyalty.

The fiduciary approach has many advantages. It is not tied to any particular technology. It can adapt to technological change. It can be implemented at the state or the federal level, and by judges, legislatures, or administrative agencies.

The fiduciary approach also meshes well with other forms of consumer protection, and it does not exclude other reforms, like GDPR-style privacy regulation. In particular, it does not get in the way of new pro-competition rules or increased antitrust enforcement as described above. That is because it does not turn on the size of an organization (although Congress might choose to regulate only the larger sites in order to encourage innovation and avoid barriers to entry). It also does not turn on the presence or absence of monopoly power.

23 On the First Amendment issues, see Balkin, "Information Fiduciaries and the First Amendment," 6.
24 Ibid.
25 Jack M. Balkin and Jonathan Zittrain, "A Grand Bargain to Make Tech Companies Trustworthy," Atlantic, October 3, 2016, accessed September 27, 2018, https://www.theatlantic.com/technology/archive/2016/10/information-fiduciary/502346.

Indeed, even if we had a wide range of social media companies, all harvesting, analyzing, and using end-user data, there would still be a need for fiduciary obligations to prevent overreaching. The fiduciary approach pays attention to deeper causes. It directs its attention to the political economy of digital media. It focuses on repairing the grand bargain that pays for the digital public sphere in the Second Gilded Age.



CONFERENCE AGENDA

FRIDAY, MAY 10

Pre-Dinner Remarks
WELCOME TO MIT
Founded in 1861, the Massachusetts Institute of Technology is one of America's premier institutions of higher education. With 7,000 graduate students and 5,000 undergraduates, it is poised to make a significant mark in the fields of artificial intelligence (AI) and the advancements of the digital age with its new $1 billion commitment to a College of Computing, set to open in September. The new College, with 50 new faculty positions, will work across MIT's existing five schools as part of a campus-wide effort to integrate computing and AI more deeply into the curriculum. MIT President Reif will welcome the group against this appropriate backdrop of MIT as the venue for our policy discussions.

L. Rafael Reif, President, Massachusetts Institute of Technology

Pre-Dinner Remarks
Remarks by Doug Beck, Vice President for the Americas and Northeast Asia, Apple, Inc.

Working Dinner
Seating is arranged to expose participants to a diverse range of views and provide the opportunity for a meaningful exchange of ideas. Scholars and lawmakers are rotated daily. Discussion will focus on the opportunities, challenges, and potential solutions regarding privacy and the Internet.

SATURDAY, MAY 11

Roundtable Discussion
THE BENEFITS AND HAZARDS OF ARTIFICIAL INTELLIGENCE ON TRANSPORTATION, HEALTH CARE, NATIONAL SECURITY, MANUFACTURING & THE WORKFORCE
Artificial intelligence has the potential to make a significant impact in numerous sectors of society. This session will survey the landscape of machine learning and the changes ahead brought about by the utilization and expansion of this technology in ever wider dimensions of everyday life.

Hal Abelson, Professor of Computer Science and Engineering, MIT
R. David Edelman, Director, Project on Technology, Economy, and National Security, MIT

Roundtable Discussion
ARTIFICIAL INTELLIGENCE, ALGORITHMS, FAIRNESS, AND THREATS TO PRIVACY
Today's online society is increasingly shaped by automated decision-making systems built on algorithms and artificial intelligence learning models. These models are developed by individuals and companies from a particular subset of our society and may not represent a fully accurate or fair view of the world. Mathematical models that increasingly intersect with citizens' daily activities are developed by human beings, and they can reflect hidden or deliberate biases. Machines, rather than humans, are making complex and morally difficult decisions on behalf of programmers, with consequences for free speech and nuanced thought. These same machines may even come to learn more about individuals than the individuals know about themselves. This unregulated new era of "Big Data" has implications for privacy and fairness that may require federal attention.
• How does this use of algorithms and Big Data impact citizens in areas such as hiring practices, job performance ratings, and credit scores?
• Are there built-in inequities that should be taken into account?
• Does government have a role in alerting consumers to threats to their privacy?

Joy Buolamwini, Founder, Algorithmic Justice League & PhD student, MIT Media Lab
Cathy O'Neil, Founder, ORCAA

Roundtable Discussion
THREATS TO DEMOCRACY IN THE DIGITAL AGE
Four subtopics deserve focus: surveillance, election integrity, misinformation and disinformation, and digital manipulation for malevolent purposes. The explosion of public cameras, installed for security purposes, has the potential to change the relationship between citizen and state. Nothing is more essential to the protection of democracy than fair and free elections. Yet, as the U.S. becomes more and more digitized and connected, as hackers take aim at our processes, and as foreign entities try to influence our elections, the integrity of the electoral process is jeopardized. The ease with which anyone can now manipulate information and images digitally opens up new realms of vulnerability with unknowable consequences.
• What actions can and should the U.S. Congress take to protect our freedoms and democratic rights given the explosive power of the Digital Age?
• Are citizens' rights infringed by the preponderance of public cameras?
• Will artificial intelligence enable a new era of state surveillance of citizens?
• Should online companies be subject to greater levels of liability, e.g., for defamation? If so, would these be onerous restrictions of a heavy-handed government limiting free speech or legitimate efforts to protect the public from harmful abuse?
• To what degree should governments be involved in monitoring or even regulating the spread of mis- and dis-information on the internet?
• What are the consequences for digitally spreading falsehoods?
• How do the boundaries of responsible free speech fit the Digital Age?
• Is freedom of expression in the digital world at odds with the maintenance of civic discourse?

Jonathan Zittrain, Professor of International Law, Harvard Law School
Ethan Zuckerman, Director, Center for Civic Media, MIT

SUNDAY, MAY 12

Roundtable Discussion
CONSUMERS' CONSENT AND CONTROL OF ONLINE INFORMATION
In our modern world, data is key. But who actually owns the data, and when or how one consents to having one's data collected, are disputed questions. For example, once an individual's data has been harvested and processed, through either voluntary or involuntary online interactions, it can be put to use in targeted consumer marketing campaigns, campaign advertisements, and individualized sales pitches. While individuals' comfort with these techniques varies, one thing is certain: marketing will never be the same. The explosive power of artificial intelligence is being harnessed for commercial advantage, which can be either advantageous or disadvantageous to the consumer, depending on one's perspective.
• Does consumers' use of social media expose them to the risk of exploitation?
• Is there a federal role to protect consumers from unwanted solicitations?

Howard Beales, Professor of Strategic Management and Public Policy, George Washington University
Alessandro Acquisti, Professor of Information Technology and Public Policy, Carnegie Mellon University

Roundtable Discussion
PROTECTING THE DRIVE FOR INNOVATION WITHIN THE BOUNDARIES OF THE NEED FOR REGULATION
Our economy is increasingly dependent on the Internet. Social media entities are incentivized to increase their user base. The major digital companies spent over $60 million on lobbying in 2018, and consolidation in the digital industry has raised questions about the power of dominant major players. Do economies of scale serve the consumer interest, or does the potential for market dominance work to the detriment of consumer choice and cost?
• What role does the federal government have in restraining the emergence of dominant major players in this industry?
• What can be done to enhance privacy protections?
• Are consumer concerns adequately taken into account by the industry?
• Do citizens have a right to conduct business online without leaving a digital footprint?

Larry Downes, Project Director, Georgetown Center for Business and Public Policy

Roundtable Discussion
BIG DATA'S END GAME: THE USE AND ABUSE OF CONSUMER DATA
Though not specified directly in the Constitution, privacy has emerged as a basic human right. Many feel that they have lost control over their personal information. They have. Those who collect information about their online users own it, not the customer. Some have called for personal ownership of the information about oneself. In Europe, there is a "right to be forgotten," which requires online search companies to delete information that a court decides should be forgotten. In the U.S., we have relied on the Federal Trade Commission and state law to protect privacy against unfair practices. But the European Union's General Data Protection Regulation, and now the state of California, have imposed greater privacy protections for online behavior than previously required. (For example, Google was fined $57 million by French regulators for breaking GDPR rules.) One solution is to require digital companies to act as "information fiduciaries" with a duty of care not to harm users.
• Do citizens have a right to maintain and control publicly available data about themselves?
• Is there a need to delineate legal boundaries on data use to protect privacy?
• What controls should Congress allow users to retain?
• Is it time for a federal privacy law for the online world?

Jack M. Balkin, Professor of Constitutional Law, Yale University Law School
Latanya Sweeney, Professor of Government and Technology, Harvard University

Working Lunch
EXPLORING PRIVACY IN THE PAST, PRESENT, AND FUTURE
The legal and social boundaries of privacy have changed over time and are based on different assumptions in different cultures and societies. Concepts about privacy rooted in the Constitution may need updating in this era of widespread digital communications, with implications for federal legislators.

Daniel Weitzner, Founding Director, MIT Internet Policy Research Initiative

MONDAY, MAY 13

All Participants Depart


CONFERENCE PARTICIPANTS

MEMBERS OF CONGRESS:
Representative Ted Budd
Representative Jim Cooper
Representative John Curtis and Susan Curtis
Representative Susan Davis
Representative Ted Deutch
Representative John Garamendi and Patti Garamendi
Representative Tom Graves and Julie Graves
Representative Rick Larsen and Tiia Karlén
Senator Ed Markey and Susan Blumenthal
Representative Jan Schakowsky and Bob Creamer
Representative Peter Welch
Senator Roger Wicker and Gayle Wicker

SCHOLARS AND SPEAKERS:

Hal Abelson, Professor of Computer Science and Engineering, MIT
Alessandro Acquisti, Professor of Information Technology and Public Policy, Carnegie Mellon University
Jack M. Balkin, Knight Professor of Constitutional Law and the First Amendment, Yale University Law School
Howard Beales, Professor of Strategic Management and Public Policy, George Washington University
Doug Beck, Vice President, Americas and Northeast Asia, Apple, Inc.
Joy Buolamwini, Founder, Algorithmic Justice League & PhD student, MIT Media Lab
Larry Downes, Project Director, Georgetown Center for Business and Public Policy
R. David Edelman, Director, Project on Technology, Economy and National Security, MIT
Charlie Firestone, Executive Director, Aspen Institute Communications and Society Program
Kristine Gloria-Garcia, Associate Director, Aspen Institute Communications and Society Program
L. Rafael Reif, President, Massachusetts Institute of Technology
Cathy O'Neil, Founder, O'Neil Risk Consulting and Algorithmic Auditing
Daniel Weitzner, Founding Director, MIT Internet Policy Research Initiative
Jonathan Zittrain, George Bemis Professor of International Law, Harvard Law School
Ethan Zuckerman, Director, Center for Civic Media, MIT Media Lab

FOUNDATION REPRESENTATIVES:

Keesha Gaskins-Nathan, Director, Democratic Practice, Rockefeller Brothers Fund
Tom Glaisyer, Managing Director, Public Square Program, The Democracy Fund

RAPPORTEUR:

Grace Abuhamad, Graduate student, Technology and Policy Program, MIT

ASPEN INSTITUTE:

Dan Glickman, Executive Director, Congressional Program, and Rhoda Glickman
Brian Hopkins, Program Development Coordinator
Lauren Kennedy, Manager of Congressional Engagement
Bill Nell, Deputy Director, Congressional Program
Carrie Rowell, Conference Director, Congressional Program
