
The Fourth Industrial Revolution: A Davos Reader


January 2016 Introduction Gideon Rose

January 2016 The Fourth Industrial Revolution Klaus Schwab

THE DRIVERS

November/December 2012 How to Make Almost Anything The Digital Fabrication Revolution Neil Gershenfeld

March/April 2014 As Objects Go Online The Promise (and Pitfalls) of the Internet of Things Neil Gershenfeld and JP Vasseur

May/June 2013 The Rise of Big Data How It’s Changing the Way We Think About the World Kenneth Neil Cukier and Viktor Mayer-Schoenberger

March/April 2014 The Mobile-Finance Revolution How Cell Phones Can Spur Development Jake Kendall and Rodger Voorhies

November/December 2013 Biology’s Brave New World The Promise and Perils of the Synbio Revolution Laurie Garrett

July/August 2015 The Robots Are Coming How Technological Breakthroughs Will Transform Everyday Life Daniela Rus

THE IMPACT

July/August 2014 New World Order Labor, Capital, and Ideas in the Power Law Economy Erik Brynjolfsson, Andrew McAfee, and Michael Spence

July/August 2015 Will Humans Go the Way of Horses? Labor in the Second Machine Age Erik Brynjolfsson and Andrew McAfee

July/August 2015 Same as It Ever Was Why the Techno-optimists Are Wrong Martin Wolf

October 31, 2014 The Future of Cities The Internet of Everything Will Change How We Live John Chambers and Wim Elfrink

July/August 2015 The Coming Robot Dystopia All Too Inhuman Illah Reza Nourbakhsh

January/February 2011 The Political Power of Social Media Technology, the Public Sphere, and Political Change Clay Shirky

March/April 2011 From Innovation to Revolution Do Social Media Make Protests Possible? Malcolm Gladwell and Clay Shirky

THE POLICY CHALLENGES

July/August 2015 The Next Safety Net Social Policy for a Digital Age Nicolas Colin and Bruno Palier

August 12, 2015 The Moral Code How To Teach Robots Right and Wrong Nayef Al-Rodhan

March/April 2014 Privacy Pragmatism Focus on Data Use, Not Data Collection Craig Mundie

January/February 2015 The Power of Market Creation How Innovation Can Spur Development Bryan C. Mezue, Clayton M. Christensen, and Derek van Bever

January/February 2015 The Innovative State Governments Should Make Markets, Not Just Fix Them Mariana Mazzucato

November/December 2015 Food and the Transformation of Africa Getting Smallholders Connected Kofi Annan and Sam Dryden

Introduction

Gideon Rose January 20, 2016

From social media to the Internet of Things, digital fabrication to robotics, virtual reality to artificial intelligence, new technologies are racing forward across the board. Together they are ripping up the rule book for people, firms, and governments alike. Mastering this so-called Fourth Industrial Revolution is the theme of the World Economic Forum’s 2016 Annual Meeting, for which this special collection serves as background reading.

Klaus Schwab kicks things off with an overview of the topic, followed by sections on the technological trends driving the revolution; those trends’ economic, social, and political impacts; and the resulting challenges for policy. Drawn from the pages of Foreign Affairs and the pixels of ForeignAffairs.com, the articles feature world-class experts explaining crucial issues clearly, directly, and authoritatively.

Read Neil Gershenfeld on 3-D printing, John Chambers on the Internet of Things, Daniela Rus and Illah Nourbakhsh on robotics, Laurie Garrett on synthetic biology, and Kenneth Cukier and Viktor Mayer-Schoenberger on big data. Follow debates between Martin Wolf and Erik Brynjolfsson, Andrew McAfee, and Michael Spence on how new the machine age really is, and between Clay Shirky and Malcolm Gladwell on the political power of social media. Learn what Clayton Christensen thinks about the prospects of entrepreneurial innovation in the developing world, how Craig Mundie sees the future of privacy protection, and why Kofi Annan and Sam Dryden believe IT is transforming African agriculture. We’re delighted to showcase all these and other highlights of our coverage of a rapidly changing world. They’ll bring you up to date on some of the most important developments going on around us. But at this rate, by the time we’ve truly gotten a handle on the Fourth Industrial Revolution, we’ll probably be well on the way to the Fifth.

GIDEON ROSE is Editor of Foreign Affairs.

The Fourth Industrial Revolution

What It Means and How to Respond

Klaus Schwab January 20, 2016

We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before. We do not yet know just how it will unfold, but one thing is clear: the response to it must be integrated and comprehensive, involving all stakeholders of the global polity, from the public and private sectors to academia and civil society.

The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.

There are three reasons why today’s transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.

The possibilities of billions of people connected by mobile devices, with unprecedented processing power, storage capacity, and access to knowledge, are unlimited. And these possibilities will be multiplied by emerging technology breakthroughs in fields such as artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing.

Already, artificial intelligence is all around us, from self-driving cars and drones to virtual assistants and software that translate or invest. Impressive progress has been made in AI in recent years, driven by exponential increases in computing power and by the availability of vast amounts of data, from software used to discover new drugs to algorithms used to predict our cultural interests. Digital fabrication technologies, meanwhile, are interacting with the biological world on a daily basis. Engineers, designers, and architects are combining computational design, additive manufacturing, materials engineering, and synthetic biology to pioneer a symbiosis between microorganisms, our bodies, the products we consume, and even the buildings we inhabit.

CHALLENGES AND OPPORTUNITIES

Like the revolutions that preceded it, the Fourth Industrial Revolution has the potential to raise global income levels and improve the quality of life for populations around the world. To date, those who have gained the most from it have been consumers able to afford and access the digital world; technology has made possible new products and services that increase the efficiency and pleasure of our personal lives. Ordering a cab, booking a flight, buying a product, making a payment, listening to music, watching a film, or playing a game—any of these can now be done remotely.

In the future, technological innovation will also lead to a supply-side miracle, with long-term gains in efficiency and productivity. Transportation and communication costs will drop, logistics and global supply chains will become more effective, and the cost of trade will diminish, all of which will open new markets and drive economic growth.

At the same time, as the economists Erik Brynjolfsson and Andrew McAfee have pointed out, the revolution could yield greater inequality, particularly in its potential to disrupt labor markets. As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor. On the other hand, it is also possible that the displacement of workers by technology will, in aggregate, result in a net increase in safe and rewarding jobs.

We cannot foresee at this point which scenario is likely to emerge, and history suggests that the outcome is likely to be some combination of the two. However, I am convinced of one thing—that in the future, talent, more than capital, will represent the critical factor of production. This will give rise to a job market increasingly segregated into “low-skill/low-pay” and “high-skill/high-pay” segments, which in turn will lead to an increase in social tensions.

In addition to being a key economic concern, inequality represents the greatest societal concern associated with the Fourth Industrial Revolution. The largest beneficiaries of innovation tend to be the providers of intellectual and physical capital—the innovators, shareholders, and investors—which explains the rising gap in wealth between those dependent on capital versus labor. Technology is therefore one of the main reasons why incomes have stagnated, or even decreased, for a majority of the population in high-income countries: the demand for highly skilled workers has increased while the demand for workers with less education and lower skills has decreased. The result is a job market with a strong demand at the high and low ends, but a hollowing out of the middle.

This helps explain why so many workers are disillusioned and fearful that their own real incomes and those of their children will continue to stagnate. It also helps explain why middle classes around the world are increasingly experiencing a pervasive sense of dissatisfaction and unfairness. A winner-takes-all economy that offers only limited access to the middle class is a recipe for democratic malaise and dereliction.

Discontent can also be fueled by the pervasiveness of digital technologies and the dynamics of information sharing typified by social media. More than 30 percent of the global population now uses social media platforms to connect, learn, and share information. In an ideal world, these interactions would provide an opportunity for cross-cultural understanding and cohesion. However, they can also create and propagate unrealistic expectations as to what constitutes success for an individual or a group, as well as offer opportunities for extreme ideas and ideologies to spread.

THE IMPACT ON BUSINESS

An underlying theme in my conversations with global CEOs and senior business executives is that the acceleration of innovation and the velocity of disruption are hard to comprehend or anticipate and that these drivers constitute a source of constant surprise, even for the best connected and most well informed. Indeed, across all industries, there is clear evidence that the technologies that underpin the Fourth Industrial Revolution are having a major impact on businesses.

On the supply side, many industries are seeing the introduction of new technologies that create entirely new ways of serving existing needs and significantly disrupt existing industry value chains. Disruption is also flowing from agile, innovative competitors who, thanks to access to global digital platforms for research, development, marketing, sales, and distribution, can oust well-established incumbents faster than ever by improving the quality, speed, or price at which value is delivered.

Major shifts on the demand side are also occurring, as growing transparency, consumer engagement, and new patterns of consumer behavior (increasingly built upon access to mobile networks and data) force companies to adapt the way they design, market, and deliver products and services.

A key trend is the development of technology-enabled platforms that combine both demand and supply to disrupt existing industry structures, such as those we see within the “sharing” or “on demand” economy. These technology platforms, rendered easy to use by the smartphone, convene people, assets, and data—thus creating entirely new ways of consuming goods and services in the process. In addition, they lower the barriers for businesses and individuals to create wealth, altering the personal and professional environments of workers. These new platform businesses are rapidly multiplying into many new services, ranging from laundry to shopping, from chores to parking, from massages to travel.

On the whole, there are four main effects that the Fourth Industrial Revolution has on business—on customer expectations, on product enhancement, on collaborative innovation, and on organizational forms. Whether consumers or businesses, customers are increasingly at the epicenter of the economy, which is all about improving how customers are served. Physical products and services, moreover, can now be enhanced with digital capabilities that increase their value. New technologies make assets more durable and resilient, while data and analytics are transforming how they are maintained. A world of customer experiences, data-based services, and asset performance through analytics, meanwhile, requires new forms of collaboration, particularly given the speed at which innovation and disruption are taking place. And the emergence of global platforms and other new business models, finally, means that talent, culture, and organizational forms will have to be rethought.

Overall, the inexorable shift from simple digitization (the Third Industrial Revolution) to innovation based on combinations of technologies (the Fourth Industrial Revolution) is forcing companies to reexamine the way they do business. The bottom line, however, is the same: business leaders and senior executives need to understand their changing environment, challenge the assumptions of their operating teams, and relentlessly and continuously innovate.

THE IMPACT ON GOVERNMENT

As the physical, digital, and biological worlds continue to converge, new technologies and platforms will increasingly enable citizens to engage with governments, voice their opinions, coordinate their efforts, and even circumvent the supervision of public authorities. Simultaneously, governments will gain new technological powers to increase their control over populations, based on pervasive surveillance systems and the ability to control digital infrastructure. On the whole, however, governments will increasingly face pressure to change their current approach to public engagement and policymaking, as their central role of conducting policy diminishes owing to new sources of competition and the redistribution and decentralization of power that new technologies make possible.

Ultimately, the ability of government systems and public authorities to adapt will determine their survival. If they prove capable of embracing a world of disruptive change, subjecting their structures to the levels of transparency and efficiency that will enable them to maintain their competitive edge, they will endure. If they cannot evolve, they will face increasing trouble.

This will be particularly true in the realm of regulation. Current systems of public policy and decision-making evolved alongside the Second Industrial Revolution, when decision-makers had time to study a specific issue and develop the necessary response or appropriate regulatory framework. The whole process was designed to be linear and mechanistic, following a strict “top down” approach.

But such an approach is no longer feasible. Given the Fourth Industrial Revolution’s rapid pace of change and broad impacts, legislators and regulators are being challenged to an unprecedented degree and for the most part are proving unable to cope.

How, then, can they preserve the interest of the consumers and the public at large while continuing to support innovation and technological development? By embracing “agile” governance, just as the private sector has increasingly adopted agile responses to software development and business operations more generally. This means regulators must continuously adapt to a new, fast-changing environment, reinventing themselves so they can truly understand what it is they are regulating. To do so, governments and regulatory agencies will need to collaborate closely with business and civil society.

The Fourth Industrial Revolution will also profoundly impact the nature of national and international security, affecting both the probability and the nature of conflict. The history of warfare and international security is the history of technological innovation, and today is no exception. Modern conflicts involving states are increasingly “hybrid” in nature, combining traditional battlefield techniques with elements previously associated with nonstate actors. The distinction between war and peace, combatant and noncombatant, and even violence and nonviolence (think cyberwarfare) is becoming uncomfortably blurry.

As this process takes place and new technologies such as autonomous or biological weapons become easier to use, individuals and small groups will increasingly join states in being capable of causing mass harm. This new vulnerability will lead to new fears. But at the same time, advances in technology will create the potential to reduce the scale or impact of violence, through the development of new modes of protection, for example, or greater precision in targeting.

THE IMPACT ON PEOPLE

The Fourth Industrial Revolution, finally, will change not only what we do but also who we are. It will affect our identity and all the issues associated with it: our sense of privacy, our notions of ownership, our consumption patterns, the time we devote to work and leisure, and how we develop our careers, cultivate our skills, meet people, and nurture relationships. It is already changing our health and leading to a “quantified” self, and sooner than we think it may lead to human augmentation. The list is endless because it is bound only by our imagination.

I am a great enthusiast and early adopter of technology, but sometimes I wonder whether the inexorable integration of technology in our lives could diminish some of our quintessential human capacities, such as compassion and cooperation. Our relationship with our smartphones is a case in point. Constant connection may deprive us of one of life’s most important assets: the time to pause, reflect, and engage in meaningful conversation.

One of the greatest individual challenges posed by new information technologies is privacy. We instinctively understand why it is so essential, yet the tracking and sharing of information about us is a crucial part of the new connectivity. Debates about fundamental issues such as the impact on our inner lives of the loss of control over our data will only intensify in the years ahead. Similarly, the revolutions occurring in biotechnology and AI, which are redefining what it means to be human by pushing back the current thresholds of life span, health, cognition, and capabilities, will compel us to redefine our moral and ethical boundaries.

SHAPING THE FUTURE

Neither technology nor the disruption that comes with it is an exogenous force over which humans have no control. All of us are responsible for guiding its evolution, in the decisions we make on a daily basis as citizens, consumers, and investors. We should thus grasp the opportunity and power we have to shape the Fourth Industrial Revolution and direct it toward a future that reflects our common objectives and values.

To do this, however, we must develop a comprehensive and globally shared view of how technology is affecting our lives and reshaping our economic, social, cultural, and human environments. There has never been a time of greater promise, or one of greater potential peril. Today’s decision-makers, however, are too often trapped in traditional, linear thinking, or too absorbed by the multiple crises demanding their attention, to think strategically about the forces of disruption and innovation shaping our future.

In the end, it all comes down to people and values. We need to shape a future that works for all of us by putting people first and empowering them. In its most pessimistic, dehumanized form, the Fourth Industrial Revolution may indeed have the potential to “robotize” humanity and thus to deprive us of our heart and soul. But as a complement to the best parts of human nature—creativity, empathy, stewardship—it can also lift humanity into a new collective and moral consciousness based on a shared sense of destiny. It is incumbent on us all to make sure the latter prevails.

Klaus Schwab is Founder and Executive Chairman of the World Economic Forum.

How to Make Almost Anything

The Digital Fabrication Revolution

Neil Gershenfeld November/December 2012

CHRISTIAN HARTMANN / REUTERS French engineer and professional violinist Laurent Bernadac plays the "3Dvarius", a 3D printed violin made of transparent resin, during an interview with Reuters, September 11, 2015.

A new digital revolution is coming, this time in fabrication. It draws on the same insights that led to the earlier digitizations of communication and computation, but now what is being programmed is the physical world rather than the virtual one. Digital fabrication will allow individuals to design and produce tangible objects on demand, wherever and whenever they need them. Widespread access to these technologies will challenge traditional models of business, aid, and education.

The roots of the revolution date back to 1952, when researchers at the Massachusetts Institute of Technology (MIT) wired an early digital computer to a milling machine, creating the first numerically controlled machine tool. By using a computer program instead of a machinist to turn the screws that moved the metal stock, the researchers were able to produce aircraft components with shapes that were more complex than could be made by hand. From that first revolving end mill, all sorts of cutting tools have been mounted on computer-controlled platforms, including jets of water carrying abrasives that can cut through hard materials, lasers that can quickly carve fine features, and slender electrically charged wires that can make long thin cuts.
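What replacing the machinist with a program amounts to is computing toolpaths. As a purely illustrative sketch (the dimensions are invented, and real machines consume controller formats such as G-code), the following Python generates the sequence of positions a controller could step a cutter through to trace a circle, a shape hard to produce accurately with hand-turned screws.

import math

def circular_toolpath(radius_mm, depth_mm, steps=72):
    """Yield (x, y, z) cutter positions approximating one circular pass."""
    for i in range(steps + 1):
        angle = 2 * math.pi * i / steps
        yield (radius_mm * math.cos(angle), radius_mm * math.sin(angle), -depth_mm)

# Print the move list a controller would execute instead of a machinist's hands.
for x, y, z in circular_toolpath(radius_mm=25.0, depth_mm=3.0, steps=12):
    print(f"move to X{x:7.3f} Y{y:7.3f} Z{z:6.3f}")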

Today, numerically controlled machines touch almost every commercial product, whether directly (producing everything from laptop cases to jet engines) or indirectly (producing the tools that mold and stamp mass-produced goods). And yet all these modern descendants of the first numerically controlled machine tool share its original limitation: they can cut, but they cannot reach internal structures. This means, for example, that the axle of a wheel must be manufactured separately from the bearing it passes through.

In the 1980s, however, computer-controlled fabrication processes that added rather than removed material (called additive manufacturing) came on the market. Thanks to 3-D printing, a bearing and an axle could be built by the same machine at the same time. A range of 3-D printing processes are now available, including thermally fusing filaments, using ultraviolet light to cross-link polymer resins, depositing adhesive droplets to bind a powder, cutting and laminating sheets of paper, and shining a laser beam to fuse metal particles. Businesses already use 3-D printers to model products before producing them, a process referred to as rapid prototyping. Companies also rely on the technology to make objects with complex shapes, such as jewelry and medical implants. Research groups have even used 3-D printers to build structures out of cells with the goal of printing living organs.

Additive manufacturing has been widely hailed as a revolution, featured on the cover of publications from Wired to The Economist. This is, however, a curious sort of revolution, proclaimed more by its observers than its practitioners. In a well-equipped workshop, a 3-D printer might be used for about a quarter of the jobs, with other machines doing the rest. One reason is that the printers are slow, taking hours or even days to make things. Other computer-controlled tools can produce parts faster, or with finer features, or that are larger, lighter, or stronger. Glowing articles about 3-D printers read like the stories in the 1950s that proclaimed that microwave ovens were the future of cooking. Microwaves are convenient, but they don’t replace the rest of the kitchen.

The revolution is not additive versus subtractive manufacturing; it is the ability to turn data into things and things into data. That is what is coming; for some perspective, there is a close analogy with the history of computing. The first step in that development was the arrival of large mainframe computers in the 1950s, which only corporations, governments, and elite institutions could afford. Next came the development of minicomputers in the 1960s, led by Digital Equipment Corporation’s PDP family of computers, which was based on MIT’s first transistorized computer, the TX-0. These brought down the cost of a computer from hundreds of thousands of dollars to tens of thousands. That was still too much for an individual but was affordable for research groups, university departments, and smaller companies. The people who used these devices developed the applications for just about everything one does now on a computer: sending e-mail, writing in a word processor, playing video games, listening to music. After minicomputers came hobbyist computers. The best known of these, the MITS Altair 8800, was sold in 1975 for about $1,000 assembled or about $400 in kit form. Its capabilities were rudimentary, but it changed the lives of a generation of computing pioneers, who could now own a machine individually. Finally, computing truly turned personal with the appearance of the IBM personal computer in 1981. It was relatively compact, easy to use, useful, and affordable.

Just as with the old mainframes, only institutions can afford the modern versions of the early bulky and expensive computer-controlled milling devices. In the 1980s, first-generation rapid prototyping systems from companies such as 3D Systems, Stratasys, Epilog Laser, and Universal brought the price of computer-controlled manufacturing systems down from hundreds of thousands of dollars to tens of thousands, making them attractive to research groups. The second-generation digital fabrication products on the market now, such as the RepRap, the MakerBot, the Ultimaker, the PopFab, and the MTM Snap, sell for thousands of dollars assembled or hundreds of dollars as parts. Unlike the digital fabrication tools that came before them, these tools have plans that are typically freely shared, so that those who own the tools (like those who owned the hobbyist computers) can not only use them but also make more of them and modify them. Integrated personal digital fabricators comparable to the personal computer do not yet exist, but they will.

Personal fabrication has been around for years as a science-fiction staple. When the crew of the TV series Star Trek: The Next Generation was confronted by a particularly challenging plot development, they could use the onboard replicator to make whatever they needed. Scientists at a number of labs (including mine) are now working on the real thing, developing processes that can place individual atoms and molecules into whatever structure they want. Unlike 3-D printers today, these will be able to build complete functional systems at once, with no need for parts to be assembled. The aim is to not only produce the parts for a drone, for example, but build a complete vehicle that can fly right out of the printer. This goal is still years away, but it is not necessary to wait: most of the computer functions one uses today were invented in the minicomputer era, long before they would flourish in the era of personal computing. Similarly, although today’s digital manufacturing machines are still in their infancy, they can already be used to make (almost) anything, anywhere. That changes everything.

THINK GLOBALLY, FABRICATE LOCALLY

I first appreciated the parallel between personal computing and personal fabrication when I taught a class called “How to Make (almost) Anything” at MIT’s Center for Bits and Atoms, which I direct. CBA, which opened in 2001 with funding from the National Science Foundation, was developed to study the boundary between computer science and physical science. It runs a facility that is equipped to make and measure things that are as small as atoms or as large as buildings.

We designed the class to teach a small group of research students how to use CBA’s tools but were overwhelmed by the demand from students who just wanted to make things. Each student later completed a semester-long project to integrate the skills they had learned. One made an alarm clock that the groggy owner would have to wrestle with to prove that he or she was awake. Another made a dress fitted with sensors and motorized spine-like structures that could defend the wearer’s personal space. The students were answering a question that I had not asked: What is digital fabrication good for? As it turns out, the “killer app” in digital fabrication, as in computing, is personalization, producing products for a market of one person.

Inspired by the success of that first class, in 2003, CBA began an outreach project with support from the National Science Foundation. Rather than just describe our work, we thought it would be more interesting to provide the tools. We assembled a kit of about $50,000 worth of equipment (including a computer-controlled laser, a 3-D printer, and large and small computer-controlled milling machines) and about $20,000 worth of materials (including components for molding and casting parts and producing electronics). All the tools were connected by custom software. These became known as “fab labs” (for “fabrication labs” or “fabulous labs”). Their cost is comparable to that of a minicomputer, and we have found that they are used in the same way: to develop new uses and new users for the machines.

Starting in December of 2003, a CBA team led by Sherry Lassiter, a colleague of mine, set up the first fab lab at the South End Technology Center, in inner-city Boston. SETC is run by Mel King, an activist who has pioneered the introduction of new technologies to urban communities, from video production to Internet access. For him, digital fabrication machines were a natural next step. For all the differences between the MIT campus and the South End, the responses at both places were equally enthusiastic. A group of girls from the area used the tools in the lab to put on a high-tech street-corner craft sale, simultaneously having fun, expressing themselves, learning technical skills, and earning income. Some of the homeschooled children in the neighborhood who have used the fab lab for hands-on training have since gone on to careers in technology.

The SETC fab lab was all we had planned for the outreach project. But thanks to interest from a Ghanaian community around SETC, in 2004, CBA, with National Science Foundation support and help from a local team, set up a second fab lab in the town of Sekondi-Takoradi, on Ghana’s coast. Since then, fab labs have been installed everywhere from South Africa to Norway, from downtown Detroit to rural India. In the past few years, the total number has doubled about every 18 months, with over 100 in operation today and that many more being planned. These labs form part of a larger “maker movement” of high-tech do-it-yourselfers, who are democratizing access to the modern means to make things.

Local demand has pulled fab labs worldwide. Although there is a wide range of sites and funding models, all the labs share the same core capabilities. That allows projects to be shared and people to travel among the labs. Providing Internet access has been a goal of many fab labs. From the Boston lab, a project was started to make antennas, radios, and terminals for wireless networks. The design was refined at a fab lab in Norway, was tested at one in South Africa, was deployed from one in Afghanistan, and is now running on a self-sustaining commercial basis in Kenya. None of these sites had the critical mass of knowledge to design and produce the networks on its own. But by sharing design files and producing the components locally, they could all do so together. The ability to send data across the world and then locally produce products on demand has revolutionary implications for industry.

The first Industrial Revolution can be traced back to 1761, when the Bridgewater Canal opened in Manchester, England. Commissioned by the Duke of Bridgewater to bring coal from his mines in Worsley to Manchester and to ship products made with that coal out to the world, it was the first canal that did not follow an existing waterway. Thanks to the new canal, Manchester boomed. In 1783, the town had one cotton mill; in 1853, it had 108. But the boom was followed by a bust. The canal was rendered obsolete by railroads, then trucks, and finally containerized shipping. Today, industrial production is a race to the bottom, with manufacturers moving to the lowest-cost locations to feed global supply chains.

Now, Manchester has an innovative fab lab that is taking part in a new industrial revolution. A design created there can be sent electronically anywhere in the world for on-demand production, which effectively eliminates the cost of shipping. And unlike the old mills, the means of production can be owned by anyone.

Why might one want to own a digital fabrication machine? Personal fabrication tools have been considered toys, because the incremental cost of mass production will always be lower than for one-off goods. A similar charge was leveled against personal computers. Ken Olsen, founder and CEO of the minicomputer-maker Digital Equipment Corporation, famously said in 1977 that “there is no reason for any individual to have a computer in his home.” His company is now defunct. You most likely own a personal computer. It isn’t there for inventory and payroll; it is for doing what makes you yourself: listening to music, talking to friends, shopping. Likewise, the goal of personal fabrication is not to make what you can buy in stores but to make what you cannot buy. Consider shopping at IKEA. The furniture giant divines global demand for furniture and then produces and ships items to its big-box stores. For just thousands of dollars, individuals can already purchase the kit for a large-format computer-controlled milling machine that can make all the parts in an IKEA flat-pack box. If having the machine saved just ten IKEA purchases, its expense could be recouped. Even better, each item produced by the machine would be customized to fit the customer’s preference. And rather than employing people in remote factories, making furniture this way is a local affair.

This last observation inspired the Fab City project, which is led by Barcelona’s chief architect, Vicente Guallart. Barcelona, like the rest of Spain, has a youth unemployment rate of over 50 percent. An entire generation there has few prospects for getting jobs and leaving home. Rather than purchasing products produced far away, the city, with Guallart, is deploying fab labs in every district as part of the civic infrastructure. The goal is for the city to be globally connected for knowledge but self-sufficient for what it consumes.

The digital fabrication tools available today are not in their final form. But rather than wait, programs like Barcelona’s are building the capacity to use them as they are being developed.

BITS AND ATOMS

In common usage, the term “digital fabrication” refers to processes that use the computer-controlled tools that are the descendants of MIT’s 1952 numerically controlled mill. But the “digital” part of those tools resides in the controlling computer; the materials themselves are analog. A deeper meaning of “digital fabrication” is manufacturing processes in which the materials themselves are digital. A number of labs (including mine) are developing digital materials for the future of fabrication.

The distinction is not merely semantic. Telephone calls used to degrade with distance because they were analog: any errors from noise in the system would accumulate. Then, in 1937, the mathematician Claude Shannon wrote what was arguably the best-ever master’s thesis, at MIT. In it, he proved that on-off switches could compute any logical function. He applied the idea to telephony in 1938, while working at Bell Labs. He showed that by converting a call to a code of ones and zeros, a message could be sent reliably even in a noisy and imperfect system. The key difference is error correction: if a one becomes a 0.9 or a 1.1, the system can still distinguish it from a zero.
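Shannon’s point can be illustrated in a few lines of code. In this sketch (the message, noise level, and hop count are invented for illustration), an analog relay lets error accumulate hop after hop, while a digital relay re-decides each one and zero at every hop, so small errors never build up.

import random

def noisy(value, noise=0.05):
    """Simulate one link that adds a small random error to a value."""
    return value + random.uniform(-noise, noise)

def relay_analog(signal, hops):
    """Analog relay: the error from every hop accumulates."""
    for _ in range(hops):
        signal = [noisy(v) for v in signal]
    return [round(v, 3) for v in signal]

def relay_digital(signal, hops):
    """Digital relay: each hop re-decides 0 or 1, discarding the noise."""
    for _ in range(hops):
        signal = [1 if noisy(v) > 0.5 else 0 for v in signal]
    return signal

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(relay_analog(message, hops=50))   # values drift away from clean 0s and 1s
print(relay_digital(message, hops=50))  # exactly the original bits; small errors never cross the threshold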

At MIT, Shannon’s research had been motivated by the difficulty of working with a giant mechanical analog computer. It used rotating wheels and disks, and its answers got worse the longer it ran. Researchers, including John von Neumann, Jack Cowan, and Samuel Winograd, showed that digitizing data could also apply to computing: a digital computer that represents information as ones and zeros can be reliable, even if its parts are not. The digitization of data is what made it possible to carry what would once have been called a supercomputer in the smart phone in one’s pocket.

These same ideas are now being applied to materials. To understand the difference from the processes used today, compare the performance of a child assembling LEGO pieces to that of a 3-D printer. First, because the LEGO pieces must be aligned to snap together, their ultimate positioning is more accurate than the motor skills of a child would usually allow. By contrast, the 3-D printing process accumulates errors (as anyone who has checked on a 3-D print that has been building for a few hours only to find that it has failed because of imperfect adhesion in the bottom layers can attest). Second, the LEGO pieces themselves define their spacing, allowing a structure to grow to any size. A 3-D printer is limited by the size of the system that positions the print head. Third, LEGO pieces are available in a range of different materials, whereas 3-D printers have a limited ability to use dissimilar materials, because everything must pass through the same printing process. Fourth, a LEGO construction that is no longer needed can be disassembled and the parts reused; when parts from a 3-D printer are no longer needed, they are thrown out. These are exactly the differences between an analog system (the continuous deposition of the 3-D printer) and a digital one (the LEGO assembly).

The digitization of material is not a new idea. It is four billion years old, going back to the evolutionary age of the ribosome, the molecular machine that makes proteins. Humans are full of molecular machinery, from the motors that move our muscles to the sensors in our eyes. The ribosome builds all that machinery out of a microscopic version of LEGO pieces, amino acids, of which there are 22 different kinds. The sequence for assembling the amino acids is stored in DNA and is sent to the ribosome in a molecule called messenger RNA. The code does not just describe the protein to be manufactured; it becomes the new protein.

Labs like mine are now developing 3-D assemblers (rather than printers) that can build structures in the same way as the ribosome. The assemblers will be able to both add and remove parts from a discrete set. One of the assemblers we are developing works with components that are a bit bigger than amino acids, clusters of atoms about ten nanometers long (an amino acid is around one nanometer long). These can have properties that amino acids cannot, such as being good electrical conductors or magnets. The goal is to use the nanoassembler to build nanostructures, such as 3-D integrated circuits. Another assembler we are developing uses parts on the scale of microns to millimeters. We would like this machine to make the electronic circuit boards that the 3-D integrated circuits go on. Yet another assembler we are developing uses parts on the scale of centimeters, to make larger structures, such as aircraft components and even whole aircraft that will be lighter, stronger, and more capable than today’s planes -- think a jumbo jet that can flap its wings.

A key difference between existing 3-D printers and these assemblers is that the assemblers will be able to create complete functional systems in a single process. They will be able to integrate fixed and moving mechanical structures, sensors and actuators, and electronics. Even more important is what the assemblers don’t create: trash. Trash is a concept that applies only to materials that don’t contain enough information to be reusable. All the matter on the forest floor is recycled again and again. Likewise, a product assembled from digital materials need not be thrown out when it becomes obsolete. It can simply be disassembled and the parts reconstructed into something new.

The most interesting thing that an assembler can assemble is itself. For now, they are being made out of the same kinds of components as are used in rapid prototyping machines. Eventually, however, the goal is for them to be able to make all their own parts. The motivation is practical. The biggest challenge to building new fab labs around the world has not been generating interest, or teaching people how to use them, or even cost; it has been the logistics. Bureaucracy, incompetent or corrupt border controls, and the inability of supply chains to meet demand have hampered our efforts to ship the machines around the world. When we are ready to ship assemblers, it will be much easier to mail digital material components in bulk and then e-mail the design codes to a fab lab so that one assembler can make another.

Self-replication is also essential for assemblers to scale. Ribosomes are slow, adding a few amino acids per second. But there are also very many of them, tens of thousands in each of the trillions of cells in the human body, and they can make more of themselves when needed. Likewise, to match the speed of the Star Trek replicator, many assemblers must be able to work in parallel.

GRAY GOO

Are there dangers to this sort of technology? In 1986, the engineer Eric Drexler, whose doctoral thesis at MIT was the first in molecular nanotechnology, wrote about what he called “gray goo,” a doomsday scenario in which a self-reproducing system multiplies out of control, spreads over the planet, and consumes all its resources. In 2000, Bill Joy, a computing pioneer, wrote in Wired magazine about the threat of extremists building self-reproducing weapons of mass destruction. He concluded that there are some areas of research that humans should not pursue. In 2003, a worried Prince Charles asked the Royal Society, the United Kingdom’s fellowship of eminent scientists, to assess the risks of nanotechnology and self-replicating systems.

Although alarming, Drexler’s scenario does not apply to the self-reproducing assemblers that are now under development: these require an external source of power and the input of nonnatural materials. Although bioterrorism is a serious concern, it is not a new one; there has been an arms race in biology going on since the dawn of evolution.

A more immediate threat is that digital fabrication could be used to produce weapons of individual destruction. An amateur gunsmith has already used a 3-D printer to make the lower receiver of a semiautomatic rifle, the AR-15. This heavily regulated part holds the bullets and carries the gun’s serial number. A German hacker made 3-D copies of tightly controlled police handcuff keys. Two of my own students, Will Langford and Matt Keeter, made master keys, without access to the originals, for luggage padlocks approved by the U.S. Transportation Security Administration. They x-rayed the locks with a CT scanner in our lab, used the data to build a 3-D computer model of the locks, worked out what the master key was, and then produced working keys with three different processes: numerically controlled milling, 3-D printing, and molding and casting.

These kinds of anecdotes have led to calls to regulate 3-D printers. When I have briefed rooms of intelligence analysts or military leaders on digital fabrication, some of them have invariably concluded that the technology must be restricted. Some have suggested modeling the controls after the ones placed on color laser printers. When that type of printer first appeared, it was used to produce counterfeit currency. Although the fake bills were easily detectable, in the 1990s the U.S. Secret Service convinced laser printer manufacturers to agree to code each device so that it would print tiny yellow dots on every page it printed. The dots are invisible to the naked eye but encode the time, date, and serial number of the printer that printed them. In 2005, the Electronic Frontier Foundation, a group that defends digital rights, decoded and publicized the system. This led to a public outcry over printers invading people’s privacy, an ongoing practice that was established without public input or apparent checks.

Justified or not, the same approach would not work with 3-D printers. There are only a few manufacturers that make the print engines used in laser printers. So an agreement among them enforced the policy across the industry. There is no corresponding part for 3-D printers. The parts that cannot yet be made by the machine builders themselves, such as computer chips and stepper motors, are commodity items: they are mass-produced and used for many applications, with no central point of control. The parts that are unique to 3-D printing, such as filament feeders and extrusion heads, are not difficult to make. Machines that make machines cannot be regulated in the same way that machines made by a few manufacturers can be.

Even if 3-D printers could be controlled, hurting people is already a well-met market demand. Cheap weapons can be found anywhere in the world. CBA’s experience running fab labs in conflict zones has been that they are used as an alternative to fighting. And although established elites do not see the technology as a threat, its presence can challenge their authority. For example, the fab lab in Jalalabad, Afghanistan, has provided wireless Internet access to a community that can now, for the first time, learn about the rest of the world and extend its own network.

A final concern about digital fabrication relates to the theft of intellectual property. If products are transmitted as designs and produced on demand, what is to prevent those designs from being replicated without permission? That is the dilemma the music and software industries have faced. Their immediate response -- introducing technology to restrict copying files -- failed. That is because the technology was easily circumvented by those who wanted to cheat and was irritating for everyone else. The solution was to develop app stores that made it easier to buy and sell software and music legally. Files of digital fabrication designs can be sold in the same way, catering to specialized interests that would not support mass manufacturing.

Patent protections on digital fabrication designs can work only if there is some barrier to entry to using the intellectual property and if infringement can be identified. That applies to the products made in expensive foundries, but not to those made in affordable fab labs. Anyone with access to the tools can replicate a design anywhere; it is not feasible to litigate against the whole world. Instead of trying to restrict access, flourishing software businesses have sprung up that freely share their source codes and are compensated for the services they provide. The spread of digital fabrication tools is now leading to a corresponding practice for open-source hardware.

PLANNING INNOVATION

Communities should not fear or ignore digital fabrication. Better ways to build things can help build better communities. A fab lab in Detroit, for example, which is run by the entrepreneur Blair Evans, offers programs for at-risk youth as a social service. It empowers them to design and build things based on their own ideas.

It is possible to tap into the benefits of digital fabrication in several ways. One is top down. In 2005, South Africa launched a national network of fab labs to encourage innovation through its National Advanced Manufacturing Technology Strategy. In the United States, Representative Bill Foster (D-Ill.) proposed legislation, the National Fab Lab Network Act of 2010, to create a national lab linking local fab labs. The existing national laboratory system houses billion-dollar facilities but struggles to directly impact the communities around them. Foster’s bill proposes a system that would instead bring the labs to the communities.

Another approach is bottom up. Many of the existing fab lab sites, such as the one in Detroit, began as informal organizations to address unmet local needs. These have joined regional programs, such as the United States Fab Lab Network and FabLab.nl, in Belgium, Luxembourg, and the Netherlands, which take on tasks that are too big for an individual lab, such as supporting the launch of new ones. The regional programs, in turn, are linking together through the international Fab Foundation, which will provide support for global challenges, such as sourcing specialized materials around the world.

To keep up with what people are learning in the labs, the fab lab network has launched the Fab Academy. Children working in remote fab labs have progressed so far beyond any local educational opportunities that they would have to travel far away to an advanced institution to continue their studies. To prevent such brain drains, the Fab Academy has linked local labs together into a global campus. Along with access to tools, students who go to these labs are surrounded by peers to learn from and have local mentors to guide them. They participate in interactive global video lectures and share projects and instructional materials online.

The traditional model of advanced education assumes that faculty, books, and labs are scarce and can be accessed by only a few thousand people at a time. In computing terms, MIT can be thought of as a mainframe: students travel there for processing. Recently, there has been an interest in distance learning as an alternative, to be able to handle more students. This approach, however, is like time-sharing on a mainframe, with the distant students like terminals connected to a campus. The Fab Academy is more akin to the Internet, connected locally and managed globally. The combination of digital communications and digital fabrication effectively allows the campus to come to the students, who can share projects that are locally produced on demand.

The U.S. Bureau of Labor Statistics forecasts that in 2020, the United States will have about 9.2 million jobs in the fields of science, technology, engineering, and mathematics. According to data compiled by the National Science Board, the advisory group of the National Science Foundation, college degrees in these fields have not kept pace with college enrollment. And women and minorities remain significantly underrepresented in these fields. Digital fabrication offers a new response to this need, starting at the beginning of the pipeline. Children can come into any of the fab labs and apply the tools to their interests. The Fab Academy seeks to balance the decentralized enthusiasm of the do-it-yourself maker movement and the mentorship that comes from doing it together.

After all, the real strength of a fab lab is not technical; it is social. The innovative people that drive a knowledge economy share a common trait: by definition, they are not good at following rules. To be able to invent, people need to question assumptions. They need to study and work in environments where it is safe to do that. Advanced educational and research institutions have room for only a few thousand of those people each. By bringing welcoming environments to innovators wherever they are, this digital revolution will make it possible to harness a larger fraction of the planet’s brainpower.

Digital fabrication consists of much more than 3-D printing. It is an evolving suite of capabilities to turn data into things and things into data. Many years of research remain to complete this vision, but the revolution is already well under way. The collective challenge is to answer the central question it poses: How will we live, learn, work, and play when anyone can make anything, anywhere?

NEIL GERSHENFELD is a Professor at the Massachusetts Institute of Technology and the head of MIT’s Center for Bits and Atoms. September 27, 2012

As Objects Go Online

The Promise (and Pitfalls) of the Internet of Things

Neil Gershenfeld and JP Vasseur March/April 2014

FABRIZIO BENSCH / COURTESY REUTERS A model presents a Samsung Galaxy Gear smartwatch at the IFA consumer electronics fair in Berlin, September 4, 2013.

Since 1969, when the first bit of data was transmitted over what would come to be known as the Internet, that global network has evolved from linking mainframe computers to connecting personal computers and now mobile devices. By 2010, the number of computers on the Internet had surpassed the number of people on earth.

Yet that impressive growth is about to be overshadowed as the things around us start going online as well, part of what is called “the Internet of Things.” Thanks to advances in circuits and software, it is now possible to make a Web server that fits on (or in) a fingertip for $1. When embedded in everyday objects, these small computers can send and receive information via the Internet so that a coffeemaker can turn on when a person gets out of bed and turn off when a cup is loaded into a dishwasher, a stoplight can communicate with roads to route cars around traffic, a building can operate more efficiently by knowing where people are and what they’re doing, and even the health of the whole planet can be monitored in real time by aggregating the data from all such devices.

Linking the digital and physical worlds in these ways will have profound implications for both. But this future won’t be realized unless the Internet of Things learns from the history of the Internet. The open standards and decentralized design of the Internet won out over competing proprietary systems and centralized control by offering fewer obstacles to innovation and growth. This battle has resurfaced with the proliferation of conflicting visions of how devices should communicate. The challenge is primarily organizational, rather than technological, a contest between command-and-control technology and distributed solutions. The Internet of Things demands the latter, and openness will eventually triumph.

THE CONNECTED LIFE

The Internet of Things is not just science fiction; it has already arrived. Some of the things currently networked together send data over the public Internet, and some communicate over secure private networks, but all share common protocols that allow them to interoperate to help solve profound problems.

Take energy inefficiency. Buildings account for three-quarters of all electricity use in the United States, and of that, about one-third is wasted. Lights stay on when there is natural light available, and air is cooled even when the weather outside is more comfortable or a room is unoccupied. Sometimes fans move air in the wrong direction or heating and cooling systems are operated simultaneously. This enormous amount of waste persists because the behavior of thermostats and light bulbs is set when buildings are constructed; the wiring is fixed and the controllers are inaccessible. Only when the infrastructure itself becomes intelligent, with networked sensors and actuators, can the efficiency of a building be improved over the course of its lifetime.

Health care is another area of huge promise. The mismanagement of medication, for example, costs the health-care system billions of dollars per year. Shelves and pill bottles connected to the Internet can alert a forgetful patient when to take a pill, a pharmacist when a refill is needed, and a doctor when a dose is missed. Floors can call for help if a senior citizen has fallen, helping the elderly live independently. Wearable sensors could monitor one’s activity throughout the day and serve as personal coaches, improving health and saving costs.

Countless futuristic “smart houses” have been demonstrated, yet few people have shown much interest in living in them. But the Internet of Things succeeds to the extent that it is invisible. A refrigerator could communicate with a grocery store to reorder food, with a bathroom scale to monitor a diet, with a power utility to lower electricity consumption during peak demand, and with its manufacturer when maintenance is needed. Switches and lights in a house could adapt to how spaces are used and to the time of day. Thermostats with access to calendars, beds, and cars could plan heating and cooling based on the location of the house’s occupants. Utilities today provide power and plumbing; these new services would provide safety, comfort, and convenience.

In cities, the Internet of Things will collect a wealth of new data. Understanding the flow of vehicles, utilities, and people is essential to maximizing the productivity of each, but traditionally, this has been measured poorly, if at all. If every street lamp, fire hydrant, bus, and crosswalk were connected to the Internet, then a city could generate real-time readouts of what’s working and what’s not. Rather than keeping this information internally, city hall could share open-source data sets with developers, as some cities are already doing.

Weather, agricultural inputs, and pollution levels all change with more local variation than can be captured by point measurements and remote sensing. But when the cost of an Internet connection falls far enough, these phenomena can all be measured precisely. Networking nature can help conserve animate, as well as inanimate, resources; an emerging “interspecies Internet” is linking elephants, dolphins, great apes, and other animals for the purposes of enrichment, research, and preservation.

The ultimate realization of the Internet of Things will be to transmit actual things through the Internet. Users can already send descriptions of objects that can be made with personal digital fabrication tools, such as 3-D printers and laser cutters. As data turn into things and things into data, long manufacturing supply chains can be replaced by a process of shipping data over the Internet to local production facilities that would make objects on demand, where and when they were needed.

BACK TO THE FUTURE

To understand how the Internet of Things works, it is helpful to understand how the Internet itself works, and why. The first secret of the Internet’s success is its architecture. At the time the Internet was being developed, in the 1960s and 1970s, telephones were wired to central office switchboards. That setup was analogous to a city in which every road goes through one traffic circle; it makes it easy to give directions but causes traffic jams at the central hub. To avoid such problems, the Internet’s developers created a distributed network, analogous to the web of streets that vehicles navigate in a real city. This design lets data bypass traffic jams and lets managers add capacity where needed.

The second key insight in the Internet’s development was the importance of breaking data down into individual chunks that could be reassembled after their online journey. “Packet switching,” as this process is called, is like a railway system in which each railcar travels independently. Cars with different destinations share the same tracks, instead of having to wait for one long train to pass, and those going to the same place do not all have to take the same route. As long as each car has an address and each junction indicates where the tracks lead, the cars can be recombined on arrival. By transmitting data in this way, packet switching has made the Internet more reliable, robust, and efficient.
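To make the railcar analogy concrete, here is a minimal sketch in Python; the message, address, and packet format are invented for illustration and do not correspond to any real protocol. It shows the essential moves: split a message into addressed, numbered chunks, let them arrive out of order, and use the sequence numbers to put them back together.

```python
# A minimal sketch of packet switching (invented format, not a real protocol):
# split a message into addressed, numbered chunks, deliver them out of order,
# and reassemble them at the destination.
import random

def to_packets(message, destination, size=8):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"dst": destination, "seq": n, "payload": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    # Packets may take different routes and arrive in any order;
    # the sequence numbers restore the original message.
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("Packets travel independently.", destination="10.0.0.7")
random.shuffle(packets)  # simulate out-of-order arrival
assert reassemble(packets) == "Packets travel independently."
```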

The third crucial decision was to make it possible for data to flow over different types of networks, so that a message can travel through the wires in a building, into a fiber-optic cable that carries it across a city, and then to a satellite that sends it to another continent. To allow that, computer scientists developed the Internet Protocol, or IP, which standardized the way that packets of data were addressed. The equivalent development in railroads was the introduction of a standard track gauge, which allowed trains to cross international borders. The IP standard allows many different types of data to travel over a common protocol.

The fourth crucial choice was to have the functions of the Internet reside at the ends of the network, rather than at the intermediate nodes, which are reserved for routing traffic. Known as the “end-to-end principle,” this design allows new applications to be invented and added without having to upgrade the whole network. The capabilities of a traditional telephone were only as advanced as the central office switch it was connected to, and those changed infrequently. But the layered architecture of the Internet avoids this problem. Online messaging, audio and video streaming, e-commerce, search engines, and social media were all developed on top of a system designed decades earlier, and new applications can be created from these components.

These principles may sound intuitive, but until recently, they were not shared by the systems that linked things other than computers. Instead, each industry, from heating and cooling to consumer electronics, created its own networking standards, which specified not only how their devices communicated with one another but also what they could communicate. This closed model may work within a fixed domain, but unlike the model used for the Internet, it limits future possibilities to what its creators originally anticipated. Moreover, each of these standards has struggled with the same problems the Internet has already solved: how to assign network names to devices, how to route messages between networks, how to manage the flow of traffic, and how to secure communications.

Although it might seem logical now to use the Internet to link things rather than reinvent the networking wheel for each industry, that has not been the norm so far. One reason is that manufacturers have wanted to establish proprietary control. The Internet does not have tollbooths, but if a vendor can control the communications standards used by the devices in a given industry, it can charge companies to use them.

Compounding this problem was the belief that special-purpose solutions would perform better than the general-purpose Internet. In reality, these alternatives were less well developed and lacked the Internet’s economies of scale and reliability. Their designers overvalued optimal functionality at the expense of interoperability. For any given purpose, the networking standards of the Internet are not ideal, but for almost anything, they are good enough. Not only do proprietary networks entail the high cost of maintaining multiple, incompatible standards; they have also been less secure. Decades of attacks on the Internet have led a large community of researchers and vendors to continually refine its defenses, which can now be applied to securing communications among things.

Finally, there was the problem of cost. The Internet relied at first on large computers that cost hundreds of thousands of dollars and then on $1,000 personal computers. The economics of the Internet were so far removed from the economics of light bulbs and doorknobs that developers never thought it would be commercially viable to put such objects online; the market for $1,000 light switches is limited. And so, for many decades, objects remained offline.

BIG THINGS IN SMALL PACKAGES

But no longer do economic or technological barriers stand in the way of the Internet of Things. The unsung hero that has made this possible is the microcontroller, which consists of a simple processor packaged with a small amount of memory and peripheral parts. Microcontrollers measure just millimeters across, cost just pennies to manufacture, and use just milliwatts of electricity, so that they can run for years on a battery or a small solar cell. Unlike a personal computer, which now boasts billions of bytes of memory, a microcontroller may contain only thousands of bytes. That’s not enough to run today’s desktop programs, but it matches the capabilities of the computers used to develop the Internet.

Around 1995, we and our colleagues based at MIT began using these parts to simplify Internet connections. That project grew into a collaboration with a group of the Internet’s original architects, starting with the computer scientist Danny Cohen, to extend the Internet into things. Since “Internet2” had already been used to refer to the project for a higher-speed Internet, we chose to call this slower and simpler Internet “Internet 0.”

The goal of Internet 0 was to bring IP to the smallest devices. By networking a smart light bulb and a smart light switch directly, we could enable these devices to turn themselves on and off rather than having to communicate with a controller connected to the Internet. That way, new applications could be developed to communicate with the light and the switch without being limited by the capabilities of a controller.
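The architectural point can be illustrated with ordinary UDP sockets on a single machine. This is not the actual Internet 0 encoding; the address, port, and one-word command are hypothetical, and the sketch shows only the idea that a "switch" can speak IP directly to a "bulb" with no proprietary controller in between.

```python
# Illustrative only -- not the Internet 0 encoding itself. The "bulb" is a
# tiny UDP listener; the "switch" sends it an IP packet directly.
import socket
import threading
import time

BULB_ADDR = ("127.0.0.1", 9000)   # hypothetical local address standing in for a bulb

def bulb():
    # The "bulb": waits for a one-word command over UDP.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(BULB_ADDR)
        command, sender = s.recvfrom(16)
        print("bulb received", command.decode(), "from", sender)

listener = threading.Thread(target=bulb, daemon=True)
listener.start()
time.sleep(0.2)                    # give the bulb time to start listening

# The "switch": sends an IP packet straight to the bulb, no controller needed.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as switch:
    switch.sendto(b"ON", BULB_ADDR)

listener.join(timeout=1)
```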

Giving objects access to the Internet simplifies hard problems. Consider the Electronic Product Code (the successor to the familiar bar code), which retailers are starting to use in radio-frequency identification tags on their products. With great effort, the developers of the EPC have attempted to enumerate all possible products and track them centrally. Instead, the information in these tags could be replaced with packets of Internet data, so that objects could contain instructions that varied with the context: at the checkout counter in a store, a tag on a medicine bottle could communicate with a merchandise database; in a hospital, it could link to a patient’s records.

Along with simplifying Internet connections, the Internet 0 project also simplified the networks that things link to. The quest for ever-faster networks has led to very different standards for each medium used to transmit data, with each requiring its own special precautions. But Morse code looks the same whether it is transmitted using flags or flashing lights, and in the same way, Internet 0 packages data in a way that is independent of the medium. Like IP, that’s not optimal, but it trades speed for cheapness and simplicity. That makes sense, because high speed is not essential: light bulbs, after all, don’t watch broadband movies.

Another innovation allowing the Internet to reach things is the ongoing transition from the previous version of IP to a new one. When the designers of the original standard, called IPv4, launched it in 1981, they used 32 bits (each either a zero or a one) to store each IP address, the unique identifiers assigned to every device connected to the Internet -- allowing for over four billion IP addresses in total. That seemed like an enormous number at the time, but it is less than one address for every person on the planet. IPv4 has run out of addresses, and it is now being replaced with a new version, IPv6. The new standard uses 128-bit IP addresses, creating more possible identifiers than there are stars in the universe. With IPv6, everything can now get its own unique address.
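The arithmetic behind the transition follows directly from the address widths and can be checked in a few lines of Python:

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4 = 2 ** 32        # 4,294,967,296 -- fewer than one per person alive today
ipv6 = 2 ** 128       # roughly 3.4 x 10^38 possible addresses

print(f"IPv4 addresses: {ipv4:,}")
print(f"IPv6 addresses: {float(ipv6):.3e}")
print(f"IPv6 addresses per IPv4 address: {ipv6 // ipv4:,}")   # 2^96
```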

But IPv6 still needs to cope with the unique requirements of the Internet of Things. Along with having limitations involving memory, speed, and power, devices can appear and disappear on the network intermittently, either to save energy or because they are on the move. And in big enough numbers, even simple sensors can quickly overwhelm existing network infrastructure; a city might contain millions of power meters and billions of electrical outlets. So in collaboration with our colleagues, we are developing extensions of the Internet protocols to handle these demands.

THE INEVITABLE INTERNET

Although the Internet of Things is now technologically possible, its adoption is limited by a new version of an old conflict. During the 1980s, the Internet competed with a network called BITNET, a centralized system that linked mainframe computers. Buying a mainframe was expensive, and so BITNET’s growth was limited; connecting personal computers to the Internet made more sense. The Internet won out, and by the early 1990s, BITNET had fallen out of use. Today, a similar battle is emerging between the Internet of Things and what could be called the Bitnet of Things. The key distinction is where information resides: in a smart device with its own IP address or in a dumb device wired to a proprietary controller with an Internet connection. Confusingly, the latter setup is itself frequently characterized as part of the Internet of Things. As with the Internet and BITNET, the difference between the two models is far from semantic. Extending IP to the ends of a network enables innovation at its edges; linking devices to the Internet indirectly erects barriers to their use.

The same conflicting meanings appear in use of the term “smart grid,” which refers to networking everything that generates, controls, and consumes electricity. Smart grids promise to reduce the need for power plants by intelligently managing loads during peak demand, varying pricing dynamically to provide incentives for energy efficiency, and feeding power back into the grid from many small renewable sources. In the not-so-smart, utility-centric approach, these functions would all be centrally controlled. In the competing, Internet-centric approach, they would not, and its dispersed character would allow for a marketplace for developers to design power-saving applications.

Putting the power grid online raises obvious cybersecurity concerns, but centralized control would only magnify these problems. The history of the Internet has shown that security through obscurity doesn’t work. Systems that have kept their inner workings a secret in the name of security have consistently proved more vulnerable than those that have allowed themselves to be examined -- and challenged -- by outsiders. The open protocols and programs used to protect Internet communications are the result of ongoing development and testing by a large expert community. Another historical lesson is that people, not technology, are the most common weakness when it comes to security. No matter how secure a system is, someone who has access to it can always be corrupted, wittingly or otherwise. Centralized control introduces a point of vulnerability that is not present in a distributed system.

The flip side of security is privacy; eavesdropping takes on an entirely new meaning when actual eaves can do it. But privacy can be protected on the Internet of Things. Today, privacy on the rest of the Internet is safeguarded through cryptography, and it works: recent mass thefts of personal information have happened because firms failed to encrypt their customers’ data, not because the hackers broke through strong protections. By extending cryptography down to the level of individual devices, the owners of those devices would gain a new kind of control over their personal information. Rather than maintaining secrecy as an absolute good, it could be priced based on the value of sharing. Users could set up a firewall to keep private the Internet traffic coming from the things in their homes -- or they could share that data with, for example, a utility that gave a discount for their operating their dishwasher only during off-peak hours or a health provider that offered lower rates in return for their making healthier lifestyle choices.

The size and speed of the Internet have grown by nine orders of magnitude since the time it was invented. This expansion vastly exceeds what its developers anticipated, but that the Internet could scale so far is a testament to their insight and vision. The uses to which the Internet has been put, which have driven this growth, are even more surprising; they were not part of any original plan. But they are the result of an open architecture that left room for the unexpected. Likewise, today’s vision for the Internet of Things is sure to be eclipsed by the reality of how it is actually used. But the history of the Internet provides principles to guide this development in ways that are scalable, robust, secure, and encouraging of innovation.

The Internet’s defining attribute is its interoperability; information can cross geographic and technological boundaries. With the Internet of Things, it can now leap out of the desktop and data center and merge with the rest of the world. As the technology becomes more finely integrated into daily life, it will become, paradoxically, less visible. The future of the Internet is to literally disappear into the woodwork.

NEIL GERSHENFELD is a Professor at the Massachusetts Institute of Technology and directs MIT’s Center for Bits and Atoms. JP VASSEUR is a Cisco Fellow and Chief Architect of the Internet of Things at Cisco Systems. February 12, 2014

The Rise of Big Data

How It's Changing the Way We Think About the World

Kenneth Neil Cukier and Viktor Mayer-Schoenberger May/June 2013

JOHN ELK / GETTY IMAGES

Everyone knows that the Internet has changed how businesses operate, governments function, and people live. But a new, less visible technological trend is just as transformative: “big data.” Big data starts with the fact that there is a lot more information floating around these days than ever before, and it is being put to extraordinary new uses. Big data is distinct from the Internet, although the Web makes it much easier to collect and share data. Big data is about more than just communication: the idea is that we can learn from a large body of information things that we could not comprehend when we used only smaller amounts.

In the third century BC, the Library of Alexandria was believed to house the sum of human knowledge. Today, there is enough information in the world to give every person alive 320 times as much of it as historians think was stored in Alexandria’s entire collection -- an estimated 1,200 exabytes’ worth. If all this information were placed on CDs and they were stacked up, the CDs would form five separate piles that would all reach to the moon.

This explosion of data is relatively new. As recently as the year 2000, only one-quarter of all the world’s stored information was digital. The rest was preserved on paper, film, and other analog media. But because the amount of digital data expands so quickly -- doubling around every three years -- that situation was swiftly inverted. Today, less than two percent of all stored information is nondigital.

Given this massive scale, it is tempting to understand big data solely in terms of size. But that would be misleading. Big data is also characterized by the ability to render into data many aspects of the world that have never been quantified before; call it “datafication.” For example, location has been datafied, first with the invention of longitude and latitude, and more recently with GPS satellite systems. Words are treated as data when computers mine centuries’ worth of books. Even friendships and “likes” are datafied, via Facebook.

This kind of data is being put to incredible new uses with the assistance of inexpensive computer memory, powerful processors, smart algorithms, clever software, and math that borrows from basic statistics. Instead of trying to “teach” a computer how to do things, such as drive a car or translate between languages, which artificial-intelligence experts have tried unsuccessfully to do for decades, the new approach is to feed enough data into a computer so that it can infer the probability that, say, a traffic light is green and not red or that, in a certain context, lumière is a more appropriate substitute for “light” than léger.
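As a toy illustration of that data-driven approach (the counts below are invented, and real statistical machine-translation systems are far more elaborate), a program can simply pick whichever candidate word appears most often in a given context in a parallel corpus:

```python
# A toy sketch with invented counts: estimate from observed translations
# which French word most often stands in for "light" in a given context,
# and pick the most probable candidate.
from collections import Counter

# Hypothetical counts of aligned translations observed in a parallel corpus.
observed = {
    ("light", "as in illumination"): Counter({"lumière": 842, "léger": 12}),
    ("light", "as in weight"):       Counter({"léger": 497, "lumière": 9}),
}

def best_translation(word, context):
    counts = observed[(word, context)]
    candidate, n = counts.most_common(1)[0]
    return candidate, n / sum(counts.values())   # candidate and its estimated probability

print(best_translation("light", "as in illumination"))   # ('lumière', ~0.99)
```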

Using great volumes of information in this way requires three profound changes in how we approach data. The first is to collect and use a lot of data rather than settle for small amounts or samples, as statisticians have done for well over a century. The second is to shed our preference for highly curated and pristine data and instead accept messiness: in an increasing number of situations, a bit of inaccuracy can be tolerated, because the benefits of using vastly more data of variable quality outweigh the costs of using smaller amounts of very exact data. Third, in many instances, we will need to give up our quest to discover the cause of things, in return for accepting correlations. With big data, instead of trying to understand precisely why an engine breaks down or why a drug’s side effect disappears, researchers can instead collect and analyze massive quantities of information about such events and everything that is associated with them, looking for patterns that might help predict future occurrences. Big data helps answer what, not why, and often that’s good enough.

The Internet has reshaped how humanity communicates. Big data is different: it marks a transformation in how society processes information. In time, big data might change our way of thinking about the world. As we tap ever more data to understand events and make decisions, we are likely to discover that many aspects of life are probabilistic, rather than certain.

APPROACHING "N=ALL"

For most of history, people have worked with relatively small amounts of data because the tools for collecting, organizing, storing, and analyzing information were poor. People winnowed the information they relied on to the barest minimum so that they could examine it more easily. This was the genius of modern-day statistics, which first came to the fore in the late nineteenth century and enabled society to understand complex realities even when little data existed. Today, the technical environment has shifted 179 degrees. There still is, and always will be, a constraint on how much data we can manage, but it is far less limiting than it used to be and will become even less so as time goes on.

The way people handled the problem of capturing information in the past was through sampling. When collecting data was costly and processing it was difficult and time consuming, the sample was a savior. Modern sampling is based on the idea that, within a certain margin of error, one can infer something about the total population from a small subset, as long as the sample is chosen at random. Hence, exit polls on election night query a randomly selected group of several hundred people to predict the voting behavior of an entire state. For straightforward questions, this process works well. But it falls apart when we want to drill down into subgroups within the sample. What if a pollster wants to know which candidate single women under 30 are most likely to vote for? How about university-educated, single Asian American women under 30? Suddenly, the random sample is largely useless, since there may be only a couple of people with those characteristics in the sample, too few to make a meaningful assessment of how the entire subpopulation will vote. But if we collect all the data -- “n = all,” to use the terminology of statistics -- the problem disappears.
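A back-of-the-envelope calculation shows why. The margin of error of a proportion estimated from a random sample of n respondents is roughly 1/sqrt(n) at 95 percent confidence; the sketch below, with illustrative numbers, shows how quickly that margin explodes for a small subgroup.

```python
# Rough margin of error of a proportion from n respondents: about 1 / sqrt(n).
import math

def margin_of_error(n):
    return 1 / math.sqrt(n)

sample = 1000                # a typical exit-poll sample
subgroup_share = 0.02        # say the subgroup is 2 percent of respondents
subgroup_n = int(sample * subgroup_share)

print(f"whole sample:  n={sample},  margin ~ {margin_of_error(sample):.1%}")
print(f"subgroup only: n={subgroup_n},   margin ~ {margin_of_error(subgroup_n):.1%}")
# n=1000 gives roughly +/-3 points; n=20 gives roughly +/-22 points,
# which is why the random sample becomes "largely useless" for the subgroup.
```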

This example raises another shortcoming of using some data rather than all of it. In the past, when people collected only a little data, they often had to decide at the outset what to collect and how it would be used. Today, when we gather all the data, we do not need to know beforehand what we plan to use it for. Of course, it might not always be possible to collect all the data, but it is getting much more feasible to capture vastly more of a phenomenon than simply a sample and to aim for all of it. Big data is a matter not just of creating somewhat larger samples but of harnessing as much of the existing data as possible about what is being studied. We still need statistics; we just no longer need to rely on small samples.

There is a tradeoff to make, however. When we increase the scale by orders of magnitude, we might have to give up on clean, carefully curated data and tolerate some messiness. This idea runs counter to how people have tried to work with data for centuries. Yet the obsession with accuracy and precision is in some ways an artifact of an information-constrained environment. When there was not that much data around, researchers had to make sure that the figures they bothered to collect were as exact as possible. Tapping vastly more data means that we can now allow some inaccuracies to slip in (provided the data set is not completely incorrect), in return for benefiting from the insights that a massive body of data provides.

Consider language translation. It might seem obvious that computers would translate well, since they can store lots of information and retrieve it quickly. But if one were to simply substitute words from a French-English dictionary, the translation would be atrocious. Language is complex. A breakthrough came in the 1990s, when IBM delved into statistical machine translation. It fed Canadian parliamentary transcripts in both French and English into a computer and programmed it to infer which word in one language is the best alternative for another. This process changed the task of translation into a giant problem of probability and math. But after this initial improvement, progress stalled.

Then Google barged in. Instead of using a relatively small number of high-quality translations, the search giant harnessed more data, but from the less orderly Internet -- “data in the wild,” so to speak. Google inhaled translations from corporate Web sites, documents in every language from the European Union, even translations from its giant book-scanning project. Instead of millions of pages of texts, Google analyzed billions. The result is that its translations are quite good -- better than IBM’s were -- and cover 65 languages. Large amounts of messy data trumped small amounts of cleaner data.

FROM CAUSATION TO CORRELATION

These two shifts in how we think about data -- from some to all and from clean to messy -- give rise to a third change: from causation to correlation. This represents a move away from always trying to understand the deeper reasons behind how the world works to simply learning about an association among phenomena and using that to get things done.

Of course, knowing the causes behind things is desirable. The problem is that causes are often extremely hard to figure out, and many times, when we think we have identified them, it is nothing more than a self-congratulatory illusion. Behavioral economics has shown that humans are conditioned to see causes even where none exist. So we need to be particularly on guard to prevent our cognitive biases from deluding us; sometimes, we just have to let the data speak.

Take UPS, the delivery company. It places sensors on vehicle parts to identify certain heat or vibrational patterns that in the past have been associated with failures in those parts. In this way, the company can predict a breakdown before it happens and replace the part when it is convenient, instead of on the side of the road. The data do not reveal the exact relationship between the heat or the vibrational patterns and the part’s failure. They do not tell UPS why the part is in trouble. But they reveal enough for the company to know what to do in the near term and guide its investigation into any underlying problem that might exist with the part in question or with the vehicle.

A similar approach is being used to treat breakdowns of the human machine. Researchers in Canada are developing a big-data approach to spot infections in premature babies before overt symptoms appear. By converting 16 vital signs, including heartbeat, blood pressure, respiration, and blood-oxygen levels, into an information flow of more than 1,000 data points per second, they have been able to find correlations between very minor changes and more serious problems. Eventually, this technique will enable doctors to act earlier to save lives. Over time, recording these observations might also allow doctors to understand what actually causes such problems. But when a newborn’s health is at risk, simply knowing that something is likely to occur can be far more important than understanding exactly why.

Medicine provides another good example of why, with big data, seeing correlations can be enormously valuable, even when the underlying causes remain obscure. In February 2009, Google created a stir in health-care circles. Researchers at the company published a paper in Nature that showed how it was possible to track outbreaks of the seasonal flu using nothing more than the archived records of Google searches. Google handles more than a billion searches in the United States every day and stores them all. The company took the 50 million most commonly searched terms between 2003 and 2008 and compared them against historical influenza data from the Centers for Disease Control and Prevention. The idea was to discover whether the incidence of certain searches coincided with outbreaks of the flu -- in other words, to see whether an increase in the frequency of certain Google searches conducted in a particular geographic area correlated with the CDC’s data on outbreaks of flu there. The CDC tracks actual patient visits to hospitals and clinics across the country, but the information it releases suffers from a reporting lag of a week or two -- an eternity in the case of a pandemic. Google’s system, by contrast, would work in near-real time.

Google did not presume to know which queries would prove to be the best indicators. Instead, it ran all the terms through an algorithm that ranked how well they correlated with flu outbreaks. Then, the system tried combining the terms to see if that improved the model. Finally, after running nearly half a billion calculations against the data, Google identified 45 terms -- words such as “headache” and “runny nose” -- that had a strong correlation with the CDC’s data on flu outbreaks. All 45 terms related in some way to influenza. But with a billion searches a day, it would have been impossible for a person to guess which ones might work best and test only those.
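In spirit, that ranking step works like the short Python sketch below, though here the weekly numbers are invented and a plain Pearson correlation stands in for Google’s far larger computation over hundreds of millions of candidate models.

```python
# A toy sketch (invented numbers): score each search term by how well its
# weekly query volume correlates with reported flu cases, then rank the terms.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cdc_cases = [120, 180, 310, 650, 900, 700, 400]        # hypothetical weekly counts
search_volume = {                                       # hypothetical query volumes
    "runny nose":  [90, 130, 260, 580, 820, 640, 350],
    "headache":    [200, 240, 380, 700, 950, 760, 430],
    "movie times": [500, 480, 510, 495, 505, 490, 500],
}

ranked = sorted(search_volume.items(),
                key=lambda item: pearson(item[1], cdc_cases),
                reverse=True)
for term, series in ranked:
    print(f"{term:12s} r = {pearson(series, cdc_cases):+.2f}")
```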

Moreover, the data were imperfect. Since the data were never intended to be used in this way, misspellings and incomplete phrases were common. But the sheer size of the data set more than compensated for its messiness. The result, of course, was simply a correlation. It said nothing about the reasons why someone performed any particular search. Was it because the person felt ill, or heard sneezing in the next cubicle, or felt anxious after reading the news? Google’s system doesn’t know, and it doesn’t care. Indeed, last December, Google’s system seems to have overestimated the number of flu cases in the United States. This serves as a reminder that predictions are only probabilities and are not always correct, especially when the basis for the prediction -- Internet searches -- is in a constant state of change and vulnerable to outside influences, such as media reports. Still, big data can hint at the general direction of an ongoing development, and Google’s system did just that.

BACK-END OPERATIONS

Many technologists believe that big data traces its lineage back to the digital revolution of the 1980s, when advances in microprocessors and computer memory made it possible to analyze and store ever more information. That is only superficially the case. Computers and the Internet certainly aid big data by lowering the cost of collecting, storing, processing, and sharing information. But at its heart, big data is only the latest step in humanity’s quest to understand and quantify the world. To appreciate how this is the case, it helps to take a quick look behind us.

Appreciating people’s posteriors is the art and science of Shigeomi Koshimizu, a professor at the Advanced Institute of Industrial Technology in Tokyo. Few would think that the way a person sits constitutes information, but it can. When a person is seated, the contours of the body, its posture, and its weight distribution can all be quantified and tabulated. Koshimizu and his team of engineers convert backsides into data by measuring the pressure they exert at 360 different points with sensors placed in a car seat and by indexing each point on a scale of zero to 256. The result is a digital code that is unique to each individual. In a trial, the system was able to distinguish among a handful of people with 98 percent accuracy.
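A toy sketch of the matching step follows; this is not Koshimizu’s actual algorithm, and the stored profiles below are random stand-ins. The idea is simply to treat each seat reading as a 360-value pressure vector and identify the sitter by the closest stored profile.

```python
# A toy sketch (not the real system): match a noisy seat reading against
# stored reference profiles, one pressure value per sensor.
import math
import random

POINTS = 360                                   # pressure sensors in the seat

def profile(seed):
    """A stored reference profile: one pressure value (0-255) per sensor."""
    rng = random.Random(seed)
    return [rng.randint(0, 255) for _ in range(POINTS)]

def noisy(reading, noise=5):
    """A fresh reading from the same sitter, with small sensor noise."""
    rng = random.Random()
    return [min(255, max(0, v + rng.randint(-noise, noise))) for v in reading]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

approved = {"driver_a": profile(1), "driver_b": profile(2)}
reading = noisy(approved["driver_a"])

best = min(approved, key=lambda name: distance(approved[name], reading))
print("best match:", best)     # expected: driver_a
# In an antitheft application, a poor match against every approved profile
# would trigger a password prompt before the car would start.
```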

The research is not asinine. Koshimizu’s plan is to adapt the technology as an antitheft system for cars. A vehicle equipped with it could recognize when someone other than an approved driver sat down behind the wheel and could demand a password to allow the car to function. Transforming sitting positions into data creates a viable service and a potentially lucrative business. And its usefulness may go far beyond deterring auto theft. For instance, the aggregated data might reveal clues about a relationship between drivers’ posture and road safety, such as telltale shifts in position prior to accidents. The system might also be able to sense when a driver slumps slightly from fatigue and send an alert or automatically apply the brakes.

Koshimizu took something that had never been treated as data -- or even imagined to have an informational quality -- and transformed it into a numerically quantified format. There is no good term yet for this sort of transformation, but “datafication” seems apt. Datafication is not the same as digitization, which takes analog content -- books, films, photographs -- and converts it into digital information, a sequence of ones and zeros that computers can read. Datafication is a far broader activity: taking all aspects of life and turning them into data. Google’s augmented-reality glasses datafy the gaze. Twitter datafies stray thoughts. LinkedIn datafies professional networks.

Once we datafy things, we can transform their purpose and turn the information into new forms of value. For example, IBM was granted a U.S. patent in 2012 for “securing premises using surface-based computing technology” -- a technical way of describing a touch-sensitive floor covering, somewhat like a giant smartphone screen. Datafying the floor can open up all kinds of possibilities. The floor would be able to identify the objects on it, so that it might know to turn on lights in a room or open doors when a person entered. Moreover, it might identify individuals by their weight or by the way they stand and walk. It could tell if someone fell and did not get back up, an important feature for the elderly. Retailers could track the flow of customers through their stores. Once it becomes possible to turn activities of this kind into data that can be stored and analyzed, we can learn more about the world -- things we could never know before because we could not measure them easily and cheaply.

BIG DATA IN THE BIG APPLE

Big data will have implications far beyond medicine and consumer goods: it will profoundly change how governments work and alter the nature of politics. When it comes to generating economic growth, providing public services, or fighting wars, those who can harness big data effectively will enjoy a significant edge over others. So far, the most exciting work is happening at the municipal level, where it is easier to access data and to experiment with the information. In an effort spearheaded by New York City Mayor Michael Bloomberg (who made a fortune in the data business), the city is using big data to improve public services and lower costs. One example is a new fire-prevention strategy.

Illegally subdivided buildings are far more likely than other buildings to go up in flames. The city gets 25,000 complaints about overcrowded buildings a year, but it has only 200 inspectors to respond. A small team of analytics specialists in the mayor’s office reckoned that big data could help resolve this imbalance between needs and resources. The team created a database of all 900,000 buildings in the city and augmented it with troves of data collected by 19 city agencies: records of tax liens, anomalies in utility usage, service cuts, missed payments, ambulance visits, local crime rates, rodent complaints, and more. Then, they compared this database to records of building fires from the past five years, ranked by severity, hoping to uncover correlations. Not surprisingly, among the predictors of a fire were the type of building and the year it was built. Less expected, however, was the finding that buildings obtaining permits for exterior brickwork correlated with lower risks of severe fire.

Using all this data allowed the team to create a system that could help them determine which overcrowding complaints needed urgent attention. None of the buildings’ characteristics they recorded caused fires; rather, they correlated with an increased or decreased risk of fire. That knowledge has proved immensely valuable: in the past, building inspectors issued vacate orders in 13 percent of their visits; using the new method, that figure rose to 70 percent -- a huge efficiency gain.
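The triage logic can be sketched in a few lines of Python. The attributes and weights here are invented for illustration; the city’s actual model was derived from five years of fire records rather than hand-set rules.

```python
# A toy sketch of risk-based triage: score each overcrowding complaint from
# building attributes that correlated with severe fires, then inspect the
# highest-scoring buildings first. Weights are invented, not the city's model.
complaints = [
    {"id": 101, "year_built": 1928, "tax_lien": True,  "brickwork_permit": False},
    {"id": 102, "year_built": 1985, "tax_lien": False, "brickwork_permit": True},
    {"id": 103, "year_built": 1911, "tax_lien": True,  "brickwork_permit": True},
]

def risk_score(b):
    score = 0.0
    score += 2.0 if b["year_built"] < 1950 else 0.5   # older buildings correlated with fires
    score += 1.5 if b["tax_lien"] else 0.0            # financial distress as a proxy signal
    score -= 1.0 if b["brickwork_permit"] else 0.0    # exterior upkeep correlated with lower risk
    return score

for b in sorted(complaints, key=risk_score, reverse=True):
    print(b["id"], round(risk_score(b), 1))
```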

Of course, insurance companies have long used similar methods to estimate fire risks, but they mainly rely on only a handful of attributes and usually ones that intuitively correspond with fires. By contrast, New York City’s big-data approach was able to examine many more variables, including ones that would not at first seem to have any relation to fire risk. And the city’s model was cheaper and faster, since it made use of existing data. Most important, the big-data predictions are probably more on target, too.

Big data is also helping increase the transparency of democratic governance. A movement has grown up around the idea of “open data,” which goes beyond the freedom-of-information laws that are now commonplace in developed democracies. Supporters call on governments to make the vast amounts of innocuous data that they hold easily available to the public. The United States has been at the forefront, with its Data.gov site, and many other countries have followed.

At the same time as governments promote the use of big data, they will also need to protect citizens against unhealthy market dominance. Companies such as Google, Amazon, and Facebook -- as well as lesser-known “data brokers,” such as Acxiom and Experian -- are amassing vast amounts of information on everyone and everything. Antitrust laws protect against the monopolization of markets for goods and services such as software or media outlets, because the sizes of the markets for those goods are relatively easy to estimate. But how should governments apply antitrust rules to big data, a market that is hard to define and that is constantly changing form? Meanwhile, privacy will become an even bigger worry, since more data will almost certainly lead to more compromised private information, a downside of big data that current technologies and laws seem unlikely to prevent.

Regulations governing big data might even emerge as a battleground among countries. European governments are already scrutinizing Google over a raft of antitrust and privacy concerns, in a scenario reminiscent of the antitrust enforcement actions the European Commission took against Microsoft beginning a decade ago. Facebook might become a target for similar actions all over the world, because it holds so much data about individuals. Diplomats should brace for fights over whether to treat information flows as similar to free trade: in the future, when China censors Internet searches, it might face complaints not only about unjustly muzzling speech but also about unfairly restraining commerce.

BIG DATA OR BIG BROTHER?

States will need to help protect their citizens and their markets from new vulnerabilities caused by big data. But there is another potential dark side: big data could become Big Brother. In all countries, but particularly in nondemocratic ones, big data exacerbates the existing asymmetry of power between the state and the people.

The asymmetry could well become so great that it leads to big-data authoritarianism, a possibility vividly imagined in science-fiction movies such as Minority Report. That 2002 film took place in a near-future dystopia in which the character played by Tom Cruise headed a “Precrime” police unit that relied on clairvoyants whose visions identified people who were about to commit crimes. The plot revolves around the system’s obvious potential for error and, worse yet, its denial of free will.

Although the idea of identifying potential wrongdoers before they have committed a crime seems fanciful, big data has allowed some authorities to take it seriously. In 2007, the Department of Homeland Security launched a research project called FAST (Future Attribute Screening Technology), aimed at identifying potential terrorists by analyzing data about individuals’ vital signs, body language, and other physiological patterns. Police forces in many cities, including Los Angeles, Memphis, Richmond, and Santa Cruz, have adopted “predictive policing” software, which analyzes data on previous crimes to identify where and when the next ones might be committed.

For the moment, these systems do not identify specific individuals as suspects. But that is the direction in which things seem to be heading. Perhaps such systems would identify which young people are most likely to shoplift. There might be decent reasons to get so specific, especially when it comes to preventing negative social outcomes other than crime. For example, if social workers could tell with 95 percent accuracy which teenage girls would get pregnant or which high school boys would drop out of school, wouldn’t they be remiss if they did not step in to help? It sounds tempting. Prevention is better than punishment, after all. But even an intervention that did not admonish and instead provided assistance could be construed as a penalty -- at the very least, one might be stigmatized in the eyes of others. In this case, the state’s actions would take the form of a penalty before any act were committed, obliterating the sanctity of free will.

Another worry is what could happen when governments put too much trust in the power of data. In his 1998 book, Seeing Like a State, the anthropologist James Scott documented the ways in which governments, in their zeal for quantification and data collection, sometimes end up making people’s lives miserable. They use maps to determine how to reorganize communities without first learning anything about the people who live there. They use long tables of data about harvests to decide to collectivize agriculture without knowing a whit about farming. They take all the imperfect, organic ways in which people have interacted over time and bend them to their needs, sometimes just to satisfy a desire for quantifiable order.

This misplaced trust in data can come back to bite. Organizations can be beguiled by data’s false charms and endow more meaning to the numbers than they deserve. That is one of the lessons of the Vietnam War. U.S. Secretary of Defense Robert McNamara became obsessed with using statistics as a way to measure the war’s progress. He and his colleagues fixated on the number of enemy fighters killed. Relied on by commanders and published daily in newspapers, the body count became the data point that defined an era. To the war’s supporters, it was proof of progress; to critics, it was evidence of the war’s immorality. Yet the statistics revealed very little about the complex reality of the conflict. The figures were frequently inaccurate and were of little value as a way to measure success. Although it is important to learn from data to improve lives, common sense must be permitted to override the spreadsheets.

HUMAN TOUCH

Big data is poised to reshape the way we live, work, and think. A worldview built on the importance of causation is being challenged by a preponderance of correlations. The possession of knowledge, which once meant an understanding of the past, is coming to mean an ability to predict the future. The challenges posed by big data will not be easy to resolve. Rather, they are simply the next step in the timeless debate over how to best understand the world.

Still, big data will become integral to addressing many of the world’s pressing problems. Tackling climate change will require analyzing pollution data to understand where best to focus efforts and find ways to mitigate problems. The sensors being placed all over the world, including those embedded in smartphones, provide a wealth of data that will allow climatologists to more accurately model global warming. Meanwhile, improving and lowering the cost of health care, especially for the world’s poor, will make it necessary to automate some tasks that currently require human judgment but could be done by a computer, such as examining biopsies for cancerous cells or detecting infections before symptoms fully emerge.

Ultimately, big data marks the moment when the “information society” finally fulfills the promise implied by its name. The data take center stage. All those digital bits that have been gathered can now be harnessed in novel ways to serve new purposes and unlock new forms of value. But this requires a new way of thinking and will challenge institutions and identities. In a world where data shape decisions more and more, what purpose will remain for people, or for intuition, or for going against the facts? If everyone appeals to the data and harnesses big-data tools, perhaps what will become the central point of differentiation is unpredictability: the human element of instinct, risk taking, accidents, and even error. If so, then there will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity to ensure that they are not crowded out by data and machine-made answers.

This has important implications for the notion of progress in society. Big data enables us to experiment faster and explore more leads. These advantages should produce more innovation. But at times, the spark of invention becomes what the data do not say. That is something that no amount of data can ever confirm or corroborate, since it has yet to exist. If Henry Ford had queried big-data algorithms to discover what his customers wanted, they would have come back with “a faster horse,” to recast his famous line. In a world of big data, it is the most human traits that will need to be fostered -- creativity, intuition, and intellectual ambition -- since human ingenuity is the source of progress.

Big data is a resource and a tool. It is meant to inform, rather than explain; it points toward understanding, but it can still lead to misunderstanding, depending on how well it is wielded. And however dazzling the power of big data appears, its seductive glimmer must never blind us to its inherent imperfections. Rather, we must adopt this technology with an appreciation not just of its power but also of its limitations.

KENNETH CUKIER is Data Editor of The Economist. VIKTOR MAYER-SCHOENBERGER is Professor of Internet Governance and Regulation at the Oxford Internet Institute. They are the authors of Big Data: A Revolution That Will Transform How We Live, Work, and Think (Houghton Mifflin Harcourt, 2013), from which this essay is adapted. © by Kenneth Cukier and Viktor Mayer-Schoenberger. Reprinted by permission of Houghton Mifflin Harcourt. April 3, 2013

The Mobile-Finance Revolution

How Cell Phones Can Spur Development

Jake Kendall and Rodger Voorhies March/April 2014

KAI-UWE WAERNER / COURTESY REUTERS Tanzania, October 2011.

The roughly 2.5 billion people in the world who live on less than $2 a day are not destined to remain in a state of chronic poverty. Every few years, somewhere between ten and 30 percent of the world’s poorest households manage to escape poverty, typically by finding steady employment or through entrepreneurial activities such as growing a business or improving agricultural harvests. During that same period, however, roughly an equal number of households slip below the poverty line. Health-related emergencies are the most common cause, but there are many more: crop failures, livestock deaths, farming-equipment breakdowns, even wedding expenses.

In many such situations, the most important buffers against crippling setbacks are financial tools such as personal savings, insurance, credit, or cash transfers from family and friends. Yet these are rarely available because most of the world’s poor lack access to even the most basic banking services. Globally, 77 percent of them do not have a savings account; in sub-Saharan Africa, the figure is 85 percent. An even greater number of poor people lack access to formal credit or insurance products. The main problem is not that the poor have nothing to save -- studies show that they do -- but rather that they are not profitable customers, so banks and other service providers do not try to reach them. As a result, poor people usually struggle to stitch together a patchwork of informal, often precarious arrangements to manage their financial lives.

Over the last few decades, microcredit programs -- through which lenders have granted millions of small loans to poor people -- have worked to address the problem. Institutions such as the Grameen Bank, which won the Nobel Peace Prize in 2006, have demonstrated impressive results with new financial arrangements, such as group loans that require weekly payments. Today, the industry provides loans to roughly 200 million borrowers -- an impressive number to be sure, but only enough to make a dent in the over two billion people who lack access to formal financial services.

Despite its success, the microfinance industry has faced major hurdles. Due to the high overhead costs of administering so many small loans, the interest rates and fees associated with microcredit can be steep, often reaching 100 percent annually. Moreover, a number of rigorous field studies have shown that even when lending programs successfully reach borrowers, there is only a limited increase in entrepreneurial activity -- and no measurable decrease in poverty rates. For years, the development community has promoted a narrative that borrowing and entrepreneurship have lifted large numbers of people out of poverty. But that narrative has not held up.

Two trends, however, indicate great promise for the next generation of financial-inclusion efforts. First, mobile technology has found its way to the developing world and spread at an astonishing pace. According to the World Bank, mobile signals now cover some 90 percent of the world’s poor, and there are, on average, more than 89 cell-phone accounts for every 100 people living in a developing country. That presents an extraordinary opportunity: mobile-based financial tools have the potential to dramatically lower the cost of delivering banking services to the poor.

Second, economists and other researchers have in recent years generated a much richer fact base from rigorous studies to inform future product offerings. Early on, both sides of the debate over the true value of microcredit programs for the poor relied mostly on anecdotal observations and gut instincts. But now, there are hundreds of studies to draw from. The flexible, low-cost models made possible by mobile technology and the evidence base to guide their design have thus created a major opportunity to deliver real value to the poor.

SHOW THEM THE MONEY

Mobile finance offers at least three major advantages over traditional financial models. First, digital transactions are essentially free. In-person services and cash transactions account for the majority of routine banking expenses. But mobile-finance clients keep their money in digital form, and so they can send and receive money often, even with distant counterparties, without creating significant transaction costs for their banks or mobile service providers. Second, mobile communications generate copious amounts of data, which banks and other providers can use to develop more profitable services and even to substitute for traditional credit scores (which can be hard for those without formal records or financial histories to obtain). Third, mobile platforms link banks to clients in real time. This means that banks can instantly relay account information or send reminders and clients can sign up for services quickly on their own.

The potential, in other words, is enormous. The benefits of credit, savings, and insurance are clear, but for most poor households, the simple ability to transfer money can be equally important. For example, a recent Gallup poll conducted in 11 sub-Saharan African countries found that over 50 percent of adults surveyed had made at least one payment to someone far away within the preceding 30 days. Eighty-three percent of them had used cash. Whether they were paying utility bills or sending money to their families, most had sent the money with bus drivers, had asked friends to carry it, or had delivered the payments themselves. The costs were high; moving physical cash, particularly in sub-Saharan Africa, is risky, unreliable, and slow.

Imagine what would happen if the poor had a better option. A recent study in Kenya found that access to a mobile-money product called M-Pesa, which allows clients to store money on their cell phones and send it at the touch of a button, increased the size and efficiency of the networks within which they moved money. That came in handy when poorer participants endured economic shocks spurred by unexpected events, such as a hospitalization or a house fire. Households with access to M-Pesa received more financial support from larger and more distant networks of friends and family. As a result, they were better able to survive hard times, maintaining their regular diets and keeping their children in school.

To consumers, the benefits of M-Pesa are self-evident. Today, according to a study by Kenya’s Financial Sector Deepening Trust, 62 percent of adults in the country have active accounts. And other countries have since launched their own versions of the product. In Tanzania, over 47 percent of households have a family member who has registered. In Uganda, 26 percent of adults are users. The rates of adoption have been extraordinary; by contrast, microlenders rarely get more than ten percent participation in their program areas.

Mobile money is useful for more than just emergency transfers. Regular remittances from family members working in other parts of the country, for example, make up a large share of the incomes of many poor households. A Gallup study in South Asia recently found that 72 percent of remittance-receiving households indicated that the cash transfers were “very important” to their financial situations. Studies of small-business owners show that they make use of mobile payments to improve their efficiency and expand their customer bases.

These technologies could also transform the way people interact with large formal institutions, especially by improving people’s access to government services. A study in Niger by researchers from Tufts University found that during a drought, allowing people to request emergency government support through their cell phones resulted in better diets for those people, compared with the diets of those who received cash handouts. The researchers concluded that women were more likely than men to control digital transfers (as opposed to cash transfers) and that they were more likely to spend the money on high-quality food.

Governments, meanwhile, stand to gain as much as consumers do. A McKinsey study in India found that the government could save $22 billion each year from digitizing all of its payments. Another study, by the Better Than Cash Alliance, a nonprofit that helps countries adopt electronic payment systems, found that the Mexican government’s shift to digital payments (which began in 1997) trimmed its spending on wages, pensions, and social welfare by 3.3 percent annually, or nearly $1.3 billion.

SAVINGS AND PHONES

In the developed world, bankers have long known that relatively simple nudges can have a big impact on long-term behavior. Banks regularly encourage clients to sign off on automatic contributions to their 401(k) retirement plans, set up automatic deposits into savings accounts from their paychecks, and open special accounts to save for a particular purpose.

Studies in the developing world confirm that, if anything, the poor need such decision aids even more than the rich, owing to the constant pressure they are under to spend their money on immediate needs. And cell phones make nudging easy. For example, a series of studies have shown that when clients receive text messages urging them to make regular savings deposits, they improve their balances over time. More draconian features have also proved effective, such as so-called commitment accounts, which impose financial discipline with large penalty fees.

Many poor people have already demonstrated their interest in financial mechanisms that encourage savings. In Africa, women commonly join groups called rotating savings and credit associations, or ROSCAs, which require them to attend weekly meetings and meet rigid deposit and withdrawal schedules. Studies suggest that in such countries as Cameroon, Gambia, Nigeria, and Togo, roughly half of all adults are members of a ROSCA, and similar group savings schemes are widespread outside Africa, as well. Research shows that members are drawn to the discipline of required regular payments and the social pressure of group meetings.

Mobile-banking applications have the potential to encourage financial discipline in even more effective ways. Seemingly marginal features designed to incentivize financial discipline can do much to set people on the path to financial prosperity. In one experiment, researchers allowed some small-scale farmers in Malawi to have their harvest proceeds directly deposited into commitment accounts. The farmers who were offered this option and chose to participate ended up investing 30 percent more in farm inputs than those who weren’t offered the option, leading to a 22 percent increase in revenues and a 17 percent increase in household consumption after the harvest.

Poor households, not unlike rich ones, are not well served by simple loans in isolation; they need a full suite of financial tools that work in concert to mitigate risk, fund investment, grow savings, and move money. Insurance, for example, can significantly affect how borrowers invest in their businesses. A recent field study in Ghana gave different groups of farmers cash grants to fund investments in farm inputs, crop insurance, or both. The farmers with crop insurance invested more in agricultural inputs, particularly in chemicals, land preparation, and hired labor. And they spent, on average, $266 more on cultivation than did the farmers without insurance. It was not the farmers’ lack of credit, then, that was the greatest barrier to expanding their businesses; it was risk.

Mobile applications allow banks to offer such services to huge numbers of customers in very short order. In November 2012, the Commercial Bank of Africa and the telecommunications firm Safaricom launched a product called M-Shwari, which enables M-Pesa users to open interest-accruing savings accounts and apply for short-term loans through their cell phones. The demand for the product proved overwhelming. By effectively eliminating the time it would have taken for users to sign up or apply in person, M-Shwari added roughly one million accounts in its first three months.

By attracting so many customers and tracking their behavior in real time, mobile platforms generate reams of useful data. People’s calling and transaction patterns can reveal valuable insights about the behavior of certain segments of the client population, demonstrating how variations in income levels, employment status, social connectedness, marital status, creditworthiness, or other attributes shape outcomes. Many studies have already shown how certain product features can affect some groups differently from others. In one Kenyan study, researchers gave clients ATM cards that permitted cash withdrawals at lowered costs and allowed the clients to access their savings accounts after hours and on weekends. The change ended up positively affecting married men and adversely affecting married women, whose husbands could more easily get their hands on the money saved in a joint account. Before the ATM cards, married women could cite the high withdrawal fees or the bank’s limited hours to discourage withdrawals. With the cards, moreover, husbands could get cash from an ATM themselves, whereas withdrawals at the branch office had usually required the wives to go in person during the hours their husbands were at work.

LOCATION, LOCATION, LOCATION

The high cost of basic banking infrastructure may be the biggest barrier to providing financial services to the poor. Banks place ATMs and branch offices almost exclusively in the wealthier, denser (and safer) areas of poor countries. The cost of such infrastructure often dwarfs the potential profits to be made in poorer, more rural areas. In contrast, mobile banking allows customers to carry out transactions in existing shops and even market stalls, creating denser networks of transaction points at a much lower cost.

For clients to fully benefit from mobile financial services, however, access to a physical office that deals in cash remains critical. When researchers studying the M-Pesa program in Kenya cross-referenced the locations of M-Pesa agents and the locations of households in the program, they found that the closer a household was to an M-Pesa kiosk, where cash and customer services were available, the more it benefited from the service. Beyond a certain distance, it becomes infeasible for clients to use a given financial service, no matter how much they need it.

Meanwhile, a number of studies have shown that increasing physical access points to the financial system can help lift local economies. Researchers in India have documented the effects of a regulation requiring banks to open rural branches in exchange for licenses to operate in more profitable urban areas. The data showed significant increases in lending and agricultural output in the areas that received branches due to the program, as well as 4–5 percent reductions in the number of people living in poverty. A similar study in Mexico found that in areas where bank branches were introduced, the number of people who owned informal businesses increased by 7.6 percent. There were also ripple effects: an uptick in employment and a seven percent increase in incomes.

In the right hands, then, access to financial tools can stimulate underserved economies and, at critical times, determine whether a poor household is able to capture an opportunity to move out of poverty or weather an otherwise debilitating financial shock. Thanks to new research, much more is known about what types of features can do the most to improve consumers’ lives. And due to the rapid proliferation of cell phones, it is now possible to deliver such services to more people than ever before. Both of these trends have set the stage for yet further innovation by banks, cell-phone companies, microlenders, and entrepreneurs -- all of whom have a role to play in delivering life-changing financial services to those who need them most.

JAKE KENDALL is Senior Program Officer for the Financial Services for the Poor program at the Bill & Melinda Gates Foundation. RODGER VOORHIES is Director of the Financial Services for the Poor program at the Bill & Melinda Gates Foundation.

February 12, 2014

Biology's Brave New World

The Promise and Perils of the Synbio Revolution

Laurie Garrett November/December 2013

CORBIS / THOMAS J. DEERINCK / SCIENCE PHOTO LIBRARY Germs 2.0: the first self-replicating bacteria made in a lab, May 2010.

In May 2010, the richest, most powerful man in biotechnology made a new creature. J. Craig Venter and his private-company team started with DNA and constructed a novel genetic sequence of more than one million coded bits of information known as nucleotides. Seven years earlier, Venter had been the first person in history to make a functioning creature from information. Looking at the strings of letters representing the DNA sequence for a virus called phi X174, which infects bacteria, he thought to himself, “I can assemble real DNA based on that computer information.” And so he did, creating a virus based on the phi X174 genomic code. He followed the same recipe later on to generate the DNA for his larger and more sophisticated creature. Venter and his team figured out how to make an artificial bacterial cell, inserted their man-made DNA genome inside, and watched as the organic life form they had synthesized moved, ate, breathed, and replicated itself.

As he was doing this, Venter tried to warn a largely oblivious humanity about what was coming. He cautioned in a 2009 interview, for example, that “we think once we do activate a genome that yes, it probably will impact people’s thinking about life.” Venter defined his new technology as “synthetic genomics,” which would “start in the computer in the digital world from digitized biology and make new DNA constructs for very specific purposes. . . . It can mean that as we learn the rules of life we will be able to develop robotics and computational systems that are self-learning systems.” “It’s the beginning of the new era of very rapid learning,” he continued. “There’s not a single aspect of human life that doesn’t have the potential to be totally transformed by these technologies in the future.”

Today, some call work such as Venter’s novel bacterial creation an example of “4-D printing.” 2-D printing is what we do every day by hitting “print” on our keyboards, causing a hard copy of an article or the like to spew from our old-fashioned ink-printing devices. Manufacturers, architects, artists, and others are now doing 3-D printing, using computer-generated designs to command devices loaded with plastic, carbon, graphite, and even food materials to construct three-dimensional products. With 4-D printing, manufacturers take the next crucial step: self-assembly or self-replication. What begins as a human idea, hammered out intellectually on a computer, is then sent to a 3-D printer, resulting in a creation capable of making copies of and transforming itself. Working with solid materials, Skylar Tibbits of the Massachusetts Institute of Technology creates complex physical substances that he calls “programmable materials that build themselves.” Venter and hundreds of synthetic biologists argue that 4-D printing is best accomplished by making life using life’s own building blocks, DNA.

When Venter’s team first created the phi X174 viral genome, Venter commissioned a large analysis of the implications of synthetic genomics for national security and public health. The resulting report warned that two issues were impeding appropriate governance of the new science. The first problem was that work on synthetic biology, or synbio, had become so cheap and easy that its practitioners were no longer classically trained biologists. This meant that there were no shared assumptions regarding the new field’s ethics, professional standards, or safety. The second problem was that existing standards, in some cases regulated by government agencies in the United States and other developed countries, were a generation old, therefore outdated, and also largely unknown to many younger practitioners.

Venter’s team predicted that as the cost of synthetic biology continued to drop, interest in the field would increase, and the ethical and practical concerns it raised would come increasingly to the fore. They were even more prescient than they knew. Combined with breakthroughs in another area of biology, “gain-of-function” (GOF) research, the synthetic genomics field has spawned a dizzying array of new possibilities, challenges, and national security threats. As the scientific community has started debating “human-directed evolution” and the merits of experiments that give relatively benign germs dangerous capacities for disease, the global health and biosecurity establishment remains well behind the curve, mired in antiquated notions about what threats are important and how best to counter them. In the United States, Congress and the executive branch have tried to prepare by creating finite lists of known pathogens and toxins and developing measures to surveil, police, and counter them; foreign governments and multilateral institutions, such as the UN and the Biological Weapons Convention, have been even less ambitious. Governance, in short, is focused on the old world of biology, in which scientists observed life from the outside, puzzling over its details and behavior by tinkering with its environment and then watching what happened. But in the new biology world, scientists can now create life themselves and learn about it from the inside. As Venter put it back in 2009, “What we have done so far is going to blow your freakin’ mind.”

CODING LIFE

Shortly after Venter’s game-changing experiment was announced, the National Academy of Sciences’ Institute of Medicine convened a special panel aimed at examining the brave new biology world’s ethical, scientific, and national security dimensions. Andrew Ellington and Jared Ellefson of the University of Texas at Austin argued that a new breed of biologists was taking over the frontiers of science -- a breed that views life forms and DNA much the way the technology wizards who spawned IBM, Cisco, and Apple once looked at basic electronics, transistors, and circuits. These two fields, each with spectacular private-sector and academic engagement, are colliding, merging, and transforming one another, as computer scientists speak of “DNA-based computation” and synthetic biologists talk of “life circuit boards.” The biologist has become an engineer, coding new life forms as desired.

Gerald Joyce of the Scripps Research Institute in La Jolla, California, frets that as the boundaries blur, biologists are now going to be directing evolution and that we are witnessing “the end of Darwinism.” “Life on Earth,” Joyce has noted, “has demonstrated extraordinary resiliency and inventiveness in adapting to highly disparate niches. Perhaps the most significant invention of life is a genetic system that has an extensible capacity for inventiveness, something that likely will not be achieved soon for synthetic biological systems. However, once informational macromolecules are given the opportunity to inherit profitable variation through self-sustained Darwinian evolution, they just may take on a life of their own.”

This is not hyperbole. All the key barriers to the artificial synthesis of viruses and bacteria have been overcome, at least on a proof-of-principle basis. In 2002, researchers at SUNY Stony Brook made a living polio virus, constructed from its genetic code. Three years later, scientists worried about pandemic influenza decided to re-create the devastating 1918 Spanish flu virus for research purposes, identifying key elements of the viral genes that gave that virus the ability to kill at least 50 million people in less than two years. What all this means is that the dual-use dilemma that first hit chemistry a century ago, and then hit physics a generation later, is now emerging with special force in contemporary biology.

Between 1894 and 1911, the German chemist Fritz Haber figured out how to mass-produce ammonia. This work revolutionized agriculture by generating the modern fertilizer industry. But the same research helped create chemical weapons for German use during World War I -- and Haber was crucial to both the positive and the negative efforts. Three years after Haber won the Nobel Prize in Chemistry, his compatriot Albert Einstein won a Nobel Prize for his contributions to physics. Einstein’s revolutionary theories of relativity, gravity, mass, and energy helped unravel the secrets of the cosmos and paved the way for the harnessing of nuclear energy. They also led to the atom bomb. The problem of “dual-use research of concern” (DURC) -- work that could have both beneficial and dangerous consequences -- was thus identified long ago for chemistry and physics, and it led to international treaties aimed at limiting the most worrisome applications of problematic work in each field. But in this respect, at least, biology lagged far behind, as the United States, the Soviet Union, and many other countries continued to pursue the development of biological weapons with relatively few restrictions. These efforts have not yielded much of military consequence, because those who aspire to use bioweapons have not found ways to transmit and disperse germs rapidly or to limit their effects to the intended targets alone. That could now be changing.

Dual-use concerns in biology have gained widespread publicity in the last couple of years thanks to GOF research, which attempts to start combating potential horrors by first creating them artificially in the lab. On September 12, 2011, Ron Fouchier of the Erasmus Medical Center, in Rotterdam, took the stage at a meeting in Malta of the European Scientific Working Group on Influenza. He announced that he had found a way to turn H5N1, a virus that almost exclusively infected birds, into a possible human-to-human flu. At that time, only 565 people were known to have contracted H5N1 flu, presumably from contact with birds, of which 331, or 59 percent, had died. The 1918 influenza pandemic had a lethality rate of only 2.5 percent yet led to more than 50 million deaths, so H5N1 seemed potentially catastrophic. Its saving grace was that it had not yet evolved into a strain that could readily spread directly from one human to another. Fouchier told the scientists in Malta that his Dutch group, funded by the U.S. National Institutes of Health, had “mutated the hell out of H5N1,” turning the bird flu into something that could infect ferrets (laboratory stand-ins for human beings). And then, Fouchier continued, he had done “something really, really stupid,” swabbing the noses of the infected ferrets and using the gathered viruses to infect another round of animals, repeating the process until he had a form of H5N1 that could spread through the air from one mammal to another.

“This is a very dangerous virus,” Fouchier told Scientific American. Then he asked, rhetorically, “Should these experiments be done?” His answer was yes, because the experiments might help identify the most dangerous strains of flu in nature, create targets for vaccine development, and alert the world to the possibility that H5N1 could become airborne. Shortly after Fouchier’s bombshell announcement, Yoshihiro Kawaoka, a University of Wisconsin virologist, who also received funding from the National Institutes of Health, revealed that he had performed similar experiments, also producing forms of the bird flu H5N1 that could spread through the air between ferrets. Kawaoka had taken the precaution of altering his experimental H5N1 strain to make it less dangerous to human beings, and both researchers executed their experiments in very high-security facilities, designated Biosafety Level (BSL) 3+, just below the top of the scale.

Despite their precautions, Fouchier and Kawaoka drew the wrath of many national security and public health experts, who demanded to know how the deliberate creation of potential pandemic flu strains could possibly be justified. A virtually unknown advisory committee to the National Institutes of Health, the National Science Advisory Board for Biosecurity, was activated, and it convened a series of contentious meetings in 2011–12. The advisory board first sought to mitigate the fallout from the H5N1 experiments by ordering, in December 2011, that the methods used to create these new mammalian forms of H5N1 never be published. Science and Nature were asked to redact the how-to sections of Fouchier’s and Kawaoka’s papers, out of a stated concern on the part of some advisory board members that the information constituted a cookbook for terrorists.

Michael Osterholm, a public health expert at the University of Minnesota and a member of the advisory board, was particularly concerned. He felt that a tipping point had been reached and that scientists ought to pause and develop appropriate strategies to ensure that future work of this sort was safely executed by people with beneficial intentions. “This is an issue that really needs to be considered at the international level by many parties,” Osterholm told journalists. “Influenza is virtually in a class by itself. Many other agents worked on within BSL-4 labs don’t have that transmissibility that we see with influenza. There are many agents worked on in BSL-4 that we wouldn’t want to escape. But I can’t think of any that have the potential to be transmitted around the world as with influenza.”

Paul Keim, a microbiologist at Northern Arizona University who was chair of the National Science Advisory Board for Biosecurity, had played a pivotal role in the FBI’s pursuit of the culprit behind the 2001 anthrax mailings, developing novel genetic fingerprinting techniques to trace the origins of the spores that were inserted into envelopes and mailed to news organizations and political leaders. Keim shared many of Osterholm’s concerns about public safety, and his anthrax experience gave him special anxiety about terrorism. “It’s not clear that these particular [experiments] have created something that would destroy the world; maybe it’ll be the next set of experiments that will be critical,” Keim told reporters. “And that’s what the world discussion needs to be about.”

In the end, however, the December 2011 do-not-publish decision settled nothing and was reversed by the advisory board four months later. It was successfully challenged by Fouchier and Kawaoka, both papers were published in their entirety by Science and Nature in 2012, and a temporary moratorium on dual-use research on influenza viruses was eventually lifted. In early 2013, the National Institutes of Health issued a series of biosafety and clearance guidelines for GOF research on flu viruses, but the restrictions applied only to work on influenza. And Osterholm, Keim, and most of the vocal opponents of the work retreated, allowing the advisory board to step back into obscurity.

A GLOBAL REMEDY?

In the last two years, the World Health Organization has held two summits in the hopes of finding a global solution to the Pandora’s box opened by the H5N1 experiments. The WHO’s initial concern was that flu scientists not violate the delicately maintained agreements among nations regarding disease surveillance and the sharing of outbreak information -- a very real concern, given that the 2005 International Health Regulations, which assign the WHO authority in the event of an epidemic and compel all nations to monitor infectious diseases and report any outbreaks, had taken 14 years to negotiate and had been challenged by some developing countries, such as Indonesia, from the day of their ratification.

Jakarta resisted sharing viral samples on the grounds that Western pharmaceutical companies would seek to patent products derived from them and ultimately reap large profits by selling vaccines and drugs back to poor countries at high prices. So Indonesia refused to share samples of the H5N1 flu virus that was spreading inside its borders; made wild accusations about the global health community in general, and the United States in particular; and even expelled the U.S. negotiator working on the issue. Eventually, a special pandemic-prevention agreement was hammered out and approved by the World Health Assembly (the decision-making body of the WHO) in 2011, serving as a companion to the International Health Regulations. But by 2012, fewer than 35 countries had managed to comply with the safety, surveillance, and research requirements of the regulations, and many samples of H5N1 and other pathogens of concern had yet to be shared with global authorities and databases. Public health experts worried that a pandemic might unfold before authorities knew what they were up against.

The WHO knew that Egypt’s primary public health laboratory in Cairo had been raided during the riots that ultimately toppled the Mubarak regime in early 2011 and that vials of germs had gone missing -- including samples of the H5N1 virus. Egypt has a robust H5N1 problem, with the second-largest number of human cases of the disease (behind, you guessed it, Indonesia). Although it was assumed that the rioters had no idea what was in the test tubes and were merely interested in looting the lab’s electronics and refrigeration equipment, nobody can say with certainty whether the flu vials were destroyed or taken.

From the WHO’s perspective, the Egyptian episode demonstrated that the extensive security precautions taken by the Dutch to ensure the security of Fouchier’s work and the ones that the Americans had adhered to regarding Kawaoka’s were not going to be followed in biology labs in many other countries. Margaret Chan, the WHO’s director general, and Keiji Fukuda, an assistant director general, remembered the SARS epidemic of 2003, during which Chinese leaders dissembled and dragged their feet for months, allowing the disease to spread to 29 countries. They knew that even in countries that claimed to have met all the standards of the International Health Regulations, there were no consistent dual-use safety regulations. Across most of Asia, the very concept of biosafety was a new one, and a source of confusion. Even in Europe, there were no consistent guidelines or definitions for any aspects of dual-use research, biosafety, or biosecurity. European countries were far more concerned about genetically modified food products than about pathogens and microbes; they were preoccupied with enforcing the 2000 Cartagena Protocol on Biosafety, which despite its name has nothing to do with terrorism, national security, or the sorts of issues raised by dual-use research; its focus is genetically modified organisms.

The WHO’s first dual-use summit, in February 2012, pushed Fouchier and Kawaoka to reveal the details of their experimental procedures and outcomes to their scientific colleagues. Fouchier’s boasting about mutations seemed less worrying when the scientist indicated that he had not used synthetic biological techniques and that although his virus had spread between caged ferrets, it had not killed any of them. The technical consultation on H5N1, which was dominated by flu virologists, led the scientists to decide that the work was less dangerous than previously thought and that the moratorium on it could soon be lifted.

An exasperated Osterholm told the New York Academy of Sciences that the United States and the WHO had no clear protocols for DURC, no standards for determining safety, and no plans for a coordinated global response. But many other scientists engaged in the debate were less concerned, and they complained that the potential public health benefits of GOF research might be held back by excessive worries about its potential risks. In meeting after meeting, they claimed, the FBI, the CIA, and other intelligence agencies had proved unable to characterize or quantify the risk of bioweapons terrorism, GOF work, or synthetic biological research.

I BELIEVE THE CHILDREN ARE OUR FUTURE

Advocates for open, fast-paced synthetic biological research, such as Drew Endy of Stanford University and Todd Kuiken of the Wilson Center, the latter one of the leaders of a growing do-it-yourself international biology movement, insist that attention should be paid not just to the dangers of synthetic biology but also to its promise. Endy reckons that two percent of the U.S. economy is already derived from genetic engineering and synthetic biology and that the sector is growing by 12 percent annually. His bioengineering department at Stanford operates on a budget of half a billion dollars a year, and Endy predicts that synthetic biology will in the near future lead to an economic and technological boom like that of Internet and social media technologies during the earlier part of this century.

Many biology students these days see the genetic engineering of existing life forms and the creation of new ones as the cutting edge of the field. Whether they are competing in science fairs or carrying out experiments, they have little time for debates surrounding dual-use research; they are simply plowing ahead. The International Genetically Engineered Machine contest, in which teams of college students compete to build new life forms, began at MIT in 2004; it was recently opened to high school teams as well. Last year’s contest drew more than 190 entries by youngsters from 34 countries. What sounds like science fiction to one generation is already the norm for another.

In just a few years, synthetic biological research has become relatively cheap and easy. In 2003, the Human Genome Project completed the first full sequencing of human DNA. It cost several billion dollars, involved thousands of scientists and technicians toiling in more than 160 labs, and took more than ten years. A decade later, it was possible to buy a sequencing device for several thousand dollars and sequence one’s entire genome at home in less than 24 hours. For even less, a private company will sequence your genome for you, and prices are still dropping. Sequencing costs have plummeted so far that the industry is no longer profitable in the developed world and has largely been outsourced to China. In vast lab warehouses outside Beijing and Shenzhen, automated sequencers now decipher, and massive computers store, more genetic information every month than the sum total of the information amassed from James Watson and Francis Crick’s 1953 discovery of the structure of DNA to Venter’s 2003 synthesis of the phi X174 genome.

To understand how the field of synthetic biology works now, it helps to use a practical example. Imagine a legitimate public health problem -- say, how to detect arsenic in drinking water in areas where ground-water supplies have been contaminated. Now imagine that a solution might be to create harmless bacteria that could be deposited in a water sample and would start to glow brightly in the presence of arsenic. No such creature exists in nature, but there are indeed creatures that glow (fireflies and some fish). In some cases, these creatures glow only when they are mating or feel threatened, so there are biological on-off switches. There are other microorganisms that can sense the presence of arsenic. And there are countless types of bacteria that are harmless to humans and easy to work with in the lab.

To combine these elements in your lab, you need to install an appropriate software program on your laptop and search the databases of relevant companies to locate and purchase the proper DNA units that code for luminescence, on-off switches, and arsenic sensing. Then, you need to purchase a supply of some sort of harmless bacteria. At that point, you just have to put the DNA components in a sensible sequence, insert the resulting DNA code into the bacterial DNA, and test to see if the bacteria are healthy and capable of replicating themselves. To test the results, all you have to do is drop some arsenic in a bottle of water, add some of your man-made bacteria, and shake: if the water starts to glow, bingo. (This slightly oversimplified scenario is based on one that was actually carried out by a team from the University of Edinburgh in the International Genetically Engineered Machine contest in 2006.)

The most difficult part of the process now is putting the DNA components in a sensible sequence, but that is unlikely to be true for long. The world of biosynthesis is hooking up with 3-D printing, so scientists can now load nucleotides into a 3-D “bioprinter” that generates genomes. And they can collaborate across the globe, with scientists in one city designing a genetic sequence on a computer and sending the code to a printer somewhere else -- anywhere else connected to the Internet. The code might be for the creation of a life-saving medicine or vaccine. Or it might be information that turns the tiny phi X174 virus that Venter worked on a decade ago into something that kills human cells, or makes nasty bacteria resistant to antibiotics, or creates some entirely new viral strain.
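
As a rough illustration of how mechanical the digital design step has become, the following Python sketch strings together catalog "parts" into a single construct. The part names and sequences are invented placeholders with no biological meaning, and real parts registries, synthesis services, and assembly protocols are far more involved; this is only meant to convey the flavor of design-by-concatenation.

```python
# Toy illustration of assembling a biosensor from catalog DNA "parts".
# The sequences below are invented placeholders, not real genetic elements.

ARSENIC_SENSOR_PROMOTER = "TTGACA" + "ATAAT" * 3   # hypothetical arsenic-responsive promoter
ON_OFF_SWITCH = "AGGAGG"                           # hypothetical regulatory "switch"
GLOW_REPORTER = "ATG" + "GGC" * 20 + "TAA"         # hypothetical luminescence gene


def assemble_construct(*parts: str) -> str:
    """Concatenate DNA parts into one synthetic sequence and sanity-check the alphabet."""
    construct = "".join(parts)
    if not set(construct) <= set("ACGT"):
        raise ValueError("construct contains non-DNA characters")
    return construct


biosensor = assemble_construct(ARSENIC_SENSOR_PROMOTER, ON_OFF_SWITCH, GLOW_REPORTER)
print(f"Synthetic biosensor construct: {len(biosensor)} nucleotides")
# In practice, a string like this would be sent to a DNA synthesis service,
# inserted into a harmless host bacterium, and tested against arsenic-spiked water.
```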

INFORMATION, PLEASE

What stymies the very few national security and law enforcement experts closely following this biological revolution is the realization that the key component is simply information. While virtually all current laws in this field, both local and global, restrict and track organisms of concern (such as, say, the Ebola virus), tracking information is all but impossible. Code can be buried anywhere -- al Qaeda operatives have hidden attack instructions inside porn videos, and a seemingly innocent tweet could direct readers to an obscure Internet location containing genomic code ready to be downloaded to a 3-D printer. Suddenly, what started as a biology problem has become a matter of information security.

When the WHO convened its second dual-use summit, therefore, in February 2013, about a third of the scientists and government officials in attendance were from the United States, representing at least 15 different agencies as diverse as the FBI, the Centers for Disease Control and Prevention, the Department of Defense, and the Office of the U.S. Trade Representative. Although other countries brought strong contingents, the message from the Obama administration was clear: we are worried.

Each country party to the Biological Weapons Convention is required to designate one agency to be responsible for guaranteeing compliance with the treaty’s provisions. For the United States, that agency is the FBI. So now, a tiny office of the FBI, made even smaller through recent congressional budget cuts and sequestration, engages the scientific community and tries to spot DURC. But the FBI has nothing like the scientific expertise that the biologists themselves have, and so in practice, it must rely on the researchers to police themselves -- an obviously problematic situation.

Other countries have tried to grapple with the dual-use problem in other ways. Denmark, for example, has a licensing procedure for both public- and private-sector research. It requires researchers to register their intentions before executing experiments. The labs and personnel are screened for possible security concerns and issued licenses that state the terms of their allowable work. Some of the applications and licenses are classified, guaranteeing the private sector trade secrecy. Such an effort is possible there, however, only because the scale of biological research in the country is so small: fewer than 100 licenses are currently being monitored.

The Dutch government sought to control Fouchier’s publication of how he modified the H5N1 virus through the implementation of its export-control laws, with the information in question being the commodity deemed too sensitive to export. Although the government lifted the ban after the first WHO summit, a district court later ruled that Fouchier’s publication violated EU law. Fouchier is appealing the decision, which could have profound implications across Europe for the exchange of similar research. Among the lessons of the recent U.S. intelligence leaks, however, is that it may well be impossible to have airtight controls over the transmission of digital information if the parties involved are sufficiently determined and creative.

In line with their emerging engineering perspective, many biologists now refer to their genomics work as “bar-coding.” Just as manufacturers put bar codes on products in the supermarket to reveal the product’s identity and price when scanned, so biologists are racing to genetically sequence plants, animals, fish, birds, and microorganisms all over the world and taxonomically tag them with a DNA sequence that is unique to the species -- its “bar code.” It is possible to insert bar-code identifiers into synthesized or GOF-modified organisms, allowing law enforcement and public health officials to track and trace any use or accidental release of man-made or altered life forms. Such an approach has been used for genetically modified seeds and agricultural products, and there is no good reason not to mandate such labeling for potentially worrisome dual-use work. But bar-coding has to be incorporated by the original researchers, and it is not going to be implemented by those with malicious intentions. So there are no quick or easy technological fixes for the problem.
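
A toy Python sketch of the bar-coding idea, using invented tags and a randomly generated stand-in genome, captures both its appeal and the limitation noted above: a registered tag is trivial to detect, but material that was never tagged reveals nothing about its origin.

```python
# Illustrative only: embed a bar-code tag in a synthetic genome and scan for known tags.
import random

BARCODE_REGISTRY = {
    "GATTACAGATTACA": "Lab A / project 17 (hypothetical)",
    "CCGGTTAACCGGTT": "Lab B / biosensor strain (hypothetical)",
}


def tag_genome(genome: str, barcode: str, position: int = 0) -> str:
    """Insert a bar-code sequence into a genome at the chosen position."""
    return genome[:position] + barcode + genome[position:]


def scan_for_barcodes(sample: str) -> list:
    """Return the registered origins of any bar codes found in a sample."""
    return [origin for tag, origin in BARCODE_REGISTRY.items() if tag in sample]


random.seed(0)
stand_in_genome = "".join(random.choice("ACGT") for _ in range(500))
tagged = tag_genome(stand_in_genome, "GATTACAGATTACA", position=100)

print(scan_for_barcodes(tagged))          # identifies the (hypothetical) source lab
print(scan_for_barcodes(stand_in_genome)) # [] -- untagged material cannot be traced
```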

FROM WHO TO HAJ

The 2013 WHO summit failed to reach meaningful solutions to dual-use research problems. The financially strapped WHO couldn’t find the resources to follow up on any of the recommendations produced by the summit. Worse, the attendees could not even manage to come up with a common framework for discussion of the issue. Poor nations felt it was an extremely low priority, with African representatives complaining that their countries didn’t have the resources to implement biosafety guidelines. As one representative put it, speaking on the condition of anonymity, “We are the ones that actually suffer from all of these diseases. We are the ones that need this research. But we cannot do it. We do not have the facilities. We do not have the resources. And now, with all these DURC worries, our people cannot get into your laboratories to work by your side [in the United States or Europe] for security reasons. This whole DURC issue is simply holding us back, whether that is the intention or not.”

Noticeably quiet at the three-day conference were the representatives from large developing countries such as Brazil, China, India, and South Africa. And when any of them did speak up, it was to emphasize their concerns about who would hold the patents on products made with dual-use research, to insist on the need for technology transfer, or to mouth platitudes about how their countries’ researchers already operated under strict scrutiny. The Chinese delegates, in particular, were adamant: all necessary provisions to ensure biological safety, they assured the gathering, are in place in their country. Two months after the meeting, a team of scientists at China’s National Avian Influenza Reference Laboratory at the Harbin Veterinary Research Institute used GOF techniques to manufacture 127 forms of the influenza virus, all based on H5N1, combined with genetic attributes found in dozens of other types of flu. The Chinese team had taken the work of Fouchier and Kawaoka and built on it many times over, adding some synthetic biological spins to the work. And five of their man-made superflu strains proved capable of spreading through the air between guinea pigs, killing them.

Less than a decade ago, the international virology community went into an uproar when U.S. scientists contemplated inserting a gene into stockpiled smallpox viruses that would have made solutions containing the virus turn green, for rapid identification purposes. What the U.S. researchers thought would be a smart way to track the deadly virus was deemed a “crime against humanity.”

Earlier this year, in contrast, when a new type of bird flu called H7N9 emerged in China, virologists called for GOF research as a matter of public health urgency. When the virus was subjected to genetic scrutiny, both Fouchier and Kawaoka declared it dangerous, noting that the very genetic changes they had made to H5N1 were already present in the H7N9 strain. In August, Fouchier’s group published the results of experiments that showed that the H7N9 virus could infect ferrets and spread through the air from one animal to another. And Fouchier, Kawaoka, and 20 other virologists called for an extensive series of GOF experiments on the H7N9 virus, allowing genetic modifications sufficient to turn the bird flu into a clear human-to-human transmissible pathogen so as to better prepare for countering it.

As health research authorities in the relevant countries mull the scientists’ request to manipulate the H7N9 virus, other microbes offer up mysteries that might be resolved using GOF techniques. The Middle East respiratory syndrome, or MERS, appeared seemingly out of nowhere in June 2012 in Saudi Arabia, and by September 2013, it had infected 132 people, killing almost half of them. Although the virus is similar to SARS, much about the disease and its origins is unknown. There were numerous cases of apparent human-to-human transmission of MERS, especially within hospitals, and Saudi health officials worried about the possible spread of MERS throughout the Islamic world. There is no vaccine or cure for MERS. If work to determine the transmissibility of H7N9 is to be permitted, shouldn’t researchers do something similar to see what it would take to transform MERS into a casually transmitted form, likely to spread, for example, among haj pilgrims?

When HIV emerged in the early 1980s, nobody was sure just how the virus was transmitted, and many health-care workers feared that they could contract the disease, then 99 percent lethal, through contact with their patients. Schools all over the United States banned HIV-positive children, and most sports leagues forbade infected athletes from playing (until the NBA star Magic Johnson bravely revealed that he was infected, turning the tide against such bans). Had it been technically possible to do so, would it have been wise to deliberately alter the virus then, giving it the capacity to spread through the air or through casual contact?

WHAT NOW?

Scientists and security experts will never come to a consensus about the risks of dual-use research in synthetic biology. After all, almost 35 years after smallpox was eradicated, debates still rage over whether to destroy the last remaining samples of the virus. The benefits of synthetic biological research are difficult to assess. Its proponents believe it will transform the world as much as the ongoing revolution in information technology has, but some others are skeptical. Moving aggressively to contain the possible downsides of dual-use research could hamper scientific development. If it were to get truly seized by the issue, the U.S. government, for example, could start to weave a vast bureaucratic web of regulation and surveillance far exceeding that established elsewhere, succeeding only in setting its own national scientific efforts back while driving cutting-edge research to move abroad. Unilateral action by any government is destined to fail.

What this means is that political leaders should not wait for clarity and perfect information, nor rush to develop restrictive controls, nor rely on scientific self-regulation. Instead, they should accept that the synthetic biology revolution is here to stay, monitor it closely, and try to take appropriate actions to contain some of its most obvious risks, such as the accidental leaking or deliberate release of dangerous organisms.

The first step in this regard should be to strengthen national and global capacities for epidemiological surveillance. In the United States, such surveillance has been weakened by budget cuts and bureaucratic overstretch at the federal, state, and local levels. The Centers for Disease Control and the U.S. Department of Agriculture represent the United States’ first line of defense against microbial threats to human health, plants, and livestock, but both agencies have been cut to the bone. The Centers for Disease Control’s budget has been cut by 25 percent since 2010, and it recently dropped by a further five percent thanks to sequestration, with the cuts including funding that supported 50,000 state, territorial, city, and county public health officers. It should be a no-brainer for Congress to restore that funding and other support for the nation’s public health army.

At the same time, the Centers for Disease Control and the Department of Agriculture must become better at what they do. In the coming age of novel microbes, focusing attention on a small list of special pathogens and toxins, such as the Ebola virus, anthrax, and botulinum, offers a false sense of security. Even the recent suggestion that H5N1 be added to the National Select Agent Registry, which keeps track of potentially dangerous biological agents and toxins, seems beside the point: a simple, ubiquitous microbe such as E. coli, a bacterium that resides in the guts of every human being, can now be transformed into a killer germ capable of wreaking far more havoc than anything on that registry.

Solving the puzzle of just what to watch for now and how to spot it will require cooperative thinking across national and professional boundaries. Within the United States, leaders of organizations such as the Centers for Disease Control, the FBI, the Department of Health and Human Services, the Department of Defense, and the intelligence agencies will need to collaborate and pool their information and expertise. And internationally, multilateral groups such as the WHO and its food and agriculture counterparts will need to work with agencies and institutions such as Interpol, the Association of Southeast Asian Nations, the Pan American Health Organization, and the African Union.

The Biological Weapons Convention process can serve as a multilateral basis for DURC-related dialogue. It offers a neutral platform accessible to nearly every government in the world. But that process is weak at present, unable to provide verification akin to that ensured by its nuclear and chemical weapons counterparts. Given their own problems, in fact, international institutions are currently ill equipped to handle the dual-use research issue. Grappling with severe budget constraints for the third year in a row, the WHO, for example, has shrunk in size and influence, and its epidemiological identification-and-response capacity has been particularly devastated.

It is in the United States’ own interests, as well as those of other countries, to have a thriving global epidemiological response capability housed within the WHO, acting under the provisions of the International Health Regulations. U.S. disease sleuths may not be welcome everywhere in the world, but WHO representatives, at least in principle, are allowed inside nearly every country. Congress should therefore appropriate $100 million a year for five years for direct support of the WHO’s epidemiological surveillance-and-response system. To make sure U.S. underwriting doesn’t become a meaningless crutch, Washington could make it clear to the WHO’s World Health Assembly that some of that American support should be directed toward building indigenous epidemiological surveillance capabilities in developing countries, in order to bring them into compliance with the International Health Regulations. If U.S. legislators feared that such support for the WHO would morph into a multiyear entitlement program, they could have Washington’s financing commitment start in 2014 and gradually decrease to zero by 2019, as other donor countries added their own assistance and recipient countries reached sustainable self-reliance. Congress should also continue the U.S. Agency for International Development’s PREDICT Project, which is tasked with identifying new disease threats and to date has trained 1,500 people worldwide and discovered 200 previously unknown viruses.

Any global surveillance effort will require harmonized standards. At present, however, there are no agreed-on biosafety laboratory standards or definitions of various aspects of biosecurity, GOF research, or even DURC. So key U.S. agencies need to work closely with their foreign counterparts to hash out such standards and definitions and promulgate them. A model for emulation here might be the Codex Alimentarius, established by the UN Food and Agriculture Organization and the WHO in 1963 to standardize all food-safety guidelines worldwide.

In an era when e-mailed gene sequences have rendered the test tube obsolete, the proper boundaries of exports and export controls are increasingly difficult to define. At the core of the dual-use research problem is information, rather than microbes, and overregulating the flow of information risks stifling science and crippling international collaborative research. To deal with this problem, the U.S. Department of Commerce, the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service, and the Office of the U.S. Trade Representative must create a regulatory framework appropriate to dual-use research. Here, a model for regulation might draw from the experiences of the International Plant Protection Convention and the Animal and Plant Health Inspection Service’s engagement through the U.S. Trade Representative’s Office of Services and Investment. For Internet traffic in genomes, many nucleotide distribution centers already monitor “sequences of concern,” demanding special information on individuals seeking pathogen-related genetic details. This approach should be embraced by governments.

So what should governments and institutions be on the lookout for? Evidence of the covert deliberate alteration of a life form that turns a creature into a more dangerous entity. If governments permitted or supported such research, they would be accused of violating the Biological Weapons Convention. The United States is by far the largest funder of basic science and the world’s powerhouse of biological research, and so it would be at the greatest risk of being the target of such accusations. But sunlight is a good disinfectant, and it is legitimate to ask for any such research to be explained and defended openly.

The State Department, in concert with the Department of Health and Human Services’ Office of Global Affairs, should develop briefing materials for diplomatic personnel, explaining synthetic biology, GOF work, and DURC and thus balancing the United States’ image as the foremost center of biomedical research against concerns about the creation of man-made pathogens. The State Department should promote cooperation on detecting and controlling DURC and on managing the shared global risk of the inappropriate release of synthetic pathogens; it should also support assistance programs aimed at hardening the safety of labs and monitoring them worldwide.
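
The screening systems that synthesis companies actually run are proprietary and considerably more sophisticated, but a simplified Python sketch of the underlying idea, flagging orders that share long exact subsequences with a watchlist, might look like the following. The watchlist entries are invented placeholders, not real pathogen sequences.

```python
# Simplified sketch of order screening against "sequences of concern".
# Watchlist fragments are invented placeholders, not real pathogen DNA.

WATCHLIST = {
    "toxin-like fragment (placeholder)": "ATGGCCTTTAAACCCGGGTTT",
    "virulence-factor fragment (placeholder)": "TTTAAACCCGGGATGATGATG",
}


def screen_order(order_sequence: str, min_overlap: int = 15) -> list:
    """Flag watchlist entries sharing an exact subsequence of min_overlap bases with the order."""
    flags = []
    for name, fragment in WATCHLIST.items():
        for start in range(len(fragment) - min_overlap + 1):
            if fragment[start:start + min_overlap] in order_sequence:
                flags.append(name)
                break
    return flags


customer_order = "GGG" * 10 + "ATGGCCTTTAAACCCGGG" + "AAA" * 10
hits = screen_order(customer_order)
if hits:
    print("Order held for manual review; matches:", hits)  # would trigger a request for more information
else:
    print("No watchlist matches found.")
```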

The tracking of novel DNA and life forms should be implemented on a voluntary or mandatory basis immediately. Private biotechnology companies and distributors of DNA components should assign biosecurity tags to all their man-made products. The trade in genomic sequences should be transparent and traceable, featuring nucleotide tags that can be monitored. The genomic industry should self-finance the necessary monitoring and enforcement of standards of practice and permit unrestricted government inspections in the event of breakdowns in biosafety or lab security.

Last year, Friends of the Earth, the International Center for Technology Assessment, and the ETC Group jointly issued a report called The Principles for the Oversight of Synthetic Biology, which called for the insertion of suicide genes in man-made and GOF-altered organisms -- sequences that can be activated through simple changes in the organisms’ environs, terminating their function. Although such suicide signals may be technically difficult to implement at this time, dual-use research should strive to include this feature. The three organizations have also called on industry to carry damage and liability insurance covering all synthetic biological research and products, a seemingly obvious and wise precaution. The BioBricks Foundation, meanwhile, is the loudest proponent of synthetic biology today, proclaiming its mission as being “to ensure that the engineering of biology is conducted in an open and ethical manner to benefit all people and the planet. . . . We envision synthetic biology as a force for good in the world.” Such ethics-based scientific organizations can drive awareness of the field and its problems and increase sensitivity among researchers to legitimate public concerns, and so their activities should be encouraged and expanded.

The controversies and concerns surrounding dual-use research in synthetic biology have arisen in less than four years, starting from the moment in 2010 when Venter announced his team’s creation of a new life form described as “the first self-replicating species on the planet whose parent is a computer.” Before Venter’s group raced down such a godlike path, it went to the Obama White House, briefing officials on a range of policy and ethical issues the project raised. For a while, the administration considered classifying the effort, worrying that it might spawn grave dangers. Instead, much to Venter’s delight, the White House opted for full transparency and publication. “Perhaps it’s a giant philosophical change in how we view life,” Venter said with a shrug at a Washington press conference. He wasn’t sure. But he did feel confident that what he called “a very powerful set of tools” would lead to flu vaccines manufactured overnight, possibly a vaccine for the AIDS virus, and maybe microbes that consume carbon dioxide and emit a safe energy alternative to fossil fuels. Now that synthetic biology is here to stay, the challenge is how to ensure that future generations see its emergence as more boon than bane.

LAURIE GARRETT is Senior Fellow for Global Health at the Council on Foreign Relations.

October 15, 2013

The Robots Are Coming

How Technological Breakthroughs Will Transform Everyday Life

Daniela Rus July/August 2015

YUYA SHINO / REUTERS A hundred humanoid communication robots called Robi perform a synchronized dance during a promotional event called 100 Robi, for the Weekly Robi Magazine, in Tokyo, January 20, 2015.

Robots have the potential to greatly improve the quality of our lives at home, at work, and at play. Customized robots working alongside people will create new jobs, improve the quality of existing jobs, and give people more time to focus on what they find interesting, important, and exciting. Commuting to work in driverless cars will allow people to read, reply to e-mails, watch videos, and even nap. After dropping off one passenger, a driverless car will pick up its next rider, coordinating with the other self-driving cars in a system designed to minimize traffic and wait times—and all the while driving more safely and efficiently than humans.

Yet the objective of robotics is not to replace humans by mechanizing and automating tasks; it is to find ways for machines to assist and collaborate with humans more effectively. Robots are better than humans at crunching numbers, lifting heavy objects, and, in certain contexts, moving with precision. Humans are better than robots at abstraction, generalization, and creative thinking, thanks to their ability to reason, draw from prior experience, and imagine. By working together, robots and humans can augment and complement each other’s skills.

WOLFGANG RATTAY / REUTERS A robot in the Robotic Kitchen prototype created by Moley Robotics cooks a crab soup at the company's booth at the world's largest industrial technology fair, the Hannover Messe, in Hanover, Germany, April 13, 2015.

Still, there are significant gaps between where robots are today and the promise of a future era of “pervasive robotics,” when robots will be integrated into the fabric of daily life, becoming as common as computers and smartphones are today, performing many specialized tasks, and often operating side by side with humans. Current research aims to improve the way robots are made, how they move themselves and manipulate objects, how they reason, how they perceive their environments, and how they cooperate with one another and with humans.

Creating a world of pervasive, customized robots is a major challenge, but its scope is not unlike that of the problem computer scientists faced nearly three decades ago, when they dreamed of a world where computers would become integral parts of human societies. In the words of Mark Weiser, a chief scientist at Xerox’s Palo Alto Research Center in the 1990s, who is considered the father of so-called ubiquitous computing: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Computers have already achieved that kind of ubiquity. In the future, robots will, too.

YOUR OWN PERSONAL ROBOT A robot’s capabilities are defined by what its body can do and what its brain can compute and control. Today’s robots can perform basic locomotion on the ground, in the air, and in the water. They can recognize objects, map new environments, perform “pick-and-place” operations on an assembly line, imitate simple human motions, acquire simple skills, and even act in coordination with other robots and human partners. One place where these skills are on display is at the annual RoboCup, a robot soccer World Cup, during which teams of robots coordinate to dribble, pass, shoot, and score goals.

This range of functionality has been made possible by innovations in robot design and advances in the algorithms that guide robot perception, reasoning, control, and coordination. Robotics has benefited enormously from progress in many areas: computation, data storage, the scale and performance of the Internet, wireless communication, electronics, and design and manufacturing tools. The costs of hardware have dropped even as the electromechanical components used in robotic devices have become more reliable and the knowledge base available to intelligent machines has grown thanks to the Internet. It has become possible to imagine the leap from the personal computer to the personal robot.

In recent years, the promise of robotics has been particularly visible in the transportation sector. Many major car manufacturers have announced plans to build self-driving cars and predict that they will be able to sell them to consumers by 2020. Google’s self-driving cars have now driven close to two million miles with only 11 minor accidents, most of them caused by human error; the company will begin testing the cars on public roads this summer. Several universities around the world have also launched self-driving-car projects. Meanwhile, California, Florida, Michigan, and Nevada have all passed legislation to allow autonomous cars on their roads, and many other state legislatures in the United States are considering such measures. Recently, an annual report by Singapore’s Land Transport Authority predicted that “shared autonomous driving”—fleets of self-driving cars providing customized transportation—could reduce the number of cars on the road by around 80 percent, decreasing travel times and pollution.

ERIC RISBERG / COURTESY AP Robot, you can drive my car: Google’s self-driving cars, May 2014.

Self-driving cars would not merely represent a private luxury: as the cost of producing and maintaining them falls, their spread could greatly improve public transportation. Imagine a mass transit system with two layers: a network of large vehicles, such as trains and buses, that would handle long- distance trips and complementary fleets of small self-driving cars that would offer short, customized rides, picking up passengers at major hubs and also responding to individual requests for rides from almost anywhere. In 2014, the Future Urban Mobility project, which is part of the Singapore-MIT Alliance for Research and Technology, invited the public to ride on self-driving buggies that resembled golf carts at the Chinese Garden in Singapore, a park with winding alleys surrounded by trees, benches, and strolling people. More than 500 people took part. The robotic vehicles stayed on the paths, avoided pedestrians, and brought their passengers to their selected destinations. So far, that level of autonomous-driving performance has been possible only in low-speed, low-complexity environments. Robotic vehicles cannot yet handle all the complexities of driving “in the wild,” such as inclement weather and complex traffic situations. These issues are the focus of ongoing research.

AS YOU LIKE IT The broad adoption of robots will require a natural integration of intelligent machines into the human world rather than an integration of humans into the machines’ world. Despite recent significant progress toward that goal, problems remain in three important areas. It still takes too much time to make new robots, today’s robots are still quite limited in their ability to perceive and reason about their surroundings, and robotic communication is still quite brittle.

Many different types of robots are available today, but they all take a great deal of time to produce. Today’s robot bodies are difficult to adapt or extend, and thus robots still have limited capabilities and limited applications. Rapidly fabricating new robots, add-on modules, fixtures, and specialized tools is not a real option, as the process of design, assembly, and programming is long and cumbersome. What’s needed are design and fabrication tools that will speed up the customized manufacturing of robots. I belong to a team of researchers from Harvard, MIT, and the University of Pennsylvania currently working to create a “robot compiler” that could take a particular specification—for example, “I want a robot to tidy up the room”—and compute a robot design, a fabrication plan, and a custom programming environment for using the robot.

TYRONE SIU / COURTESY REUTERS A bipedal humanoid robot developed by an American robotics company is presented to the media in Hong Kong, October 2013.

Better-customized robots would help automate a wide range of tasks. Consider manufacturing. Currently, the use of automation in factories is not uniform across all industries. The car industry automates approximately 80 percent of its assembly processes, which consist of many repeatable actions. In contrast, only around ten percent of the assembly processes for electronics, such as cell phones, are automated, because such products change frequently and are highly customized. Tailor-made robots could help close this gap by reducing setup times for automation in industries that rely on customization and whose products have short life cycles. Specialized robots would know where things are stored, how to put things together, how to interact with people, how to transport parts from one place to another, how to pack things, and how to reconfigure an assembly line. In a factory equipped with such robots, human workers would still be in control, and robots would assist them.

DOES NOT COMPUTE A second challenge involved in integrating robots into everyday life is the need to increase their reasoning abilities. Today’s robots can perform only limited reasoning because their computations are carefully specified. Everything a robot does is spelled out with simple instructions, and the scope of the robot’s reasoning is entirely contained in its program. Furthermore, a robot’s perception of its environment through its sensors is quite limited. Tasks that humans take for granted—for example, answering the question, “Have I been here before?”—are extremely difficult for robots. Robots use sensors such as cameras and scanners to record the features of the places they visit. But it is hard for a machine to differentiate between features that belong to a scene it has already observed and features of a new scene that happens to contain some of the same objects. In general, robots collect too much low-level data. Current research on machine learning is focused on developing algorithms that can help extract the information that will be useful to a robot from large data sets. Such algorithms will help a robot summarize its history and thus significantly reduce, for example, the number of images it requires to answer that question, “Have I been here before?”
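As a rough illustration of what such a comparison involves (not a description of any particular system), here is a minimal sketch in which each scene is summarized as a histogram of visual features and a new view is matched against stored ones; the data, the similarity threshold, and the function names are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two feature histograms (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def have_i_been_here_before(new_scene, visited_scenes, threshold=0.9):
    """Return True if the new scene closely matches any previously stored scene.

    Each scene is a histogram counting how often each quantized visual
    feature (a "visual word") appears in the camera images of that place.
    """
    return any(cosine_similarity(new_scene, old) >= threshold
               for old in visited_scenes)

# Hypothetical data: three previously visited scenes and one new camera view.
visited = [np.array([12, 0, 5, 3]),
           np.array([0, 8, 1, 7]),
           np.array([4, 4, 4, 4])]
new_view = np.array([11, 1, 6, 3])  # shares most features with the first scene

print(have_i_been_here_before(new_view, visited))  # True
```

The difficulty the article points to is that two genuinely different places can share enough features to pass such a test, which is why current machine-learning research concentrates on extracting more discriminative summaries from the flood of low-level sensor data.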

Robots also cannot cope with unexpected situations. If a robot encounters circumstances that it has not been programmed to handle or that fall outside the scope of its capabilities, it enters an “error” state and stops operating. Often, the robot cannot communicate the cause of the error. Robots need to learn how to adjust their programs so as to adapt to their surroundings and interact more easily with people, their environments, and other machines.

Today, everyone with Internet access—including robots—can easily obtain incredible amounts of information. Robots could take advantage of this information to make better decisions. For example, a dog-walking robot could find weather reports online and then consult its own stored data to determine the ideal length of a walk and the optimal route: perhaps a short walk if it’s hot or raining, or a long walk to a nearby park where other dog walkers tend to congregate if it’s pleasant out.
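A toy sketch of that kind of decision rule might look like the following; the thresholds, routes, and inputs are invented for illustration, and a real system would presumably refine them from weather services and the robot's own logs.

```python
def plan_walk(temperature_c: float, raining: bool, park_popular_now: bool) -> dict:
    """Combine an online weather report with stored preferences to pick a walk.

    The cutoffs and routes below are placeholders, not learned values.
    """
    if raining or temperature_c > 30:
        return {"minutes": 15, "route": "around the block"}
    if park_popular_now:
        return {"minutes": 60, "route": "nearby park where other dogs gather"}
    return {"minutes": 30, "route": "usual loop"}

# Hypothetical inputs, e.g. fetched from a weather API and the robot's history.
print(plan_walk(temperature_c=22, raining=False, park_popular_now=True))
# {'minutes': 60, 'route': 'nearby park where other dogs gather'}
```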

ROBOT’S LITTLE HELPER The integration of robots into everyday life will also require more reliable communication between robots and between robots and humans. Despite advances in wireless technology, impediments still hamper robot-to-robot communication. It remains difficult to model or predict how well robots will be able to communicate in any given environment. Moreover, methods of controlling robots that rely on current communications technologies are hindered by noise—extraneous signals and data that make it hard to send and receive commands. Robots need more reliable approaches to communication that would guarantee the bandwidth they need, when they need it. One promising new approach to this problem involves measuring the quality of communication around a robot locally instead of trying to predict it using models.

TORU HANAI / REUTERS Humanoid communication robot shakes hands with Tomotaka Takahashi, CEO of Robo Garage Co., June 26, 2013.

Communication between robots and people is also currently quite limited. Although audio sensors and speech-recognition software allow robots to understand and respond to basic spoken commands (“Move to the door”), such interactions are both narrow and shallow in terms of scope and vocabulary. More extensive human-robot communication would enable robots to ask humans for help. It turns out that when a robot is performing a task, even a tiny amount of human intervention completely changes the way the robot deals with a problem and greatly empowers the machine to do more. My research group at MIT’s Computer Science and Artificial Intelligence Laboratory recently developed a system that allowed groups of robots to assemble IKEA furniture. The robots worked together as long as the parts needed for the assembly were within reach. When a part, such as a table leg, was out of reach, a robot could recognize the problem and ask humans to hand it the part using English-language sentences. After receiving the part, the robots were able to resume the assembly task. A robot’s ability to understand error and enlist human help represents a step toward more synergistic collaboration between humans and robots.

DOMO ARIGATO, MR. ROBOTO Current research in robotics is pushing the boundaries of what robots can do and aiming for better solutions for making them, controlling them, and increasing their ability to reason, coordinate, and collaborate. Meeting these challenges will bring the vision of pervasive robotics closer to reality.

In a robot-rich world, people may wake up in the morning and send personal-shopping robots to the supermarket to bring back fruit and milk for breakfast. Once there, the robots may encounter people who are there to do their own shopping but who traveled to the store in self-driving cars and who are using self-driving shopping carts that take them directly to the items they want and then provide information on the freshness, provenance, and nutritional value of the goods—and that can also help visually impaired shoppers navigate the store safely. In a retail environment shaped by pervasive robotics, people will supervise and support robots while offering customers advice and service with a human touch. In turn, robots will support people by automating some physically difficult or tedious jobs: stocking shelves, cleaning windows, sweeping sidewalks, delivering orders to customers.

Personal computers, wireless technology, smartphones, and easy-to-download apps have already democratized access to information and computation and transformed the way people live and work. In the years to come, robots will extend this digital revolution further into the physical realm and deeper into everyday life, with consequences that will be equally profound.

DANIELA RUS is Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.

June 16, 2015

New World Order

Labor, Capital, and Ideas in the Power Law Economy

Erik Brynjolfsson, Andrew McAfee, and Michael Spence July/August 2014

MORRIS MAC MATZEN / COURTESY REUTERS Robots at the "Hannover Messe" trade fair in Hanover, Germany, April 2014.

Recent advances in technology have created an increasingly unified global marketplace for labor and capital. The ability of both to flow to their highest-value uses, regardless of their location, is equalizing their prices across the globe. In recent years, this broad factor-price equalization has benefited nations with abundant low-cost labor and those with access to cheap capital. Some have argued that the current era of rapid technological progress serves labor, and some have argued that it serves capital. What both camps have slighted is the fact that technology is not only integrating existing sources of labor and capital but also creating new ones.

Machines are substituting for more types of human labor than ever before. As they replicate themselves, they are also creating more capital. This means that the real winners of the future will not be the providers of cheap labor or the owners of ordinary capital, both of whom will be increasingly squeezed by automation. Fortune will instead favor a third group: those who can innovate and create new products, services, and business models.

The distribution of income for this creative class typically takes the form of a power law, with a small number of winners capturing most of the rewards and a long tail consisting of the rest of the participants. So in the future, ideas will be the real scarce inputs in the world -- scarcer than both labor and capital -- and the few who provide good ideas will reap huge rewards. Assuring an acceptable standard of living for the rest and building inclusive economies and societies will become increasingly important challenges in the years to come.

LABOR PAINS

Turn over your iPhone and you can read an eight-word business plan that has served Apple well: “Designed by Apple in California. Assembled in China.” With a market capitalization of over $500 billion, Apple has become the most valuable company in the world. Variants of this strategy have worked not only for Apple and other large global enterprises but also for medium-sized firms and even “micro-multinationals.” More and more companies have been riding the two great forces of our era -- technology and globalization -- to profits. Technology has sped globalization forward, dramatically lowering communication and transaction costs and moving the world much closer to a single, large global market for labor, capital, and other inputs to production. Even though labor is not fully mobile, the other factors increasingly are. As a result, the various components of global supply chains can move to labor’s location with little friction or cost. About one-third of the goods and services in advanced economies are tradable, and the figure is rising. And the effect of global competition spills over to the nontradable part of the economy, in both advanced and developing economies.

All of this creates opportunities for not only greater efficiencies and profits but also enormous dislocations. If a worker in China or India can do the same work as one in the United States, then the laws of economics dictate that they will end up earning similar wages (adjusted for some other differences in national productivity). That’s good news for overall economic efficiency, for consumers, and for workers in developing countries -- but not for workers in developed countries who now face low-cost competition. Research indicates that the tradable sectors of advanced industrial countries have not been net employment generators for two decades. That means job creation now takes place almost exclusively within the large nontradable sector, whose wages are held down by increasing competition from workers displaced from the tradable sector.

Even as the globalization story continues, however, an even bigger one is starting to unfold: the story of automation, including artificial intelligence, robotics, 3-D printing, and so on. And this second story is surpassing the first, with some of its greatest effects destined to hit relatively unskilled workers in developing nations.

Visit a factory in China’s Guangdong Province, for example, and you will see thousands of young people working day in and day out on routine, repetitive tasks, such as connecting two parts of a keyboard. Such jobs are rarely, if ever, seen anymore in the United States or the rest of the rich world. But they may not exist for long in China and the rest of the developing world either, for they involve exactly the type of tasks that are easy for robots to do. As intelligent machines become cheaper and more capable, they will increasingly replace human labor, especially in relatively structured environments such as factories and especially for the most routine and repetitive tasks. To put it another way, offshoring is often only a way station on the road to automation.

This will happen even where labor costs are low. Indeed, Foxconn, the Chinese company that assembles iPhones and iPads, employs more than a million low-income workers -- but now, it is supplementing and replacing them with a growing army of robots. So after many manufacturing jobs moved from the United States to China, they appear to be vanishing from China as well. (Reliable data on this transition are hard to come by. Official Chinese figures report a decline of 30 million manufacturing jobs since 1996, or 25 percent of the total, even as manufacturing output has soared by over 70 percent, but part of that drop may reflect revisions in the methods of gathering data.) As work stops chasing cheap labor, moreover, it will gravitate toward wherever the final market is, since that will add value by shortening delivery times, reducing inventory costs, and the like.

The growing capabilities of automation threaten one of the most reliable strategies that poor countries have used to attract outside investment: offering low wages to compensate for low productivity and skill levels. And the trend will extend beyond manufacturing. Interactive voice response systems, for example, are reducing the requirement for direct person-to-person interaction, spelling trouble for call centers in the developing world. Similarly, increasingly reliable computer programs will cut into transcription work now often done in the developing world. In more and more domains, the most cost-effective source of “labor” is becoming intelligent and flexible machines as opposed to low-wage humans in other countries.

CAPITAL PUNISHMENT

If cheap, abundant labor is no longer a clear path to economic progress, then what is? One school of thought points to the growing contributions of capital: the physical and intangible assets that combine with labor to produce the goods and services in an economy (think of equipment, buildings, patents, brands, and so on). As the economist Thomas Piketty argues in his best-selling book Capital in the Twenty-first Century, capital’s share of the economy tends to grow when the rate of return on it is greater than the general rate of economic growth, a condition he predicts for the future. The “capital deepening” of economies that Piketty forecasts will be accelerated further as robots, computers, and software (all of which are forms of capital) increasingly substitute for human workers. Evidence indicates that just such a form of capital-based technical change is taking place in the United States and around the world.

In the past decade, the historically consistent division in the United States between the share of total national income going to labor and that going to physical capital seems to have changed significantly. As the economists Susan Fleck, John Glaser, and Shawn Sprague noted in the U.S. Bureau of Labor Statistics’ Monthly Labor Review in 2011, “Labor share averaged 64.3 percent from 1947 to 2000. Labor share has declined over the past decade, falling to its lowest point in the third quarter of 2010, 57.8 percent.” Recent moves to “re-shore” production from overseas, including Apple’s decision to produce its new computer in Texas, will do little to reverse this trend. For in order to be economically viable, these new domestic manufacturing facilities will need to be highly automated.
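To make that shift concrete, here is the back-of-the-envelope arithmetic implied by those Bureau of Labor Statistics figures for a hypothetical $100 of national income (an illustration added here, not part of the original analysis):

```python
# Labor-share figures quoted above, applied to a hypothetical $100 of income.
old_labor_share = 0.643   # average, 1947-2000
new_labor_share = 0.578   # third quarter of 2010

old_capital = 1 - old_labor_share
new_capital = 1 - new_labor_share

print(f"labor:   ${old_labor_share * 100:.1f} -> ${new_labor_share * 100:.1f}")
print(f"capital: ${old_capital * 100:.1f} -> ${new_capital * 100:.1f}")
print(f"capital's slice grew by about {new_capital / old_capital - 1:.0%}")  # ~18%
```

In other words, a 6.5-point fall in labor's share amounts to roughly an 18 percent proportional gain in the slice of the same income going to capital.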

Other countries are witnessing similar trends. The economists Loukas Karabarbounis and Brent Neiman have documented significant declines in labor’s share of GDP in 42 of the 59 countries they studied, including China, India, and Mexico. In describing their findings, Karabarbounis and Neiman are explicit that progress in digital technologies is an important driver of this phenomenon: “The decrease in the relative price of investment goods, often attributed to advances in information technology and the computer age, induced firms to shift away from labor and toward capital. The lower price of investment goods explains roughly half of the observed decline in the labor share.”

But if capital’s share of national income has been growing, the continuation of such a trend into the future may be in jeopardy as a new challenge to capital emerges -- not from a revived labor sector but from an increasingly important unit within its own ranks: digital capital.

In a free market, the biggest premiums go to the scarcest inputs needed for production. In a world where capital such as software and robots can be replicated cheaply, its marginal value will tend to fall, even if more of it is used in the aggregate. And as more capital is added cheaply at the margin, the value of existing capital will actually be driven down. Unlike, say, traditional factories, many types of digital capital can be added extremely cheaply. Software can be duplicated and distributed at almost zero incremental cost. And many elements of computer hardware, governed by variants of Moore’s law, get quickly and consistently cheaper over time. Digital capital, in short, is abundant, has low marginal costs, and is increasingly important in almost every industry.
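The compounding matters more than the annual pace suggests. As a purely illustrative sketch, using the textbook rule of thumb that a unit of computing power halves in price roughly every two years (a stylized reading of Moore's law, not a measured figure), the same capability costs only a small fraction of its starting price within a decade:

```python
# Illustrative only: price of a fixed amount of computing, halving every two years.
cost = 1.0
for year in range(0, 11, 2):
    print(f"year {year:2d}: relative cost {cost:.3f}")
    cost /= 2
# After a decade the same capability costs roughly 1/32 of the original price.
```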

Even as production becomes more capital-intensive, therefore, the rewards earned by capitalists as a group may not necessarily continue to grow relative to labor. The shares will depend on the exact details of the production, distribution, and governance systems.

Most of all, the payoff will depend on which inputs to production are scarcest. If digital technologies create cheap substitutes for a growing set of jobs, then it is not a good time to be a laborer. But if digital technologies also increasingly substitute for capital, then all owners of capital should not expect to earn outsized returns, either.

TECHCRUNCH DISRUPT

What will be the scarcest, and hence the most valuable, resource in what two of us (Erik Brynjolfsson and Andrew McAfee) have called “the second machine age,” an era driven by digital technologies and their associated economic characteristics? It will be neither ordinary labor nor ordinary capital but people who can create new ideas and innovations.

Such people have always been economically valuable, of course, and have often profited handsomely from their innovations as a result. But they had to share the returns on their ideas with the labor and capital that were necessary for bringing them into the marketplace. Digital technologies increasingly make both ordinary labor and ordinary capital commodities, and so a greater share of the rewards from ideas will go to the creators, innovators, and entrepreneurs. People with ideas, not workers or investors, will be the scarcest resource.

The most basic model economists use to explain technology’s impact treats it as a simple multiplier for everything else, increasing overall productivity evenly for everyone. This model is used in most introductory economics classes and provides the foundation for the common -- and, until recently, very sensible -- intuition that a rising tide of technological progress will lift all boats equally, making all workers more productive and hence more valuable.

A slightly more complex and realistic model, however, allows for the possibility that technology may not affect all inputs equally but instead favor some more than others. Skill-based technical change, for example, plays to the advantage of more skilled workers relative to less skilled ones, and capital-based technical change favors capital relative to labor. Both of those types of technical change have been important in the past, but increasingly, a third type -- what we call superstar-based technical change -- is upending the global economy.

Today, it is possible to take many important goods, services, and processes and codify them. Once codified, they can be digitized, and once digitized, they can be replicated. Digital copies can be made at virtually zero cost and transmitted anywhere in the world almost instantaneously, each an exact replica of the original. The combination of these three characteristics -- extremely low cost, rapid ubiquity, and perfect fidelity -- leads to some weird and wonderful economics. It can create abundance where there had been scarcity, not only for consumer goods, such as music videos, but also for economic inputs, such as certain types of labor and capital.

The returns in such markets typically follow a distinct pattern -- a power law, or Pareto curve, in which a small number of players reap a disproportionate share of the rewards. Network effects, whereby a product becomes more valuable the more users it has, can also generate these kinds of winner-take-all or winner-take-most markets. Consider Instagram, the photo-sharing platform, as an example of the economics of the digital, networked economy. The 14 people who created the company didn’t need a lot of unskilled human helpers to do so, nor did they need much physical capital. They built a digital product that benefited from network effects, and when it caught on quickly, they were able to sell it after only a year and a half for nearly three-quarters of a billion dollars -- ironically, months after the bankruptcy of another photography company, Kodak, that at its peak had employed some 145,000 people and held billions of dollars in capital assets.

Instagram is an extreme example of a more general rule. More often than not, when improvements in digital technologies make it more attractive to digitize a product or process, superstars see a boost in their incomes, whereas second bests, second movers, and latecomers have a harder time competing. The top performers in music, sports, and other areas have also seen their reach and incomes grow since the 1980s, directly or indirectly riding the same trends upward.

But it is not only software and media that are being transformed. Digitization and networks are becoming more pervasive in every industry and function across the economy, from retail and financial services to manufacturing and marketing. That means superstar economics are affecting more goods, services, and people than ever before.

Even top executives have started earning rock-star compensation. In 1990, CEO pay in the United States was, on average, 70 times as large as the salaries of other workers; in 2005, it was 300 times as large. Executive compensation more generally has been going in the same direction globally, albeit with considerable variation from country to country. Many forces are at work here, including tax and policy changes, evolving cultural and organizational norms, and plain luck. But as research by one of us (Brynjolfsson) and Heekyung Kim has shown, a portion of the growth is linked to the greater use of information technology. Technology expands the potential reach, scale, and monitoring capacity of a decision-maker, increasing the value of a good decision-maker by magnifying the potential consequences of his or her choices. Direct management via digital technologies makes a good manager more valuable than in earlier times, when executives had to share control with long chains of subordinates and could affect only a smaller range of activities. Today, the larger the market value of a company, the more compelling the argument for trying to get the very best executives to lead it.

When income is distributed according to a power law, most people will be below the average, and as national economies writ large are increasingly subject to such dynamics, that pattern will play itself out on the national level. And sure enough, the United States today features one of the world’s highest levels of real GDP per capita -- even as its median income has essentially stagnated for two decades.
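The arithmetic behind that claim can be checked with a short simulation; the Pareto shape and scale below are arbitrary choices made for illustration, not estimates of any actual income distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Pareto-distributed "incomes" (shape and scale chosen for illustration).
incomes = (rng.pareto(a=1.5, size=1_000_000) + 1) * 30_000

mean, median = incomes.mean(), np.median(incomes)
below_average = (incomes < mean).mean()

print(f"mean income:   {mean:,.0f}")
print(f"median income: {median:,.0f}")
print(f"share of people below the average: {below_average:.0%}")  # roughly 80%
```

With these made-up parameters, roughly four out of five simulated earners fall below the mean, which is how average GDP per capita can keep rising while the median income stagnates.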

PREPARING FOR THE PERMANENT REVOLUTION

The forces at work in the second machine age are powerful, interactive, and complex. It is impossible to look far into the future and predict with any precision what their ultimate impact will be. If individuals, businesses, and governments understand what is going on, however, they can at least try to adjust and adapt.

The United States, for example, stands to win back some business as the second sentence of Apple’s eight-word business plan is overturned because its technology and manufacturing operations are once again performed inside U.S. borders. But the first sentence of the plan will become more important than ever, and here, concern, rather than complacency, is in order. For unfortunately, the dynamism and creativity that have made the United States the most innovative nation in the world may be faltering. Thanks to the ever-onrushing digital revolution, design and innovation have now become part of the tradable sector of the global economy and will face the same sort of competition that has already transformed manufacturing. Leadership in design depends on an educated work force and an entrepreneurial culture, and the traditional American advantage in these areas is declining. Although the United States once led the world in the share of graduates in the work force with at least an associate’s degree, it has now fallen to 12th place. And despite the buzz about entrepreneurship in places such as Silicon Valley, data show that since 1996, the number of U.S. start-ups employing more than one person has declined by over 20 percent.

If the trends under discussion are global, their local effects will be shaped, in part, by the social policies and investments that countries choose to make, both in the education sector specifically and in fostering innovation and economic dynamism more generally. For over a century, the U.S. educational system was the envy of the world, with universal K–12 schooling and world-class universities propelling sustained economic growth. But in recent decades, U.S. primary and secondary schooling have become increasingly uneven, with their quality based on neighborhood income levels and often a continued emphasis on rote learning.

Fortunately, the same digital revolution that is transforming product and labor markets can help transform education as well. Online learning can provide students with access to the best teachers, content, and methods regardless of their location, and new data-driven approaches to the field can make it easier to measure students’ strengths, weaknesses, and progress. This should create opportunities for personalized learning programs and continuous improvement, using some of the feedback techniques that have already transformed scientific discovery, retail, and manufacturing.

Globalization and technological change may increase the wealth and economic efficiency of nations and the world at large, but they will not work to everybody’s advantage, at least in the short to medium term. Ordinary workers, in particular, will continue to bear the brunt of the changes, benefiting as consumers but not necessarily as producers. This means that without further intervention, economic inequality is likely to continue to increase, posing a variety of problems. Unequal incomes can lead to unequal opportunities, depriving nations of access to talent and undermining the social contract. Political power, meanwhile, often follows economic power, in this case undermining democracy.

These challenges can and need to be addressed through the public provision of high-quality basic services, including education, health care, and retirement security. Such services will be crucial for creating genuine equality of opportunity in a rapidly changing economic environment and increasing intergenerational mobility in income, wealth, and future prospects.

As for spurring economic growth in general, there is a near consensus among serious economists about many of the policies that are necessary. The basic strategy is intellectually simple, if politically difficult: boost public-sector investment over the short and medium term while making such investment more efficient and putting in place a fiscal consolidation plan over the longer term. Public investments are known to yield high returns in basic research in health, science, and technology; in education; and in infrastructure spending on roads, airports, public water and sanitation systems, and energy and communications grids. Increased government spending in these areas would boost economic growth now even as it created real wealth for subsequent generations later.

Should the digital revolution continue to be as powerful in the future as it has been in recent years, the structure of the modern economy and the role of work itself may need to be rethought. As a group, our descendants may work fewer hours and live better -- but both the work and the rewards could be spread even more unequally, with a variety of unpleasant consequences. Creating sustainable, equitable, and inclusive growth will require more than business as usual. The place to start is with a proper understanding of just how fast and far things are evolving.

ERIK BRYNJOLFSSON is Schussel Family Professor of Management Science at the MIT Sloan School of Management and Co-Founder of MIT’s Initiative on the Digital Economy. ANDREW MCAFEE is a Principal Research Scientist at the MIT Center for Digital Business at the MIT Sloan School of Management and Co-Founder of MIT’s Initiative on the Digital Economy. MICHAEL SPENCE is William R. Berkley Professor in Economics and Business at the NYU Stern School of Business.

June 4, 2014

Will Humans Go the Way of Horses?

Labor in the Second Machine Age

Erik Brynjolfsson and Andrew McAfee July/August 2015

WATFORD / MIRRORPIX / CORBIS

Peak horse: a horse-drawn fire engine, 1914.

The debate over what technology does to work, jobs, and wages is as old as the industrial era itself. In the second decade of the nineteenth century, a group of English textile workers called the Luddites protested the introduction of spinning frames and power looms, machines of the nascent Industrial Revolution that threatened to leave them without jobs. Since then, each new burst of technological progress has brought with it another wave of concern about a possible mass displacement of labor.

On one side of the debate are those who believe that new technologies are likely to replace workers. Karl Marx, writing during the age of steam, described the automation of the proletariat as a necessary feature of capitalism. In 1930, after electrification and the internal combustion engine had taken off, John Maynard Keynes predicted that such innovations would lead to an increase in material prosperity but also to widespread “technological unemployment.” At the dawn of the computer era, in 1964, a group of scientists and social theorists sent an open letter to U.S. President Lyndon Johnson warning that cybernation “results in a system of almost unlimited productive capacity, which requires progressively less human labor.” Recently, we and others have argued that as digital technologies race ahead, they have the potential to leave many workers behind.

On the other side are those who say that workers will be just fine. They have history on their side: real wages and the number of jobs have increased relatively steadily throughout the industrialized world since the middle of the nineteenth century, even as technology advanced like never before. A 1987 National Academy of Sciences report explained why: by reducing the costs of production and thus the prices of goods, technological change tends to increase the demand for output, and the additional production that follows requires enough extra labor to offset the jobs displaced per unit of output.

This view has gained enough traction in mainstream economics that the contrary belief—that technological progress might reduce human employment—has been dismissed as the “lump of labor fallacy.” It’s a fallacy, the argument goes, because there is no static “lump of labor,” since the amount of work available to be done can increase without bound.

In 1983, the Nobel Prize–winning economist Wassily Leontief brought the debate into sharp relief through a clever comparison of humans and horses. For many decades, horse labor appeared impervious to technological change. Even as the telegraph supplanted the Pony Express and railroads replaced the stagecoach and the Conestoga wagon, the U.S. equine population grew seemingly without end, increasing sixfold between 1840 and 1900 to more than 21 million horses and mules. The animals were vital not only on farms but also in the country’s rapidly growing urban centers, where they carried goods and people on hackney carriages and horse-drawn omnibuses.

But then, with the introduction and spread of the internal combustion engine, the trend rapidly reversed. As engines found their way into automobiles in the city and tractors in the countryside, horses became largely irrelevant. By 1960, the United States counted just three million horses, a decline of nearly 88 percent in just over half a century. If there had been a debate in the early 1900s about the fate of the horse in the face of new industrial technologies, someone might have formulated a “lump of equine labor fallacy,” based on the animal’s resilience up till then. But the fallacy itself would soon be proved false: once the right technology came along, most horses were doomed as labor.

Is a similar tipping point possible for human labor? Are autonomous vehicles, self-service kiosks, warehouse robots, and supercomputers the harbingers of a wave of technological progress that will finally sweep humans out of the economy? For Leontief, the answer was yes: “The role of humans as the most important factor of production is bound to diminish in the same way that the role of horses . . . was first diminished and then eliminated.”

But humans, fortunately, are not horses, and Leontief missed a number of important differences between them. Many of these suggest that humans will remain an important part of the economy. Even if human labor becomes far less necessary overall, however, people, unlike horses, can choose to prevent themselves from becoming economically irrelevant.

WHAT HUMANS WANT The most common reason given for why there is no lump of labor is that human wants are infinite. Indeed, throughout modern history, per capita consumption has steadily risen. As Alfred Marshall put it in his foundational 1890 book, Principles of Economics, “Human wants and desires are countless in number and very various in kind.” Ever since Marshall, people have linked unlimited wants to full employment. After all, who else but workers will be able to fulfill all those wants and desires?

However comforting this argument may be, it is also incorrect, because technology can sever the link between infinite desires and full employment. As recent advances suggest, it’s no longer pure science fiction to contemplate completely automated mines, farms, factories, and logistics networks supplying all the food and manufactured goods a population could require. Many service jobs and much knowledge work could also be automated, with everything from order taking to customer support to payment processing handled by autonomous intelligent systems. Perhaps some innovative humans would still be required in this world to dream up new goods and services to be consumed, but not many. The 2008 animated film WALL-E provides a vivid and unsettling vision of just such an economy: most people exist only to consume and to be marketed to, and they have become so obese that they can hardly move under their own power.

As the WALL-E dystopia suggests, people’s unbounded economic wants are not guarantors of full employment in a world of sufficiently advanced technology. After all, even if humans’ demand for transportation grew infinitely—and it has grown enormously in the past century—that would have little effect on the demand for horses. Technological progress, in short, could be capable of decoupling ever-growing consumption and large-scale human employment, just as it did earlier with equine employment.

Unless, of course, we don’t want to be served exclusively by robots and artificial intelligence. This represents the biggest barrier to a fully automated economy and the strongest reason that human labor will not disappear anytime soon. We humans are a deeply social species, and the desire for human connection carries over to our economic lives. There’s an explicitly interpersonal element in many of the things we spend money on. We come together to appreciate human expression or ability when we attend plays and sporting events. Regulars frequent particular bars and restaurants not only because of the food and drink but also because of the hospitality offered. Coaches and trainers provide motivation that can’t be found in exercise books or videos. Good teachers inspire students to keep learning, and counselors and therapists form bonds with clients that help them heal.

In these cases and many others, human interaction is central to the economic transaction, not incidental to it. Contrary to Marshall’s emphasis on the quantity of human needs, it’s better to focus on the quality of human needs. Humans have economic wants that can be satisfied only by other humans, and that makes us less likely to go the way of the horse or descend into the world of WALL-E.

NOT DEAD YET But are our interpersonal abilities the only ones that will allow us to stave off economic irrelevance? Over at least the next decade, the answer is almost certainly no. That’s because recent technological progress, while moving surprisingly fast, is still not on track to allow robots and artificial intelligence to do everything better than humans can within the next few years. So another reason that humans won’t soon go the way of the horse is that humans can do many valuable things that will remain beyond the reach of technology.

When it comes to navigating and shaping the physical world, humans maintain many advantages. We are far more dexterous and nimble than any single piece of machinery, and we are comparatively lightweight and energy efficient. Plus, our senses provide fast and multidimensional feedback that allows precise movement and control. There’s no robot anywhere in the world right now, for example, that can sort a bowlful of coins as well as the average child or clear a table as well as a restaurant busboy.

Our mental advantages might be even greater than our physical ones. While we’re clearly now inferior to computers at arithmetic and are getting outpaced in some types of pattern recognition—as evidenced by the triumph of Watson, an artificial-intelligence system created by IBM, over human Jeopardy! champions in 2011—we still have vastly better common sense. We’re also able to formulate goals and then work out how to achieve them. And although there are impressive examples of digital creativity and innovation, including machine-generated music and scientific hypotheses, humans are still better at coming up with useful new ideas in most domains. This calls to mind a quote attributed to a 1965 NASA report: “Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor.”

CHRISTIAN CHARISIUS / COURTESY REUTERS The supercomputer Watson, which competed on the Jeopardy! game show, during the opening ceremony of the CeBIT computer fair, in Hanover, Germany, February 2011.

It is extraordinarily difficult to get a clear picture of how broadly and quickly technology will encroach on human territory (and a review of past predictions should deter anyone from trying), but it seems unlikely that hardware, software, robots, and artificial intelligence will be able to take over from human labor within the next decade. It is even less likely that people will stop having economic wants that are explicitly interpersonal or social; these will remain, and they will continue to provide demand for human workers.

But will there be enough demand, especially over the long term, for those two types of human labor: that which must be done by people and that which can’t yet be done by machines? There is a real possibility that the answer is no—that human labor will, in aggregate, decline in relevance because of technological progress, just as horse labor did earlier. If that happens, it will raise the specter that the world may not be able to maintain the industrial era’s remarkable trajectory of steadily rising employment prospects and wages for a growing population.

BATTLING THE ROBOTS The story doesn’t end there, however. Having valuable labor to offer is not the only way to remain economically important; having capital to invest or spend also ensures continued relevance. A critical difference between people and horses is that humans can own capital, whereas horses cannot. People, in fact, own all the nongovernmental wealth in capitalist societies. All shares in firms, for example, are owned directly or indirectly (via vehicles such as retirement funds) by individuals. That means that humans can choose to redistribute that capital in order to replace income lost to robots.

The challenge here is that capital ownership appears to have always been highly uneven and has become increasingly skewed recently. As the economist Thomas Piketty writes in Capital in the Twenty-first Century, “In all known societies, at all times, the least wealthy half of the population own virtually nothing (generally little more than 5 percent of total wealth).” Increases over the past few years in the value of stocks, urban real estate, and several other forms of capital have benefited an incredibly small group. Credit Suisse has estimated that in 2014, the richest one percent held 48 percent of the world’s total wealth. In part, this increased unevenness reflects growing inequality in wages and other forms of compensation. Automation and digitization are less likely to replace all forms of labor than to rearrange, perhaps radically, the rewards for skills, talent, and luck. It is not hard to see how this would lead to an even greater concentration of wealth and, with it, power.

It’s possible, however, to imagine a “robot dividend” that created more widespread ownership of robots and similar technologies, or at least a portion of the financial benefits they generated. The state of Alaska provides a possible template: courtesy of the Alaska Permanent Fund, which was established in 1976, the great majority of the state’s residents receive a nontrivial amount of capital income every year. A portion of the state’s oil revenues is deposited into the fund, and each October, a dividend from it is given to each eligible resident. In 2014, this dividend was $1,884.

It’s important to note that the amendment to the Alaska state constitution establishing the Permanent Fund passed democratically, by a margin of two to one. That Alaskans chose to give themselves a bonus highlights another critical difference between humans and horses: in many countries today, humans can vote. In other words, people can influence economic outcomes, such as wages and incomes, through the democratic process. This can happen directly, through votes on amendments and referendums, or indirectly, through legislation passed by elected representatives. It is voters, not markets, who are picking the minimum wage, determining the legality of sharing-economy companies such as Uber, and settling many other economic issues.

In the future, it’s not unreasonable to expect people to vote for policies that will help them avoid the economic fate of the horse. For example, legislatures might pass restrictions on certain types of job-destroying technologies. Although there appear to be few such explicit limits to date, already there are nascent efforts to draft legislation related to autonomous cars and other technologies with relatively direct implications for labor. And in every democracy, there are candidates for office who espouse a desire to help workers. There is no reason they will not continue to act on those impulses.

BECK DIEFENBACH / COURTESY REUTERS San Francisco taxi drivers protest against ride-sharing services such as Uber, which taxi drivers say are operating illegally, in San Francisco, California, July 2013.

If and when a large enough group of people become sufficiently displeased with their economic prospects and feel that their government is indifferent or actively hostile to them, a final important difference between horses and humans will become clear: humans can revolt. Recent years have seen explicitly economic uprisings, including both the relatively peaceful Occupy Wall Street movement in the United States and the sporadically violent (and occasionally fatal) anti-austerity protests in Greece.

Over a longer time span, history provides no shortage of examples of uprisings motivated in whole or in part by workers’ concerns. Democracy is no guarantee against such uprisings, nor is the fact that the material conditions of life generally improve over time for most people in most countries. The horse population accepted its economic irrelevance with not a murmur of protest (as far as we can tell). If the same happens to human workers, they are unlikely to be so meek.

A LABOR-LIGHT ECONOMY Current discussions of economic policy focus on how to improve workers’ job and wage prospects. That makes sense, since robots and artificial intelligence are not on the brink of learning how to do every job. The best way to help workers in today’s climate is to equip them with valuable skills and to encourage overall economic growth. Governments should therefore pass education and immigration reform, enact policies to stimulate entrepreneurship, and increase investment in infrastructure and basic research. They might also use some combination of awards, competitions, and financial incentives to encourage technology innovators to develop solutions that explicitly encourage and support human labor rather than primarily substituting for it.

That said, it is more than a bit blithe to assume that human labor will forever remain the most important factor of production. As Leontief pointed out, technological progress can change that, just as it did for the horse. If and when this happens, humans’ other differences from horses will become critical. Once many, even most, people see their income from labor recede, their views on the ownership of capital and the distribution of its proceeds, as expressed through votes or revolts, will matter even more than they do now.

It’s time to start discussing what kind of society we should construct around a labor-light economy. How should the abundance of such an economy be shared? How can the tendency of modern capitalism to produce high levels of inequality be muted while preserving its ability to allocate resources efficiently and reward initiative and effort? What do fulfilling lives and healthy communities look like when they no longer center on industrial-era conceptions of work? How should education, the social safety net, taxation, and other important elements of civic society be rethought? The history of horse labor offers no answers to these questions. Nor will answers come from the machines themselves, no matter how clever they become. They will come instead from the goals we set for the technologically sophisticated societies and economies we are creating and the values embedded in them.

ERIK BRYNJOLFSSON is Schussel Family Professor of Management Science at the MIT Sloan School of Management, Co-Founder of MIT’s Initiative on the Digital Economy, and Chair of the MIT Sloan Management Review. ANDREW MCAFEE is Principal Research Scientist at the MIT Sloan School of Management and Co-Founder of MIT’s Initiative on the Digital Economy. They are the authors of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies.

June 16, 2015

Same as It Ever Was

Why the Techno-optimists Are Wrong

Martin Wolf July/August 2015

WIKIMEDIA COMMONS Urbanization: Little Italy, Manhattan, circa 1900.

Belief in “the green light, the orgiastic future that year by year recedes before us,” as F. Scott Fitzgerald wrote in The Great Gatsby, is a characteristic American trait. But hope in a better future is not uniquely American, even if it has long been a more potent secular faith in the United States than elsewhere. The belief has older roots. It was the product of a shift in the temporal location of the golden age from a long- lost past to an ever-brighter future.

That shift was conceived and realized with the Enlightenment and then the Industrial Revolution. As human beings gained ever-greater control of the forces of nature and their economies became ever more productive, they started to hope for lives more like those of the gods their ancestors had imagined.

People might never be immortal, but their lives would be healthy and long. People might never move instantaneously, but they could transport themselves and their possessions swiftly and cheaply across great distances. People might never live on Mount Olympus, but they could enjoy a temperate climate, 24-hour lighting, and abundant food. People might never speak mind to mind, but they could communicate with as many others as they desired, anywhere on the planet. People might never enjoy infinite wisdom, but they could gain immediate access to the knowledge accumulated over millennia.

All of this has already happened in the world’s richest countries. It is what the people of the rest of the world hope still to enjoy.

Is a yet more orgiastic future beckoning? Today’s Gatsbys have no doubt that the answer is yes: humanity stands on the verge of breakthroughs in information technology, robotics, and artificial intelligence that will dwarf what has been achieved in the past two centuries. Human beings will be able to live still more like gods because they are about to create machines like gods: not just strong and swift but also supremely intelligent and even self-creating.

Yet this is the optimistic version. Since Mary Shelley created the cautionary tale of Frankenstein, the idea of intelligent machines has also frightened us. Many duly point to great dangers, including those of soaring unemployment and inequality. But are we likely to experience such profound changes over the next decade or two? The answer is no.

SMALL CHANGE

In reality, the pace of economic and social transformation has slowed in recent decades, not accelerated. This is most clearly shown in the rate of growth of output per worker. The economist Robert Gordon, doyen of the skeptics, has noted that the average growth of U.S. output per worker was 2.3 percent a year between 1891 and 1972. Thereafter, it only matched that rate briefly, between 1996 and 2004. It was just 1.4 percent a year between 1972 and 1996 and 1.3 percent between 2004 and 2012.

On the basis of these data, the age of rapid productivity growth in the world’s frontier economy is firmly in the past, with only a brief upward blip when the Internet, e-mail, and e-commerce made their initial impact.

Those whom Gordon calls “techno-optimists”—Erik Brynjolfsson and Andrew McAfee of the Massachusetts Institute of Technology, for example—respond that the GDP statistics omit the enormous unmeasured value provided by the free entertainment and information available on the Internet. They emphasize the plethora of cheap or free services (Skype, Wikipedia), the scale of do-it-yourself entertainment (Facebook), and the failure to account fully for all the new products and services. Techno-optimists point out that before June 2007, an iPhone was out of reach for even the richest man on earth. Its price was infinite. The fall from an infinite to a definite price is not reflected in the price indexes. Moreover, say the techno-optimists, the “consumer surplus” in digital products and services—the difference between the price and the value to consumers—is huge. Finally, they argue, measures of GDP underestimate investment in intangible assets. These points are correct. But they are nothing new: all of this has repeatedly been true since the nineteenth century. Indeed, past innovations generated vastly greater unmeasured value than the relatively trivial innovations of today. Just consider the shift from a world without telephones to one with them, or from a world of oil lamps to one with electric light. Next to that, who cares about Facebook or the iPad? Indeed, who really cares about the Internet when one considers clean water and flushing toilets?

Over the past two centuries, historic breakthroughs have been responsible for generating huge unmeasured value. The motor vehicle eliminated vast quantities of manure from urban streets. The refrigerator prevented food from becoming contaminated. Clean running water and vaccines delivered drastic declines in child mortality rates. The introduction of running water, gas and electric cookers, vacuums, and washing machines helped liberate women from domestic labor. The telephone removed obstacles to speedy contact with the police, fire brigades, and ambulance services. The discovery of electric light eliminated forced idleness. Central heating and air conditioning ended discomfort. The introduction of the railroad, the steam ship, the motor car, and the airplane annihilated distance.

The radio, the gramophone, and the television alone did far more to revolutionize home entertainment than the technologies of the past two decades have. Yet these were but a tiny fraction of the cornucopia of innovation that owed its origin to the so-called general-purpose technologies—industrialized chemistry, electricity, and the internal combustion engine—introduced by what is considered the Second Industrial Revolution, which occurred between the 1870s and the early twentieth century. The reason we are impressed by the relatively paltry innovations of our own time is that we take for granted the innovations of the past. Gordon also notes how concentrated the period of great breakthroughs was.

And the benefits of these mainstays of the Second Industrial Revolution, Gordon points out, “included subsidiary and complementary inventions, from elevators, electric machinery and consumer appliances; to the motorcar, truck, and airplane; to highways, suburbs, and supermarkets; to sewers to carry the wastewater away.”

PAST, NOT PROLOGUE

The technologies introduced in the late nineteenth century did more than cause three generations of relatively high productivity growth. They did more, too, than generate huge unmeasured economic and social value. They also brought with them unparalleled social and economic changes. An ancient Roman would have understood the way of life of the United States of 1840 fairly well. He would have found that of 1940 beyond his imagination.

Among the most important of these broader changes were urbanization and the huge jumps in life expectancy and standards of education. The United States was 75 percent rural in the 1870s. By the mid-twentieth century, it was 64 percent urban. Life expectancy rose twice as fast in the first half of the twentieth century as in the second half. The collapse in child mortality is surely the single most beneficial social change of the past two centuries. It is not only a great good in itself; it also liberated women from the burden, trauma, and danger of frequent pregnancies. The jump in high school graduation rates—from less than ten percent of young people in 1900 to roughly 80 percent by 1970—was a central driver of twentieth-century economic growth.

All these changes were also, by their nature, one-offs. This is also true of the more recent shift of women entering the labor force. It has happened. It cannot be repeated.

Yet there is something else of compelling importance in the contrast between the breakthroughs of the nineteenth and early twentieth centuries and those of the second half of the twentieth and the early twenty-first century. The former were vastly broader, affecting energy; transportation; sanitation; food production, distribution, and processing; entertainment; and, not least, entire patterns of habitation. Yes, computers, mobile telecommunications, and the Internet are of great significance. Yet it is also essential to remember what has not changed to any fundamental degree. Transportation technologies and speeds are essentially the same as they were half a century ago. The dominant source of commercial energy remains the burning of fossil fuels—introduced with coal and steam in the First Industrial Revolution, of the late eighteenth and early nineteenth centuries—and even nuclear power is now an elderly technology. Although fracking is noteworthy, it does not compare with the opening of the petroleum age in the late nineteenth century.

GEORGE MARKS / GETTY IMAGES

Killer app: vacuuming the den, circa 1950.

The only recent connections between homes and the outside world are satellite dishes and broadband. Neither is close to being as important as clean water, sewerage, gas, electricity, and the telephone. The great breakthroughs in health—clean water, sewerage, refrigeration, packaging, vaccinations, and antibiotics—are also all long established.

THE FUTURE'S NOT WHAT IT USED TO BE

The so-called Third Industrial Revolution—of the computer, the Internet, and e-commerce—is also itself quite old. It has already produced many changes. The armies of clerks who used to record all transactions have long since disappeared, replaced by computers; more recently, so have secretaries. E-mail has long since replaced letters. Even the Internet and the technologies that allow it to be searched with ease are now 15 years old, or even older, as is the e-commerce they enabled.

Yet the impact of all of this on measured productivity has been modest. The economic historian Paul David famously argued in 1989 that one should remember how long it took for industrial processes to adapt to electricity. But the computer itself is more than half a century old, and it is now a quarter of a century since David made that point. Yet except for the upward blip between 1996 and 2004, we are still—to adapt the Nobel laureate Robert Solow’s celebrated words of 1987—seeing the information technology age “everywhere but in the productivity statistics.”

Meanwhile, other, more recent general-purpose technologies—biotechnology and nanotechnology, most notably—have so far made little impact, either economically or more widely.

The disappointing nature of recent growth is also the theme of an influential little book, The Great Stagnation, by the economist Tyler Cowen, which is subtitled How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better.

In considering the disappointing impact of recent innovations, it is important to note that the world’s economies are vastly bigger than they used to be. Achieving a two percent economy-wide annual rise in labor productivity may simply be a much bigger challenge than it was in the past.

More important, the share of total output of the sectors with the fastest growth in productivity tends to decline over time, while the share of the sectors where productivity growth has proved hardest to increase tends to rise. Indeed, it is possible that productivity growth will essentially cease because the economic contribution of the sectors where it is fastest will become vanishingly small. Raising productivity in manufacturing matters far less now that it generates only about an eighth of total U.S. GDP. Raising productivity in caring for the young, the infirm, the helpless, and the elderly is hard, if not impossible.

WIKIMEDIA COMMONS A radio apparatus similar to the one used to transmit the first wireless signal across the Atlantic Ocean, 1901.

Yet perhaps paradoxically, recent technological progress might still have had some important effects on the economy, and particularly the distribution of income, even if its impact on the size of the economy and overall standards of living has been relatively modest. The information age coincided with—and must, to some extent, have caused—adverse economic trends: the stagnation of median real incomes, rising inequality of labor income and of the distribution of income between labor and capital, and growing long-term unemployment.

Information technology has turbocharged globalization by making it vastly easier to organize global supply chains, run 24-hour global financial markets, and spread technological know-how. This has helped accelerate the catch-up process of emerging-market economies, notably China. It has also allowed India to emerge as a significant exporter of technological services. Technology has also brought about the rise of winner-take-all markets, as superstars have come to bestride the globe. Substantial evidence exists, too, of “skills-biased” technological change. As the demand for and rewards offered to highly skilled workers (software programmers, for example) rise, the demand for and rewards offered to those with skills in the middle of the distribution (such as clerks) decline. The value of intellectual property has also risen. In brief, a modest impact on aggregate output and productivity should not be confused with a modest impact across the board.

NO CRYSTAL BALL REQUIRED

The future is, at least to some extent, unknowable. Yet as Gordon suggests, it is not all that unknowable. Back in the nineteenth and early twentieth centuries, many had already anticipated the changes that the recent inventions might bring. The nineteenth-century French novelist Jules Verne is a famous example of such foresight.

The optimistic view is that we are now at an inflection point. In their book The Second Machine Age, Brynjolfsson and McAfee offer as a parallel the story of the inventor of chess, who asked to be rewarded with one grain of rice on the first square of his board, two on the second, four on the third, and so forth. Manageable in size on the first half of the board, the reward reaches mountainous proportions toward the end of the second. Humanity’s reward from Moore’s law—the relentless doubling of the number of transistors on a computer chip every two years or so—will, they argue, grow similarly.
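The arithmetic of the parable is easy to check. The short Python sketch below is purely illustrative: it tallies the grains square by square and counts how many two-year doublings have elapsed since 1971, which is assumed here as a starting point for Moore’s law.

    # Rice on the chessboard: one grain on the first square, doubling on each square after.
    grains_per_square = [2 ** k for k in range(64)]      # squares 1 through 64
    first_half = sum(grains_per_square[:32])             # squares 1-32
    whole_board = sum(grains_per_square)                 # equals 2**64 - 1

    print(f"First half of the board: {first_half:,} grains")   # about 4.3 billion
    print(f"Whole board: {whole_board:,} grains")              # about 1.8 x 10^19

    # Moore's law framed the same way: roughly one doubling every two years.
    # The 1971 start date (the first commercial microprocessor) is an assumption.
    doublings_by_2015 = (2015 - 1971) // 2
    print(f"Transistor-count doublings by 2015: {doublings_by_2015}")  # 22, still on the first half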

These authors predict that the gains still to come will dwarf those already achieved.

In the near term, however, the widely mentioned possibilities—medicine, even bigger data, robots, 3-D printing, self-driving cars—look quite insignificant.

The impact of the biomedical advances so far has been remarkably small, with pharmaceutical companies finding it increasingly difficult to register significant breakthroughs. So-called big data is clearly helping decision-making. But many of its products—ultra-high-speed trading, for example—are either socially and economically irrelevant or, quite possibly, harmful. Three-D printing is a niche activity—fun, but unlikely to revolutionize manufacturing.

Making robots replicate all the complex abilities of human beings has proved extremely difficult. Yes, robots can do well-defined human jobs in well-defined environments. Indeed, it is quite possible that standard factory work will be entirely automated. But the automation of such work is already very far advanced. It is not a revolution in the making. Yes, it is possible to imagine driverless cars. But this would be a far smaller advance than were cars themselves.

Inevitably, uncertainty is pervasive. Many believe that the impact of what is still to come could be huge. The economist Carl Benedikt Frey and the machine-learning expert Michael Osborne, both of Oxford University, have concluded that 47 percent of U.S. jobs are at high risk from automation. In the nineteenth century, they argue, machines replaced artisans and benefited unskilled labor. In the twentieth century, computers replaced middle-income jobs, creating a polarized labor market.

Over the next decades, they write, “most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labour in production occupations, are likely to be substituted by computer capital.” Moreover, they add, “computerisation will mainly substitute for low-skill and low-wage jobs in the near future. By contrast, high-skill and high-wage occupations are the least susceptible to computer capital.” That would exacerbate already existing trends toward greater inequality. But remember that previous advances also destroyed millions of jobs. The most striking example is, of course, in agriculture, which was the dominant employer of humanity between the dawn of the agricultural revolution and the nineteenth century.

The economists Jeffrey Sachs and Laurence Kotlikoff even argue that the rise in productivity generated by the coming revolution could make future generations worse off in the aggregate. The replacement of workers by robots could shift income from the former to the robots’ owners, most of whom will be retired, and the retired are assumed to save less than the young. This would lower investment in human capital because the young could no longer afford to pay for it, and it would lower investment in machines because savings in this economy would fall.

Beyond this, people imagine something far more profound than robots able to do gardening and the like: the “technological singularity,” when intelligent machines take off in a rapid cycle of self-improvement, leaving mere human beings behind. In this view, we will someday create machines with the abilities once ascribed to gods. Is that imminent? I have no idea.

BEEN THERE, DONE THAT

So how might we respond now to these imagined futures?

First, new technologies bring good and bad. We must believe we can shape the good and manage the bad.

Second, we must understand that education is not a magic wand. One reason is that we do not know what skills will be demanded three decades hence. Also, if Frey and Osborne are right, so many low- to middle-skilled jobs are at risk that it may already be too late for anybody much over 18 and for many children. Finally, even if the demand for creative, entrepreneurial, and high-level knowledge services were to grow on the required scale, which is highly unlikely, turning us all into the happy few is surely a fantasy.

Third, we will have to reconsider leisure. For a long time, the wealthiest lived a life of leisure at the expense of the toiling masses. The rise of intelligent machines would make it possible for many more people to live such lives without exploiting others. Today’s triumphant puritanism finds such idleness abhorrent. Well then, let people enjoy themselves busily. What else is the true goal of the vast increases in prosperity we have created?

Fourth, we may need to redistribute income and wealth on a large scale. Such redistribution could take the form of a basic income for every adult, together with funding for education and training at any stage in a person’s life. In this way, the potential for a more enjoyable life might become a reality. The revenue could come from taxes on bads (pollution, for example) or on rents (including land and, above all, intellectual property). Property rights are a social creation. The idea that a small minority should overwhelmingly benefit from new technologies should be reconsidered. It would be possible, for example, for the state to obtain an automatic share of the income from the intellectual property it protects.

Fifth, if labor shedding does accelerate, it will be essential to ensure that demand for labor expands in tandem with the rise in potential supply. If we succeed, many of the worries over a lack of jobs will fade away. Given the failure to achieve this in the past seven years, that may well not happen. But we could do better if we wanted to.

The rise of truly intelligent machines, if it comes, would indeed be a big moment in history. It would change many things, including the global economy. Their potential is clear: they would, in principle, make it possible for human beings to live far better lives. Whether they end up doing so depends on how the gains are produced and distributed.

It is also possible that the ultimate result might be a tiny minority of huge winners and a vast number of losers. But such an outcome would be a choice, not a destiny. Techno-feudalism is unnecessary. Above all, technology itself does not dictate the outcomes. Economic and political institutions do. If the ones we have do not give the results we want, we will need to change them.

As for the singularity, it is hard to conceive of such a state of the world. Would a surpassed humanity live happily ever after, tended, like children, by solicitous machines? Would people find meaning in a world in which their intellectual progeny were so vastly superior to themselves?

What we know for the moment is that there is nothing extraordinary in the changes we are now experiencing. We have been here before and on a much larger scale. But the current and prospective rounds of changes still create problems—above all, the combination of weak growth and significant increases in inequality. The challenge, as always, is to manage such changes. The only good reason to be pessimistic is that we are doing such a poor job of this.

The future does not have to be a disappointment. But as Gatsby learned, it can all too easily be just that.

MARTIN WOLF is Chief Economics Commentator for the Financial Times. This article draws on a column he published in the Financial Times in 2014.

The Future of Cities

The Internet of Everything Will Change How We Live

John Chambers and Wim Elfrink October 31, 2014

ALBERT GEA / COURTESY REUTERS A man walks past a stand at the Mobile World Congress in Barcelona, February 27, 2012.

As much as the Internet has already changed the world, it is the Web’s next phase that will bring the biggest opportunities, revolutionizing the way we live, work, play, and learn.

That next phase, which some call the Internet of Things and which we call the Internet of Everything, is the intelligent connection of people, processes, data, and things. Although it once seemed like a far-off idea, it is becoming a reality for businesses, governments, and academic institutions worldwide. Today, half the world’s population has access to the Internet; by 2020, two-thirds will be connected. Likewise, some 13.5 billion devices are connected to the Internet today; by 2020, we expect that number to climb to 50 billion. The things that are—and will be—connected aren’t just traditional devices, such as computers, tablets, and phones, but also parking spaces and alarm clocks, railroad tracks, street lights, garbage cans, and components of jet engines.

All of these connections are already generating massive amounts of digital data—and that volume doubles every two years. New tools will collect and share that data (some 15,000 applications are developed each week!), and analytics can turn it into information, intelligence, and even wisdom, enabling everyone to make better decisions, be more productive, and have more enriching experiences.

And the value that it will bring will be epic. In fact, the Internet of Everything has the potential to create $19 trillion in value over the next decade. For the global private sector, this equates to a 21 percent potential aggregate increase in corporate profits—or $14.4 trillion. The global public sector will benefit as well, using the Internet of Everything as a vehicle for the digitization of cities and countries. This will improve efficiency and cut costs, resulting in as much as $4.6 trillion of total value. Beyond that, it will help (and already is helping) address some of the world’s most vexing challenges: aging and growing populations rapidly moving to urban centers; growing demand for increasingly limited natural resources; and massive rebalancing in economic growth between briskly growing emerging market countries and slowing developed countries.

PHYSICAL LIMITS

More than half of the world’s population now lives in or near a major urban area, and the move toward ever-greater urbanization shows no signs of slowing. According to the United Nations, the global population is expected to grow from seven billion today to 9.3 billion by 2050, and the world’s cities will have to accommodate about 70 percent more residents.

The traditional ways of dealing with the influx—simply adding more physical infrastructure—won’t work, given limited resources and space. New ways of incorporating technology will be required to provide urban services, whether it’s roads, water, electricity, gas, work spaces, schools, or healthcare. In the future, there will be less emphasis on physical connections and more on access to virtual connections.

Cities also face budgetary challenges, battling rising costs and shrinking resources. The world’s cities account for 70 percent of greenhouse-gas emissions, and according to UN-HABITAT, energy-related costs are one of the biggest municipal budget items. Technology could provide a simple fix just by updating aging street lighting systems. That would also improve citizen safety and create a more favorable environment for business investments.

There are similar issues in many of the world’s water systems, with aging pipes in desperate need of replacing. For instance, the United States’ water infrastructure is near the end of its lifecycle with approximately 240,000 water main breaks each year. The cost of fixing this crumbling infrastructure could exceed $1 trillion over the next 25 years, assuming that all pipes are replaced. By placing networked sensors in water mains and underground pipe systems as they are repaired and replaced, cities could more effectively monitor and better anticipate future leaks and other potential problems as the infrastructure is upgraded.

More people also means more waste. The amount of municipal solid waste generated around the world is expected to reach 2.2 billion tons by 2025—up from 1.3 billion in 2012. Globally, solid waste management costs will rise to about $375.5 billion by 2025, according to predictions by the World Bank. Once again, the Internet of Everything offers ways to better manage and reduce these costs. For example, sensors in residential and commercial garbage containers could alert a city waste management system when they are full. Each morning, the drivers would receive their optimized route to empty the full containers. Compared to today’s fixed-route system, the new system could save millions of dollars by increasing efficiencies and worker productivity.
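A minimal sketch of the logic such a system might run each morning appears below, assuming hypothetical sensor readings and a simple threshold; a real deployment would replace the sort at the end with a proper vehicle-routing algorithm that weighs geography, truck capacity, and traffic.

    # Hypothetical overnight fill-level readings (percent) from networked bin sensors.
    bin_readings = {
        "bin-014": 92,
        "bin-027": 35,
        "bin-031": 78,
        "bin-044": 61,
        "bin-052": 88,
    }

    FULL_THRESHOLD = 75  # only bins at or above this level trigger a pickup

    def plan_morning_pickups(readings, threshold=FULL_THRESHOLD):
        """Return the bins that need emptying, fullest first (a stand-in for real routing)."""
        full_bins = [(bin_id, level) for bin_id, level in readings.items() if level >= threshold]
        return sorted(full_bins, key=lambda item: item[1], reverse=True)

    for bin_id, level in plan_morning_pickups(bin_readings):
        print(f"{bin_id}: {level}% full -> schedule pickup")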

The intelligent and efficient stewardship of growing cities must take top priority. And there, we are convinced that the Internet of Everything will bring one of the most significant technology transitions since the birth of the Internet. Connections between things and people, supported by networked processes, will enable everyone to turn data into actionable information that can be used to do things that weren’t possible before, or to do them better. We can more quickly discover patterns and trends; we can predict and prepare for anything from bus or assembly line breakdowns to natural disasters and quick surges in product demand.

PUBLIC GOOD

Perhaps surprisingly, the public sector has been the most effective and innovative early adopter when it comes to making use of the Internet of Everything, especially in major metropolitan areas. New and innovative solutions are already transforming green fields and rundown urban centers into what we call Smart + Connected Communities, or Smart Cities. According to IHS Technology, the total number of Smart Cities will quadruple from 21 to 88 between 2013 and 2025. At Cisco, we are engaged with more than 100 cities in different stages of Smart City development.

By definition, Smart Cities are those that integrate information communications technology across three or more functional areas. More simply put, a Smart City is one that combines traditional infrastructure (roads, buildings, and so on) with technology to enrich the lives of its citizens. Creative platforms and killer apps have helped reduce traffic, parking congestion, pollution, energy consumption, and crime. They have also generated revenue and reduced costs for city residents and visitors.

For instance, one-third of the world’s streetlights use technology from the 1960s. Cities that update aging systems with networked motion-detection lights save administrative and management time as well as electricity and costs—as much as 70–80 percent, according to an independent, global trial of LED technology. By using such energy-saving technologies, cities can drastically lower their municipal expenditures on electricity. Cisco estimates that smart street lighting initiatives can also reduce area crime by seven percent because of better visibility and a more content citizenry. Further, connected light poles can serve as wireless networking access points, enabling citizens and city managers to take advantage of pervasive connectivity. And networked sensors incorporated into utility lines could help reduce costs for both consumers and providers, with meters being “read” remotely, and much more accurately. Cities such as Nice, France, are already implementing smart lighting, which monitors lamp intensity and uses traffic sensors to help reduce car theft, assaults, and even home burglary. These lighting initiatives are also expected to reduce the city’s energy bill by more than $8 million.
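The savings figure is consistent with simple back-of-the-envelope arithmetic. The sketch below works through one version of it; every number (fixture wattages, the share of the night spent dimmed) is an assumption chosen for illustration, not a figure from the trial cited above.

    # Illustrative comparison: an older street light versus a networked LED fixture
    # that dims when sensors detect no activity. All parameters are assumptions.
    HOURS_PER_NIGHT = 12
    NIGHTS_PER_YEAR = 365

    legacy_watts = 250      # older high-pressure sodium fixture
    led_watts = 100         # LED replacement at full brightness
    dimmed_watts = 30       # LED output while dimmed
    dimmed_share = 0.6      # assumed fraction of the night spent dimmed

    legacy_kwh = legacy_watts * HOURS_PER_NIGHT * NIGHTS_PER_YEAR / 1000
    led_avg_watts = led_watts * (1 - dimmed_share) + dimmed_watts * dimmed_share
    led_kwh = led_avg_watts * HOURS_PER_NIGHT * NIGHTS_PER_YEAR / 1000

    savings = 1 - led_kwh / legacy_kwh
    print(f"Legacy fixture: {legacy_kwh:.0f} kWh per year")
    print(f"Networked LED:  {led_kwh:.0f} kWh per year")
    print(f"Energy saved:   {savings:.0%}")   # roughly 77% with these assumptions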

Smart Cities are also saving energy indoors. Buildings outfitted with intelligent sensors and networked management systems can collect and analyze energy-use data. Such technologies have the potential to reduce energy consumption and cut costs by $100 billion globally over the next decade. Thanks to higher traffic, cities generate more than 67 percent of greenhouse gases released into our atmosphere. Experts predict that this figure will rise to 74 percent by 2030. In the United States alone, traffic congestion costs $121 billion a year in wasted time and fuel. Incredibly, drivers looking for a parking space cause 30 percent of urban congestion, not to mention pollution. To overcome this problem, the city of San Carlos, California has embedded networked sensors into parking spaces that relay to drivers real-time information about—and directions to—available spots. This program has helped reduce congestion, pollution, and fuel consumption. Moreover, parking fees can be dynamically adjusted for peak times, which generates more revenue for cities.

Cities can also integrate sensors that collect and share real- time data about public transportation systems to improve traffic flow and better monitor the use of buses and trains, giving them the ability to adjust route times and frequency of stops based on changing needs. This alone will cut costs and bring new efficiencies. Mobile apps that aggregate the information, meanwhile, can help citizens track delays or check pick-up times for a more seamless commute. Barcelona, Spain has already changed the typical experience of waiting for a bus by deploying smart bus stops, where citizens can use touchscreen monitors to view up-to-date bus schedules, maps, locations for borrowing city-owned bikes, and local businesses and entertainment.

Innovative municipal leaders understand the Internet of Everything’s incredible promise. In fact, these days, the most innovative cities have their own chief information officers or even chief digital officers.

SUPER CITIES

There are a number of iconic examples of cities that have put the Internet of Everything into use. They range from the ancient—Barcelona, Spain—to the new—Songdo, South Korea.

Barcelona, which, with a population of about 1.6 million people, is Spain’s second largest city, has embraced the Internet of Everything and is reaping the rewards—approximately $3.6 billion in value over the next decade. About $1 billion of this will come from productivity improvements. Other gains are from reductions in operational, resource, and environmental costs. Still more comes via revenues from new businesses focused on innovation.

City leaders have incorporated connected technology into the mayor’s office and the city council, not to mention the water management, waste management, parking, and public-transportation systems. These technologies have contributed significantly to Barcelona’s profitability (it is one of the few cities in Europe that is running a budget surplus) and have improved the quality of life of its citizens. For example, the city has deployed free Wi-Fi and created a rich assortment of citizen and government apps. Barcelona is also using the Internet of Everything to improve the city’s water-management system (generating $58 million in savings annually), install smart street lighting ($47 million), and embed sensors in parking spaces to let drivers know where open spaces exist ($67 million).

It’s no wonder, then, that in early March, the European Union named Barcelona Europe’s most innovative city. The same month, Fortune also recognized the city’s mayor, Xavier Trias, as one of the world’s 50 “Greatest Leaders.” The publication wrote, “Barcelona has its Mediterranean port, its Gaudí treasures, and since 2011, a mayor who is busy transforming the cultural gem of Spain’s Catalonia region into the smartest ‘smart city’ on the planet. Partnerships with companies like Cisco and Microsoft are fueling development, a new tech-campus hub is in the works, and he’s connecting citizens to government services through mobile technology.”

On the other side of the globe, Songdo, South Korea, is the world’s first truly green field city developed from the ground up with sustainability metrics—economic, social, and environmental—in mind. Through the city’s network, citizens can access a host of urban services—healthcare, government, transportation, utilities, safety and security, and education—from the convenience of their living rooms or within a 12-minute walk. Real-time traffic information helps them plan their commutes. Remote healthcare services and information reduce expenses and travel time. Remotely automated building security improves safety and lowers costs.

Through a unique public–private partnership, the city is evolving as a living lab for urban management and service delivery. It can serve as a model for other communities built from the ground up. The aim is not only to develop urban services that enhance citizens’ daily lives and reduce the city’s resource footprint, but also to deliver economic value to the city by attracting new citizens and companies. These initiatives have the potential to create true economic value over the next 15 years, including as many as 300,000 jobs and $26.4 billion in gross regional domestic product (GRDP) growth. What’s more, Booz Allen & Company has estimated that the city will be able to reduce carbon dioxide emissions by up to 4.5 million tons.

NEW NORMAL

How can the world make Barcelona and Songdo the norm rather than the exception?

First, it is important to establish a process for prioritizing potential Internet of Everything initiatives based on the problems that need to be addressed. Articulating the real benefits of such programs and then gathering metrics on those initiatives once they are launched can generate support for the programs internally and with the public. City leaders should also consider starting with replicable initiatives that have worked well in other similar jurisdictions, such as smart parking and other transportation-based projects. Transportation officials often have the requisite budgets and authority to launch scalable pilot projects, and metrics of success are relatively easy to develop and communicate to stakeholders.

Second, the world must rethink IT investments. This means moving away from purchasing isolated services and instead focusing on end-to-end solutions that are integrated across disparate or siloed systems. By adapting to a technology infrastructure that is application-friendly and can be automated, as well as putting in place an expansive network that can handle a multitude of devices and sensors, cities and countries can reduce costs by billions of dollars. Integrating connected technology across systems, including water and waste management, municipal processes, smart buildings, energy systems, and so on, will allow for the biggest impact.

Third, governments should start looking at IT as a value creator rather than a cost center. Indeed, IT enables governments to carry out their overall strategies and will allow cities to thrive over the long term. In many instances, measurable returns on IT investment can be realized within a few years or less. With new connections, governments and their agencies can improve employee productivity, attract talent and jobs, generate new revenue (without raising taxes), and also create quantifiable benefits for citizens. The Internet of Everything offers $4.6 trillion in value in the public sector alone. That number speaks for itself.

Fourth, the world can’t be afraid of embracing technology in new ways. This means rethinking the contract with citizens and the services IT firms and governments provide them. As the Internet of Everything evolves, the technology industry must also continuously improve security and privacy measures throughout the end-to-end value chain. We believe that industry self-regulation adhering to the highest international standards can be effective in protecting privacy and security. Such security regimes can be strengthened by innovative tools that provide users with the choice to opt in or out of programs and that help users understand how their data are collected and used.

Fifth, Smart Cities require cooperation between public and private partners. Such collaboration helps defray costs, solve pressing problems, and increase benefits for government, citizens, and industries. We have found that Smart Cities require five things: innovative and bold city leadership championing clear programs and outcomes across departments; hyper collaborative partnerships between the public and private sectors; information communications technology master plans and workshops to define and develop holistic and specific projects; and adherence to deadlines—perhaps one of the most important priorities. When the risks and rewards from projects are shared among partners, such as government leaders, private citizens, investors and technology companies, issues are more likely to be resolved and projects are more likely to be completed, because all parties have a stake in their investments. These partnerships are key to managing and financing projects that require advanced infrastructure and technology architecture.

Finally, start piloting now. City leaders have already shown that Internet of Everything solutions can solve difficult problems and improve the lives of citizens. And these leaders are enthusiastic about its potential to do even more. In Cisco surveys, they cited the importance of using pilots to obtain stakeholder sponsorship, prove the business case, and get the technology right. Pilots should be scalable and have clear metrics of success. Perseverance in the face of technical and political challenges can be the difference between success and failure.

This year signals a major inflection point for the Internet of Everything, which will have a much bigger impact on the world and its cities than the Internet did in its first 20 years. The Internet of Everything is already revolutionizing the way our cities operate, creating a more dynamic global economy and also bringing new, richer experiences to citizens. Soon, we will live in a world where everything—and everyone—can be connected to everything else. Streets will be safer, homes will be smarter, citizens will be healthier and better educated. The Internet of Everything will change how we work—more information, better decisions, more agile supply chains, more responsive manufacturing, and increased economic value. The foundation of the city of the future will be the Internet of Everything, and those embracing this technology are leading the way.

JOHN CHAMBERS is Chairman and CEO of Cisco. WIM ELFRINK is Executive Vice President for Industry Solutions and Chief Globalisation Officer of Cisco. October 31, 2014

The Coming Robot Dystopia

All Too Inhuman

Illah Reza Nourbakhsh July/August 2015

GLEB GARANICH / REUTERS Wooden model Cylon is posed to look out of the window of the flat of its maker, Ukrainian Dmitry Balandin, in Zaporizhzhya, August 6, 2013.

The term “robotics revolution” evokes images of the future: a not-too-distant future, perhaps, but an era surely distinct from the present. In fact, that revolution is already well under way. Today, military robots appear on battlefields, drones fill the skies, driverless cars take to the roads, and “telepresence robots” allow people to manifest themselves halfway around the world from their actual location. But the exciting, even seductive appeal of these technological advances has overshadowed deep, sometimes uncomfortable questions about what increasing human-robot interaction will mean for society.

Robotic technologies that collect, interpret, and respond to massive amounts of real-world data on behalf of governments, corporations, and ordinary people will unquestionably advance human life. But they also have the potential to produce dystopian outcomes. We are hardly on the brink of the nightmarish futures conjured by Hollywood movies such as The Matrix or The Terminator, in which intelligent machines attempt to enslave or exterminate humans. But those dark fantasies contain a seed of truth: the robotic future will involve dramatic tradeoffs, some so significant that they could lead to a collective identity crisis over what it means to be human.

LUKE MACGREGOR / COURTESY REUTERS A robot is pictured in front of the Houses of Parliament and Westminster Abbey as part of the Campaign to Stop Killer Robots in London, April 2013.

This is a familiar warning when it comes to technological innovations of all kinds. But there is a crucial distinction between what’s happening now and the last great breakthrough in robotic technology, when manufacturing automatons began to appear on factory floors during the late twentieth century. Back then, clear boundaries separated industrial robots from humans: protective fences isolated robot workspaces, ensuring minimal contact between man and machine, and humans and robots performed wholly distinct tasks without interacting.

Such barriers have been breached, not only in the workplace but also in the wider society: robots now share the formerly human-only commons, and humans will increasingly interact socially with a diverse ecosystem of robots. The trouble is that the rich traditions of moral thought that guide human relationships have no equivalent when it comes to robot-to-human interactions. And of course, robots themselves have no innate drive to avoid ethical transgressions regarding, say, privacy or the protection of human life. How robots interact with people depends to a great degree on how much their creators know or care about such issues, and robot creators tend to be engineers, programmers, and designers with little training in ethics, human rights, privacy, or security. In the United States, hardly any of the academic engineering programs that grant degrees in robotics require the in-depth study of such fields.

One might hope that political and legal institutions would fill that gap, by steering and constraining the development of robots with the goal of reducing their potential for harm. Ideally, the rapid expansion of robots’ roles in society would be matched by equally impressive advances in regulation and in tort and liability law, so that societies could deal with the issues of accountability and responsibility that will inevitably crop up in the coming years. But the pace of change in robotics is far outstripping the ability of regulators and lawmakers to keep up, especially as large corporations pour massive investments into secretive robotics projects that are nearly invisible to government regulators.

There is every reason to believe that this gap between robot capability and robot regulation will widen every year, posing all kinds of quandaries for law and government. Imagine an adaptive robot that lives with and learns from its human owner. Its behavior over time will be a function of its original programming mixed with the influence of its environment and “upbringing.” It would be difficult for existing liability laws to apportion responsibility if such a machine caused injury, since its actions would be determined not merely by computer code but also by a deep neural-like network that would have learned from various sources. Who would be to blame? The robot? Its owner? Its creator?

We face a future in which robots will test the boundaries of our ethical and legal frameworks with increasing audacity. There will be no easy solutions to this challenge—but there are some steps we can take to prepare for it. Research institutes, universities, and the authorities that regulate them must help ensure that people trained to design and build intelligent machines also receive a rigorous education in ethics. And those already on the frontlines of innovation need to concentrate on investing robots with true agency. Human efforts to determine accountability almost always depend on our ability to discover and analyze intention. If we are going to live in a world with machines who act more and more like people and who make ever more “personal” choices, then we should insist that robots also be able to communicate with us about what they know, how they know it, and what they want.

A DOUBLE-EDGED SWORD

For a good illustration of the kinds of quandaries that robots will pose by mixing clear social benefits with frustrating ethical dilemmas, consider the wheelchair. Today, more than 65 million people are confined to wheelchairs, contending with many more obstacles than their walking peers and sitting in a world designed for standing. But thanks to robotics, the next two decades will likely see the end of the wheelchair. Researchers at Carnegie Mellon; the University of California, Berkeley; and a number of other medical robotics laboratories are currently developing exoskeletal robotic legs that can sense objects and maintain balance. With these new tools, elderly people who are too frail to walk will find new footing, knowing that a slip that could result in a dangerous fracture will be far less likely. For visually impaired wheelchair users, exoskeletal robotic legs combined with computerized cameras and sensors will create a human-robot team: the person will select a high-level strategy—say, going to a coffee shop—and the legs will take care of the low-level operations of step-by-step navigation and motion.

Such outcomes would represent unqualified gains for humanity. But as robotic prosthetics enter the mainstream, the able-bodied will surely want to take advantage of them, too. These prosthetics will house sensors and cloud-connected software that will exceed the human body’s ability to sense, store, and process information. Such combinations are the first step in what futurists such as Hans Moravec and Ray Kurzweil have dubbed “transhumanism”: a post-evolutionary transformation that will replace humans with a hybrid of man and machine. To date, hybrid performance has mostly fallen short of conventional human prowess, but it is merely a matter of time before human-robot couplings greatly outperform purely biological systems.

These superhuman capabilities will not be limited to physical action: computers are increasingly capable of receiving and interpreting brain signals transmitted through electrodes implanted in the head (or arranged around the head) and have even demonstrated rudimentary forms of brain-based machine control. Today, researchers are primarily interested in designing one-way systems, which can read brain signals and then send them to devices such as prosthetic limbs and cars. But no serious obstacles prevent computer interfaces from sending such signals right back, arming a human brain with a silicon turbocharge. The ability to perform complex mathematical calculations, produce top-quality language translation, and even deliver virtuosic musical performances might one day depend not solely on innate skill and practice but also on having access to the best brain-computer hybrid architecture.

Such advantages, however, would run headlong into a set of ethical problems: just as a fine line separates genetic engineering from eugenics, so, too, is there no clear distinction between robotics that would lift a human’s capabilities to their organic limit and those that would vault a person beyond all known boundaries. Such technologies have the potential to vastly magnify the already-significant gaps in opportunity and achievement that exist between people of different economic means. In the robotic future, today’s intense debates about social and economic inequality will seem almost quaint.

EVERY STEP YOU TAKE

Democracy and capitalism rely on a common underlying assumption: if informed individuals acting rationally can express their free will, their individual choices will combine to yield the best outcome for society as a whole. Both systems thus depend on two conditions: people must have access to information and must have the power to make choices. The age of “big data” promises greater access to information of all kinds. But robotic technologies that collect and interpret unprecedented amounts of data about human behavior actually threaten both access to information and freedom of choice.

A fundamental shift has begun to take place in the relationship between automation technologies and human behavior. Conventional interactions between consumers and firms are based on direct economic exchanges: consumers pay for goods and services, and firms provide them. In the digital economy, however, consumers benefit more and more from seemingly free service, while firms profit not by directly charging consumers but by collecting and then monetizing information about consumers’ behavior, often without their knowledge or acquiescence. This kind of basic data mining has become commonplace: think, for example, of how Google analyzes users’ search histories and e-mail messages in order to determine what products they might be interested in buying and then uses that information to sell targeted advertising space to other firms.

JOHN GRESS / COURTESY REUTERS Zac Vawter, a 31-year-old software engineer, uses the world's first neural-controlled bionic leg in Chicago, November 2012.

As more automation technologies begin to appear in the physical world, such processes will become even more invasive. In the coming years, digital advertisements will incorporate pupil-tracking technology—currently in development at Carnegie Mellon and elsewhere—that can monitor the gazes of passersby from meters away. Fitted with sophisticated cameras and software that can estimate a passerby’s age and gender and observe facial cues to recognize moods and emotions, interactive billboards will not merely display static advertisements to viewers but also conduct ongoing tests of human responses to particular messages and stimuli, noting the emotional responses and purchasing behaviors of every subcategory of consumer and compiling massive, aggregated histories of the effect of each advertisement.

This very concept was depicted in the 2002 science-fiction film Minority Report during a scene in which the protagonist (played by Tom Cruise) walks through a shopping center where holographic signs and avatars bombard him with marketing messages, calling out his name and offering him products and services specifically tailored to him. Far from suggesting a shopper’s paradise, the scene is deeply unsettling, because it captures the way that intelligent machines might someday push humans’ buttons so well that we will become the automatons, under the sway (and even control) of well-informed, highly social robots that have learned how to influence our behavior.

A less fantastic, shorter-term concern about the effects of robotics and machine learning on human agency and well-being revolves around labor. In The Second Machine Age, the economist Erik Brynjolfsson and the information technology expert Andrew McAfee demonstrate that robotic technology is increasingly more efficient than human labor, offering a significant return on investment when performing both routine manual jobs and simple mental tasks. Unlike human workers, whose collective performance doesn’t change much over time, robot employees keep getting more efficient. With each advance in robot capability, it becomes harder to justify employing humans, even in jobs that require specialized skills or knowledge. No fundamental barrier exists to stop the onward march of robots into the labor market: almost every job, blue collar and white collar, will be at risk in an age of exponential progress in computing and robotics. The result might be higher unemployment, which, in turn, could contribute to rising economic inequality, as the wealth created by new technologies benefits fewer and fewer people.

ONE SINGULAR SENSATION

In discussions and debates among technologists, economists, and philosophers, such visions of the future sit alongside a number of less grim prognostications about what the world will look like once artificial intelligence and machine learning have produced the “technological singularity”: computer systems that can themselves invent new technologies that surpass those created by their original human creators. The details of such predictions vary depending on the forecaster. Some, such as Moravec, foresee a post-evolutionary successor to Homo sapiens that will usher in a new leisure age of comfort and prosperity. Others envision robotic vessels able to “upload” human consciousness. And Kurzweil has suggested that the technological singularity will offer people a kind of software-based immortality.

These long-term views, however, can distract from the more prosaic near-term consequences of the robotics revolution—not the great dislocations caused by a superhuman machine consciousness but rather the small train wrecks that will result from the spread of mediocre robot intelligence. Today, nearly all our social interactions take place with other humans, but we are on the cusp of an era in which machines will become our usual interlocutors. Our driverless cars will join in our fights with one another over parking spots: when an argument leads to a fender-bender, we will insist to our robot mechanics that they have not repaired our robot cars properly. We will negotiate with robot hostesses for corner tables at restaurants where the food is prepared by robot chefs. Every day, we will encounter robots, from hovering drones to delivery machines to taxis, that will operate seamlessly with and without human remote control; daily life will involve constantly interacting with machines without knowing just how much another person might be involved in the machine’s response. There will be no room in such infinitely adjustable human-robot systems for us to treat robots one way and humans another; each style of interaction will infect the other, and the result will be an erosion of our sense of identity.

ALAMY Terminator.

But the result need not be a robot dystopia. A clear set of decisions about robot design and regulation stands between today’s world of human agency and tomorrow’s world of robot autonomy. Inventors must begin to combine technological ingenuity with sociological awareness, and governments need to design institutions and processes that will help integrate new, artificial agents into society.

Today, all civil engineers are required to study ethics because an incorrectly designed bridge can cause great public harm. Roboticists face the same kind of responsibility, because their creations are no longer mere academic pursuits. Computer science departments, which typically sponsor robotics research, must follow the lead of civil engineering departments and require that every degree candidate receive sufficient training in ethics. But preparing tomorrow’s robot creators will help only so much; the clock is ticking, and today’s roboticists must begin to think more clearly about how to build intelligent machines able to integrate themselves into societies.

An important first step would be to make clear distinctions between robotic appliances and robotic agents. Robots that follow fixed directions and make no autonomous decisions should wear their limited cognitive abilities on their sleeves. This means they should not have faces, and they should not speak or communicate like people or express human emotions: a robotic vacuum cleaner shouldn’t tell its owner that it misses him when he’s at work. As for robots designed to formulate goals, make decisions, and convince people of their agency, they need to grow up. If roboticists want such machines to have anthropomorphic qualities, then their robots must also accept direct accountability: people must be able to question these machines about their knowledge, their goals, their desires, and their intentions.
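One way to picture that distinction is as two different software contracts: an appliance reports only its task status, while an agent must also be able to answer questions about what it knows and what it wants. The sketch below is illustrative only; the class and method names are invented here and do not come from any existing robotics framework.

    # Hypothetical contrast between a robotic appliance and an accountable robotic agent.

    class RoboticAppliance:
        """Follows fixed directions, makes no autonomous decisions, and does not emote."""

        def __init__(self, task):
            self.task = task

        def status(self):
            return f"Executing fixed task: {self.task}"


    class RoboticAgent(RoboticAppliance):
        """Formulates goals and makes decisions, so it must also be answerable for them."""

        def __init__(self, task, goals, knowledge_sources):
            super().__init__(task)
            self.goals = goals
            self.knowledge_sources = knowledge_sources

        # The accountability interface: people can ask what the machine knows,
        # how it knows it, and what it wants.
        def explain_knowledge(self):
            return "I rely on: " + ", ".join(self.knowledge_sources)

        def explain_goals(self):
            return "I am trying to: " + "; ".join(self.goals)


    vacuum = RoboticAppliance("vacuum the den")
    errand_bot = RoboticAgent(
        "plan the household errands",
        goals=["minimize time spent in traffic"],
        knowledge_sources=["owner's calendar", "live traffic feed"],
    )
    print(vacuum.status())
    print(errand_bot.explain_goals())
    print(errand_bot.explain_knowledge())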

Knowledge and transparency, the most valuable goods promised by the dawn of the information age in the last century, will take on even greater importance in the age of automation. Educators and regulators must help robot inventors acquire knowledge, and the inventors, in turn, must pledge to create more transparent artificial beings.

ILLAH REZA NOURBAKHSH is Professor of Robotics at the Robotics Institute of Carnegie Mellon University and the author of Robot Futures. June 16, 2015

The Political Power of Social Media

Technology, the Public Sphere, and Political Change

Clay Shirky January/February 2011

On January 17, 2001, during the impeachment trial of Philippine President Joseph Estrada, loyalists in the Philippine Congress voted to set aside key evidence against him. Less than two hours after the decision was announced, thousands of Filipinos, angry that their corrupt president might be let off the hook, converged on Epifanio de los Santos Avenue, a major crossroads in Manila. The protest was arranged, in part, by forwarded text messages reading, "Go 2 EDSA. Wear blk." The crowd quickly swelled, and in the next few days, over a million people arrived, choking traffic in downtown Manila.

The public's ability to coordinate such a massive and rapid response -- close to seven million text messages were sent that week -- so alarmed the country's legislators that they reversed course and allowed the evidence to be presented. Estrada's fate was sealed; by January 20, he was gone. The event marked the first time that social media had helped force out a national leader. Estrada himself blamed "the text-messaging generation" for his downfall.

Since the rise of the Internet in the early 1990s, the world's networked population has grown from the low millions to the low billions. Over the same period, social media have become a fact of life for civil society worldwide, involving many actors -- regular citizens, activists, nongovernmental organizations, telecommunications firms, software providers, governments. This raises an obvious question for the U.S. government: How does the ubiquity of social media affect U.S. interests, and how should U.S. policy respond to it?

As the communications landscape gets denser, more complex, and more participatory, the networked population is gaining greater access to information, more opportunities to engage in public speech, and an enhanced ability to undertake collective action. In the political arena, as the protests in Manila demonstrated, these increased freedoms can help loosely coordinated publics demand change.

The Philippine strategy has been adopted many times since. In some cases, the protesters ultimately succeeded, as in Spain in 2004, when demonstrations organized by text messaging led to the quick ouster of Spanish Prime Minister José María Aznar, who had inaccurately blamed the Madrid transit bombings on Basque separatists. The Communist Party lost power in Moldova in 2009 when massive protests coordinated in part by text message, Facebook, and Twitter broke out after an obviously fraudulent election. Around the world, the Catholic Church has faced lawsuits over its harboring of child rapists, a process that started when The Boston Globe's 2002 exposé of sexual abuse in the church went viral online in a matter of hours.

There are, however, many examples of the activists failing, as in Belarus in March 2006, when street protests (arranged in part by e-mail) against President Aleksandr Lukashenko's alleged vote rigging swelled, then faltered, leaving Lukashenko more determined than ever to control social media. During the June 2009 uprising of the Green Movement in Iran, activists used every possible technological coordinating tool to protest the miscount of votes for Mir Hossein Mousavi but were ultimately brought to heel by a violent crackdown. The Red Shirt uprising in Thailand in 2010 followed a similar but quicker path: protesters savvy with social media occupied downtown Bangkok until the Thai government dispersed the protesters, killing dozens.

The use of social media tools -- text messaging, e-mail, photo sharing, social networking, and the like -- does not have a single preordained outcome. Therefore, attempts to outline their effects on political action are too often reduced to dueling anecdotes. If you regard the failure of the Belarusian protests to oust Lukashenko as paradigmatic, you will regard the Moldovan experience as an outlier, and vice versa. Empirical work on the subject is also hard to come by, in part because these tools are so new and in part because relevant examples are so rare. The safest characterization of recent quantitative attempts to answer the question, Do digital tools enhance democracy? (such as those by Jacob Groshek and Philip Howard) is that these tools probably do not hurt in the short run and might help in the long run -- and that they have the most dramatic effects in states where a public sphere already constrains the actions of the government.

Despite this mixed record, social media have become coordinating tools for nearly all of the world's political movements, just as most of the world's authoritarian governments (and, alarmingly, an increasing number of democratic ones) are trying to limit access to them. In response, the U.S. State Department has committed itself to "Internet freedom" as a specific policy aim. Arguing for the right of people to use the Internet freely is an appropriate policy for the United States, both because it aligns with the strategic goal of strengthening civil society worldwide and because it resonates with American beliefs about freedom of expression. But attempts to yoke the idea of Internet freedom to short-term goals -- particularly ones that are country-specific or are intended to help particular dissident groups or encourage regime change -- are likely to be ineffective on average. And when they fail, the consequences can be serious.

Although the story of Estrada's ouster and other similar events have led observers to focus on the power of mass protests to topple governments, the potential of social media lies mainly in their support of civil society and the public sphere -- change measured in years and decades rather than weeks or months. The U.S. government should maintain Internet freedom as a goal to be pursued in a principled and regime-neutral fashion, not as a tool for effecting immediate policy aims country by country. It should likewise assume that progress will be incremental and, unsurprisingly, slowest in the most authoritarian regimes.

THE PERILS OF INTERNET FREEDOM

In January 2010, U.S. Secretary of State Hillary Clinton outlined how the United States would promote Internet freedom abroad. She emphasized several kinds of freedom, including the freedom to access information (such as the ability to use Wikipedia and Google inside Iran), the freedom of ordinary citizens to produce their own public media (such as the rights of Burmese activists to blog), and the freedom of citizens to converse with one another (such as the Chinese public's capacity to use instant messaging without interference).

Most notably, Clinton announced funding for the development of tools designed to reopen access to the Internet in countries that restrict it. This "instrumental" approach to Internet freedom concentrates on preventing states from censoring outside Web sites, such as Google, YouTube, or that of The New York Times. It focuses only secondarily on public speech by citizens and least of all on private or social uses of digital media. According to this vision, Washington can and should deliver rapid, directed responses to censorship by authoritarian regimes.

The instrumental view is politically appealing, action-oriented, and almost certainly wrong. It overestimates the value of broadcast media while underestimating the value of media that allow citizens to communicate privately among themselves. It overestimates the value of access to information, particularly information hosted in the West, while underestimating the value of tools for local coordination. And it overestimates the importance of computers while underestimating the importance of simpler tools, such as cell phones.

The instrumental approach can also be dangerous. Consider the debacle around the proposed censorship-circumvention software known as Haystack, which, according to its developer, was meant to be a "one-to-one match for how the [Iranian] regime implements censorship." The tool was widely praised in Washington; the U.S. government even granted it an export license. But the program was never carefully vetted, and when security experts examined it, it turned out that it not only failed at its goal of hiding messages from governments but also made it, in the words of one analyst, "possible for an adversary to specifically pinpoint individual users." In contrast, one of the most successful anti-censorship software programs, Freegate, has received little support from the United States, partly because of ordinary bureaucratic delays and partly because the U.S. government is wary of damaging U.S.-Chinese relations: the tool was originally created by Falun Gong, the spiritual movement that the Chinese government has called "an evil cult." The challenges of Freegate and Haystack demonstrate how difficult it is to weaponize social media to pursue country-specific and near-term policy goals.

New media conducive to fostering participation can indeed increase the freedoms Clinton outlined, just as the printing press, the postal service, the telegraph, and the telephone did before. One complaint about the idea of new media as a political force is that most people simply use these tools for commerce, social life, or self-distraction, but this is common to all forms of media. Far more people in the 1500s were reading erotic novels than Martin Luther's "Ninety-five Theses," and far more people before the American Revolution were reading Poor Richard's Almanack than the work of the Committees of Correspondence. But those political works still had an enormous political effect.

Just as Luther adopted the newly practical printing press to protest against the Catholic Church, and the American revolutionaries synchronized their beliefs using the postal service that Benjamin Franklin had designed, today's dissident movements will use any means possible to frame their views and coordinate their actions; it would be impossible to describe the Moldovan Communist Party's loss of Parliament after the 2009 elections without discussing the use of cell phones and online tools by its opponents to mobilize. Authoritarian governments stifle communication among their citizens because they fear, correctly, that a better-coordinated populace would constrain their ability to act without oversight.

Despite this basic truth -- that communicative freedom is good for political freedom -- the instrumental mode of Internet statecraft is still problematic. It is difficult for outsiders to understand the local conditions of dissent. External support runs the risk of tainting even peaceful opposition as being directed by foreign elements. Dissidents can be exposed by the unintended effects of novel tools. A government's demands for Internet freedom abroad can vary from country to country, depending on the importance of the relationship, leading to cynicism about its motives.

The more promising way to think about social media is as long-term tools that can strengthen civil society and the public sphere. In contrast to the instrumental view of Internet freedom, this can be called the "environmental" view. According to this conception, positive changes in the life of a country, including pro-democratic regime change, follow, rather than precede, the development of a strong public sphere. This is not to say that popular movements will not successfully use these tools to discipline or even oust their governments, but rather that U.S. attempts to direct such uses are likely to do more harm than good. Considered in this light, Internet freedom is a long game, to be conceived of and supported not as a separate agenda but merely as an important input to the more fundamental political freedoms.

THE THEATER OF COLLAPSE

Any discussion of political action in repressive regimes must take into account the astonishing fall of communism in 1989 in eastern Europe and the subsequent collapse of the Soviet Union in 1991. Throughout the Cold War, the United States invested in a variety of communications tools, including broadcasting the Voice of America radio station, hosting an American pavilion in Moscow (home of the famous Nixon- Khrushchev "kitchen debate"), and smuggling Xerox machines behind the Iron Curtain to aid the underground press, or samizdat. Yet despite this emphasis on communications, the end of the Cold War was triggered not by a defiant uprising of Voice of America listeners but by economic change. As the price of oil fell while that of wheat spiked, the Soviet model of selling expensive oil to buy cheap wheat stopped working. As a result, the Kremlin was forced to secure loans from the West, loans that would have been put at risk had the government intervened militarily in the affairs of non-Russian states. In 1989, one could argue, the ability of citizens to communicate, considered against the background of macroeconomic forces, was largely irrelevant.

But why, then, did the states behind the Iron Curtain not just let their people starve? After all, the old saying that every country is three meals away from revolution turned out to be sadly incorrect in the twentieth century; it is possible for leaders to survive even when millions die. Stalin did it in the 1930s, Mao did it in the 1960s, and Kim Jong Il has done it more than once in the last two decades. But the difference between those cases and the 1989 revolutions was that the leaders of East Germany, Czechoslovakia, and the rest faced civil societies strong enough to resist. The weekly demonstrations in East Germany, the Charter 77 civic movement in Czechoslovakia, and the Solidarity movement in Poland all provided visible governments in waiting.

The ability of these groups to create and disseminate literature and political documents, even with simple photocopiers, provided a visible alternative to the communist regimes. For large groups of citizens in these countries, the political and, even more important, economic bankruptcy of the government was no longer an open secret but a public fact. This made it difficult and then impossible for the regimes to order their troops to take on such large groups.

Thus, it was a shift in the balance of power between the state and civil society that led to the largely peaceful collapse of communist control. The state's ability to use violence had been weakened, and the civil society that would have borne the brunt of its violence had grown stronger. When civil society triumphed, many of the people who had articulated opposition to the communist regimes -- such as Tadeusz Mazowiecki in Poland and Václav Havel in Czechoslovakia -- became the new political leaders of those countries. Communications tools during the Cold War did not cause governments to collapse, but they helped the people take power from the state when it was weak.

The idea that media, from the Voice of America to samizdat, play a supporting role in social change by strengthening the public sphere echoes the historical role of the printing press. As the German philosopher Jürgen Habermas argued in his 1962 book, The Structural Transformation of the Public Sphere, the printing press helped democratize Europe by providing space for discussion and agreement among politically engaged citizens, often before the state had fully democratized, an argument extended by later scholars, such as Asa Briggs, Elizabeth Eisenstein, and Paul Starr.

Political freedom has to be accompanied by a civil society literate enough and densely connected enough to discuss the issues presented to the public. In a famous study of political opinion after the 1948 U.S. presidential election, the sociologists Elihu Katz and Paul Lazarsfeld discovered that mass media alone do not change people's minds; instead, there is a two-step process. Opinions are first transmitted by the media, and then they get echoed by friends, family members, and colleagues. It is in this second, social step that political opinions are formed. This is the step in which the Internet in general, and social media in particular, can make a difference. As with the printing press, the Internet spreads not just media consumption but media production as well -- it allows people to privately and publicly articulate and debate a welter of conflicting views.

A slowly developing public sphere, where public opinion relies on both media and conversation, is the core of the environmental view of Internet freedom. As opposed to the self-aggrandizing view that the West holds the source code for democracy -- and if it were only made accessible, the remaining autocratic states would crumble -- the environmental view assumes that little political change happens without the dissemination and adoption of ideas and opinions in the public sphere. Access to information is far less important, politically, than access to conversation. Moreover, a public sphere is more likely to emerge in a society as a result of people's dissatisfaction with matters of economics or day-to-day governance than from their embrace of abstract political ideals.

To take a contemporary example, the Chinese government today is in more danger of being forced to adopt democratic norms by middle-class members of the ethnic Han majority demanding less corrupt local governments than it is by Uighurs or Tibetans demanding autonomy. Similarly, the One Million Signatures Campaign, an Iranian women's rights movement that focuses on the repeal of laws inimical to women, has been more successful in liberalizing the behavior of the Iranian government than the more confrontational Green Movement.

For optimistic observers of public demonstrations, this is weak tea, but both the empirical and the theoretical work suggest that protests, when effective, are the end of a long process, rather than a replacement for it. Any real commitment by the United States to improving political freedom worldwide should concentrate on that process -- which can only occur when there is a strong public sphere.

THE CONSERVATIVE DILEMMA

Disciplined and coordinated groups, whether businesses or governments, have always had an advantage over undisciplined ones: they have an easier time engaging in collective action because they have an orderly way of directing the action of their members. Social media can compensate for the disadvantages of undisciplined groups by reducing the costs of coordination. The anti-Estrada movement in the Philippines used the ease of sending and forwarding text messages to organize a massive group with no need (and no time) for standard managerial control. As a result, larger, looser groups can now take on some kinds of coordinated action, such as protest movements and public media campaigns, that were previously reserved for formal organizations.

For political movements, one of the main forms of coordination is what the military calls "shared awareness," the ability of each member of a group to not only understand the situation at hand but also understand that everyone else does, too. Social media increase shared awareness by propagating messages through social networks. The anti-Aznar protests in Spain gained momentum so quickly precisely because the millions of people spreading the message were not part of a hierarchical organization.

The Chinese anticorruption protests that broke out in the aftermath of the devastating May 2008 earthquake in Sichuan are another example of such ad hoc synchronization. The protesters were parents, particularly mothers, who had lost their only children in the collapse of shoddily built schools, the result of collusion between construction firms and the local government. Before the earthquake, corruption in the country's construction industry was an open secret. But when the schools collapsed, citizens began sharing documentation of the damage and of their protests through social media tools. The consequences of government corruption were made broadly visible, and it went from being an open secret to a public truth.

The Chinese government originally allowed reporting on the post-earthquake protests, but abruptly reversed itself in June. Security forces began arresting protesters and threatening journalists when it became clear that the protesters were demanding real local reform and not merely state reparations. From the government's perspective, the threat was not that citizens were aware of the corruption, which the state could do nothing about in the short run. Beijing was afraid of the possible effects if this awareness became shared: it would have to either enact reforms or respond in a way that would alarm more citizens. After all, the prevalence of camera phones has made it harder to carry out a widespread but undocumented crackdown.

This condition of shared awareness -- which is increasingly evident in all modern states -- creates what is commonly called "the dictator's dilemma" but that might more accurately be described by the phrase coined by the media theorist Briggs: "the conservative dilemma," so named because it applies not only to autocrats but also to democratic governments and to religious and business leaders. The dilemma is created by new media that increase public access to speech or assembly; with the spread of such media, whether photocopiers or Web browsers, a state accustomed to having a monopoly on public speech finds itself called to account for anomalies between its view of events and the public's. The two responses to the conservative dilemma are censorship and propaganda. But neither of these is as effective a source of control as the enforced silence of the citizens. The state will censor critics or produce propaganda as it needs to, but both of those actions have higher costs than simply not having any critics to silence or reply to in the first place. But if a government were to shut down Internet access or ban cell phones, it would risk radicalizing otherwise pro-regime citizens or harming the economy.

The conservative dilemma exists in part because political speech and apolitical speech are not mutually exclusive. Many of the South Korean teenage girls who turned out in Seoul's Cheonggyecheon Park in 2008 to protest U.S. beef imports were radicalized in the discussion section of a Web site dedicated to Dong Bang Shin Ki, a South Korean boy band. DBSK is not a political group, and the protesters were not typical political actors. But that online community, with around 800,000 active members, amplified the second step of Katz and Lazarsfeld's two-step process by allowing members to form political opinions through conversation.

Popular culture also heightens the conservative dilemma by providing cover for more political uses of social media. Tools specifically designed for dissident use are politically easy for the state to shut down, whereas tools in broad use become much harder to censor without risking politicizing the larger group of otherwise apolitical actors. Ethan Zuckerman of Harvard's Berkman Center for Internet and Society calls this "the cute cat theory of digital activism." Specific tools designed to defeat state censorship (such as proxy servers) can be shut down with little political penalty, but broader tools that the larger population uses to, say, share pictures of cute cats are harder to shut down.

For these reasons, it makes more sense to invest in social media as general, rather than specifically political, tools to promote self-governance. The norm of free speech is inherently political and far from universally shared. To the degree that the United States makes free speech a first-order goal, it should expect that goal to work relatively well in democratic countries that are allies, less well in undemocratic countries that are allies, and least of all in undemocratic countries that are not allies. But nearly every country in the world desires economic growth. Since governments jeopardize that growth when they ban technologies that can be used for both political and economic coordination, the United States should rely on countries' economic incentives to allow widespread media use. In other words, the U.S. government should work for conditions that increase the conservative dilemma, appealing to states' self-interest rather than the contentious virtue of freedom, as a way to create or strengthen countries' public spheres.

SOCIAL MEDIA SKEPTICISM

There are, broadly speaking, two arguments against the idea that social media will make a difference in national politics. The first is that the tools are themselves ineffective, and the second is that they produce as much harm to democratization as good, because repressive governments are becoming better at using these tools to suppress dissent.

The critique of ineffectiveness, most recently offered by Malcolm Gladwell in The New Yorker, concentrates on examples of what has been termed "slacktivism," whereby casual participants seek social change through low-cost activities, such as joining Facebook's "Save Darfur" group, that are long on bumper-sticker sentiment and short on any useful action. The critique is correct but not central to the question of social media's power; the fact that barely committed actors cannot click their way to a better world does not mean that committed actors cannot use social media effectively. Recent protest movements -- including a movement against fundamentalist vigilantes in India in 2009, the beef protests in South Korea in 2008, and protests against education laws in Chile in 2006 -- have used social media not as a replacement for real-world action but as a way to coordinate it. As a result, all of those protests exposed participants to the threat of violence, and in some cases its actual use. In fact, the adoption of these tools (especially cell phones) as a way to coordinate and document real-world action is so ubiquitous that it will probably be a part of all future political movements.

This obviously does not mean that every political movement that uses these tools will succeed, because the state has not lost the power to react. This points to the second, and much more serious, critique of social media as tools for political improvement -- namely, that the state is gaining increasingly sophisticated means of monitoring, interdicting, or co-opting these tools. The use of social media, the scholars Rebecca MacKinnon of the New America Foundation and Evgeny Morozov of the Open Society Institute have argued, is just as likely to strengthen authoritarian regimes as it is to weaken them. The Chinese government has spent considerable effort perfecting several systems for controlling political threats from social media. The least important of these is its censorship and surveillance program. Increasingly, the government recognizes that threats to its legitimacy are coming from inside the state and that blocking the Web site of The New York Times does little to prevent grieving mothers from airing their complaints about corruption.

The Chinese system has evolved from a relatively simple filter of incoming Internet traffic in the mid-1990s to a sophisticated operation that not only limits outside information but also uses arguments about nationalism and public morals to encourage operators of Chinese Web services to censor their users and users to censor themselves. Because its goal is to prevent information from having politically synchronizing effects, the state does not need to censor the Internet comprehensively; rather, it just needs to minimize access to information.

Authoritarian states are increasingly shutting down their communications grids to deny dissidents the ability to coordinate in real time and broadcast documentation of an event. This strategy also activates the conservative dilemma, creating a short-term risk of alerting the population at large to political conflict. When the government of Bahrain banned Google Earth after an annotated map of the royal family's annexation of public land began circulating, the effect was to alert far more Bahrainis to the offending map than knew about it originally. So widely did the news spread that the government relented and reopened access after four days.

Such shutdowns become more problematic for governments if they are long-lived. When antigovernment protesters occupied Bangkok in the summer of 2010, their physical presence disrupted Bangkok's shopping district, but the state's reaction, cutting off significant parts of the Thai telecommunications infrastructure, affected people far from the capital. The approach creates an additional dilemma for the state -- there can be no modern economy without working phones -- and so its ability to shut down communications over large areas or long periods is constrained.

In the most extreme cases, the use of social media tools is a matter of life and death, as with the proposed death sentence for the blogger Hossein Derakhshan in Iran (since commuted to 19 and a half years in prison) or the suspicious hanging death of Oleg Bebenin, the founder of the Belarusian opposition Web site Charter 97. Indeed, the best practical reason to think that social media can help bring political change is that both dissidents and governments think they can. All over the world, activists believe in the utility of these tools and take steps to use them accordingly. And the governments they contend with think social media tools are powerful, too, and are willing to harass, arrest, exile, or kill users in response. One way the United States can heighten the conservative dilemma without running afoul of as many political complications is to demand the release of citizens imprisoned for using media in these ways. Anything that constrains the worst threats of violence by the state against citizens using these tools also increases the conservative dilemma.

LOOKING AT THE LONG RUN

To the degree that the United States pursues Internet freedom as a tool of statecraft, it should de-emphasize anti-censorship tools, particularly those aimed at specific regimes, and increase its support for local public speech and assembly more generally. Access to information is not unimportant, of course, but it is not the primary way social media constrain autocratic rulers or benefit citizens of a democracy. Direct, U.S. government-sponsored support for specific tools or campaigns targeted at specific regimes risks creating a backlash that a more patient and global application of principles will not.

This entails reordering the State Department's Internet freedom goals. Securing the freedom of personal and social communication among a state's population should be the highest priority, closely followed by securing individual citizens' ability to speak in public. This reordering would reflect the reality that it is a strong civil society -- one in which citizens have freedom of assembly -- rather than access to Google or YouTube, that does the most to force governments to serve their citizens.

As a practical example of this, the United States should be at least as worried about Egypt's recent controls on the mandatory licensing of group-oriented text-messaging services as it is about Egypt's attempts to add new restrictions on press freedom. The freedom of assembly that such text-messaging services support is as central to American democratic ideals as is freedom of the press. Similarly, South Korea's requirement that citizens register with their real names for certain Internet services is an attempt to reduce their ability to surprise the state with the kind of coordinated action that took place during the 2008 protest in Seoul. If the United States does not complain as directly about this policy as it does about Chinese censorship, it risks compromising its ability to argue for Internet freedom as a global ideal.

More difficult, but also essential, will be for the U.S. government to articulate a policy of engagement with the private companies and organizations that host the networked public sphere. Services based in the United States, such as Facebook, Twitter, Wikipedia, and YouTube, and those based overseas, such as QQ (a Chinese instant-messaging service), WikiLeaks (a repository of leaked documents whose servers are in Sweden), Tuenti (a Spanish social network), and Naver (a Korean one), are among the sites used most for political speech, conversation, and coordination. And the world's wireless carriers transmit text messages, photos, and videos from cell phones through those sites. How much can these entities be expected to support freedom of speech and assembly for their users?

The issue here is analogous to the questions about freedom of speech in the United States in private but commercial environments, such as those regarding what kind of protests can be conducted in shopping malls. For good or ill, the platforms supporting the networked public sphere are privately held and run; Clinton committed the United States to working with those companies, but it is unlikely that without some legal framework, as exists for real-world speech and action, moral suasion will be enough to convince commercial actors to support freedom of speech and assembly.

It would be nice to have a flexible set of short-term digital tactics that could be used against different regimes at different times. But the requirements of real-world statecraft mean that what is desirable may not be likely. Activists in both repressive and democratic regimes will use the Internet and related tools to try to effect change in their countries, but Washington's ability to shape or target these changes is limited. Instead, Washington should adopt a more general approach, promoting freedom of speech, freedom of the press, and freedom of assembly everywhere. And it should understand that progress will be slow. Only by switching from an instrumental to an environmental view of the effects of social media on the public sphere will the United States be able to take advantage of the long-term benefits these tools promise -- even though that may mean accepting short-term disappointment.

CLAY SHIRKY is Professor of New Media at New York University and the author of Cognitive Surplus: Creativity and Generosity in a Connected Age. December 20, 2010

From Innovation to Revolution

Do Social Media Make Protests Possible?

Malcolm Gladwell and Clay Shirky March/April 2011

AN ABSENCE OF EVIDENCE

Malcolm Gladwell

While reading Clay Shirky's "The Political Power of Social Media" (January/February 2011), I was reminded of a trip I took just over ten years ago, during the dot-com bubble. I went to the catalog clothier Lands' End in Wisconsin, determined to write about how the rise of the Internet and e-commerce was transforming retail. What I learned was that it was not. Having a Web site, I was told, was definitely an improvement over being dependent entirely on a paper catalog and a phone bank. But it was not a life-changing event. After all, taking someone's order over the phone is not that much harder than taking it over the Internet. The innovations that companies such as Lands' End really cared about were bar codes and overnight delivery, which utterly revolutionized the back ends of their businesses and which had happened a good ten to 15 years previously.

The lesson here is that just because innovations in communications technology happen does not mean that they matter; or, to put it another way, in order for an innovation to make a real difference, it has to solve a problem that was actually a problem in the first place. This is the question that I kept wondering about throughout Shirky's essay -- and that had motivated my New Yorker article on social media, to which Shirky refers: What evidence is there that social revolutions in the pre-Internet era suffered from a lack of cutting-edge communications and organizational tools? In other words, did social media solve a problem that actually needed solving? Shirky does a good job of showing how some recent protests have used the tools of social media. But for his argument to be anything close to persuasive, he has to convince readers that in the absence of social media, those uprisings would not have been possible.

MALCOLM GLADWELL is a Staff Writer for The New Yorker.

SHIRKY REPLIES

Malcolm Gladwell's commercial comparison is illustrative. If you look at the way the Internet has affected businesses such as Lands' End, you will indeed conclude that not much has changed, but that is because you are looking at the wrong thing. The effect of the Internet on traditional businesses is less about altering internal practices than about altering the competitive landscape: clothing firms now have to compete with Zappos, bookstores with Amazon, newspapers with Craigslist, and so on.

The competitive landscape gets altered because the Internet allows insurgents to play by different rules than incumbents. (Curiously, the importance of this difference is best explained by Gladwell himself, in his 2009 New Yorker essay "How David Beats Goliath.") So I would break Gladwell's question of whether social media solved a problem that actually needed solving into two parts: Do social media allow insurgents to adopt new strategies? And have those strategies ever been crucial? Here, the historical record of the last decade is unambiguous: yes, and yes.

Digital networks have acted as a massive positive supply shock to the cost and spread of information, to the ease and range of public speech by citizens, and to the speed and scale of group coordination. As Gladwell has noted elsewhere, these changes do not allow otherwise uncommitted groups to take effective political action. They do, however, allow committed groups to play by new rules.

It would be impossible to tell the story of Philippine President Joseph Estrada's 2000 downfall without talking about how texting allowed Filipinos to coordinate at a speed and on a scale not available with other media. Similarly, the supporters of Spanish Prime Minister José Luis Rodríguez Zapatero used text messaging to coordinate the 2004 ouster of the People's Party in four days; anticommunist Moldovans used social media in 2009 to turn out 20,000 protesters in just 36 hours; the South Koreans who rallied against beef imports in 2008 took their grievances directly to the public, sharing text, photos, and video online, without needing permission from the state or help from professional media. Chinese anticorruption protesters use the instant-messaging service QQ the same way today. All these actions relied on the power of social media to synchronize the behavior of groups quickly, cheaply, and publicly, in ways that were unavailable as recently as a decade ago.

As I noted in my original essay, this does not mean insurgents always prevail. Both the Green Movement and the Red Shirt protesters used novel strategies to organize, but the willingness of the Iranian and Thai governments to kill their own citizens proved an adequate defense of the status quo. Given the increased vigor of state reaction in the world today, it is not clear what new equilibriums between states and their citizens will look like. (I believe that, as with the printing press, the current changes will result in a net improvement for democracy; the scholars Evgeny Morozov and Rebecca MacKinnon, among others, dispute this view.)

Even the increased sophistication and force of state reaction, however, underline the basic point: these tools alter the dynamics of the public sphere. Where the state prevails, it is only by reacting to citizens' ability to be more publicly vocal and to coordinate more rapidly and on a larger scale than before these tools existed.

CLAY SHIRKY is Professor of New Media at New York University and the author of Cognitive Surplus: Creativity and Generosity in a Connected Age.

January 19, 2011

The Next Safety Net

Social Policy for a Digital Age

Nicolas Colin and Bruno Palier July/August 2015

ANDREW BURTON / GETTY IMAGES

Factory of the future: a coffee shop in Detroit, September 2013.

As advanced economies become more automated and digitized, almost all workers will be affected, but some more than others. Those who have what the economists Maarten Goos and Alan Manning call “lovely jobs” will do fine, creating and managing robots and various digital applications and adding lots of value in service sectors such as finance. Those who have what Goos and Manning call “lousy jobs,” however—in sectors such as manufacturing, retail, delivery, or routine office work—will fare less well, facing low pay, short contracts, precarious employment, and outright job loss. Economic inequality across society as a whole is likely to grow, along with demands for increased state expenditures on social services of various kinds—just as the resources to cover such expenditures are dropping because of lower tax contributions from a smaller work force.

These trends will create a crisis for modern welfare states, the finances of which will increasingly become unsustainable. But making the situation even worse will be the changing nature of employment. Twentieth-century social insurance systems were set up to address the risks met by people who worked in mass industrialized economies—ones in which there were generally plenty of jobs available for all kinds of workers. The basic assumption behind them was that almost all adults would be steadily employed, earning wages and paying taxes, and the government would step in to help take care of the unemployable—the young, the old, the sick and disabled, and so forth. Social insurance—provided by the state in Europe and by the market in the United States—was aimed at guaranteeing income security for those with stable jobs.

In twenty-first-century digital economies, however, employment is becoming less routine, less steady, and generally less well remunerated. Social policy will therefore have to cover the needs of not just those outside the labor market but even many inside it. Just as technological development is restructuring the economy, in other words, so the welfare state will need to be restructured as well, to adapt itself to the conditions of the day.

THE WORKING LIFE

The future of social policy will depend on how digitization changes the economy and employment. In the shift from the industrial to the digital economy, many jobs and activities are being destroyed, but new wealth is also being created. Robots are replacing humans in many situations, but new technologies and business models are generating a vast array of new goods, services, and applications, as well as the jobs necessary to create and operate them.

Technology doesn’t only allow old things to be done better and cheaper; it also opens up new potential business models and the means to satisfy previously unidentified needs. Those who can intuit and develop such models and satisfy such needs—entrepreneurs—are the kings of this new world, putting their talents to use in listening to customers, identifying their unmet desires, and creating businesses to serve them. In such efforts, digital technology is a means to an end, making businesses more scalable and customizable and increasing the return on invested capital. Software and robots don’t do all the work in such operations; humans continue to play a crucial role. But the nature of the human work involved often changes. Stable, long-term employment in routinized jobs is often no longer necessary; formal and informal collaboration on temporary projects is more the norm.

A clear distinction between job and home life erodes, moreover, as work becomes ad hoc and can be done anywhere, even as the so-called sharing economy marketizes a range of direct peer-to-peer transactions outside standard corporate channels. People who can no longer find stable wage jobs look for ways to make ends meet with gigs offered on the huge platforms of the on-demand economy. As those platforms expand, everybody can sell things (on eBay), rent out a spare room (on Airbnb), perform a task (on Amazon’s Mechanical Turk), or share a ride (on BlaBlaCar).

So why the need for a new social policy? Why not just rely on entrepreneurial activity to redeploy the work force based on the new activities? Partly because various legal and regulatory barriers stand in the way of this brave new world, barriers erected precisely to avoid a situation in which all of life becomes subject to market operations—but also because there are plentiful dangers lurking, as well as opportunities. Innovation is the coin of the digital realm, for example, and innovation is routinely accompanied by failure. The dynamism of the digital economy is matched by its volatility. Few start-ups find a viable business model, let alone a sustained market. New companies emerge out of nowhere but often crash as quickly as they have soared. The entrepreneurs at the head of such operations may reap rich rewards during their brief time in the sun, but the same might not be true for their employees lower down on the food chain, who absorb much of the same risk and churn without partaking of the outsize benefits. So in the digital economy, a few lucky individuals will find significant or sustained income and security, while many more unlucky ones will see their employers go bankrupt and have to seek new ways to make ends meet. Many current social benefits, finally—such as pensions—are organized around the old economy, so people transitioning to the new one end up sacrificing a lot.

SHANNON STAPLETON / COURTESY REUTERS A supporter of Airbnb holds a sign during a rally in New York, January 2015.

Unless social policy evolves, therefore, automation and digitization will aggravate inequality and leave many workers worse off than before. With proper innovations, however, new kinds of social policy can reduce inequality, protect workers, and even promote job creation. The digital work force can be enabled and empowered, firms can benefit from a more productive work force, and government can prove its relevance and effectiveness.

THE SEARCH FOR SECURITY

Some of the challenges future social policy will need to address are traditional, such as health care, old-age pension, and senior care. Others will have a new twist. Affordable housing, for example, is likely to become an increasing concern, as the digitization of the economy concentrates economic activities in major cities, aggravating the scarcity of real estate there. As the economist Enrico Moretti suggests in The New Geography of Jobs, the real estate market in Silicon Valley offers a glimpse of how difficult it will become for most people to find decent housing close to the dense innovation clusters where new jobs will be located.

The greatest challenge, however, will be dealing with mass intermittent employment, as most of the work force will have to switch jobs relatively often and face temporary unemployment in between. For many today, the concept of intermittent work carries with it a sense of dread or shame, but that is only because it is approached with attitudinal baggage from the old economy. In the twenty-first century, stable, long-term employment with a single employer will no longer be the norm, and unemployment or underemployment will no longer be a rare and exceptional situation. Intermittence will increasingly prevail, with individuals serving as wage earners, freelancers, entrepreneurs, and jobless at different stages of their working lives.

With twentieth-century social policies, such a career pattern would be a disaster, because many benefits would be tied to certain kinds of jobs, and workers without those jobs might fall through the gaps in the social safety net. The task of twenty-first-century social policy is to make a virtue of necessity, finding ways to enable workers to have rich, full, and successful lives even as their careers undergo great volatility.

One commonly touted alternative approach to social policy is government provision of a universal, unconditional basic income to all citizens. The idea, promoted, for instance, by the political economist Philippe Van Parijs, is to pay each citizen a basic income that would guarantee access to basic necessary goods. This would guarantee freedom for all, the argument runs, giving people the option of choosing the jobs and lives they truly wanted. Such an approach would be both extremely expensive and insufficient, however. It would ensure that everyone had some money in their pockets at the beginning of each month, but it wouldn’t ensure that they would choose or even be able to afford decent health care or housing. Simply adding money to the demand side of the market would not necessarily produce more or better results on the supply side. So while some form of increased assistance may well be a necessary part of the puzzle, a guaranteed basic income does not amount to comprehensive or effective social policy reform.

ROBERT GALBRAITH / COURTESY REUTERS A worker tends to the lawn on the roof of Twitter headquarters in San Francisco, California, October 2013.

Another possible approach is government provision not of incomes but of jobs. Public job creation was a major feature of the New Deal in the United States and similar programs elsewhere, and even today, there are some cases in which it makes sense for public authorities to at least finance the cost of collectively useful jobs—for example, in childcare, eldercare, education, and basic skills training. But governments have neither the means nor the agility to supplant most entrepreneurial activity in the private sector, inventing and deploying new business models that can trigger significant job creation in the digital economy.

Instead of attempting to replace or compete with entrepreneurs, governments should try to support and help them—by eliminating the legal barriers that often stand in the way of creating and growing the businesses that can provide jobs. In many places today, for example, existing fleets of taxis and taxi drivers cannot be replaced by masses of occasional, on-demand drivers working for companies such as Uber or Lyft because of government regulations that artificially limit the supply of transportation services. Modifying or abolishing such regulations could lead to a virtuous circle in which the availability of more drivers would create greater demand for more personalized or affordable services. And a similar process could take place in the health-care sector as well. In an increasingly digitized economy, many routine health-care tasks that under current law require doctors could in fact be accomplished by nurses supported by software and other technology. Regulatory reform could thus simultaneously lower costs, increase employment, and improve health-care outcomes.

The best approach to reforming social policy would be to build on the notion of “flexicurity,” which has long been a popular model in the Nordic countries (especially Denmark) and the Netherlands. The essence of flexicurity—shorthand for “flexible security”—is separating the provision of benefits from jobs. If the government can guarantee citizens access to health care, housing, education and training, and the like on a universal basis without regard to their employment status, the argument runs, people won’t be so terrified of switching jobs or losing a job. This, in turn, would allow the government to deregulate labor markets, leaving decisions about hiring and firing of employees to be made by firms themselves, according to economic logic. The result is greater efficiency, dynamism, and productivity, all built around workers’ needs rather than on their backs.

Twentieth-century welfare states emerged from the trauma of the Great Depression, when it became clear that cushioning mass publics from some of the harsher blows of unfettered markets was necessary to ensure capitalism’s efficiency and its broader democratic legitimacy. Flexicurity approaches take matters a step further, embodying a more social democratic notion that states and markets can and should work together to achieve a greater public good that marries a healthy economy with a healthy society. In this view, government social policy doesn’t just compensate for occasional market failures; it also works alongside markets to help sustain a flexible, well-trained, highly productive work force. By assuming public responsibility for the mitigation of certain basic kinds of risk—by dealing with health care, say, not at the level of an individual or a company but rather at the level of society as a whole—such an activist approach actually fosters a more fluid and entrepreneurial economy, with all the benefits that flow from that. In the end, therefore, the best recipe for social policy in a fast-paced, highly competitive digital economy may ironically be one that involves more state activism than digital entrepreneurs themselves usually favor—but activism that is more sensitive to and supportive of market mechanisms than statists have often been in the past.

NICOLAS COLIN, a former senior civil servant in the French Ministry for the Economy and Finance, is Co-Founder and Partner of TheFamily, an investment firm based in London and Paris. BRUNO PALIER is CNRS Research Director at the Center for European Studies and Co-Director of the Laboratory for Interdisciplinary Evaluation of Public Policies at Sciences Po. June 16, 2015

The Moral Code

How To Teach Robots Right and Wrong

Nayef Al-Rodhan August 12, 2015

YUYA SHINO / REUTERS SoftBank's human-like robot named 'Pepper' performs during a news conference in Chiba, Japan, June 18, 2015.

At the most recent International Joint Conference on Artificial Intelligence, over 1,000 experts and researchers presented an open letter calling for a ban on offensive autonomous weapons. The letter, signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, and Professor Stephen Hawking, among others, warned of a “military artificial intelligence arms race.” Regardless of whether these campaigns to ban offensive autonomous weapons are successful, though, robotic technology will be increasingly widespread in many areas of military and economic life.

Over the years, robots have become smarter and more autonomous, but so far they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations. For example, a robot is not currently able to distinguish between combatants and noncombatants or to understand that enemies sometimes disguise themselves as civilians.

DAVE KAUP / COURTESY REUTERS Auto workers feed aluminum panels to robots at Ford's Kansas City Assembly Plant in Claycomo, Missouri, May 2015.

To address this failing, in 2014, the U.S. Office of Naval Research offered a $7.5 million grant to an interdisciplinary research team from Brown, Georgetown, Rensselaer Polytechnic Institute, Tufts, and Yale to build robots endowed with moral competence. They intend to capture human moral reasoning as a set of algorithms, which will allow robots to distinguish between right and wrong and to override rigid instructions when confronted with new situations.

The idea of formalizing ethical guidelines is not new. More than seven decades ago, science-fiction writer Isaac Asimov described the “three laws of robotics”—a moral compass for artificial intelligence. The laws required robots to protect humans, obey instructions, and preserve themselves, in that order. The fundamental premise behind Asimov’s laws was to minimize conflicts between humans and robots. In Asimov’s stories, however, even these simple moral guidelines lead to often disastrous unintended consequences. Either by receiving conflicting instructions or by exploiting loopholes and ambiguities in these laws, Asimov’s robots ultimately tend to cause harm or lethal injuries to humans.
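
Asimov’s ordering can be captured in a few lines of code, which also makes its weakness easy to see: a strict priority list settles clean conflicts but says nothing about degrees of harm or ambiguous instructions. The sketch below is purely illustrative; the action names and the boolean "violates law N" flags are invented for this example and are not drawn from Asimov or from the ONR project.

# Illustrative sketch only: Asimov-style laws rendered as a strict priority
# ordering. Everything below is invented for this example.

LAW_NAMES = ["protect humans", "obey orders", "preserve itself"]  # index 0 = highest priority

def violation_rank(violations):
    """Index of the most serious law an action breaks (lower = worse),
    or len(LAW_NAMES) if it breaks none."""
    return min((i for i, broken in enumerate(violations) if broken),
               default=len(LAW_NAMES))

def choose(*actions):
    """Pick the action whose worst violation is least serious.
    Each action is (name, [violates_law_1, violates_law_2, violates_law_3])."""
    return max(actions, key=lambda a: violation_rank(a[1]))

# A human order whose execution would hurt a bystander: obeying breaks law 1,
# refusing breaks law 2, and the fixed ordering settles the conflict.
obey = ("carry out the order", [True, False, False])
refuse = ("refuse the order", [False, True, False])
print(choose(obey, refuse)[0])  # -> "refuse the order"

# What the ordering cannot express -- degrees of harm, conflicting orders from
# two humans, harm caused indirectly -- is where Asimov's plots find their loopholes.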

Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.” Robots will be deployed in more complex situations that require spontaneous choices. The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.

However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.

Functional morality involves robot responses to scenarios unanticipated by the programmer, where the robot will need some ability to make ethical decisions alone. Here, they write, robots are endowed with the capacity to assess and respond to “morally significant aspects of their own actions.” This is a much greater challenge.
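
In code, the distinction reads roughly as the difference between a designer’s lookup table and a fallback that must reason about cases nobody anticipated. The sketch below is only a schematic reading of Wallach and Allen’s distinction; the scenario labels and the deferral stub are invented for illustration.

# Schematic sketch of the operational/functional distinction. The scenarios
# and responses are invented for this example.

# Operational morality: every situation and its approved response were
# anticipated and precoded by the system's designers.
PRECODED_RESPONSES = {
    "armed adult approaching checkpoint": "issue verbal warning",
    "unarmed child approaching checkpoint": "stand down",
}

def evaluate_novel_case(description):
    """Functional morality would live here: assessing the morally significant
    aspects of an unanticipated situation. This stub simply defers to a human,
    which is the honest default today."""
    return "unanticipated situation (" + description + "): defer to human operator"

def respond(situation):
    # Anticipated cases come straight from the designer's table...
    if situation in PRECODED_RESPONSES:
        return PRECODED_RESPONSES[situation]
    # ...everything else requires the much harder functional capacity.
    return evaluate_novel_case(situation)

print(respond("unarmed child approaching checkpoint"))
print(respond("combatant disguised as a medic"))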

The attempt to develop moral robots faces a host of technical obstacles, but, more important, it also opens a Pandora’s box of ethical dilemmas.

PATRICK T. FALLON / COURTESY REUTERS A General Atomics MQ-9 Reaper drone stands on the runway at Naval Base Ventura County Sea Range, Point Mugu, near Oxnard, California, July 2015.

WHOSE VALUES?

The most critical of these dilemmas is the question of whose morality robots will inherit. Moral values differ greatly from individual to individual, across national, religious, and ideological boundaries, and are highly dependent on context. For example, ideas of duty or sacrifice vary across cultures. During World War II, Japanese banzai attacks were supported by a cultural expectation that saw death as a soldier’s duty and surrender as an unforgivably shameful act. Similarly, notions of freedom and respect for life have very different connotations in peacetime or war. Even within any single category, these values develop and evolve over time.

Human morality is already tested in countless ways, and so too will be the morality of autonomous robots and artificial intelligence. Uncertainty over which moral framework to choose underlies the difficulty and limitations of ascribing moral values to artificial systems. The Kantian deontological (duty-based) imperative calls for rigid ethical constraints on one’s actions. It requires acting in a way that reflects universal values and sees humanity as an end, not as a means. In contrast, utilitarianism stresses that one should calculate only the consequences of one’s action—even if that action is not initially recognizably moral—and choose the most beneficial course. However, do we trust a robot to anticipate and weigh the numerous possible consequences of its actions? To implement either of these frameworks effectively, a robot would need to be equipped with an almost impossible amount of information. Even beyond the issue of a robot’s decision-making process, the specific issue of cultural relativism remains difficult to resolve: no one set of standards and guidelines for a robot’s choices exists.
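
The informational burden described above can be made concrete with a toy comparison. The sketch below is not a serious rendering of either philosophical position; the actions, probabilities, and utility numbers are invented. It simply shows that a duty-based check needs only a list of forbidden acts, while the utilitarian calculation cannot even begin until every anticipated consequence has been assigned a probability and a value.

```python
# Toy contrast between a duty-based check and an expected-utility calculation.
# All actions, probabilities, and utilities are invented for illustration.

def deontological_permits(action, forbidden):
    # Duty-based: some actions are ruled out regardless of their outcomes.
    return action not in forbidden

def utilitarian_choice(actions, outcomes):
    # outcomes maps each action to (probability, utility) pairs for its
    # anticipated consequences; pick the action with the highest expectation.
    def expected_utility(a):
        return sum(p * u for p, u in outcomes[a])
    return max(actions, key=expected_utility)

actions = ["hold fire", "fire"]
outcomes = {
    "hold fire": [(1.0, 0.0)],                 # nothing changes
    "fire":      [(0.7, 5.0), (0.3, -20.0)],   # likely gain, possible catastrophe
}
print(deontological_permits("fire", forbidden={"fire"}))  # False: ruled out by duty
print(utilitarian_choice(actions, outcomes))              # "hold fire" (0.0 > -2.5)
```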

For the time being, most questions of relativism are being set aside for two reasons. First, the U.S. military remains the chief patron of artificial intelligence for military applications, and Silicon Valley remains its chief patron for other applications. As such, American interpretations of morality, with their emphasis on freedom and responsibility, will remain the default. Second, for the foreseeable future, artificial moral agents will not have to confront situations outside the battlefield, and the settings in which they are given autonomy will be highly constrained.

LEARNING BY DOING

Even if the ethical questions are eventually answered, major technical challenges would still remain in coding something as abstract as morality into transistors.

There are two mainstream approaches. First is the top-down approach, which requires encoding specific moral values into an algorithm. These moral values are determined by the robot’s developers and can be based on frameworks such as religious teachings, philosophical doctrines, or legal codes. To many neuroscientists and psychologists, this approach has severe limitations. It devalues the fundamental role that experience, learning, and intuition play in shaping our understanding of the world and thus our moral code.

YUYA SHINO / COURTESY REUTERS SoftBank's human-like robot named "Pepper" gestures during its welcome ceremony as a bank concierge at a branch of Mizuho Financial Group's Mizuho bank in Tokyo, Japan, July 2015.

The second approach is bottom-up and is based on letting robots acquire moral competence through their own learning, trial and error, growth, and evolution. In computational terms, this system is extremely challenging, but the advent of neuromorphic computing could make it a reality. Neuromorphic (“brainlike”) chips aim to replicate the morphology of human neurons and emulate the neural architecture of the brain in real time. Neuromorphic chips would enable robots to process data similarly to humans—nonlinearly and with millions of interconnected artificial neurons. This would be a far cry from conventional computing technology, which relies on linear sequences of calculations. This may sound like science fiction, but IBM has already developed the TrueNorth chip, which is able to mimic over one million human neurons. Robots with neuromorphic chips would possess humanlike intelligence and be able to grasp the world in unique (humanlike) ways. The ability to learn and experience offers no guarantee that a robot would consistently adhere to a “high” moral code. A robot equipped with a neuromorphic chip may appear ideal, but it does not promise “moral” outcomes in all situations, simply because human morality itself is often suboptimal and flawed.
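
The bottom-up idea can also be sketched, though only as a caricature: the code below is ordinary sequential Python, not neuromorphic hardware, and the actions, feedback signal, and learning rate are invented. What it illustrates is the paragraph’s central point, that values acquired by trial and error are only as good as the feedback that shapes them.

```python
# Caricature of bottom-up moral learning: preferences shaped by feedback
# rather than written as fixed rules. Everything here is invented.
import random

preferences = {"share": 0.0, "hoard": 0.0}   # learned value of each action
LEARNING_RATE = 0.1

def feedback(action):
    # Hypothetical teacher: praises sharing, discourages hoarding.
    return 1.0 if action == "share" else -1.0

random.seed(0)
for _ in range(200):
    # Mostly exploit the best-valued action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(preferences))
    else:
        action = max(preferences, key=preferences.get)
    # Nudge the stored value toward the observed feedback (trial and error).
    preferences[action] += LEARNING_RATE * (feedback(action) - preferences[action])

print(preferences)  # "share" ends up valued far above "hoard"
```

Swap in a teacher with worse judgment and the same loop will learn worse values, which is the sense in which learning offers no guarantee of a “high” moral code.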

In fact, the dissimilarity between robots and humans is sometimes touted as their greatest advantage. Proponents of moral robots argue that a robot, unlike a human, could not be affected by the stress of combat or succumb emotionally under pressure. While humans are inconsistent and get bored or tired, robots could apply codes of conduct more systematically. For instance, they would not act erratically or shoot indiscriminately at a crowd in a moment of panic.

Neuromorphic chips, and the humanlike behavior they may bring, would therefore not necessarily be a net gain in terms of moral benefits. Robots could develop humanlike weaknesses: hesitation, selfishness, or misunderstandings that could hinder their ability to accomplish their duties.

AN EXISTENTIAL RISK?

If humans successfully develop neuromorphic chips that enable robots to grasp the world in humanlike ways, what would constitute robots’ moral framework? There are several possible answers, but I prefer to look to neuroscience.

Neuroscience and brain imaging today suggest that humans are inherently neither moral nor immoral, but amoral. We function as a “predisposed tabula rasa.” That is, our moral compass is shaped by our upbringing and environment, but our propensity to be moral varies according to our perceived emotional self-interest. Humans are also fundamentally egoistic: our actions will, in most cases, be guided by our desire to maximize our chances of survival. The fundamental human instinct for survival and dominance is coded in our genetics and is a powerful motivator throughout our existence.

The very concept of making moral robots implies that they cannot be originally amoral. Even with neuromorphic technology, they cannot learn moral values from absolute scratch; they would still be programmed with basic preferences or biases established by their programmers. Eventually, a more sophisticated robot capable of writing its own source code could start off by being amoral and develop its own moral compass through learning and experience. Such robots, like humans, might ultimately be driven by self-interest and an intrinsic desire to ensure their own survival.

If this comes to pass, the implications are daunting. Robots might compete with humans for survival and dominance. Alternatively, robotics could be used to enhance human cognition. The future is uncertain. In the best-case scenario, robots will be successfully programmed with benign moral values and will constitute no threat. However, a more likely scenario is the development of autonomous robots that may be amoral or even immoral—a serious challenge to the future of humanity.

NAYEF AL-RODHAN is an Honorary Fellow at St. Antony’s College, Oxford University.

Privacy Pragmatism

Focus on Data Use, Not Data Collection

Craig Mundie March/April 2014

RAFE SWAN / CORBIS

Ever since the Internet became a mass social phenomenon in the 1990s, people have worried about its effects on their privacy. From time to time, a major scandal has erupted, focusing attention on those anxieties; last year’s revelations concerning the U.S. National Security Agency’s surveillance of electronic communications are only the most recent example. In most cases, the subsequent debate has been about who should be able to collect and store personal data and how they should be able to go about it. When people hear or read about the issue, they tend to worry about who has access to information about their health, their finances, their relationships, and their political activities.

But those fears and the public conversations that articulate them have not kept up with the technological reality. Today, the widespread and perpetual collection and storage of personal data have become practically inevitable. Every day, people knowingly provide enormous amounts of data to a wide array of organizations, including government agencies, Internet service providers, telecommunications companies, and financial firms. Such organizations -- and many other kinds, as well -- also obtain massive quantities of data through “passive” collection, when people provide data in the act of doing something else: for example, by simply moving from one place to another while carrying a GPS-enabled cell phone. Indeed, there is hardly any part of one’s life that does not emit some sort of “data exhaust” as a byproduct. And it has become virtually impossible for someone to know exactly how much of his data is out there or where it is stored. Meanwhile, ever more powerful processors and servers have made it possible to analyze all this data and to generate new insights and inferences about individual preferences and behavior.

This is the reality of the era of “big data,” which has rendered obsolete the current approach to protecting individual privacy and civil liberties. Today’s laws and regulations focus largely on controlling the collection and retention of personal data, an approach that is becoming impractical for individuals, while also potentially cutting off future uses of data that could benefit society. The time has come for a new approach: shifting the focus from limiting the collection and retention of data to controlling data at the most important point -- the moment when it is used.

USER ILLUSION

In the middle of the twentieth century, consumers around the world enthusiastically adopted a disruptive new technology that streamlined commerce and made it possible for ordinary people to do things that, until then, only businesses and large organizations could do. That technology was the credit card. In return for a line of revolving credit and the convenience of cashless transactions, credit card users implicitly agreed to give financial institutions access to large amounts of data about their spending habits. Companies used that information to infer consumers’ behavior and preferences and could even pinpoint users’ whereabouts on any given day. As the rapid global adoption of credit cards demonstrated, consumers mostly thought the tradeoff was worth it; in general, they did not feel that credit card companies abused their information.

In order to ensure that companies used all this new data responsibly, the Organization for Economic Cooperation and Development produced a set of guidelines on the protection of privacy and the flow of data across borders. Those principles, established in 1980, created the general framework that corporations still rely on when it comes to protecting individual privacy. The guidelines directed companies on the proper way to collect and retain personal data, ensure its quality and security, and provide meaningful opportunities for individuals to consent to the collection and have access to the data collected about them. According to the OECD, the guidelines helped increase the proportion of member countries with privacy laws from one-third to nearly all 34 of them, while also influencing EU privacy laws.

Thirty-four years later, these well-intentioned principles have come to seem ill suited to the contemporary world. They predated the mainstream adoption of personal computers, the emergence of the Internet, and the proliferation of cell phones and tablet computers. They were developed when only science-fiction writers imagined that someday soon more than a billion people would carry pocket-sized computers that could track their locations down to the meter, every second of every day; nearly all communication and commerce would be mediated electronically; and retailers would use data and computer modeling to determine what a particular consumer wants even before he is aware of it.

Such changes have exposed the limitations of the late-twentieth-century approach to protecting personal data, which focused almost entirely on regulating the ways such information was collected. Today, there is simply so much data being collected, in so many ways, that it is practically impossible to give people a meaningful way to keep track of all the information about them that exists out there, much less to consent to its collection in the first place. Before they can run an application on their smartphones or sign up for a service on a website, consumers are routinely asked to agree to end-user license agreements (EULAs) that can run to dozens of pages of legalese. Although most EULAs are innocuous, some contain potentially unwelcome clauses buried deep within them. In one recent example, a Web application included a clause in its EULA that granted the software maker permission to use the user’s spare computing power to generate Bitcoins, a form of digital currency, without any compensation for the user. Another example is a popular flashlight application called Brightest Flashlight Free, which collected its users’ location data and then sold it to marketing companies -- without revealing that it was doing so. (Last December, the U.S. Federal Trade Commission forced the application’s maker to abandon this deceptive practice.)

In both cases, users technically had an opportunity to consent to these practices. But that consent was effectively meaningless, since it did not offer a clear understanding of how, when, and where their personal data might be used. The Bitcoin-mining application’s dense, 5,700-word EULA was so vague that even a user who made the unusual choice to actually read it carefully might not have understood that it gave the application’s maker the right to virtually hijack the computing capacity of the user’s device. Although the flashlight application explicitly requested access to users’ location data (a request most people reflexively approved), that was more of a ruse than an honest business practice, since the company hid the fact that it provided the data to others.

Other forms of data collection can be even more challenging to regulate. More and more collection happens passively, through sensors and on servers, with no meaningful way for individuals to be made aware of it, much less consent to it. Cell phones continually share location data with cellular networks. Tollbooths and traffic cameras photograph cars (and their occupants) and read license plates. Retailers can track individuals as they move around a store and use computer-backed cameras to determine the gender and approximate age of customers in order to more precisely target advertising. In a scenario reminiscent of the 2002 film Minority Report, stores might soon be able to use facial-recognition technology to photograph shoppers and then match those images with data from online social networks to identify them by name, offer them discounts based on their purchasing histories, and suggest gifts for their friends.

Using powerful new computing tools and huge data sets gathered from many different sources, corporations and organizations can now generate new personal data about individuals by drawing inferences and making predictions about their preferences and behaviors based on existing information. The same techniques also make it harder to keep personal information anonymous. Companies that have access to multiple sources of partial personal information will find it increasingly easy to stitch pieces of data together and figure out whom each piece belongs to, effectively removing the anonymity from almost any piece of data.

GOOD INTENTIONS, BAD EFFECTS

Many people understandably find this state of affairs troubling, since it seems to suggest that their privacy has already been compromised beyond repair. But the real issue is not necessarily that their privacy has been violated -- just because the information is out there and could be abused does not mean that it has been. Rather, it is that people do not know who possesses data related to them and have no way to know whether the information is being used in acceptable ways.

One common reaction is to demand stricter controls on who can collect personal information and how they can collect it by building user consent into the process at every stage. But if an individual were given the opportunity to evaluate and consent to every single act of data collection and creation that happens, he would be forced to click “yes” or “no” hundreds of times every day. Worse, he would still have no way to easily verify what happened to his data after he consented to its collection. And yet it would be very hard for most people to opt out altogether, since most people enjoy, or even rely on, services such as social networks that require personal data about their users in order to function or applications and services (such as e-mail software, productivity tools, or games) that they use for free in exchange for agreeing to receive targeted advertising.

Officials, legislators, and regulators all over the world have yet to grasp this reality, and many well-meaning attempts to address public concerns about privacy reflect an outdated understanding of the contemporary data ecosystem. Consider, for instance, the EU’s General Data Protection Regulation, which is expected to take effect in 2016. This new regulation requires individual consent for the collection of data and the disclosure of the intended use at the time of collection. It also creates a “right to be forgotten” (the requirement that all data on an individual be deleted when that individual withdraws consent or his data is no longer needed), ensures the availability of personal data in a form people can easily access and use, and imposes fines on companies or organizations that fail to comply with the rules.

Although well intentioned, this new regulation is flawed in its focus on the collection and retention of data. It will help unify laws and practices regarding privacy, but it does not adequately address the practical realities of data collection today. It requires valid consent for collecting data, but it does not consider sensitive information that is created by algorithms using data from completely public sources that can infer an individual’s age, marital status, occupation, estimated income, and political leanings based on his posts to various social networks. Nor will the new rules apply when data is collected passively, without a meaningful opportunity for consent. And besides the “right to be forgotten,” the rules will not do much to address the crucial question of how data can and cannot be used.

Such efforts to restrict data collection can also produce unintended costs. Much of the information collected today has potential benefits for society, some of which are still unknown. The ability to analyze large amounts of aggregated personal data can help governments and organizations better address public health issues, learn more about how economies work, and prevent fraud and other crimes. Governments and international organizations should not prevent the collection and long-term retention of data that might have some as-yet-undiscovered beneficial use.

For instance, in 2011, researchers at the health-care giant Kaiser Permanente used the medical records of 3.2 million individuals to find a link between autism spectrum disorders in children and their mothers’ use of antidepressant drugs. They determined that if a mother used antidepressants during pregnancy, her child’s risk of developing such a disorder doubled. The researchers had access to those medical records only because they had been collected earlier for some other reason and then retained. The researchers were able to find a particularly valuable needle, so to speak, only because they had a very large haystack. They would almost certainly not have made the discovery if they had been able to conduct only a smaller, “opt-in” study that required people to actively consent to providing the particular information the researchers were looking for.

Further medical breakthroughs of this kind will become more likely with the emergence of wearable devices that track users’ movements and vital signs. The declining cost of genome sequencing, the growing adoption of electronic medical records, and the expanding ability to store and analyze the resulting data sets will also lead to more vital discoveries. Crucially, many of the ways that personal data might be used have not even been imagined yet. Tightly restricting this information’s collection and retention could rob individuals and society alike of a hugely valuable resource.

WRAPPER'S DELIGHT

When people are asked to give a practical example of how their privacy might be violated, they rarely talk about the information that is being collected. Instead, they talk about what might be done with that information, and the consequences: identity theft or impersonation, personal embarrassment, or companies making uncomfortable and unwelcome inferences about their preferences or behavior. When it comes to privacy, the data rarely matters, but the use always does.

But how can governments, companies, and individuals focus more closely on data use? A good place to start would be to require that all personal data be annotated at its point of origin. All electronic personal data would have to be placed within a “wrapper” of metadata, or information that describes the data without necessarily revealing its content. That wrapper would describe the rules governing the use of the data it held. Any programs that wanted to use the data would have to get approval to “unwrap” it first. Regulators would also impose a mandatory auditing requirement on all applications that used personal data, allowing authorities to follow and observe applications that collected personal information to make sure that no one misused it and to penalize those who did. For example, imagine an application that sends users reminders -- about errands they need to run, say, or appointments they have to keep -- based on their location. Such an application would likely require ongoing access to the GPS data from users’ cell phones and would thus have to negotiate and acquire permission to use that data in accordance with each user’s preferences.
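
A minimal sketch can show what a metadata wrapper with mandatory auditing might look like in practice. It is an illustration of the general idea only, not a proposal for a specific standard; the class, field names, and reminder-application scenario are hypothetical.

```python
# Illustrative sketch of "wrapped" personal data: the payload travels with
# its permitted uses, and every access attempt is logged for auditors.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class WrappedData:
    payload: dict          # the personal data itself
    permitted_uses: set    # uses the data subject has approved
    audit_log: list = field(default_factory=list)

    def unwrap(self, application: str, declared_use: str) -> dict:
        """Release the payload only for an approved use; log every attempt."""
        allowed = declared_use in self.permitted_uses
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "application": application,
            "declared_use": declared_use,
            "granted": allowed,
        })
        if not allowed:
            raise PermissionError(f"use '{declared_use}' not permitted")
        return self.payload

location = WrappedData(
    payload={"lat": 39.20, "lon": -94.55},
    permitted_uses={"errand_reminders"},
)
location.unwrap("reminder_app", "errand_reminders")        # granted and logged
try:
    location.unwrap("ad_broker", "targeted_advertising")   # refused and logged
except PermissionError as err:
    print(err)
print(len(location.audit_log))  # 2 -- both attempts are visible to an auditor
```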

Such approaches are feasible because data and applications always work together. The raw materials of personal information -- a row of numbers on a spreadsheet, say -- remain inert until a program makes use of them. Without a computer program, there is no use -- and without use, there is no misuse. If an application were required to tell potential users what it intended to do with their data, people might make more informed decisions about whether or not to use that application. And if an application were required to alert users whenever it changed the way it used their data and to respond to their preferences at any time, people could modify or withdraw their consent.

A progenitor of this approach emerged in the past decade as consumers began listening to music and watching movies online -- and, in many cases, illegally downloading them. With profits threatened by such widespread piracy, the entertainment industry worked with technology firms to create digital rights management systems that encrypt content and add metadata to files, making it much harder for them to be illegally opened or distributed. For example, movies purchased from Apple’s iTunes store can be played on only a limited number of computers, which users must authorize by linking them to their Apple accounts. Such mechanisms were given legal weight by legislation, including the 1998 Digital Millennium Copyright Act, which criminalized the circumvention of copyright-protection systems. Although there was some resistance to early forms of digital rights management that proved cumbersome and interfered with legitimate and reasonable consumer behavior, the systems gradually matured and gained acceptance.

Digital rights management has also worked well in other areas. The rights protections built into Microsoft’s Office software use encryption and metadata to allow users to specify who can and who cannot read, edit, print, or forward a file. This control gives individuals and organizations a simple and manageable way to protect confidential or sensitive information. It is not difficult to imagine a similar but more generalized scheme that would regulate the use of personal data.

Focusing on use would also help secure data that is already out there. Software that works with personal data has a shelf life: it is eventually upgraded or replaced, and regulators could require that programmers build new protections into the code whenever that happens. Regulators could also require all existing applications to officially register and bring their data usage into compliance.

IDENTITY CRISIS

Any uniform, society-wide effort to control the use of data would rely on the force of law and a variety of enforcement regimes. Requiring applications to wrap data and make it possible to audit the way they use it would represent a major change in the way thousands of companies do business. In addition to the likely political fights such a system would engender, there are a number of technical obstacles that any new legal regime would have to overcome. The first is the issue of identity. Most people’s online identities are loosely connected to their e-mail addresses and social networking profiles, if at all. This works fine for digital rights management, in which a single entity owns and controls the assets in question (such as a digital copy of a film or a song). But personal data lives everywhere. A person’s expressed preferences cannot be honored if they cannot be attached to a verifiable identity. So privacy protections focused on the use of personal data would require governments to employ better systems for connecting legally recognized online identities to individual people.

The winds of online identity are already blowing in this direction. Facebook requires that people sign up for the service using their real names, and Twitter takes steps to verify that certain accounts, such as those connected to celebrities and public figures, actually represent the people they claim to represent. One intermediate step toward a more systemic creation of verified online identities would allow people to designate which of their online personas was the authoritative one and use that to specify their privacy preferences.

But even if governments could devise ways to more rigorously connect individuals with verifiable online identities, three additional kinds of validated “identities” would need to be created: for applications, for the computers that run them, and for each particular role that people play when they use an application on a computer. Only specific combinations of identities would be able to access any given piece of personal data. For example, a researcher working on an epidemiologic study might be allowed to use a specific application to work with a specific data set on his institution’s computer system, but an actuary would not be permitted to use that application or that data on his computer to set prices for health insurance. Or a physician attending to a patient in the emergency room might have access to information for treatment that he would not have access to in another role or circumstance.
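
The combination-of-identities rule lends itself to a simple sketch. The registered combinations below are invented examples under the assumptions in the text; a real registry would be maintained and enforced by regulators rather than hard-coded.

```python
# Illustrative sketch: access decisions keyed on a combination of identities
# (person, role, application, machine). All entries are invented examples.
REGISTERED_COMBINATIONS = {
    ("dr_lee", "researcher", "epi_study_tool", "university_cluster"): {"health_records"},
    ("dr_lee", "er_physician", "ehr_viewer", "hospital_terminal"): {"patient_chart"},
    # No combination lets an actuary's pricing tool touch the research data.
}

def may_access(person, role, application, machine, data_set):
    allowed = REGISTERED_COMBINATIONS.get((person, role, application, machine), set())
    return data_set in allowed

print(may_access("dr_lee", "researcher", "epi_study_tool",
                 "university_cluster", "health_records"))   # True
print(may_access("dr_lee", "er_physician", "ehr_viewer",
                 "hospital_terminal", "health_records"))    # False: wrong role and context
```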

Lawmakers would have to put in place significant penalties to make sure people played by the new rules. The only effective deterrents would be punishments strong enough to give pause to any reasonable person. Given the value that can be extracted from personal data, a fine -- even a significant one -- could be perceived by a bad actor (whether an individual or a corporation) as merely part of the cost of doing business. So privacy violations would have to be considered serious criminal offenses, akin to fraud and embezzlement -- not like mere “parking tickets,” which would not deter rogue operators and companies.

If someone suspected that his personal data had been misused, he could contact the appropriate regulator, which would investigate and prosecute the abuse, treating it as a crime like any other. A suspect incident could include receiving a targeted advertisement informed by something a person had not agreed to allow advertisers to know or noticing that one’s health insurance premiums had gone up after posting about participating in extreme sports on a social network.

Moving from the current model to this new way of controlling privacy would require political will and popular support. It would also require people to constantly reevaluate what kinds of uses of their personal data they consider acceptable. Whether a particular use is appropriate depends on the context of the use and the real or perceived value that individuals and society get in return. People might gladly consent to the use of their social networking activity by researchers in a population-wide health study, but not necessarily by insurers who want to use information about their health, recreation, and eating habits to determine their coverage or premiums.

Another challenge would be figuring out practical ways for individuals to express their preferences about personal data, given the wide range of possible uses and the many different contexts in which such data comes into play. It would be impossible to write all the rules in advance or to craft a law that would cover every class of data and every potential use. Nor would it be sensible to ask people to take a few hours and write down how they might feel about any current or theoretical future use of their information.

One potential solution might be to allow people to delegate some choices about their preferences to organizations they trust. These organizations would serve as watchdogs, keeping track of applications and uses as they emerge and change and helping guide regulators’ investigations and enforcement. If someone were concerned about the use of personal data by advertisers, for instance, he might choose to delegate the management of his preferences to an organization that specialized in keeping an eye on marketers’ evolving techniques and behaviors. If this user’s preferences changed or he was not satisfied with the organization’s work, he would be able to withdraw his specific consent or delegate this work to a different organization.

A system of this sort would require a combination of innovative new national and international laws and regulations, since the infrastructure of the Internet does not stop at national borders. Here again, the example of digital rights management is instructive: the U.S. Digital Millennium Copyright Act put into law the provisions of two treaties signed by members of the World Intellectual Property Organization in 1996 and created mechanisms for the federal government to oversee and enforce the law.

FOCUS ON USE; LIMIT ABUSE

The vast majority of uses of personal information that concern people take place in the private sector. But governments also collect, retain, and use more personal data than ever before, as the disclosures last year regarding the extent of the U.S. National Security Agency’s electronic surveillance made abundantly clear. The success of a use-based model of privacy protection would depend on trustworthy institutions and legal systems that would place constitutional limits on governments’ access to data and the surveillance of citizens. Assuming those conditions existed, a use-based privacy model would actually strengthen the rule of law and government accountability, by helping eliminate much of the ambiguity surrounding governments’ exploitation of data by encouraging (and sometimes forcing) regulators and authorities to be more specific about what they do with the information they collect. The fact that such an approach is quite likely to develop, at least in the United States, is one of many good reasons why the U.S. government must do everything possible to live up to Americans’ expectations of lawful behavior.

More broadly, shifting policymakers’ and regulators’ focus toward controlling the use of data, rather than just its collection, would empower citizens. Such an approach would allow people to take firmer control of the information they care about by extending individual agency beyond the simple act of consent and permitting people to adapt their preferences and even rescind their consent over time. This new paradigm would protect people against everything from unwanted communication and unfair discrimination to fraud and identity theft. At the same time, by preserving more information, it would greatly benefit individuals and societies, as the ecosystem of data, computing capabilities, and innovative new software continues to expand.

CRAIG MUNDIE is Senior Adviser to the CEO of Microsoft and the company’s former Chief Research and Strategy Officer.
February 12, 2014

The Power of Market Creation

How Innovation Can Spur Development

Bryan C. Mezue, Clayton M. Christensen, and Derek van Bever January/February 2015

ROOSEVELT CASSIO / COURTESY REUTERS Growth engine: testing at Embraer in Brazil, October 2014

Most explanations of economic growth focus on conditions or incentives at the global or national level. They correlate prosperity with factors such as geography, demography, natural resources, political development, national culture, or official policy choices. Other explanations operate at the industry level, trying to explain why some sectors prosper more than others. At the end of the day, however, it is not societies, governments, or industries that create jobs but companies and their leaders. It is entrepreneurs and businesses that choose to spend or not, invest or not, hire or not.

In our research on growth, therefore, we have taken the opposite approach, working not from the top down but from the bottom up, adopting the perspective of the firm and the manager. From this vantage point, we have learned that different types of innovation have radically different effects on economic and employment growth. This insight gives entrepreneurs, policymakers, and investors the ability to collaborate as never before to create the conditions most likely to unlock sustained prosperity, particularly in the developing world. We argue that there exists a well-established model of company-level investment and innovation that leads to transformative economic development and national prosperity, has been remarkably consistent at explaining past successes, and can provide direction to stakeholders in what to look for and what to build in the future.

VARIETIES OF INNOVATION

Our model targets innovation as the fundamental unit of analysis, since most investments are focused on that. And innovation, in turn, comes in three varieties. The first is what we call “sustaining innovation,” the purpose of which is to replace old products with new and better ones. Such innovations are important, because they keep markets vibrant and competitive. Most of the changes that one sees in the market are sustaining innovations. But these are by nature substitutive, in that if a business succeeds in selling a better product to its existing customers, they won’t buy the old product anymore. When Samsung releases an improved model of its flagship smartphone, sales of its old versions drop quickly. When Toyota convinces consumers to buy a hybrid Prius, they don’t buy a Camry. Investments in sustaining innovations, therefore, rarely create much net growth within the companies that develop and sell them. And rarely do they lead to new jobs to fuel macroeconomic growth.

“Efficiency innovation” is the second type; it helps companies produce more for less. Efficiency innovations allow companies to make and sell established products or services at lower prices. The Walmart retail model, for example, is an efficiency innovation. Walmart can sell the same products to the same customers as a traditional department store, such as Macy’s, does at prices 15 percent lower and with half the inventory. In every competitive economy, efficiency innovations are critical to companies’ survival. But by their very nature of producing more with less, efficiency innovations entail eliminating jobs or outsourcing them to an even more efficient provider. In addition to being able to produce more with fewer people, efficiency innovations make capital more efficient, improving cash flow.

The third type is “market-creating innovation.” When most industries emerge, their products and services are so costly and inaccessible that only the wealthy can buy and use them. Market-creating innovations transform such offerings into products and services that are cheap enough and accessible enough to reach an entirely new population of customers. The Model T Ford, the personal computer, the smartphone, and online equity trading are examples of market-creating innovations. Because many more people can buy such products, the innovators need to hire more people to make, distribute, and service them. And because market-creating innovations are simpler and lower cost, the supply chains that are used for sustaining innovations don’t support them. This makes it necessary to build new supply networks and establish new distribution channels in order to create a new market. Market-creating innovators create new growth and new jobs.

Market-creating investments require two things: an entrepreneur who spots an unfulfilled customer need and the presence of an economic platform (that is, an enabling technology or feature in the product or business model that brings significant advantages in economies of scale). Kenya’s M-Pesa service, for example, has succeeded in addressing the lack of consumption of banking services across the country by using a wireless telecommunications platform. When M-Pesa was released in 2007, fewer than 20 percent of Kenyans used banks; today, more than 80 percent do. The South African telecommunications giant MTN, meanwhile, ushered in the cell-phone revolution across the continent by combining telecommunications infrastructure with low-cost phones targeted at nonconsumers.

Any strong economy has a mix of all three classes of innovation at any given time. But only market-creating innovations bring the permanent jobs that ultimately create prosperity. By targeting nonconsumption, market-creating innovations turn the liabilities of developing nations—the diverse unmet needs of their populaces—into assets. In the process, they create new value networks, build new capabilities, and generate sustained employment. This feeds a virtuous circle, as innovators move up the ladder to more sophisticated nonconsumption opportunities.

HOW MARKET-CREATING INNOVATION WORKS

Our early research seems to show that market-creating innovation has been an important factor in every nation that has managed to achieve transformative growth and prosperity. Postwar Japan offers perhaps the best example. The Japanese economy was obliterated in World War II, so its challenge in many ways was less to develop from scratch than to rebuild. Japan’s success in that effort has often been attributed to national pride and a strong work ethic, to the vision of government agencies such as what was then the Ministry of International Trade and Industry, or to excellence in science and engineering education. But such explanations have lost their persuasive force as Japan’s economy has stagnated in recent decades: a constant cannot explain a variable. In retrospect, a more powerful explanation for the country’s postwar growth is its success with market-creating innovations in motorcycles, automobiles, consumer electronics, office equipment, and steel.

Consider the Japanese motorcycle industry. From a group of more than 200 motorcycle makers in the 1950s, Honda, Kawasaki, Suzuki, and Yamaha emerged to captain the industry’s development at home and abroad. These “Big Four” firms did not seek growth by stealing market share from existing leaders in motorcycles. Rather, they targeted nonconsumption. When the Japanese Diet passed an amendment to the country’s Road Traffic Control Law in 1952 allowing younger drivers to ride motorcycles, Suzuki was one of the first companies to adapt its offerings for younger consumers, with its low-end 60cc Diamond Free bike. Similarly, Honda launched the 1952 50cc Cub F-Type to target the growing number of small businesses that needed delivery vehicles but couldn’t afford large ones. Honda positioned the motorcycle at the affordable price of 25,000 yen (about $70) and provided a 12-month installment financing plan. Domestic competition among firms vying for the business of consumers with little disposable income caused them to integrate backward in components and forward in distribution channels. This created jobs in Japan beyond the Big Four themselves, and it also gave them the ability to export their motorcycles to the United States and Europe and compete for new consumers in those markets as well.

The same pattern was seen with Panasonic, Sharp, and Sony in consumer electronics; Nissan and Toyota in cars; and Canon, Kyocera, and Ricoh in office equipment. They all followed a two-stage strategy of competing against nonconsumption in the domestic Japanese market first and then pursuing the same strategy abroad.

This model has been replicated in South Korea, where market-creating innovators, such as Samsung, which have been pivotal to the country’s economic rise, studied the Japanese experience closely. Samsung was founded as a trading company but launched an electronics subsidiary in 1969 to manufacture products that would eradicate domestic nonconsumption of entertainment and cooling technologies. Samsung Electronics’ first product was a black-and-white TV, produced jointly with the Japanese companies NEC and Sumitomo. Soon after, Samsung studied Japanese models to produce some of South Korea’s first cheap electric fans, and then the company graduated to low-cost air conditioners. By launching a continuous stream of market-creating innovations, Samsung has become one of the most recognized brands in the world and one of the largest single contributors to South Korea’s GDP.

In China, too, market creators have built domestic niches into regional or global footholds, in industries ranging from consumer durables to construction equipment. Haier started out in 1984 as a market-creating innovator that produced mini-refrigerators for Chinese nonconsumers, and then it leveraged a partnership with the German firm Liebherr to acquire technology and equipment. By 2011, the company had disrupted many global incumbents in the “white goods” market with product lines inspired by its experience in China, gaining a global market share of 7.8 percent. Similarly, Sany was launched in 1989 as a small materials-welding shop for an underserved town in Hunan Province. It leveraged its understanding of local needs and the latest technological advances to produce cheap construction equipment for China’s booming construction market. Today, it has a higher domestic market share than its main rival, the U.S. firm Caterpillar, and is also gaining market share in foreign markets.

The same pattern appears in other countries as well. In Chile, government reform and the booming copper industry have received significant acclaim, but market-creating innovations seem to have been the true engine of growth. For example, the blossoming of Chile’s agriculture sector was based on market creation—before Chile’s innovations, nonconsumption of fresh fruits and vegetables was pervasive during most of the year in nontropical advanced countries. Chile’s agricultural exporters leveraged the improving science of cultivation and modern logistics to transform the availability of produce and provide fresh goods all year round.

In India, many health providers are embracing market-creating innovations to make quality health care more accessible. The Aravind Eye Hospital was launched with the goal of providing low-cost eye surgery to poor nonconsumers. By introducing innovations such as the high utilization of medical staff and tiered service levels for paying and nonpaying customers, Aravind has become the largest eye hospital in the world. And just as in the Japanese case, Indian firms are using their domestic platforms to target nonconsumers abroad: Narayana Health, for example, is setting up shop in the Cayman Islands to reach cost-conscious Americans, and India has become a leader in health tourism, serving over a million foreigners every year.

And in Brazil, where extensive oil and lumber resources often capture attention, market-creating innovators such as Embraer have been hard at work creating jobs and capabilities. Like most aircraft manufacturers, Embraer was initially supported by government subsidies and focused on producing aircraft for military purposes. But the company trained its sights on the commercial market, delivering low-cost aircraft to domestic airlines. Today, Embraer has acquired a broad set of capabilities, has created extensive domestic supply networks, and makes planes for several dozen leading international airlines, including major U.S. ones, such as American, Delta, JetBlue, and United. And Grupo Multi, another Brazilian market-creating success, targeted the nonconsumption of foreign-language learning by developing a new model to teach Brazilians how to speak English. It now maintains over 2,600 franchised schools, has generated over 20,000 jobs, and has trained over 800,000 students.

THE RIGHT KIND OF INVESTMENT

Such examples suggest the wide range of opportunities for growth that come from targeting nonconsumption and creating robust domestic franchises that can then achieve regional or global scale. Looking at things this way also sheds light on what role resources and investments actually play in development. Several ostensibly valuable levers for development—such as investments in natural resource industries, major infrastructure projects, and routine foreign direct investment—have rarely brought the benefits their backers expected. Why not? In part, because such investments don’t create markets.

Economists have long wondered why nations endowed with oil (such as Iran, Iraq, Mexico, Nigeria, and Venezuela) or precious metals (such as Mongolia, Peru, and Russia) generate billions upon billions in revenues and profits yet manage to create few jobs and little national economic growth. The answer is that investments in resource industries in developing nations lead to efficiency innovations, designed to produce more with less. From the day these rigs and refineries go into operation, the objective of their managers is to increase productivity by reducing employment. This is the logic of efficiency innovation, and its outcome is net job loss, not gain.

Many infrastructure projects, such as communications towers, power plants, and roads, meanwhile, are also efficiency investments. They reduce the cost of operations for domestic companies, allowing them to better serve their existing customers, but they do not directly lead to the creation of sustained growth and prosperity. In fact, if they cannot be combined with a different set of investments specifically targeting unfulfilled customer needs, their benefits will remain limited to existing customers and their economic impact will be limited as well. This is why infrastructure investments in developing countries, championed by organizations such as the World Bank and the International Monetary Fund, so often fail to drive long-term growth.

Most foreign direct investment, finally, is similarly oriented toward efficiency. The most common type is when a multinational company sets up a low-cost factory to provide components or services for products with an established end use. Often, these investments are “migratory”: as soon as the low-cost rationale for the investment in country X has played out, the company moves its factory to lower-cost country Y, if possible. These are investments to get things into and out of the country, not to develop a long-term, stable source of production and jobs.

Some types of foreign direct investment, of course, do bring more substantial benefits to developing nations. One example is when an investment supports a product that is creating a new market abroad. Typically, the market for the end products and services is growing faster than efficiency innovations are decreasing costs. Such an investment puts people to work to build and run the initial factory, and then the company keeps hiring additional employees to keep pace with customer growth. This distinction explains why foreign direct investment did not create fundamental growth in Mexico but did in Taiwan. Most U.S. investments in Mexico funded efficiency innovations embedded within established end-use markets—in industries such as automobiles, appliances, and electric motors. In contrast, most of the companies that have driven Taiwan’s economic development—including ASUSTeK Computer, HTC, Hon Hai Precision Industry, MediaTek, and the Taiwan Semiconductor Manufacturing Company—have provided efficiency innovations embedded within market-creating innovations: more efficient components or services used in market-creating innovations such as laptop and tablet computers and smartphones. Because the growth from those market-creating innovations was typically greater than the rate of reduction caused by increased efficiency, the broader economy became more prosperous.

HOW TO CREATE SUSTAINED GROWTH

Given that most investments in developing economies have been conceived from the top down and have focused on efficiency, it should come as no surprise that there has been little growth in areas that seemed otherwise to possess such promise. To do better in the future, both the public and the private sector should work to support market-creating innovations—and innovators—in their home markets.

Perhaps the most critical move to boost market-creating innovation would be to put in place platforms and incentives that would accelerate the flow of capital between investors and market-creating innovators. Some of this work simply involves adapting existing tools to the particular challenges of investing in emerging markets. Online investment platforms, such as AngelList and Gust, which directly connect investors to entrepreneurs, have the potential to accelerate many market-creating investments (provided they can be adapted to address the legitimate trust concerns of investors), and both are already going global. Investor networks can also be targeted more precisely at market-creating activity, with a particular focus on giving investors from developing countries’ ethnic diasporas the chance to invest. And in resource-rich nations, policymakers can play a bridging role by diverting a portion of the revenues from those resources toward funds specifically designated for market-creating investments. Such funds should be managed autonomously, by investors who understand how to spot and support market-creating innovations.

Most entrepreneurs focus on introducing products and services into existing, established markets, but market-creating innovation is built on targeting nonconsumption—unfulfilled needs in new markets. To help entrepreneurs tap the abundant nonconsumption opportunities available in developing nations, adequate training programs must teach entrepreneurs how to see such nonconsumption and estimate the rewards of eradicating it. In coordination with universities and companies, such programs should study how market-creating innovations have taken hold in comparable nations and identify emerging high-potential technologies. Several case studies are already available that can teach entrepreneurs the critical elements of market creation. For example, Godrej & Boyce’s chotuKool, a portable refrigerator—a disruptive product bringing affordable cooling capability to the 80 percent of rural Indian consumers without access to reliable refrigeration—shows how creativity and patience can bring life-changing products to segments of the market that had long assumed that such luxuries were beyond their reach.

A long-standing concern of entrepreneurs and investors trying to build businesses in the developing world has been the seemingly unavoidable roadblock of corruption. There is evidence, however, that systemic corruption can be circumvented. Thus, despite India’s high degree of corruption at all levels of society, information technology companies in its southern states have prospered because the Internet has essentially become a conduit around the corruption rather than through it. This principle holds promise for other businesses around the world. Rather than spending managerial time applying for or negotiating fees for certificates, licenses, permits, and registrations, executives should work with reform-minded leaders to create ways of getting them easily and virtually, bypassing the multiple opportunities for corruption along the ordinary routes.

For certain system-level constraints, finally, instead of waiting for the system itself to change, entrepreneurs are best served by trying to internalize the problem and control more of the outcome. For example, although traditional capital markets may not be keen on market-creating innovations, the concept of “royalty financing” could help individual businesses. Under this scheme, rather than raising traditional equity or debt, the entrepreneur can license capital. The investor receives nothing until revenues are generated, and then the entrepreneur pays a royalty to the investor—a percentage of revenues—just as is common with licenses for intellectual property. As revenues increase, royalties increase, until the accumulated royalties paid have reached some multiple of the initial principal amount. Such an approach precludes the need for a liquidity event, that is, an opportunity to cash out, whose outcome is hard to predict when capital markets are poorly organized and policed. Instead, investors benefit from a liquidity process, which they can monitor and confirm firsthand.
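
The arithmetic of royalty financing is easy to trace with a small worked example. The principal, royalty rate, cap multiple, and revenue path below are all invented; the point is only to show how the investor is repaid out of revenues until a preset multiple of the principal is reached.

```python
# Toy illustration of royalty financing. All figures are invented.
principal = 100_000        # capital licensed from the investor
royalty_rate = 0.05        # share of revenue paid to the investor
cap = 3 * principal        # payments stop once 3x the principal is returned

paid = 0.0
year = 0
revenue = 400_000          # hypothetical first-year revenue, growing 20% a year
while paid < cap:
    year += 1
    payment = min(royalty_rate * revenue, cap - paid)   # never overshoot the cap
    paid += payment
    print(f"year {year}: revenue {revenue:,.0f}, royalty {payment:,.0f}, "
          f"cumulative {paid:,.0f}")
    revenue *= 1.2
# The investor is repaid through a stream the parties can watch year by year,
# a "liquidity process" rather than a one-time liquidity event.
```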

Skilled talent is even scarcer than capital, and here, too, companies can move to internalize the problem. By embracing in-house vocational programs or working more closely with schools and universities, companies can address the problem directly. At the extreme, in South Korea, the steel company POSCO set up its own university to train capable engineers. Observing that “you can import coal and machines, but you cannot import talent,” POSCO’s founder, Park Tae-joon, led the company to establish the Pohang University of Science and Technology to provide needed education in science and technology. The school consistently tops domestic and international university rankings and has been rated number one by the London-based Times Higher Education’s “100 Under 50,” a ranking of the top 100 universities under 50 years old.

Armed with a strong causal explanation for what provides sustained growth, and operating in supportive conditions and joined by sympathetic policymakers, entrepreneurs in developing countries can create new markets and new opportunities. The result of doing so will be not just successful businesses but also more broad-based job creation and more robust and lasting prosperity for their countries and fellow citizens.

BRYAN C. MEZUE is a Fellow at the Forum for Growth and Innovation at the Harvard Business School. CLAYTON M. CHRISTENSEN is the Kim B. Clark Professor of Business Administration at the Harvard Business School. DEREK VAN BEVER is Senior Lecturer of Business Administration at the Harvard Business School and Director of the school’s Forum for Growth and Innovation.
December 15, 2014

The Innovative State

Governments Should Make Markets, Not Just Fix Them

Mariana Mazzucato January/February 2015

KEVIN LAMARQUE / COURTESY REUTERS

The conventional view of what the state should do to foster innovation is simple: it just needs to get out of the way. At best, governments merely facilitate the economic dynamism of the private sector; at worst, their lumbering, heavy-handed, and bureaucratic institutions actively inhibit it. The fast- moving, risk-loving, and pioneering private sector, by contrast, is what really drives the type of innovation that creates economic growth. According to this view, the secret behind Silicon Valley lies in its entrepreneurs and venture capitalists. The state can intervene in the economy—but only to fix market failures or level the playing field. It can regulate the private sector in order to account for the external costs companies may impose on the public, such as pollution, and it can invest in public goods, such as basic scientific research or the development of drugs with little market potential. It should not, however, directly attempt to create and shape markets. A 2012 Economist article on the future of manufacturing encapsulated this common conception. “Governments have always been lousy at picking winners, and they are likely to become more so, as legions of entrepreneurs and tinkerers swap designs online, turn them into products at home and market them globally from a garage,” the article stated. “As the revolution rages, governments should stick to the basics: better schools for a skilled workforce, clear rules and a level playing field for enterprises of all kinds. Leave the rest to the revolutionaries.”

That view is as wrong as it is widespread. In fact, in countries that owe their growth to innovation, the state has historically served not as a meddler in the private sector but as a key partner of it—and often a more daring one, willing to take the risks that businesses won’t. Across the entire innovation chain, from basic research to commercialization, governments have stepped up with needed investment that the private sector has been too scared to provide. This spending has proved transformative, creating entirely new markets and sectors, including the Internet, nanotechnology, biotechnology, and clean energy.

Today, however, it has become harder and harder for governments to think big. Increasingly, their role has been limited to simply facilitating the private sector and, perhaps, nudging it in the right direction. When governments step beyond that role, they immediately get accused of crowding out private investment and ineptly trying to pick winners. The notion of the state as a mere facilitator, administrator, and regulator started gaining wide currency in the 1970s, but it has taken on newfound popularity in the wake of the global financial crisis. Across the globe, policymakers have targeted public debt (never mind that it was private debt that led to the meltdown), arguing that cutting government spending will spur private investment. As a result, the very state agencies that have been responsible for the technological revolutions of the past have seen their budgets shrink. In the United States, the budget “sequestration” process has resulted in $95 billion worth of cuts to federal R & D spending from 2013 to 2021. In Europe, the EU’s “fiscal compact,” which requires states to drop their fiscal deficits down to three percent of GDP, is squeezing educational and R & D spending.

What’s more, thanks in part to the conventional wisdom about its dynamism and the state’s sluggishness, the private sector has been able to successfully lobby governments to weaken regulations and cut capital gains taxes. From 1976 to 1981 alone, after heavy lobbying from the National Venture Capital Association, the capital gains tax rate in the United States fell from 40 percent to 20 percent. And in the name of bringing Silicon Valley’s dynamism to the United Kingdom, in 2002, the government of British Prime Minister Tony Blair reduced the time that funds have to be invested to be eligible for tax reductions from ten years to two years. These policies increase inequality, not investment, and by rewarding short-term investments at the expense of long-term ones, they hurt innovation.

Getting governments to think big about innovation is not just about throwing more taxpayer money at more activities. It requires fundamentally reconsidering the traditional role of the state in the economy. Specifically, that means empowering governments to envision a direction for technological change and invest in that direction. It means abandoning the shortsighted way public spending is usually evaluated. It means ending the practice of insulating the private sector from the public sector. And it means figuring out ways for governments and taxpayers to reap some of the rewards of public investment, instead of just the risks. Only once policymakers move past the myths about the state’s role in innovation will they stop being, as John Maynard Keynes put it in another era, “the slaves of some defunct economist.”

THE FAILURE OF MARKET FAILURE

According to the neoclassical economic theory that is taught in most economics departments, the goal of government policy is simply to correct market failures. In this view, once the sources of failure have been addressed—a monopoly reined in, a public good subsidized, or a negative externality taxed— market forces will efficiently allocate resources, enabling the economy to follow a new path to growth. But that view forgets that markets are blind, so to speak. They may neglect societal or environmental concerns. And they often head in suboptimal, path-dependent directions. Energy companies, for example, would rather invest in extracting oil from the deepest confines of the earth than in clean energy.

In addressing societal challenges such as climate change, youth unemployment, obesity, aging, and inequality, states must lead—not by simply fixing market failures but by actively creating markets. They must direct the economy toward new “techno-economic paradigms,” in the words of the technology and innovation scholar Carlota Perez. These directions are not generated spontaneously from market forces; they are largely the result of deliberate state decisions. In the mass-production revolution, for example, the state invested in both the underlying technologies and their diffusion across the economy. On the supply side, the U.S. military-industrial complex, beginning in World War II, invested in improvements in aerospace, electronics, and materials. On the demand side, the U.S. government’s postwar subsidization of suburban living—building roads, backing mortgages, and guaranteeing incomes through the welfare state—enabled workers to own homes, buy cars, and consume other mass-produced goods.

As Michael Shellenberger and his colleagues at the progressive think tank the Breakthrough Institute have documented, despite the mythmaking about how the shale gas boom is being driven by wildcatting entrepreneurs operating independently from the state, the U.S. federal government invested heavily in the technologies that unleashed it. In 1976, the Morgantown Energy Research Center and the Bureau of Mines launched the Eastern Gas Shales Project, which demonstrated how natural gas could be recovered from shale formations. That same year, the federal government opened the Gas Research Institute, which was funded through a tax on natural gas production and spent billions of dollars on research into shale gas. And the Sandia National Laboratories, part of the U.S. Department of Energy, developed the 3-D geologic mapping technology used for fracking operations.

Likewise, as the physician Marcia Angell has shown, many of the most promising new drugs trace their origins to research done by the taxpayer-funded National Institutes of Health, which has an annual budget of some $30 billion. Private pharmaceutical companies, meanwhile, tend to focus more on the D than the R part of R & D, plus slight variations of existing drugs and marketing.

Silicon Valley’s techno-libertarians might be surprised to find out that Uncle Sam funded many of the innovations behind the information technology revolution, too. Consider the iPhone. It is often heralded as the quintessential example of what happens when a hands-off government allows genius entrepreneurs to flourish, and yet the development of the features that make the iPhone a smartphone rather than a stupid phone was publicly funded. The progenitor of the Internet was ARPANET, a program funded by the Defense Advanced Research Projects Agency (DARPA), which is part of the Defense Department, in the 1960s. GPS began as a 1970s U.S. military program called Navstar. The iPhone’s touchscreen technology was created by the company FingerWorks, which was founded by a professor at the publicly funded University of Delaware and one of his doctoral candidates, who received grants from the National Science Foundation and the CIA. Even Siri, the iPhone’s cheery, voice-recognizing personal assistant, can trace its lineage to the U.S. government: it is a spinoff of a DARPA artificial-intelligence project. None of this is to suggest that Steve Jobs and his team at Apple were not brilliant in how they put together existing technologies. The problem, however, is that failing to admit the public side of the story puts future government-funded research at risk.

For policymakers, then, the question should not be whether to pick particular directions when it comes to innovation, since some governments are already doing that, and with good results. Rather, the question should be how to do so in a way that is democratically accountable and that solves the most pressing social and technological challenges.

A SMARTER STATE

State spending on innovation tends to be assessed in exactly the wrong way. Under the prevailing economic framework, market failures are identified and particular government investments are proposed. Their value is then appraised through a narrow calculation that involves heavy guesswork: Will the benefits of a particular intervention exceed the costs associated with both the offending market failure and the implementation of the fix? Such a method is far too static to evaluate something as dynamic as innovation. By failing to account for the possibility that the state can create economic landscapes that never existed before, it gives short shrift to governments’ efforts in this area. No wonder economists often characterize the public sector as nothing more than an inefficient version of the private sector. This incomplete way of measuring public investment leads to accusations that by entering certain sectors, governments are crowding out private investment. That charge is often false, because government investment often has the effect of “crowding in,” meaning that it stimulates private investment and expands the overall pie of national output, which benefits both private and public investors. But more important, public investments should aim not only to kick-start the economy but also, as Keynes wrote, “to do those things which at present are not done at all.” No private companies were trying to put a man on the moon when NASA undertook the Apollo project.

Without the right tools for evaluating investments, governments have a hard time knowing when they are merely operating in existing spaces and when they are making things happen that would not have happened otherwise. The result: investments that are too narrow, constrained by the prevailing techno-economic paradigm. A better way of evaluating a given investment would be to consider whether it taught workers new skills and whether it led to the creation of new technologies, sectors, or markets. When it comes to government spending on pharmaceutical research, for example, it might make sense to move past the private sector’s fixation on drugs and fund more work on diagnostics, surgical treatments, and lifestyle changes.

Governments suffer from another, related problem when it comes to contemplating investments: as a result of the dominant view that they should stick to fixing market failures, they are often ill equipped to do much more than that. To avoid such problems as a regulatory agency getting captured by business, the thinking goes, the state must insulate itself from the private sector. That’s why governments have increasingly outsourced key jobs to the private sector. But that trend often rids them of the knowledge necessary for devising a smart strategy for investing in innovation and makes it harder to attract top talent. It creates a self-fulfilling prophecy: the less big thinking a government does, the less expertise it is able to attract, the worse it performs, and the less big thinking it is allowed to do. Had there been more information technology capacity within the U.S. government, the Obama administration would probably not have had such difficulty rolling out HealthCare.gov, and that failure will likely lead to only more outsourcing.

In order to create and shape technologies, sectors, and markets, the state must be armed with the intelligence necessary to envision and enact bold policies. This does not mean that the state will always succeed; indeed, the uncertainty inherent in the innovation process means that it will often fail. But it needs to learn from failed investments and continuously improve its structures and practices. As the economist Albert Hirschman emphasized, the policymaking process is by its nature messy, so it is important for public institutions to welcome the process of trial and error. Governments should pay as much attention to the business school topics of strategic management and organizational behavior as private companies do. The status quo approach, however, is to focus not on making the government more competent but on downsizing it.

PROFIT AND LOSS

Since governments often undertake courageous spending during the riskiest parts of the innovation process, it is key that they figure out how they can socialize not just the risks of their investments but also the rewards. The U.S. government’s Small Business Innovation Research program, for example, offers high-risk financing to companies at much earlier stages than most private venture capital firms do; it funded Compaq and Intel when they were start-ups. Similarly, the Small Business Investment Company program, an initiative under the auspices of the U.S. Small Business Administration, has provided crucial loans and grants to early stage companies, including Apple in 1978. In fact, the need for such long-term investments has only increased over time as venture capital firms have become more short term in their outlook, emphasizing finding an “exit” for each of their investments (usually through a public offering or a sale to another company) within three years. Real innovation can take decades.

As is the nature of early stage investing in technologies with uncertain prospects, some investments are winners, but many are losers. For every Internet (a success story of U.S. government financing), there are many Concordes (a white elephant funded by the British and French governments). Consider the twin tales of Solyndra and Tesla Motors. In 2009, Solyndra, a solar-power-panel start-up, received a $535 million guaranteed loan from the U.S. Department of Energy; that same year, Tesla, the electric-car manufacturer, got approval for a similar loan, for $465 million. In the years afterward, Tesla was wildly successful, and the firm repaid its loan in 2013. Solyndra, by contrast, filed for bankruptcy in 2011 and, among fiscal conservatives, became a byword for the government’s sorry track record when it comes to picking winners. Of course, if the government is to act like a venture capitalist, it will necessarily encounter many failures. The problem, however, is that governments, unlike venture capital firms, are often saddled with the costs of the failures while earning next to nothing from the successes. Taxpayers footed the bill for Solyndra’s losses yet got hardly any of Tesla’s profits.

Economists may argue that the state already receives a return on its investments by taxing the resulting profits. The truth is more complicated. For one thing, large corporations are masters of tax evasion. Google—whose game-changing search algorithm, it should be noted, was developed with funding from the National Science Foundation—has lowered its U.S. tax bill by funneling some of its profits through Ireland. Apple does the same by taking advantage of a race to the bottom among U.S. states: in 2006, the company, which is based in Cupertino, California, set up an investment subsidiary in Reno, Nevada, to save money.

Fixing the problem is not just a matter of plugging the loopholes. Tax rates in the United States and other Western countries have been falling over the past several decades precisely due to a false narrative about how the private sector serves as the sole wealth creator. Government revenues have also shrunk due to tax incentives aimed at promoting innovation, few of which have been shown to produce any R & D that would not have happened otherwise. What’s more, given how mobile capital is these days, a particular government that has funded a given company might not be able to tax it since it may have moved abroad. And although taxes are effective at paying for the basics, such as education, health care, and research, they don’t begin to cover the cost of making direct investments in companies or specific technologies. If the state is being asked to make such investments—as will increasingly be the case as financial markets become even more focused on the short term—then it will have to recover the inevitable losses that arise from this process.

There are various ways to do so. One is to attach strings to the loans and guarantees that governments hand out to businesses. For example, just as graduates who receive income-contingent student loans get their repayments adjusted based on their salaries, the recipients of state investments could have their repayments adjusted based on their profits.

Another way for states to reap greater returns involves reforming the way they partner with businesses. Public-private partnerships should be symbiotic, rather than parasitic, relationships. In 1925, the U.S. government allowed AT&T to retain its monopoly over the phone system but required the company to reinvest its profits in research, a deal that led to the formation of Bell Labs. Today, however, instead of reinvesting their profits, large companies hoard them or spend them on share buybacks, stock options, and executive pay. Research by the economist William Lazonick has borne this out: “The 449 companies in the S&P 500 index that were publicly listed from 2003 through 2012 . . . used 54% of their earnings—a total of $2.4 trillion—to buy back their own stock.”

An even bolder plan would allow the state to retain equity in the companies it supports, just as private venture capital firms do. Indeed, some countries adopted this model long ago. Israel’s Yozma Group, which manages public venture capital funds, has backed—and retained equity in—early stage companies since 1993. The Finnish Innovation Fund, or Sitra, which is operated under the Finnish parliament, has done the same since 1967, and it was an early investor in Nokia’s transformation from a rubber company into a cell-phone giant. Had the U.S. government had a stake in Tesla, it would have been able to more than cover its losses from Solyndra. The year Tesla received its government loan, the company went public at an opening price of $17 a share; that figure had risen to $93 by the time the loan was repaid. Today, shares in Tesla trade above $200.
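A rough back-of-envelope calculation, using the share prices cited above, shows why such a stake would have mattered. The sketch below assumes, purely for illustration, that the $465 million loan had instead been converted into shares at the $17 opening price and valued at the $93 price prevailing when the loan was repaid; the actual loan carried no such equity terms.

```python
# Hypothetical back-of-envelope only: the Department of Energy loan to Tesla
# carried no equity, so the stake size and timing below are illustrative
# assumptions, not history.

loan_amount = 465_000_000      # 2009 loan to Tesla, per the article
ipo_price = 17                 # Tesla's opening share price the year of the loan
price_at_repayment = 93        # share price when the loan was repaid in 2013
solyndra_loss = 535_000_000    # guaranteed loan lost when Solyndra went bankrupt

shares = loan_amount / ipo_price                 # roughly 27.4 million shares
stake_value = shares * price_at_repayment        # roughly $2.5 billion
gain = stake_value - loan_amount                 # roughly $2.1 billion

print(f"Hypothetical stake value at repayment: ${stake_value:,.0f}")
print(f"Hypothetical gain: ${gain:,.0f} vs. Solyndra loss of ${solyndra_loss:,.0f}")
```

Even a fraction of that hypothetical upside would have covered the Solyndra write-off, which is the point of pairing public risk with public reward.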

The prospect of the state owning a stake in a private corporation may be anathema to many parts of the capitalist world, but given that governments are already investing in the private sector, they may as well earn a return on those investments (something even fiscal conservatives might find attractive). The state need not hold a controlling stake, but it could hold equity in the form of preferred stocks that get priority in receiving dividends. The returns could be used to fund future innovation. Politicians and the media have been too quick to criticize public investments when things go wrong and too slow to reward them when things go right.

THE NEXT REVOLUTION

Past technological revolutions—from railroads to the automobile to the space program to information technology—did not come about as the result of minor tinkering with the economic system. They occurred because states undertook bold missions that focused not on minimizing government failure but on maximizing innovation. Once one accepts this more proactive state purpose, the key questions of economic policy get reframed. Questions about crowding out private investment and unwisely picking winners fall by the wayside as more dynamic questions—about creating the types of public-private interactions that can produce new industrial landscapes—rise to the top.

Today, many countries, from China to Denmark to Germany, have settled on their next mission: green energy. Given the potential benefits and the amount of money at play, it is crucial that governments back this mission the right way. For starters, they must not only pick various technologies or sectors to invest in but also ask what they want from those sectors. For example, if what governments want from the energy sector is a stable energy supply, then shale gas will do, but if the mission is to mitigate climate change, then it won’t. In fact, mission-oriented policies need to foster interactions among multiple fields. NASA’s mission to the moon required the interaction of many different sectors, from rocketry to telecommunications to textiles. Likewise, the green energy revolution will require investment not just in wind energy, solar power, and biofuels but also in new engines, new ways of more efficiently maintaining infrastructure, and new ways of making products last longer. Accordingly, the state should take its cue from the venture capital world and diversify its portfolio, spreading capital across many different technologies and enterprises. In making green investments, governments should fund those technologies that the private sector has ignored and provide a strong, clear direction for change, letting various entrepreneurs experiment with the specifics. Governments should provide ambitious targets, not in the old command-and-control style but through a combination of carrots and sticks. The German government has followed this approach in its energy-transition initiative, or Energiewende, which is designed to phase out nuclear energy and substitute it with renewables; it is doing this by setting lofty goals for carbon emissions reductions and subsidizing technological development of wind and solar power.

More broadly, governments should strike agreements that allow them to share in the profits from their successful investments. And most of all, they should build the public agencies of the future, turning them into hotbeds of creativity, adaptation, and exploration. That will require abandoning the current obsession with limiting the state’s intervention to fixing problems after they have happened—and smashing the popular myth that the state cannot innovate.

MARIANA MAZZUCATO is Professor of the Economics of Innovation in the Science Policy Research Unit at the University of Sussex. She is the author of The Entrepreneurial State: Debunking Public vs. Private Sector Myths.

December 15, 2014

Food and the Transformation of Africa

Getting Smallholders Connected

Kofi Annan and Sam Dryden November/December 2015

THOMAS KOEHLER / PHOTOTHEK VIA GETTY IMAGES Field of dreams: on a maize farm near Bangui, Central African Republic, March 2014.

African agriculture has long been a symbol of the continent’s poverty. Officials considered the hundreds of millions of African smallholder farmers too backward to thrive; the future would arrive not by investing in them but rather by bypassing them. But all that is changing.

In recent years, African agricultural policies have been haphazard and inconsistent. Some countries have neglected smallholders in favor of commercial farmers. Others have given them attention but focused narrowly on increasing their productivity. African farms’ harvests are indeed much smaller than harvests elsewhere, so increasing productivity is important. But agriculture is about more than yields. A vast food system spreads beyond farm and table to touch almost every aspect of life in every society. Making that system in Africa as robust as possible will not merely prevent starvation. It will also fight poverty, disease, and malnutrition; create businesses and jobs; and boost the continent’s economies and improve its trade balances.

Food systems cannot be created quickly out of whole cloth. They tend to evolve incrementally over time. But in digital technology, today’s African leaders have a powerful tool they can deploy to help clear away the primary obstacle to progress: the profound isolation of the vast majority of smallholder farmers. Until now, it has been very hard to get information to or from smallholders, preventing their efficient integration into the broader economy. But mobile communications can shatter this isolation and enable the creation of a new food system suited to contemporary needs. If farsighted leaders seize this opportunity, they can transform African agriculture from a symbol of poverty and backwardness into a powerful engine of economic and social development.

FIVE PRINCIPLES

The new African food system should be built around the idea that agriculture is about more than producing calories; it is about changing society. Its five components should be valuing the smallholder farmer, empowering women, focusing on the quality as well as the quantity of food, creating a thriving rural economy, and protecting the environment.

Neither of us is sentimental about small farms, but we recognize the need to be practical. More than 80 percent of African agricultural production comes from smallholders. Any rational food system for Africa must put its smallholders first. Over the years, many African governments have tried to bypass the existing agricultural sector by investing in large- scale commercial farms, on the theory that they would be more efficient. But allocating large blocks of land to foreign investors, reserving water for industrial-sized operations, and concentrating research and development on a few cash crops doesn’t help most farmers. It also hasn’t generated enough produce to feed the continent’s rapidly growing urban areas, which is why food imports are going through the roof—and why city dwellers are spending more than they should on food.

In fact, Africa’s smallholders are more than capable of feeding the continent—so long as they boost their yields by using the latest agronomic practices in combination with appropriately adapted seeds and fertilizer. Most have not adopted these improvements, however, because they don’t know about them, or can’t get to a place where they can buy them, or can’t afford them. The infrastructure to link most smallholders to markets simply doesn’t exist, which means that many farmers have little incentive to increase their productivity in order to generate surpluses to sell. Enabling smallholder farmers to grow more food and sell it in formal markets for a fair price would change life for almost every poor person in Africa.

The keys to fixing this problem are supplying smallholders with appropriate seeds and fertilizer, providing education and training, and ensuring easy access to markets and larger economic networks. Mobile technology can help on all these fronts. Cell phones and digital videos, for example, can revolutionize education and training. Digital Green, an organization that broadcasts videos of farmers conducting training sessions in local languages, is the next generation of farmer extension programs. Because farmers tend to trust their peers more than outside experts, Digital Green’s model has led farmers to adopt better methods at very high rates. The organization expanded from India into Ethiopia and is exploring pilot programs in Ghana, Mozambique, and Tanzania.

Women, meanwhile, provide the majority of the labor on African farms, but on average, they are less productive than men—13 to 25 percent less productive, according to a report published last year by the World Bank and the ONE Campaign. The reasons for this are complicated, ranging from sex discrimination in extension programs to cultural norms that can make it difficult for women to hire and manage labor during the harvest. But fixing it is a necessity. Not only do women form a major part of the agricultural work force; they also spend much more of what they earn than men do on goods such as education, nutrition, and health care, which have large positive multiplier effects. So when women have money and the power to decide how to spend it, everybody benefits.

Here again, digital technology can be incredibly useful. Giving women cell phones allows them to transact business directly, without mediators; open bank accounts only they can access; receive information and training that local men might not support; and get market prices in real time in order to negotiate effectively with potential buyers.

As for food quality, only now is the true impact of malnutrition on poor countries beginning to be understood. It is an underlying cause of almost half of all the deaths of children under five around the world and leaves tens of millions more children cognitively or physically impaired for the rest of their lives. Food everywhere is less nutritious than it should be; in the United States, for example, the food system is designed to supply people with as many calories as possible, as tasty as possible, for as little money as possible. As a result, American agriculture focuses on corn as a vehicle for sugar, breeds that corn for high yields rather than nutritional value, and processes it to remove whatever nutrients might still remain. This means that Americans get lots of cheap, tasty breakfast cereal that isn’t good for them.

The current African food system shares some of these features. The seeds available in Africa are bred for yield almost to the exclusion of other traits; the breeders who develop these seeds focus mostly on corn and wheat, so crops such as cassava and sorghum remain unimproved; and roller mills remove nutritional value in Africa just as they do in North America. But there are some reasons to be optimistic. For example, the fortification of food that has long been standard in developed countries has begun coming to Africa as well. Rice in Ghana, maize in Zambia, and sweet potato in several countries are now being fortified with vitamin A. And biofortification promises even bigger opportunities, as advances in genetics have made it easier to breed seeds with specific nutritional characteristics, such as high-zinc wheat and high-iron pearl millet.

In a robust food system, farms support a range of businesses. Farmers need financial services, seeds, and fertilizer before they begin planting; after they harvest, they need storage, transport, processing, and marketing. Every step in this process can be an opportunity for entrepreneurial activity, so in theory, a healthy food system could nurture an entire rural sector that creates wealth and provides off-farm employment opportunities to spread it around.

LUC GNAGO / REUTERS Workers dry cocoa beans in a village in western Ivory Coast, August 2015.

So far, such businesses have been few and far between in Africa, but that may be changing. In Nigeria, for example, for 40 years, the government bought seeds and fertilizer and then had them delivered to farmers. Not only did the system not work—little of the seeds and fertilizer ever reached smallholders—but it also crowded out entrepreneurs who could have served rural communities directly. To address these issues, Nigeria recently dismantled the public procurement system and implemented policies to spur new businesses. By giving farmers a 50 percent subsidy (via vouchers sent to their cell phones), the government has helped generate demand for seeds and fertilizer. In the meantime, to make sure there is enough supply to meet that demand, the Ministry of Agriculture and the Central Bank of Nigeria launched a risk-sharing program to encourage local banks to make agricultural loans. And with the partial guarantee, banks have quadrupled their lending to the agriculture sector. The number of seed companies operating in Nigeria has gone from just 11 to more than 100, and there are now thousands of local mom-and-pop shops selling these companies’ seeds directly to farmers.

The green revolution of the 1950s and 1960s, finally, introduced new and highly productive agricultural technologies and methods and fed a billion people in Asia and Latin America. But it also ended up doing significant damage to the environment of those regions, depleting the soil and reducing biodiversity. We now know that ensuring the long-term sustainability of the African agricultural environment is more critical than ever, given the problems already being caused by climate change.

The good news is that with digital education in basic conservation techniques, such as crop rotation with legumes, so-called green manure, and good water management, smallholder farmers can not only increase yields in the short term but also restore soil health over time. This is crucial, since African soils are the most depleted in the world.

THE PROMISE OF DIGITAL

Digital technology can help advance all these principles simultaneously. It makes connections possible, transfers information instantaneously, and can help build virtual communities even among widely separated and remotely located individuals and communities.

Some appropriate digital applications are already in use, and more are in development. In 2014, for example, Ethiopia’s Agricultural Transformation Agency launched an agricultural hot line, and it has already logged almost 6.5 million calls. It also sends text messages and automated calls containing up-to-date agronomic information to 500,000 users. The agency is also developing the Ethiopian Soil Information System, or EthioSIS, a digital soil map analyzing the country’s soils down to a resolution of ten kilometers by ten kilometers. Eventually, these two systems will merge, transmitting cutting-edge, highly tailored information to millions of farmers.

Digital technology can also revolutionize farmer organizations. Membership in agricultural cooperatives has always lagged in Africa, because smallholders are too spread out. New, digitally powered organizations, however, can succeed in doing what farmer cooperatives are supposed to do: purchase seeds and fertilizer in bulk and pass on the savings to their members, serve as trusted sources of information on farming practices, and help farmers aggregate and warehouse produce and negotiate fair prices.

The digital infrastructure for interacting with smallholders is already being put in place, so now is the time to make sure it gets done right. This means making sure that all farmers are included from the start, especially the poorest and most remote. Digital agricultural applications need to be run on neutral digital platforms to which any farmer can connect, rather than proprietary platforms for a select few. It doesn’t matter who builds the platforms—whether governments, agribusinesses, or telecommunications companies—so long as they are made accessible to all. To get the most out of these platforms, moreover, farmers need to be assigned unique user identifiers, so that they can receive services tailored to their needs. And information needs to be governed in a way that makes most of it open source. Ethiopia’s digital soil map, for example, is public, so anybody can use the data.

As the two of us began our careers, one of the big questions in development was whether the world would be able to feed itself in decades to come. Many predicted a coming global famine, so simply avoiding mass starvation has to be considered a significant success. But it is high time to move beyond simple calorie provision and think about agriculture in the developing world in a more holistic way. Smallholder farmers in Africa can finally be seen not just as part of the problem but also as part of the solution. Using digital technology to reach them, listen to them, support them, and help them organize holds out the potential for another agricultural revolution. Making sure the opportunity is seized will require policy changes, investments, and a great deal of effort on the part of everyone from government officials and entrepreneurs to agronomists and coders. But what is needed most is leaders who can envision a continent transformed.

KOFI ANNAN was UN Secretary-General from 1997 to 2006. SAM DRYDEN is a Senior Fellow at Imperial College London and was Director of Agricultural Development in the Global Development Program at the Bill & Melinda Gates Foundation. October 16, 2015