
AN INTELLIGENT FUTURE?
Maximising the opportunities and minimising the risks of artificial intelligence in the UK

ACKNOWLEDGMENTS
Written and researched by: Olly Buston, Robert Hart, and Cath Elliston.
With thanks to: Amy Barry, Irakli Beridze, Miles Brundage, Kay Firth-Butterfield, Stephen Cave, Jessica Montgomery, Richard Moyes, Huw Price, Nick Purser, Deok Joo Rhee, Jane Rowe, Murray Shanahan, and Katie Ward.
Design and layout: nickpurserdesign.com
Media and communications: digacommunications.com


Endorsements

“Making the best of AI is one of the most important challenges of our century, a challenge we all face together. Future Advocacy are doing a commendable job of encouraging well-informed debate about these crucial issues, in government and in the public sphere.” [Huw Price, Bertrand Russell Professor of Philosophy, Cambridge University & Academic Director of the Leverhulme Centre for the Future of Intelligence]

“This excellent report helps show us how we can ensure Artificial Intelligence delivers on the needs and wants of real people. AI is a powerful and flexible tool that will increasingly transform businesses, governments, and societies. We need to get this right.” [Kay Firth-Butterfield, Former CO, Lucid AI’s Ethics Advisory Panel & Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics, University of Texas, Austin]

“Artificial intelligence creates great opportunities for improving diagnosis and treatment. At the same time it brings challenges in areas such as privacy and accountability. This report provides great food for thought on how we can get the balance right.” [Dame Sally Davies, UK Chief Medical Officer]

“The development of AI will have profound global consequences and the UN has a vital role to play in making sure the opportunities are maximised and the risks are minimised. The UNICRI Centre on AI and Robotics seeks to enhance understanding through improved coordination, knowledge collection and dissemination, awareness-raising and outreach activities. Future Advocacy is a valuable contributor to these important global efforts.” [Irakli Beridze, Senior Strategy and Policy Advisor, United Nations Interregional Crime and Justice Research Institute]

EXECUTIVE SUMMARY

The Intelligence Revolution

We are in the early stages of an intelligence revolution. Artificial intelligence (AI) already permeates many aspects of our lives. AI systems trade on the stock market, filter our email spam, recommend things for us to buy, navigate driverless cars, and in some places can determine whether you are paid a visit by the police.1

Although AI is not new, there has been a recent explosion of activity and interest in the field which has largely been driven by advances in machine learning. These are computer programs that automatically learn and improve with experience.2

Progress in machine learning has allowed more versatile AI systems to be developed that can perform well at a range of tasks, particularly those that involve sorting data, finding patterns, and making predictions.

Opportunities and Risks

The fast-moving development of AI presents huge economic and social opportunities. Over the coming years AI will drive economic productivity and growth; improve public services; and enable scientific breakthroughs.

But there are also risks. The intelligence revolution will cause great disruption to employment markets. Concerns about privacy and accountability will be amplified as AI makes possible increasingly sophisticated analysis of our personal data. And the ability of AI to replace humans in military decision-making raises profound questions.

The more distant future is hard to predict. Oxford University Professor Nick Bostrom and others have speculated about the catastrophic risks of a ‘super-intelligence’ that humans struggle to control. While Bostrom’s doomsday scenario may be extraordinarily unlikely, the possibility of it happening only needs to be extremely small for it to warrant attention.

The UK’s Unique Position

The UK was the crucible of the industrial revolution and is one of the key crucibles of the intelligence revolution. It is home to world-leading AI companies and world-leading academic centres of AI research, and is well placed to reap great economic and social benefits from the development of AI.

The UK also hosts world-leading academic centres focused on the safety of AI.3 This, alongside the UK’s membership of key multilateral policymaking fora such as the G7, G20, NATO, UN, and OECD, means the UK could and should play an important role in shaping and directing global debate and ensuring that the opportunities of AI are maximised and its risks are minimised.

Engaging the Public and Politicians

As part of our research for this report we commissioned a YouGov poll to assess British public opinion on a range of issues relating to AI. The results of the poll are featured throughout this report.

1. The Chicago police department have used predictive policing to visit those at a high risk of committing an offence to offer them opportunities to reduce this risk, such as drug and alcohol rehabilitation or counseling. See Saunders, J., Hunt, P., & Hollywood, J. S. (2016). Predictions put into practice: a quasi-experimental evaluation of Chicago’s predictive policing pilot. Journal of Experimental Criminology, 12(3), 347-371 and Stroud, M. (2016, 19 August) Chicago’s predictive policing tool just failed a major test. The Verge (retrieved from http://theverge.com, accessed on 11 October, 2016). Areas of the UK, such as Kent, are beginning to use predictive policing. E.g. see O’Donoghue, R. (2016, 5 April) Is Kent’s Predictive Policing project the future of crime prevention? KentOnline (retrieved from http://kentonline.co.uk, accessed on 11 October, 2016).
2. Mitchell, T. (1997) Machine Learning. London, UK: McGraw-Hill Education.
3. These include Cambridge’s Centre for the Study of Existential Risk and Oxford’s Future of Humanity Institute.

YouGov Poll Graph 1

Which ONE, if either, of the following statements BEST describes your view towards Artificial Intelligence (AI)?

AI is more of an opportunity for humanity than a risk: women 28%, men 43%
AI is more of a risk to humanity than an opportunity: women 30%, men 29%
Neither of these: women 15%, men 13%
Don’t know: women 26%, men 16%

– Significantly fewer women think that AI is more of an opportunity for humanity than a risk.

YouGov Poll Graph 2

In general, do you think the UK Government should pay more or less attention to the potential opportunities and risks of Artificial Intelligence, or the same as it does currently?

The UK Government should pay more attention: 42%
The UK Government should pay less attention: 8%
The UK Government should pay the same amount of attention as it does currently: 26%
Don’t know: 24%

– British people think the government should pay more attention to the opportunities and risks of AI.

All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 2070 adults. Fieldwork was undertaken between 10th - 11th October 2016. The survey was carried out online. The figures have been weighted and are representative of all UK adults (aged 18+).

We need to have a much deeper and more informed public debate about AI in order to build the trust, understanding, and acceptance that are vital to realise the benefits of this technology and to ensure that AI is developed in ways that fit human wants and needs.4

We also need to have a much deeper and more informed political debate about AI.5 The words ‘artificial intelligence’ have only been said 32 times in the House of Commons since electronic records began, compared to 923 for ‘beer’ and 564 for ‘tea’. Hopefully the recently published report of the Science and Technology Select Committee into Robotics and AI6 will provide much-needed stimulus.

Our recommendations to the UK Government are summarised below. The Government is not the only important actor in this space, but it does have a vital role to play, alongside industry, AI researchers, the media, and the public.

We welcome discussion as well as critique of our recommendations with the intention that this report will help stretch political horizons and shape an increasingly informed public debate on this dynamic and important subject.

4. Some commendable work has been undertaken in this area, notably by the Royal Society and Nesta, and this will serve as a useful starting point for wider public discourse.
5. There is some promising work within the Civil Service in this regard, notably the Government Office for Science’s forthcoming report on AI and governance, and published ethical guidelines on the use of data science tools for government analysts.
6. Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.

Summary of policy recommendations

The UK Government should:

1. Make the AI opportunity a central pillar of the Prime Minister’s proposed industrial strategy and of the trade deals that the UK must negotiate post-Brexit.

2. Commission UK-specific research to assess which jobs are most at risk by sector, geography, age group, and gender. And then implement a smart strategy to address future job losses through retraining, job creation, financial support, and psychological support.

3. Draft a White Paper on adapting the education system to maximise the opportunities and minimise the risks created by AI.

4. Agree a ‘new deal on data’ between citizens, businesses, and government with policies on privacy, consent, transparency, and accountability, through a nation-wide debate led by a respected and impartial public figure.

5. Promote transparency and accountability in AI decision-making by supporting research that facilitates an opening of the ‘black box’ of intelligent algorithms and supporting open data initiatives.

6. Establish systems of liability, accountability, justification, and redress for decisions made on the basis of AI. This would promote fairness and justice, and could encourage companies to invest in more transparent AI systems.

7. Support a ban on Lethal Autonomous Weapons Systems (LAWS) and work with international partners to develop a plan for the enforcement of the ban and for the prevention of the proliferation of LAWS.

8. Give appropriate attention to long-term issues of AI safety: support research into AI safety and horizon scanning; support the institutionalisation of safe AI research conduct in all sectors, including the development of a code of ethics; develop standards and guidelines for whistle-blowers; and ensure students and researchers are trained in the ethical implications of their work.

9. Facilitate a House of Commons debate on maximising the opportunities and minimising the risks of AI in the UK.

10. Establish a Standing Commission on AI to examine the social, ethical, and legal implications of recent and potential developments in AI.

11. Develop mechanisms to enable fast transfers of information and understanding between researchers, industry, and government to facilitate swift and accurate policy-making based on fact.

12. Launch a prize for the application of AI to tackling today’s major social challenges and delivering public goods.

INTRODUCTION

“The dream is finally arriving. This is what it was all leading up to…We’ve made more progress in the last five years than at any time in history.”7 Bill Gates

What is AI?

Defining ‘artificial intelligence’ is a complicated task, mainly because the concept of intelligence itself is hard to pin down. In this paper we use an inclusive definition of intelligence as ‘problem solving’ and consider an ‘intelligent system’ as one that takes the best possible action in a particular situation.8

As early as 1997 IBM’s Deep Blue beat world champion Garry Kasparov at chess, a game associated with high intelligence. That was impressive. But Deep Blue could not play Scrabble. Deep Blue was “narrow” AI, which means it was very good at a particular task but could not switch between tasks.

In recent years, significant progress in ‘machine learning’ has meant that AI systems are becoming more flexible. ‘Machine learning’ refers to AI systems that are able to improve their performance at a task over time and have the ability to adapt their own rules and features based on their own output and experience.9 One example, which used a technique referred to as ‘deep learning’, is the way in which Google DeepMind’s AI system became exceptionally good at a wide range of Atari computer games. The system was instructed to maximise its score on various games, and the only input it received was the score and video game pixels.

Flexibility and adaptability are what makes current (and potential future) AI such a powerful tool, along with its ability to find patterns in, provide useful insights about, and make predictions from, vast datasets.

The State of Play Today

We are at the start of an intelligence revolution that could herald even greater economic and social change than the industrial revolution, over a shorter timeframe. As the graphic below shows, AI is already being used to perform a wide range of tasks.

Garry Kasparov, former World Chess Champion, considered by many to be the best player of all time, lost to IBM’s Deep Blue in 1997.

7. Prigg, M. (2016, 2 June) Bill Gates claims ‘AI dream is finally arriving’ - and says machines will outsmart humans in some areas within a decade. Daily Mail (retrieved from http://dailymail.co.uk, accessed 12 October, 2016).
8. Russell, S. J., and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.
9. Mitchell, T. (1997) Machine Learning. London, UK: McGraw-Hill Education.

Examples of AI use today

Virtual assistants: Siri, Cortana, and Google Now are all driven by AI.

Search engines: Google improves its results using intelligent algorithms.

Purchase prediction: Amazon suggests products we may like based on our purchase or search history.

Music and film recommendations: Spotify and Netflix suggest new songs and shows based on what we have previously listened to or watched.

Legal advice: The chatbot DoNotPay has successfully contested 160,000 parking tickets in London and New York.10

Creating art: AI has been used to compose music, write poetry, and produce paintings.

Water management: AI is being used to coordinate drones to test the water quality of a number of European rivers, including the Thames.11

Agriculture: AI is being used to diagnose problems in crop growth; in smart tractors that can selectively spray weeds with herbicide; and in satellite imaging to identify areas where farmers will require more support.12

Transport: AI underpins a number of features in SatNav systems; is used to ease congestion in cities; and is behind recent advances in driverless cars.

Wildlife protection: AI has been successfully deployed to inform rangers’ patrol routes in efforts to combat poaching in Uganda and Malaysia.13

Journalism: AI is being used to draft short articles and reports. The Washington Post deployed AI in its coverage of the Olympics.14

Health: AI is being used to interpret eye scans; improve treatment of severe combat wounds; and reduce hospital-acquired infections.15

10. Gibbs, S. (2016, 28 June) Chatbot lawyer overturns 160,000 parking tickets in London and New York. The Guardian (retrieved from https://theguardian.com, accessed 11 October, 2016).
11. Guerrini, F. (2016, 11 May) From Lake Garda to the Thames: Why boat drones are taking to the water. ZDNet (retrieved from http://zdnet.com, accessed 11 October, 2016).
12. E.g. see Simon, M. (2016, 25 May) The Future of Humanity’s Food Supply Is in the Hands of AI. Wired (retrieved from https://wired.com, accessed 11 October, 2016).
13. Snow, J. (2016, 12 June) Rangers Use Artificial Intelligence to Fight Poachers. National Geographic (retrieved from http://news.nationalgeographic.com, accessed 11 October, 2016).
14. WashPostPR (2016, 5 August) The Washington Post experiments with automated storytelling to help power 2016 Rio Olympics coverage. The Washington Post (retrieved from https://washingtonpost.com, accessed 11 October, 2016).
15. Moorfields Eye Hospital (retrieved from http://www.moorfields.nhs.uk/news/moorfields-announces-research-partnership) and Executive Office of the President (2016, October) Preparing for the Future of Artificial Intelligence.

2016 in particular has seen a number of breakthroughs. Earlier this year, Google DeepMind’s AlphaGo AI system beat Lee Sedol, the world’s leading player of the ferociously complicated game of ‘Go’, which originated in China.

DeepMind also announced a new partnership with Moorfields Eye Hospital, extending existing partnerships with the NHS.16 And in September, Uber began trialling driverless cars with members of the public in Pittsburgh thanks to AI’s ability to navigate complicated urban environments.17

IBM CEO Ginni Rometty has said that her organisation is betting the company on AI. And according to Google CEO Sundar Pichai, the tech giant is “thoughtfully applying it across all our products, be it search, ads, YouTube, or Play.”18

In the future, increasingly powerful and flexible AI will be deployed in almost every area imaginable. Ahead of us lies enormous potential. AI could turbo-charge productivity, empower citizens, and deliver cheap and safe transport, with innumerable benefits to businesses, society, and individuals. We have the opportunity to develop a fundamentally better society in which AI is used to help solve some of our most pressing problems, including disease and climate change. But these technological advances do not come without risk.

The Structure of this Report

This report is made up of four main sections:

• Section 1 explores the impact of AI on employment.
• Section 2 looks at the interaction of AI with ‘big data’ and how it will amplify current challenges around privacy, fairness, and accountability.
• Section 3 focuses on the military use of AI with an emphasis on autonomy.
• Section 4 looks at the more distant future, including the risk that human beings might lose control of AI that possesses greater-than-human intelligence.

Each section makes concrete recommendations to the UK Government about how it can help maximise the opportunities and minimise the risks of AI.

16. Shead, S. (2016, 10 July) ‘Google DeepMind: How, why, and where it’s working with the NHS’. Business Insider UK (retrieved from http://uk.businessinsider.com, accessed 6 October, 2016).
17. Mui, C. (2016, 22 August) ‘Uber Is Positioned To Slingshot Ahead Of Google In Driverless Cars’. Forbes (retrieved from http://forbes.com, accessed 6 October, 2016).
18. Executive Office of the President (2016, October) Preparing for the Future of Artificial Intelligence.

SECTION 1: AI AND EMPLOYMENT

“The business plans of the next 10,000 start-ups are easy to forecast: Take X and add AI.”19 Kevin Kelly, Founding Editor Wired Magazine

Increasing Productivity and Economic Growth

Artificial intelligence is already enabling a wave of innovation across every sector of the UK economy. It helps businesses use resources more efficiently, allows new approaches to old problems, and enables entirely new business models to be developed, often built around AI’s powerful ability to interrogate large data sets.

The Pessimistic View of AI and Employment

History offers many examples of workers being replaced by new technologies. In the nineteenth century the mass replacement of skilled textile workers by industrial looms provoked the ‘Luddites’ to break the looms that were putting them out of work. More recently, nearly all of the 60,000 jobs in 9,000 Blockbuster video stores worldwide at the company’s peak in 2004 have now disappeared.20

The pessimistic view is that the intelligence revolution will drive a relentless wave of redundancy, leading to increasing inequality. Various economists have predicted that developments in AI, alongside advances in other technologies, will usher in an age of mass unemployment.21,22

One Oxford study predicts that 35% of UK jobs are at high risk of automation over the next 20 years.23 The Bank of England’s Chief Economist Andy Haldane thinks this could be higher, with 15 million (half of all today’s workers) likely to be replaced.24 President Obama’s Chief Economist Jason Furman has suggested that 83% of jobs making less than $20 per hour in the US will face serious pressure from automation. For middle-income work that pays between $20 and $40 per hour, that number is still as high as 31%.25

Most at risk are jobs with routine intellectual components, cutting across all sectors of the economy. This includes many jobs traditionally viewed as ‘safe’ from automation, like medicine, law, and journalism. AI is already being used to interpret scans and complex medical data, to sort through legal documents, and to write brief articles and sports reports.

19. Kelly, K. [kevin2kelly]. (2016, 7 April) The business plans of the next 10,000 start-ups are easy to forecast: Take X and add AI. #theinevitable [tweet] (retrieved from https://twitter.com/kevin2kelly/status/718166465216512001, accessed 6 October, 2016).
20. Harress, C. (2013, 5 December) The Sad End Of Blockbuster Video: The Onetime $5 Billion Company Is Being Liquidated As Competition From Online Giants Netflix And Hulu Prove All Too Much For The Iconic Brand. International Business Times (retrieved from http://ibtimes.com, accessed 11 October, 2016).
21. Ford, M. (2015). Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books.
22. See Brynjolfsson, E. (2015, 4 June) Open Letter on the Digital Economy. MIT Technology Review (retrieved from https://technologyreview.com, accessed 6 October, 2016) and Brynjolfsson, E., & McAfee, A. (2011). Race against the machine. Lexington, MA: Digital Frontier Press.
23. Frey, C. B., and Osborne, M. A. (2013) The Future of Employment: how susceptible are jobs to computerisation? Oxford Martin School. See also: BBC (2015, 11 September). Will a robot take your job? BBC (retrieved from http://www.bbc.co.uk/news/technology-34066941, accessed 6 October, 2016) and Knowles-Cutler, A., Frey, C. B., and Osborne, M. A. (2014). Agile town: the relentless march of technology and London’s response. Deloitte (retrieved from http://deloitte.com, accessed 6 October, 2016).
24. McGoogan, C. (2015, 13 November) Bank of England: 15 million British jobs at risk from robots. Wired (retrieved from https://wired.co.uk, accessed 6 October, 2016).
25. AI Now (2016) The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term: A summary of the AI Now public symposium, hosted by the White House and New York University’s Information Law Institute, July 7th, 2016. AI Now (retrieved from https://artificialintelligencenow.com, accessed 10 October, 2016).

Uber began trialling driverless cars in Pittsburgh in September 2016.

The Modern Transport Bill, announced in this year’s Queen’s Speech, will aim to “put the UK at the forefront of autonomous and driverless vehicles ownership and use.” One must assume that jobs involving driving are very clearly at risk. Uber now operates in over 20 UK cities,26 employing 25,000 people in London alone.27 The company’s stated goal is to replace their drivers entirely, which would drive down costs and accident numbers, but also jobs.28 The competition to lead the transition to driverless vehicles is fierce. Google, Tesla, Baidu, and nuTonomy29 are among the other big players in this race, which will have a profound impact on taxi drivers, bus drivers, lorry drivers, and the transport sector as a whole.

According to Deloitte, the UK sector which has the highest number of jobs with a high risk of automation is wholesale and retail. In this sector 2,168,000 jobs (or 59% of the total current workforce in this sector) have a high chance of being automated in the next two decades. This is followed by the transport and storage sector, where 1,524,000 jobs (74% of the workforce) are likely to be automated, and human health and social work, where 1,351,000 jobs (28% of the workforce) are at risk.30

The development of AI may also affect employment patterns and inequality between countries. In the past, many developing economies achieved growth by exploiting high numbers of low paid workers. This strategy was successful (in terms of growth generation) for the so-called East Asian ‘Tiger’ economies and later for China and India, and has helped them to catch up with richer economies in terms of GDP. It is a strategy that may not be available in future if more and more routine physical and intellectual tasks are automated. Developing countries may therefore need to pursue leap-frogging strategies aimed at being competitive in those areas of employment that will not be impacted by automation.

26. Telegraph Reporters (2016, 16 May) What is Uber and what should I think about the controversies? The Telegraph (retrieved from https://telegraph.co.uk, accessed 6 October, 2016).
27. Titcomb, J. (2016, 2 June) Majority of Uber drivers in London work part time, study says. The Telegraph (retrieved from https://telegraph.co.uk, accessed 6 October, 2016).
28. Newman, J. (2014, 28 May) Uber CEO Would Replace Drivers With Self-Driving Cars. Time (retrieved from https://time.com, accessed on 6 October, 2016).
29. nuTonomy beat Uber to the punch by a matter of weeks when it launched trials for a self-driving taxi in Singapore and aims for a driverless fleet by 2018. See Vasegar, J. (2016, 29 August) nuTonomy looks to beat Uber at its own game. Financial Times (retrieved from https://ft.com, accessed 11 October, 2016).
30. Quoted in Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.

YouGov Poll Graph 3

How worried, if at all, are you that your job will be replaced by Artificial Intelligence (e.g. robots, machines) in the near future?

Very worried: 2%
Fairly worried: 6%
Not very worried: 20%
Not at all worried: 29%
Don’t know: 2%
Not applicable - not currently working: 41%
Net: worried: 8%
Net: not worried: 49%

– British people tend not to be worried that their jobs will be replaced by Artificial Intelligence, robots, or machines in the near future.


The Optimistic View of AI and Employment

A utopian spin on these predictions of AI-fuelled unemployment is that a new era will arrive where work itself becomes optional and humans are freed from the financial and temporal constraints of employment to pursue other more fulfilling activities in a world of increasing abundance.31 Assuming this new age of machine workers eliminates scarcity, our chief economic problem will be that of distribution, not production.32

Other economists paint a different picture, with some believing that fears of unemployment stem from the limitations of our imagination.33 Historically, technological advances have brought new demand, creating jobs in previously unimaginable sectors. Enter the words ‘social media jobs’ into any recruitment website and you will find hundreds of jobs that simply did not exist ten years ago. One estimate predicts that 65% of children in primary school today will be working in a job that doesn’t exist yet.34 At the same time, while machines may have a comparative advantage in routine tasks, humans will retain an edge in roles that require creativity, lateral thinking, interpersonal skills, caring, and adaptability for many years to come.35

31. E.g. see Wohlsen, M. (2014, 14 August) When Robots Take All the Work, What’ll Be Left for Us to Do? Wired (retrieved from https://wired.co.uk, accessed 7 October, 2016).
32. Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. The Journal of Economic Perspectives, 29(3), 3-30.
33. Mokyr, J., Vickers, C., and Ziebarth, N. L. (2015). The history of technological anxiety and the future of economic growth: Is this time different? The Journal of Economic Perspectives, 29(3), 31-50.
34. McLeod, S., and Fisch, K. “Shift Happens” (retrieved from https://shifthappens.wikispaces.com).
35. Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. The Journal of Economic Perspectives, 29(3), 3-30.

Jobs of the future

Though many jobs will disappear, the intelligence revolution will open up other avenues of employment, increasing demand in some existing areas or creating new demands entirely.

In the short-term, demand for computational and technical literacy will increase, and the ability to interpret data will be a valued skill. At least in the short-term, software developers, coders, and data analysts will all be in high demand, and we will still need mechanics and technicians to maintain and repair automated systems.

There are also those jobs that will resist automation. Complex manual jobs that require a great deal of dexterity will endure. It is therefore unlikely that those employed as hairdressers, chefs, gardeners, dentists, and cleaners will be replaced soon.

Creative roles will also display resilience, and will likely experience an increase in demand. Entrepreneurs, creative writers, and scientists – disciplines requiring complex and creative thinking – are all in this category. Machines are also currently exceptionally poor at tasks involving social skills, meaning carers can expect to remain in demand. Many people may simply prefer to be cared for by human carers.

Other jobs will be transformed rather than replaced, with employees freed from routine tasks to focus on more cognitively demanding areas. This may be the case with higher-level medical, legal, management, and teaching work.

Many new areas will also emerge, a number of these facilitated by developments in AI. There will be handypersons to assist in setting up smart homes as the Internet of Things connects increasing areas of our lives, and traffic monitors for fleets of driverless vehicles. Advances in 3D printing will spur demand for a new sector of designers and innovators, and as our personal data becomes increasingly revealing and difficult to keep confidential, professions will emerge to help us manage our data.

For many tasks it may be that a combination of man and machine will be the most productive. It is heartening to note that although Deep Blue beat Kasparov in 1997, combinations of humans and machines became the world’s best chess players in the mid-2000s, beating the best humans and the best computers by combining tactical support from the computer with strategic guidance from the human.36 The fact that Lee Sedol appears to have become a better ‘Go’ player since being beaten by Google DeepMind’s AlphaGo earlier this year is perhaps further evidence of the power of such human-machine synergy.37 Job growth in the future is likely to be in roles that complement technology rather than those that can be substituted by it.

[Map: Employed population working as contact centre staff, by region]

A Prudent Approach

Ultimately, we should take care not to enter into complacent optimism or pessimism. Our inability to imagine jobs of the future does not mean we are bound for total unemployment. On the other hand, the historical precedent of new jobs emerging offers little by way of assurance that such trends will continue as machines become as good as humans at many physical and intellectual tasks.

What does seem highly likely is that the rate and scope of change in employment markets will be unprecedented, with rapid disruption across almost all sectors. It may be, for example, that most of the more than 1 million jobs38 in UK call centres simply disappear once AI-based call centre systems cross a certain quality threshold. This would be a major employment shock and would be concentrated in certain geographical areas (see map), many of which have already been hit hard by de-industrialisation.

[Map legend: 6%, 5-6%, 3-4%, 2-3%, and 2% of employed population. Based on 2014 data from ContactBabel.]

As with all change, the impact will be different on different genders, geographic regions,39 and age groups. Improving our understanding of where the impact is likely to be felt is vital if the government is to develop a smart and proactive policy response.

36. Shanahan, M. (2015). The Technological Singularity. Cambridge, MA: MIT Press. P.191.
37. Hassabis, D. [demishassabis]. (2016, 5 May) ‘Lee Sedol has won every single game he has played since the #AlphaGo match inc. using some new AG-like strategies - truly inspiring to see!’ [tweet] (Retrieved from https://twitter.com/demishassabis/status/728020177992945664, accessed 10 October, 2016).
38. Unison (2013) Unison Calling: a guide to organising in call centres. Unison.
39. A Deloitte study reveals that London will be significantly safer in terms of jobs, with 51% at low risk compared to 40% for the UK as a whole; see Knowles-Cutler, A., Frey, C. B., and Osborne, M. A. (2014). Agile town: the relentless march of technology and London’s response. Deloitte (retrieved from http://deloitte.com, accessed 6 October, 2016). A Nesta study of creative industries, which are likely to resist automation for longer, also shows an uneven distribution, with “a dominant presence in London and the South-East of England”. See Mateos-Garcia, J. and Bakhshi, H. (2016) The Geography of Creativity in the UK. Nesta.

Education and Training

Our education system will need to be radically reformed to maximise the opportunities and minimise the risks that AI presents.

Skills like coding and STEM (Science, Technology, Engineering, and Mathematics) subjects are already in high demand, and this will only increase in the short term. While coding is now being taught in schools and efforts to increase the uptake of STEM subjects in under-represented groups are underway, we need to continue and expand these initiatives. The recent Science and Technology Select Committee Report rightly criticises the Government for its failure to publish its long-awaited Digital Strategy to address the digital skills crisis.40

In the longer term, there should be greater emphasis in the curriculum on skills that are likely to remain uniquely human for longer, such as creativity, lateral thinking, interpersonal skills, and caring. The schools of the future will need to be more like Montessori schools, with less emphasis on knowledge acquisition and rote learning and more on creative and flexible thought.41

Given the likely rate and scope of change in job markets over the coming years, a focus on self-directed, lifelong learning techniques will be essential to creating a flexible and dynamic workforce.

People should also be directed, via educational opportunities, towards jobs such as caregivers which are in demand and likely to be resilient to change for longer.42

POLICY RECOMMENDATIONS

1. The UK Government should make the AI opportunity a central pillar of the Prime Minister’s proposed industrial strategy43 and of the trade deals that the UK must negotiate post-Brexit.

2. The UK Government should commission UK-specific research to assess which jobs are most at risk by sector, geography, age group, and gender, and then implement a smart strategy to address future job losses through retraining, job creation, financial support, and psychological support.44

3. The Department for Education should draft a White Paper on adapting the education system to maximise the opportunities and minimise the risks created by AI.

40. Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.
41. It is interesting to note that a disproportionately large number of the ‘global creative elite’ went to Montessori schools, including Google founders Larry Page and Sergey Brin, Amazon’s Jeff Bezos, videogame pioneer Will Wright, and Wikipedia founder Jimmy Wales. This has led to the coining of the phrase “Montessori Mafia”. E.g. see Denning, S. (2011, 2 August) Is Montessori The Origin Of Google And Amazon? Forbes (retrieved from http://forbes.com, accessed 12 October, 2016).
42. In some countries with ageing populations robot carers may take on a bigger role sooner. This may be the case in Japan, for example, which has invested heavily in robotics for care. See Hudson, A. (2013, 16 November) ‘A robot is my friend’: Can machines care for elderly? BBC (retrieved from http://bbc.co.uk, accessed 11 October, 2016).
43. Measures could include fostering tech start-ups; strengthening links between universities and industry; exploring collaborations on public infrastructure (e.g. smart cities, driverless cars, hospitals etc.); and exploring opportunities for government to support and fast-track innovative uses of AI. Early adopters are likely to become centres of this new revolution and reap the most rewards. We might look to nations like Singapore, for instance, to see how collaboration can be driven between sectors to foster an environment that is conducive to innovation. E.g. see Daga, A. and Armstrong, R. (2014, 4 April) Singapore targets investment in ‘disruptive’ technologies. Reuters (retrieved from http://reuters.com, accessed 11 October, 2016).
44. In the USA the White House will conduct such a study and recommend policy responses by the end of 2016: Executive Office of the President (2016, October) Preparing for the Future of Artificial Intelligence.

SECTION 2: PERSONAL DATA: PRIVACY, FAIRNESS, AND ACCOUNTABILITY

“Some of the most prominent and successful companies have built their businesses by lulling their customers into complacency about their personal information. They’re gobbling up everything they can learn about you and trying to monetize it.”45 Tim Cook, Apple CEO

Introduction

As we go about our lives we generate vast quantities of data. We shop online, bank online, date online, and watch TV online. We communicate via email and text message. Increasingly, through the Internet of Things (IoT), we are connecting growing parts of our lives to networks. We produce so much data, in fact, that it is impossible to store,46 let alone analyse, it all. This proliferation of data will continue as our use of digital technology and connectivity expand.

Analysing all this data to derive powerful insights into people’s lives is the sort of task that AI is very good at.

YouGov Poll Graph 4: What do British people think AI should be used for?

                                           AI should be     AI should not      Don’t
                                           used for this    be used for this   know
Gathering police intelligence                  54%              26%             20%
Determining eligibility for mortgages          30%              47%             23%
Child risk assessment by social services       19%              60%             21%
Determining eligibility for jobs               19%              58%             23%
Determining eligibility for insurance          34%              41%             25%
Helping to diagnose diseases                   45%              32%             23%

All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 2070 adults. Fieldwork was undertaken between 10th - 11th October 2016. The survey was carried out online. The figures have been weighted and are representative of all UK adults (aged 18+).

45. Panzarino, M. (2015, 2 June) Apple’s Tim Cook Delivers Blistering Speech On Encryption, Privacy. TechCrunch (retrieved from https://techcrunch.com, accessed 12 October, 2016).
46. E.g. see Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., and Byers, A. H. (2011). Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute (retrieved from https://mckinsey.com, accessed 10 October, 2016) and Reinsel, D., Chute, C., Schlichting, W., McArthur, J., Minton, S., Xheneti, I., Toncheva, A., and Manfrediz, A. (2007). The Expanding Digital Universe. White paper, IDC.

Providing a Better Service

Businesses such as Netflix and Amazon use AI to tailor their offerings to customers, recommending films you may like and the books people like you are reading. The value that this kind of personalisation delivers for customers translates into the profits and giant stock market valuations of data-driven companies like Google and Facebook. Between 2015-20, big data analytics and the IoT alone are expected to add £322 billion to the UK economy.47

Increasingly, AI algorithms are also being used to tailor public services and deliver public goods such as healthcare, improving energy efficiency,48 easing traffic flow,49 and controlling the spread of communicable diseases.50,51 In the future, we can expect AI insights to be applied more widely.

Consent

Most of us are clueless about what data is collected about us, by whom, and for what purpose. This is recognised by both parties, with the necessary acquisition of consent literally and figuratively a mere box-ticking exercise. We need to consider whether access to our personal data remains a reasonable condition of use of everyday services, from email to Facebook.

Given the swift advances in data analytics it is impossible to imagine all the uses data may serve in the future, making it hard to assure the protection of data subjects. For example, Moorfields Eye Hospital drew heavy criticism this year for providing Google with sensitive patient information, seemingly without informed patient consent.

Privacy

The personal and social benefits arising from AI’s ability to interrogate big data may be enormous, but there is also the risk that information people would rather have kept confidential will be revealed. One example of this was when a father accidentally opened the coupons for baby supplies mailed to his daughter by the store Target, based apparently on an algorithmic prediction that she was pregnant.52

Certain forms of data, such as commercial and medical information, are collected and stored under conditions of anonymity. However, advances in AI make anonymity increasingly fragile, and it may become increasingly possible to re-assign identity to particular sets of information because of AI’s ability to cross-reference between vast quantities of data in multiple data sets.53 These developments worsen existing concerns about privacy and raise new ones.54

Bias

AI systems depend on the data on which they are trained and which they are given to assess, and may reflect back biases in that data in the actions they recommend. Biases may exist in data because of weaknesses in data collection; these may be addressed by ‘cleaning the data’55 or improving data collection.

Bias may also occur when the process being modelled itself exhibits unfairness. For example, if data on job applications was gathered from an industry that systematically hired men over women and this data was then used to help select likely strong candidates in the future, this could then reinforce sexism in hiring decisions. Addressing this kind of bias may require a combination of common sense along with more

47. Hogan, O., Holdgate, L., and Jayasuriya, R. (2016) The Value of Big Data and the Internet of Things to the UK Economy. Report for SAS. Centre for Economics and Business Research Ltd.
48. Cisco (2014) IoE-Driven Smart Street Lighting Project Allows Oslo to Reduce Costs, Save Energy, Provide Better Service. Cisco (retrieved from https://cisco.com, accessed 10 October, 2016).
49. Morris, D. (2015, 5 August) Big data could improve supply chain efficiency - if companies would let it. Fortune (retrieved from http://fortune.com, accessed 10 October, 2016).
50. Grant, E. (2012) The promise of big data. Harvard T.H. Chan School of Public Health (retrieved from https://www.hsph.harvard.edu, accessed 10 October, 2016).
51. Parslow, W. (2014, 17 January) How big data could be used to predict a patient’s future. The Guardian (retrieved from https://theguardian.com, accessed 10 October, 2016).
52. Duhigg, C. (2012, 16 February) How Companies Learn Your Secrets. The New York Times (retrieved from http://nytimes.com, accessed 10 October, 2016).
53. See Ohm, P. (2010). Broken promises of privacy: Responding to the surprising failure of anonymization. UCLA Law Review, 57, 1701.
54. The Digital Economy Bill, currently being read in Parliament, is a case in point. It has been criticised for its ‘thin safeguards’ regarding the sharing of publicly held data, and for a lack of precision in defining data sharing. Advances in AI will only serve to worsen existing tensions over how data is used by government.
55. Data cleaning refers to identifying incomplete, incorrect, inaccurate, irrelevant, etc. parts of a data set and then replacing, modifying, or deleting them.

complex and political kinds of interventions to avoid reinforcing unfair stereotypes and inequalities.56

In spite of these well-researched potential sources of bias in AI decision-making, there remains a tendency to view AI decisions as neutral.57 Concerns about bias are compounded by the severe lack of diversity in the AI field, raising fears that bias may be considered less of a problem or may not be identified when it occurs. With all this potential for bias, discrimination laws may be an important mechanism for ensuring that AI does not make society more unequal and unfair. Of course, concerns about data bias and machine prejudice must be considered alongside the existence of prejudice and bias in the human processes they replace.
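The hiring example above can be sketched in a few lines of code. This is a hypothetical illustration using synthetic data (the groups, numbers, and naive ‘scorer’ are all invented for the example, not drawn from any real system): a model trained on historically biased decisions simply learns that bias back.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic historical hiring records from an industry that favoured men:
# both groups have identical skill distributions, but past decisions
# effectively gave male candidates a head start.
n = 10_000
is_male = rng.random(n) < 0.5
skill = rng.normal(size=n)                           # same distribution for everyone
hired = (skill + np.where(is_male, 1.0, 0.0)) > 1.0  # biased past decisions

# A naive candidate scorer "trained" on these outcomes: it learns each
# group's historical hire rate and uses it as a prior for new applicants.
male_prior = hired[is_male].mean()
female_prior = hired[~is_male].mean()

print(f"learned prior for men:   {male_prior:.2f}")
print(f"learned prior for women: {female_prior:.2f}")
# Despite identical skill distributions, the scorer systematically
# favours male candidates, reproducing the historical unfairness.
```

In real systems the effect is usually subtler, with bias entering through proxy variables rather than an explicit group label, which makes it harder to detect and remove.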

BOX 1: EXAMPLES OF DATA BIAS AND MACHINE PREJUDICE

Google ads promising help getting jobs paying more than $200,000 were shown to significantly fewer women than men.58

Amazon was criticised when its ‘Prime’ same day delivery service was only available in largely white, affluent areas due to decisions based on customer data.59

Recidivism software widely used by American courts to assess the likelihood of an individual re-offending was found to falsely flag black people at roughly twice the rate of white people60 (this example was especially concerning as the algorithm was protected under intellectual property laws and was not open to scrutiny).

Face recognition software has failed offensively on numerous occasions. Google’s photo app classified black people as gorillas;61 Nikon cameras thought Asian people were blinking;62 and Hewlett-Packard computers struggled to recognise black faces.63

Transparency

AI decision-making systems are often deployed as a background process, unknown and unseen by those they impact. Further problems arise from our inability to see how AI arrives at the decisions it makes. This is particularly true of some complicated machine learning algorithms which evolve over time. This ‘black box’ issue is exacerbated by the fact that significant stores of data are not in the public domain, meaning it is impossible to test or challenge results. This is one reason why organisations like the London-based Open Data Institute advocate for this information to be made public.

Governments and businesses need to be able to provide an explanation that people can understand as to why decisions have been made, and

56. AI Now (2016) The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term: A summary of the AI Now public symposium, hosted by the White House and New York University’s Information Law Institute, July 7th, 2016. AI Now (retrieved from https://artificialintelligencenow.com, accessed 10 October, 2016).
57. E.g. see Zarsky, T. (2016). The trouble with algorithmic decisions: an analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology & Human Values, 41(1), 118-132.
58. Spice, B. (2015, 7 July) Questioning the Fairness of Targeting Ads Online. Carnegie Mellon University News (retrieved from http://cmu.edu, accessed 10 October 2016).
59. Ingold, D. and Soper, S. (2016, 21 April) Amazon Doesn’t Consider the Race of Its Customers. Should It? Bloomberg (retrieved from http://bloomberg.com, accessed 10 October, 2016).
60. Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016, 23 May) Machine Bias. ProPublica (retrieved from https://propublica.org, accessed 10 October, 2016).
61. Barr, A. (2015, 1 July) Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms. The Wall Street Journal (retrieved from http://blogs.wsj.com, accessed 10 October, 2016).
62. Rose, A. (2010, 22 January) Are Face-Detection Cameras Racist? Time (retrieved from http://content.time.com, accessed 10 October 2016).
63. Chen, B. X. (2009, 22 December) HP Investigates Claims of ‘Racist’ Computers. Wired (retrieved from https://wired.com, accessed 10 October, 2016).

citizens must be able to challenge them if unfair. Opening up the ‘black box’ in this way should also make it easier to retain meaningful human control over AI in the long-term.

Limits to AI in Data Analytics

It is important to recognise the limitations of data analysis. Correlation does not equal causation. AI today is capable of recognising patterns, and large, diverse datasets can throw up many patterns indeed. Some are meaningful, others are not. This should be borne in mind as our use of these insights increases, especially when used to inform public policy.

Google’s ability to predict flu outbreaks, for instance, failed after what seemed like strong initial successes.64

The Urgent Need For Public Debate

All these developments challenge our current understanding of privacy, consent, and accountability, and our ability to make choices about information relating to us. Over time these issues will become more important, as the amount of information held about us grows and the ability to analyse it improves. They must be dealt with sensitively moving forward.

The agreement by the Government to establish a ‘Council of Data Ethics’ to address some of these issues is a welcome step, but this does not obviate the need for deeper public involvement and for further innovative ways to channel public opinion into policy on this very complicated issue.

We need a ‘new deal on data’ between citizens, business, and governments.65 This is in the interests of business and government as it will build trust. If we do not have a deeper public debate we risk undermining public confidence in this new technology, sparking opposition to its uptake.

The government needs to ensure all stakeholders can raise concerns in an open and constructive manner. Greater clarity is needed about who collects what, and for what purpose. People need to understand the rights of various parties and how to access information about how their own personal data is stored and used. Public debate should also focus on the uncertainties around how data might be used in the future.
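The point about patterns without meaning can be demonstrated directly. The sketch below is an illustrative example using entirely random data, not any real dataset: it generates 1,000 data series that have no relationship whatsoever to a target series, yet still finds one that correlates with the target fairly strongly by pure chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "target" series and 1,000 candidate series, all independent noise:
# none of the candidates has any genuine relationship to the target.
target = rng.normal(size=100)
candidates = rng.normal(size=(1000, 100))

# Pearson correlation of each candidate series with the target.
corrs = np.array([np.corrcoef(c, target)[0, 1] for c in candidates])

strongest = np.abs(corrs).max()
print(f"strongest correlation found: {strongest:.2f}")
# With enough unrelated series, a seemingly meaningful correlation
# (typically around 0.3 here) appears by chance alone.
```

This multiple-comparisons effect is one reason insights mined from large datasets need validating on fresh data before they inform public policy; the Google Flu Trends episode is a well-known cautionary example.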

BOX 2: AI AND ROBOTICS REGULATION AROUND THE WORLD

• Japan is a leader in advanced robotics, and in 2015 the government published an action plan to facilitate the coming ‘robot revolution’ and maintain its status as a ‘robot superpower’. Because of Japan’s ageing population and declining workforce, an emphasis has been placed on robotics applied to manufacturing, service, care, and medicine.66 Japan is modifying its intellectual property and copyright laws to account for AI creations.67

• South Korea announced in 2007 that it was working on a ‘Robot Ethics Charter’ to prevent the misuse of robots and to safeguard human wellbeing.68

64. Lazer, D. and Kennedy, R. (2015, 1 October) What We Can Learn From the Epic Failure of Google Flu Trends. Wired (retrieved from https://wired.com, accessed 11 October, 2016).
65. The case for a ‘new deal on data’ has been made in the USA by Alex “Sandy” Pentland. See for example Harvard Business Review (retrieved from https://hbr.org/2014/11/with-big-data-comes-big-responsibility, accessed 10 October 2016).
66. The Headquarters for Japan’s Economic Revitalization (2015) New Robot Strategy: Japan’s Robot Strategy. Ministry of Economy, Trade and Industry (retrieved from http://meti.go.jp/english, accessed 10 October, 2016).
67. Segawa, N. (2016, 15 April) Japan eyes rights protection for AI artwork. Nikkei Asian Review (retrieved from http://asia.nikkei.com, accessed 10 October, 2016).
68. New Scientist (2007, 8 March) South Korea creates ethical code for righteous robots. New Scientist (retrieved from https://newscientist.com, accessed 10 October, 2016).

• The European Parliament this summer released a proposal suggesting that robots be classed as ‘electronic persons’. This is a response to the increasing abilities and uses of robotics and AI systems, necessitating a rethink in how we conceive of taxation, legal liability, and social security.69

• The European Parliament adopted new regulations on data protection in 2016. These regulations, likely to be in effect in 2018, include the ‘right to an explanation’ over algorithmic decisions that ‘significantly affect’ individuals’ lives.

• The US became the first nation to announce a policy on fully autonomous weapons in 2012. Directive Number 3000.09 requires, for up to 10 years, humans to be ‘in the loop’ when decisions are made about lethal force. Effectively, this is a ban on Lethal Autonomous Weapons Systems, though it is fairly weak, being both temporary and capable of being overridden by senior officials.70

• The UK has a liberal driverless cars policy, under which tests can be conducted anywhere in the UK without special permits.71

• Four US states have passed laws allowing driverless cars (Nevada, Florida, California, and Michigan),72 and the National Highway Traffic Safety Administration has said the AI system driving a car could be considered the car’s driver under federal law.73

POLICY RECOMMENDATIONS

The UK Government should:

1. Agree a ‘new deal on data’ between citizens, businesses, and government with policies on privacy, consent, transparency, and accountability through a nation-wide debate led by a respected and impartial public figure.74

2. Promote transparency and accountability in AI decision-making by supporting research that facilitates an opening of the ‘black box’ of intelligent algorithms and supporting open data initiatives.

3. Establish systems of liability, accountability, justification, and redress for decisions made on the basis of AI. This would promote fairness and justice, and could encourage companies to invest in more transparent AI systems.75

69. See Prodhan, G. (2016, 21 June) Europe’s robots to become ‘electronic persons’ under draft plan. Reuters (retrieved from http://reuters.com, accessed 10 October, 2016) and European Parliament (2016) Draft Report: Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). European Parliament (retrieved from http://europarl.europa.eu, accessed 10 October, 2016).
70. See Department of Defense (2012) Directive Number 3000.09. United States of America Department of Defense and Human Rights Watch (2013, 16 April) US: Ban Fully Autonomous Weapons. Human Rights Watch (retrieved from https://hrw.org, accessed 10 October, 2016).
71. Murgia, M. (2016, 11 April) Britain leads the world in putting driverless vehicles on the roads. The Telegraph (retrieved from http://telegraph.co.uk, accessed 10 October, 2016).
72. Murphy, M. and Jee, C. (2016, 11 July) The great driverless car race: Where will the UK place? Techworld (retrieved from http://techworld.com, accessed 10 October, 2016).
73. Shepardson, D. and Lienert, P. (2016, 10 February) Exclusive: In boost to self-driving cars, U.S. tells Google computers can qualify as drivers. Reuters (retrieved from http://reuters.com, accessed 11 October, 2016).
74. Similar approaches have been adopted when discussing other emerging revolutionary technologies, such as Baroness Warnock’s examination of IVF.
75. These could be built on the proposed European ‘right to explanation’ regarding machine-made decisions over important real-life matters. See Goodman, B. and Flaxman, S. (2016). EU regulations on algorithmic decision-making and a “right to explanation”. arXiv preprint arXiv:1606.08813.

SECTION 3: MILITARY USES OF AI

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”76 Open Letter convened by the Future of Life Institute

Introduction

New technologies have always been put to military use. Artificial intelligence is no different, and it is already being applied across a wide range of military contexts.

BOX 3: HOW IS AI BEING PUT TO MILITARY USE TODAY?

• Cyber defences: intelligent systems are routinely deployed in defending against cyber attack.

• Cyber attack: information put in the public domain by Edward Snowden suggests that the USA is developing a program called MonsterMind, which can launch retaliatory cyber attacks as well as possessing defensive capabilities.77

• Intelligence: AI is used to analyse vast amounts of data, including satellite images and telephone records, to inform military operations.

• Missile guidance: AI has been used to enhance missile targeting, and could allow for more precise and flexible control.

• In the air: AI is able to automate the processes of drones, including surveillance, patrolling, and targeting. It is possible this will lead to fully autonomous systems in the near future.

• At sea: the US ‘Sea Hunter’ is an anti-submarine vessel capable of significant autonomy.78

• Border defence: the South Korean Super aEgis II turret – deployed along the demilitarised zone – is capable of identifying and firing without human intervention. Safeguards removing its automated firing capability were added due to fears that it would make mistakes.79

76. Future of Life Institute (2015) Autonomous weapons: an open letter from AI and robotics researchers. Future of Life Institute (retrieved from http://futureoflife.org, accessed 10 October, 2016).
77. Zetter, K. (2014, 13 August) Meet MonsterMind, the NSA Bot That Could Wage Cyberwar Autonomously. Wired (retrieved from https://wired.com, accessed 11 October, 2016).
78. Masunaga, S. (2016, 18 August) Say hello to underwater drones: The Pentagon is looking to extend its robot fighting forces. LA Times (retrieved from http://latimes.com, accessed 11 October, 2016).
79. See Parkin, S. (2015, 16 July) Killer robots: The soldiers that never sleep. BBC (retrieved from http://bbc.com, accessed 11 October, 2016).

Advances in AI make it possible to increasingly take humans out of the loop when it comes to military decision-making. This move towards increasing autonomy threatens to upend existing concepts of warfare, and has been highlighted by many military powers as an area of key strategic interest.80 Though there are many big issues when it comes to military uses of AI, it is autonomy that perhaps raises the most challenging and novel problems.81

It remains useful to distinguish between the use of AI in cyber warfare and the use of AI in actual physical ‘kinetic’ weapons and systems, although the lines between ‘cyber’ and ‘real-world’ can be very blurred. For example, the Stuxnet cyber attack caused significant physical real-world damage to Iran’s nuclear centrifuges.

Cyber Warfare

A clear indication of the importance that the UK now places on cyber defence was the recent announcement of a new National Cyber Security Centre in London.82

In the cyber domain, the speed of attacks and the quantity of information involved make it impossible for humans to respond effectively without assistance. The growing use of intelligent cyber weapons exacerbates this issue, and makes pre-designed cyber defences alone insufficient to protect digital systems. Automation is required to respond to these attacks in real time.

Automated cyber defences are fairly uncontroversial, codified in the right to defend oneself. Much more controversial are automated cyber defences which are permitted to counter-strike as part of their manoeuvres, a capacity Edward Snowden claims is possessed by the US program ‘MonsterMind’.83 This issue is only magnified by the difficulties one encounters in assigning attribution to cyber attacks, meaning a counter-strike could potentially be launched against an innocent party.

It is imperative that means of regulating autonomous cyber systems are researched before the technology becomes more widely available; this space remains critically under-explored.

Drones

The advent of military Unmanned Vehicles (UVs) or drones has radically changed modern warfare. UVs already provide a number of strategic advantages in areas where human performance might be suboptimal or undesirable, such as very dangerous operations, or those that might be dull, repetitive, or lengthy. Major military powers have highlighted the importance they place on drones for future operations, and it is likely their use will continue to increase in coming years.

This year, as part of the wider, twice-yearly ‘Exercise Joint Warrior’, unmanned drones will take to the stage in a NATO-wide military drill. Held off the coast of Scotland, ‘Unmanned Warrior 2016’ will allow military powers and arms producers to showcase their achievements in this area. This highlights the UK’s efforts to integrate drones into military operations.

Advances in drone-to-drone communication are likely to be particularly transformative, permitting swarming behaviour.84 Swarms would be capable of overwhelming traditional defences, which are geared towards large, singular entities. They would also mark a departure from recent trends towards

80. E.g. see Kaspersen, A. (2016, 15 June) On the Brink of an Artificial Intelligence Arms Race.World Economic Forum (retrieved from https://weforum.org, accessed 10 October, 2016). 81. Many other areas are worthy of consideration. For instance, some uses of AI-enabled algorithms to analyse data and identify potential terrorists has been built upon questionably small datasets. E.g. see Robbins, M. (2016, 18 February) Has a rampaging AI algorithm really killed thousands in Pakistan? The Guardian (retrieved from https://theguardian.com, accessed 10 October, 2016). 82. Bourke, J., Cecil, N., and Prynn, J. (2016, 30 September) National Cyber Security Centre to lead digital war from new HQ in the heart of London. Evening Standard (retrieved from http://standard.co.uk, accessed 10 October, 2016). 83. Zetter, K. (2014, 13 August) Meet MonsterMind, the NSA Bot That Could Wage Cyberwar Autonomously Wired (retrieved from https://wired.com, accessed 10 October, 2016). 84. E.g. see Arquilla, J., and Ronfeldt, D. (2000) Swarming and the Future of Conflict, Santa Monica, Ca., RAND.

22 AN INTELLIGENT FUTURE? MAXIMISING THE OPPORTUNITIES AND MINIMISING THE RISKS OF ARTIFICIAL INTELLIGENCE IN THE UK large, expensive equipment, encouraging the use of In a widely-publicised open letter led by the Future small, replaceable and cheaper objects which are of Life Institute, a number of high profile figures capable of acting as an adaptable and complex (including and the Google entity. It is likely that drones and swarms will recon- DeepMind founders Demis Hassabis and Mustafa figure the battlefield, and require novel strategies Suleyman) argued that a failure to condemn and for attack and defence. prohibit LAWS will lead to a dangerous AI-based arms race,86 the consequences of which would be highly damaging to humanity.87 It is likely that an AI As AI improves, many drone processes are likely arms race will be inherently less stable than that of to be automated. Automation permits more rapid the Cold War, and it could upset delicate geopolitical responses, reducing the human labour required for balances. It is also likely that research into safer AI 85 operations, and removing the need for a secure systems will be of lower priority in an arms race and robust connection to the UV, allowing operations situation, thus increasing the long-term risk of AI, in a wider range of environments. which we explore further in the next section.88 Such an arms race is already showing signs of starting.89 For many purposes, automation is fairly uncontro- versial. Routine, dull, or dangerous operations, such Others question whether there are any circumstances as surveillance, patrolling, or bomb disposal all under which use of LAWS could comply with inter- benefit from automation and present few challenges national humanitarian law, making them a funda- 88 that are not already raised by virtue of remote mentally unethical component of any arsenal. For operation. 
Many of these processes are already many people, giving machines power over human automated to varying degrees today. life and death is a fundamental affront to human dignity.91

International support is growing for a ban on LAWS, Lethal Autonomous Weapons Systems and currently fourteen countries support this posi- tion.92 The UK has explicitly voiced its opposition to Lethal Autonomous Weapons Systems (LAWS) are a ban, or other international regulation, though it broadly defined as systems capable of identifying, insists weapons ‘will always be under human over- targeting, and killing without human intervention. sight and control’.93 The current US position is that Such systems raise profound ethical, legal and there should be a human in the loop,94 however they, political questions, and have spawned a global and a number of major military powers, are actively movement calling for a pre-emptive ban on their use developing, and in some instances deploying, and development. weapons with high degrees of autonomy.

85. A single drone, for instance, can require hundreds of remote operatives e.g. see Sloan, E. (2015). Robotics at war. Survival: Global Politics and Strategy. 57(5), 107-120. 86. There are already signals that this may be beginning. For instance, The United States’ Third Offset Strategy places an emphasis on the importance of keeping ahead with advanced technology, and they, along with Russia and China, are investing heavily in AI and robotics. E.g. see Kaspersen, A. (2016, 15 June) On the Brink of an Artificial Intelligence Arms Race.World Economic Forum (retrieved from https://weforum.org, accessed 10 October, 2016). 87. Future of Life Institute (2015) Autonomous weapons: an open letter from AI and robotics researchers. Future of Life Institute (retrieved from http:// futureoflife.org, accessed 10 October, 2016). 88. Also see Armstrong, S., Bostrom, N., and Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201-206. 89. Kaspersen, A. (2016, 15 June) On the Brink of an Artificial Intelligence Arms Race.World Economic Forum (retrieved from https://weforum.org, accessed 10 October, 2016). 90. E.g. see Rahim, R. A., (2015, 12 November) Ten reasons why it’s time to get serious about banning ‘Killer Robots’. Amnesty International (retrieved from https://amnesty.org, accessed 10 October, 2016). 91. E.g. see Asaro, P., (2012) On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross 94(886). For potential problems with the human dignity argument see Saxton, A., (2016) (Un)Dignified Killer Robots? The Problem with the Human Dignity Argument Lawfare Institute (retrieved from https://lawfareblog.com, accessed 10 October, 2016). 92. In alphabetical order: Algeria, Chile, Costa Rica, Cuba, Bolivia, Ecuador, Egypt, Ghana, Holy See, Mexico, Nicaragua, Pakistan, State of Palestine, and Zimbabwe. 93. Bowcott, O. 
(2015, 13 April) UK opposes international ban on developing ‘killer robots’ The Guardian (retrieved from http://theguardian.com, accessed 10 October, 2016). 94. Defense Science Board (2016) Report of the Defense Science Board Summer Study on Autonomy. Defense Science Board.

Until recently the official position of the UK's major arms manufacturer BAE Systems echoed the UK Government's position that there will "always be a need for a man in the loop".95 This position appeared to shift somewhat, however, when the company recently revealed that it is pushing ahead with development of armed Taranis drones, proceeding on the basis that an autonomous strike capability could be required in future.96

[Image: A model of BAE Systems' Taranis drone on display at Farnborough Airshow in 2008.]

The notion of meaningful human control is the over-arching issue in this debate. Though the matter is regularly debated at the UN, it is unlikely that it will be resolved swiftly. Indeed, it is possible that fully autonomous weapons will be available before any meaningful regulation is in place. 2016 is a critical year for action on this issue, with the five-yearly review conference of the Convention on Conventional Weapons taking place from 12-16 December. Non-governmental organisations, including the Campaign to Stop Killer Robots, are trying to secure action on LAWS in this context.

A pre-emptive ban, especially if supported by major military powers, would likely set a global precedent, and the risk of condemnation would raise the political costs of other nations pursuing LAWS. The UK, as a member of the UN Security Council and NATO, is influential in global military matters, and could sway a number of nations to follow suit. Historically, the UK has taken a leading international role in disarmament, particularly in relation to chemical and biological weapons. It has the opportunity to do the same here. Our YouGov poll finds that 50% of British people think the Government should support a ban, while only 34% think the Government should oppose one (see graph 5).

Graph 5: YouGov poll – British people support a ban on Lethal Autonomous Weapons

In general, to what extent do you think the UK Government should support or oppose a pre-emptive ban on Lethal Autonomous Weapons, which could attack targets without human approval?97

Strongly support 37%

Tend to support 13%

Tend to oppose 13%

Strongly oppose 21%

Don’t know 16%

Net: Support 50%

Net: Oppose 34%


All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 2070 adults. Fieldwork was undertaken between 10th - 11th October 2016. The survey was carried out online. The figures have been weighted and are representative of all UK adults (aged 18+).
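The net figures in graph 5 are simple sums over the response categories. As a quick arithmetic check (the labels and values below are taken directly from the graph):

```python
# Response shares from graph 5 (percentages of all respondents).
responses = {
    "Strongly support": 37,
    "Tend to support": 13,
    "Tend to oppose": 13,
    "Strongly oppose": 21,
    "Don't know": 16,
}

# Net figures are sums of the two support and the two oppose categories.
net_support = responses["Strongly support"] + responses["Tend to support"]  # 50
net_oppose = responses["Tend to oppose"] + responses["Strongly oppose"]     # 34

# The five categories are exhaustive, so they account for all respondents.
total = sum(responses.values())  # 100
```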

95. World Economic Forum (2016) What if robots go to war? World Economic Forum (retrieved from https://weforum.org, accessed 10 October, 2016).
96. Dean, J. (2016, 10 June) RAF drone could strike without human sanction. The Times (retrieved from http://thetimes.co.uk, accessed 10 October, 2016).
97. Respondents were prompted with this statement: "The following question is about Lethal Autonomous Weapons. Lethal Autonomous Weapons are weapons that can identify and attack human targets without human intervention. At the moment humans have to give the final command to go ahead. There are currently proposals for a pre-emptive international ban on Lethal Autonomous Weapons which could attack targets without this human approval."

Difficulties of Enforcing a Ban on LAWS

The practical difficulties of enforcing a ban on LAWS would be significant. The perceived strategic benefits of this weaponry will create strong incentives for its development. AI-based weapons systems can be built with humans in the loop but with the removal of the human as a very simple final step. It would perhaps take only one real-world situation in which the case for taking humans out of the loop is overwhelmingly strong for the use of LAWS to start to become normalised.

The extraordinary difficulties of enforcing a ban on LAWS are compounded by the ease of access to AI, the small physical scale of AI research, the 'copyability' of software, and the dual use (military and non-military) of components. Looking for sites of clandestine LAWS development will be much harder than trying to detect large nuclear facilities.

The involvement of private companies in AI research means that much of this technology could be widely available, potentially placing advanced military capabilities in the hands of non-state actors and terrorists.

Matters are further complicated by the current lack of an established definition of LAWS, with groups often talking past one another as a result. Autonomy, control, and harm all occur on a spectrum. To some, an autonomous weapon could be as simple as a missile with greater freedom in its targeting system, while to others it is an intelligent, learning entity.

Furthermore, military decisions are made in complicated networks. One would need to differentiate between human control at the stage of development and deployment of an autonomous system, and human control at the stage of the weapon system's operation (that is, when it independently selects and attacks a target). Work is needed to define these terms and issues more clearly.

The approach of the UK should be to support a ban on LAWS and, in addition, to work with the international community to develop a plan for the complicated issues of enforcement and the prevention of LAWS proliferation. For legal, ethical, and military-operational reasons it is vital that human control over weapons systems and the use of force is retained.

POLICY RECOMMENDATIONS

The UK Government should:

1. Support a ban on Lethal Autonomous Weapons Systems (LAWS) and work with international partners to develop a plan for the enforcement of the ban and for the prevention of the proliferation of LAWS.98

98. The term LAWS is generally taken to be synonymous with the term ‘killer robots’, and much debate is focused around this idea. When developing regulation in this area it is also important to recognise the broader functions of autonomy in weapons systems, which extend beyond simply ‘killer robots’.

SECTION 4: SUPERINTELLIGENCE AND THE MORE DISTANT FUTURE

“It’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”99 Stephen Hawking

Introduction

Despite recent increases in the power and flexibility of AI, existing AI systems are still only able to accomplish tasks within limited domains. We are yet to realise artificial general intelligence (AGI), which would be capable of attempting more or less any problem a human can.

It is reasonable to expect, however, that given sufficient advances in computer science and processing power100 it will be possible to achieve AGI at some point in the future.101 Experts, when surveyed, predicted a median timeline of 2040 for the arrival of human-level machine intelligence,102 though within this group opinions varied from it being just around the corner to being impossible.103

From AGI to ASI: the Risks of Superintelligence

Recently a number of high-profile figures (Stephen Hawking, Bill Gates, and Steve Wozniak, among others) as well as a number of leading computer scientists (Murray Shanahan and Stuart Russell) have expressed concern over the risks associated with the development of so-called 'superintelligence'.

If AGI is capable of broadly matching humans across many domains, it is plausible that it will be capable of turning its abilities towards engineering a smarter computer. This could trigger an 'intelligence explosion', resulting in an artificial 'superintelligence' (ASI).104 These ideas have been explored most famously by Nick Bostrom in his book Superintelligence.105

An ASI will likely have a particular task encoded in its programming. Bostrom argues that, whatever the main task, there are a number of sub-goals which will almost always be helpful in achieving it. These sub-goals include resource acquisition, improved intellect, self-preservation, and subduing competition. Bostrom suggests that such sub-goals could prove disastrous to humans (who represent competition, material resources, and a potential threat to continued existence) if such a system were not aligned with human values.

Though this is a simplified account of a complex argument, it serves to highlight that we have reason to think carefully about the unrestricted development of AGI, and the risks it entails.106

99. Griffin, A. (2015, 8 October) Stephen Hawking: Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants. The Independent (retrieved from http://independent.co.uk, accessed 12 October, 2016).
100. Processing power has doubled roughly every two years for several decades (as per Moore's law).
101. These are but two possible areas which might facilitate the development of an AGI.
102. Defined as a system 'that can carry out most human professions at least as well as a typical human'.
103. Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: OUP, p. 19.
104. The dynamics of such an 'explosion', or even its possibility, are contested, and its nature would have profound impacts upon the types of risks it poses and our ability to withstand them. For discussion see Hanson, R. (2013) The Hanson-Yudkowsky AI-Foom Debate. Berkeley, CA: Machine Intelligence Research Institute.
105. Bostrom (2014).
106. For a fuller description see Bostrom (2014).

These long-term risks of AI development have prompted Elon Musk to donate $10 million to research aiming to tackle them, as well as to support the OpenAI initiative, which aims to facilitate the safe development of AI and to ensure its benefits are accessible to all. These risks have also contributed to the launch of the £10 million Leverhulme Centre for the Future of Intelligence, a collaboration between Oxford, Cambridge, Imperial, and the University of California, Berkeley.107 The scale of these efforts remains fairly small compared to the size of the challenge.

We Need to Think about Safety Now

Short-term exigencies make it extremely hard for governments to implement policies whose impact is uncertain and could be felt 20 or 30 years away. However, even if the probability of a dangerous superintelligence emerging is incredibly low, the magnitude of the risk involved justifies its consideration now.

Safety measures may take a significant amount of time to implement, and a failure to consider safety now could lock us into a path of unsafe development. One example is the current lack of transparency in some existing algorithms, which undermines the ability of a human to intervene in and understand AI decision-making.

Much is unknown about the possible dynamics of an intelligence explosion or the development of an AGI. It is unknown, for example, what the necessary components are for developing AGI, making it difficult to predict when or if an AGI might arise. Identifying signposts that signal this would be valuable.108,109

Scenarios are likely to differ if, for instance, only one actor (company or government) manages to develop this technology, as opposed to concurrent breakthroughs in multiple areas. Whether AGI is developed in the military, industrial, or public domain could also have significant implications for its social and political impact. Better mapping of the potential spaces of AGI development would therefore be useful.

As the race to develop increasingly powerful AI intensifies in different sectors, it is vital that this does not lead to a 'race to the bottom' when it comes to AI safety. It is important to instil and institutionalise the need for safe and mindful AI research in all sectors.

We do not advocate attending to these longer-term problems in lieu of present ones. This is not an 'either-or' choice: one can consider the risks associated with superintelligence alongside pressing problems today. Indeed, a number of solutions serve a dual purpose. Encouraging the transparency of AI systems, for instance, facilitates fairness and justice, but also preserves meaningful human control and understanding of these systems, which will better enable us to prevent undesirable events in the further future.

The recent publication by Google researchers and others of a paper on 'concrete problems in AI safety' is a welcome example of seeking to address present challenges with one eye also on longer-term safety.110 In another example, Google DeepMind was reported, in June 2016, to be working with academics at Oxford University to develop a 'kill switch': code that would ensure an AI system could "be repeatedly and safely interrupted by human overseers without [the system] learning how to avoid or manipulate these interventions."111
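The interruptibility idea can be illustrated with a toy sketch. The code below is our own hypothetical construction, not the method from the DeepMind/Oxford work: a simple Q-learning agent on a five-state corridor, where an overseer can halt an episode at any step, and halted steps are simply excluded from the learning update, so interruptions leave the learned behaviour untouched.

```python
import random

class ToyAgent:
    """Tabular Q-learning on a 1-D corridor: states 0..4, reward at state 4."""

    def __init__(self, seed=0):
        self.q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
        self.rng = random.Random(seed)

    def act(self, s, epsilon):
        # Epsilon-greedy action selection; ties break towards +1 (move right).
        if self.rng.random() < epsilon:
            return self.rng.choice((-1, 1))
        return max((1, -1), key=lambda a: self.q[(s, a)])

    def learn(self, s, a, r, s2):
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        target = r + 0.9 * max(self.q[(s2, -1)], self.q[(s2, 1)])
        self.q[(s, a)] += 0.5 * (target - self.q[(s, a)])

def run(agent, interrupts=frozenset(), episodes=500):
    """`interrupts` holds (episode, step) pairs at which the overseer halts
    the episode BEFORE any learning update, so Q-values never reflect it."""
    for ep in range(episodes):
        s = 0
        epsilon = 1.0 if ep < 100 else 0.1  # explore freely early on
        for t in range(30):
            if (ep, t) in interrupts:
                break  # overseer interrupt: no update this step
            a = agent.act(s, epsilon)
            s2 = min(4, max(0, s + a))
            r = 1.0 if s2 == 4 else 0.0
            agent.learn(s, a, r, s2)
            if s2 == 4:
                break  # reward reached; episode ends
            s = s2
    return agent.q
```

Because interrupted steps never enter the update rule, an agent trained with frequent interruptions learns the same kind of "move right" policy as one trained without them: it gains no incentive to avoid or manipulate the overseer. The real research is far subtler than this sketch, but the principle of making the agent indifferent to interruptions is the same.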

107. See University of Cambridge (2015) The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity. University of Cambridge (retrieved from http://cam.ac.uk, accessed 10 October, 2016).
108. Examples of such signposts might include success in mapping and simulating the human brain, or advances in computer processing.
109. In the USA the Executive Office of the President recently recommended that the National Science and Technology Council Subcommittee on Machine Learning and Artificial Intelligence should monitor developments in AI, and report regularly, especially with regard to milestones: Executive Office of the President (2016, October) Preparing for the Future of Artificial Intelligence.
110. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. (2016) Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
111. Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.

Alternative Futures

The future is uncertain and extremely hard to predict. But if current advances in AI research and development, as well as in other areas such as computing power, continue, then we should expect AI to become increasingly powerful and prevalent in our society.

An 'intelligence explosion', or superintelligence, is not a necessary requirement for AI to have a profound impact in the more distant future.112 Both the opportunities and the risks presented by AI today, a number of which are discussed in this paper, are likely to be amplified significantly. And these will be joined by new opportunities and risks yet to be envisioned.

POLICY RECOMMENDATIONS

The UK Government should:

1. Give appropriate attention to long-term issues of AI safety: support research into AI safety and horizon scanning; support the institutionalisation of safe AI research conduct in all sectors including the development of a code of ethics;113 develop standards and guidelines for whistle-blowers; and ensure students and researchers are trained in the ethical implications of their work.

112. In a recent article, Huw Price suggests that our grandchildren may be living in a different era, ‘perhaps more Machinocene than Anthropocene’. Price, H. (2016, 17 October) Now it’s time to prepare for the Machinocene. Aeon (retrieved from https://aeon.co, accessed 17 October, 2016). 113. This could build on the work of IEEE’s Global Initiative for Ethical Considerations in the Design of Autonomous Systems.

CONCLUSION

“We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.”114 President Barack Obama

The economic, political, military, and social history of the last two and a half centuries can in large part be seen as logical ripple effects of the industrial revolution. The intelligence revolution may well have a more profound impact over shorter timeframes. Research by McKinsey describes AI as contributing to a transformation of society "happening ten times faster and at 300 times the scale, or roughly 3,000 times the impact" of the Industrial Revolution.115 This revolution will create huge opportunities for economic growth, scientific development, and social advancement. However, there are also significant risks ahead.

Citizens have a role to play in preparing themselves and their friends, colleagues, and families for the accelerated changes that AI will bring in areas such as employment patterns. At the same time, citizens should demand that businesses and governments take action to maximise the opportunities and minimise the risks of AI development. We also need to find ways of having a much deeper and more informed public debate about AI.

Industry has a critical role to play, especially given the highly technical nature of AI and the challenges this poses to citizens and governments in ensuring that those developing AI act in the best interests of society. The recent launch of the 'Partnership on AI to Benefit People and Society' by Amazon, Google, Facebook, IBM, and Microsoft is very welcome in this regard. The Partnership's stated aims are to create a forum for open discussion around the benefits and challenges of developing cutting-edge AI, to advance public understanding, and to formulate best practices on some of the most important and challenging ethical issues in the field.116

We cannot rely entirely on public engagement and self-regulation from business to guarantee the best outcomes of AI development. The role of government must be central to maximising the opportunities and minimising the risks of AI.

Unfortunately, political horizons have tended to shrink in recent years from election cycles, to media cycles, to Twitter cycles, and there is a danger that some of the challenges of AI do not feel urgent enough. Another obstacle for politicians of all parties is the technical and fast-moving nature of this debate, which makes it difficult for MPs (like everyone else) to understand, and a complicated conversation to have with the electorate. The fact that 'artificial intelligence' has been mentioned only 32 times in the House of Commons since electronic records began is not good enough given the profound economic, political, military, and social impact this technology is set to have.

114. Dadich, S. (2016, November) Barack Obama, Neural Nets, Self-Driving Cars, and the Future of the World. Wired (retrieved from https://wired.com, accessed 18 October, 2016).
115. Dobbs, R., Manyika, J., and Woetzel, J. (2015) The four global forces breaking all the trends. McKinsey Global Institute (retrieved from https://mckinsey.com, accessed 10 October, 2016).
116. See Suleyman, M. (2016) Announcing the Partnership on AI to Benefit People & Society (retrieved from https://deepmind.com, accessed 28 September, 2016).

The UK risks falling behind other major players in making the most of the AI opportunity, especially the US, where the White House recently published a very detailed report on preparing for the future of artificial intelligence.117

We are not powerless in the face of the future: it can be actively shaped. We have the opportunity to create a fundamentally better society, though we also face many risks. The UK Government has a responsibility to ensure AI is developed and used in a way that maximises these benefits and minimises these risks, and it is imperative that it acts swiftly and prudently to do so.

POLICY RECOMMENDATIONS

The UK Government should:

1. Facilitate a House of Commons debate on maximising the opportunities and minimising the risks of AI in the UK.

2. Establish a Standing Commission on AI to examine the social, ethical, and legal implications of recent and potential developments in AI.118

3. Develop mechanisms to enable fast transfers of information and understanding between researchers, industry, and government to facilitate swift and accurate policy-making based on fact.119

4. Launch a prize for the application of AI to tackling today’s major social challenges and delivering public goods.120

117. Executive Office of the President (2016, October) Preparing for the Future of Artificial Intelligence.
118. This was also a recommendation of the recent report of the Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.
119. CSaP (Centre for Science and Policy) in Cambridge is one effective model in this area.
120. The X Prize, or Nesta's Challenge Prizes, could be models in this area.

Future Advocacy is a think tank and consultancy working on some of the greatest challenges faced by humanity in the 21st Century.

www.futureadvocacy.org
@FutureAdvocacy

OCTOBER 2016