Draft for Comment

A Humanist Perspective on AI

This paper attempts to explore some of the practical and ethical challenges that the development of artificial intelligence (AI)1 — or as some prefer, 'augmented intelligence'2 — poses for society in general, and for humanists in particular. I don't claim to be an expert on AI. However, I have a professional background in information management and I have spent much of the last two years looking into the threat posed by 'fake news' and disinformation (which is facilitated by AI). And I'm concerned by some of the ethical dilemmas that are emerging.

1 Setting the Context

To set the context, this first section provides a brief summary of the status of AI and of the benefits that can and are beginning to flow from its deployment — benefits that clearly need to be set against the costs and potential risks, including possible 'unknown unknowns'.

1.1 A Layperson's Overview of AI

Most people spend a significant part of their lives on the Web, especially the young. We share our thoughts and photos on social media, and we shop, bank and get our news and entertainment online; we use Google Maps to navigate; call drivers; order pizzas; book flights; record our fitness... But how many of us really appreciate the extent to which our behaviour and choices are analysed, sold and exploited by powerful corporations using AI? This is what Shoshana Zuboff calls 'surveillance capitalism' — a force she says is "as profoundly undemocratic as it is exploitative."3

An enormous amount has been written about AI in recent years and the diverse range of challenges that its development poses for society and the human race.4 This includes an excellent (free) online course and papers and articles on how to optimise the benefits, counter negative side-effects, and avoid or minimise potential threats. Literally thousands of agencies, institutes, universities and companies are involved in some capacity in this work5 — a report in Dec 2017 estimated that there could be 300,000 AI researchers and practitioners worldwide and a market demand for 'millions'. The number of published papers and patents on AI is growing rapidly.6

Investment in AI and 'smart' technology (see Box 1) is massive, and its 'fruits' are already widely deployed and celebrated. However, we are only at the beginning of the adventure,7 and no one can predict the impact that this technology will be having on our lives in ten, let alone 50, years' time. It is not too late for us to decide what sort of world we would like to see, and what sort of standards and laws should apply, but it is going to be a Herculean task to control AI as more and more organisations — and politicians and criminals — come to realise what it can do.

1.2 The Benefits of AI

In essence, AI amplifies human ingenuity with:
• reasoning — enabling us to learn and form conclusions from imperfect data;
• understanding — enabling us to interpret the meaning of data, including text, voice and images; and
• interaction — engaging with people in natural ways.

This means:
• faster, more accurate, more efficient and more reliable machines;
• the ability to work in challenging environments (e.g. space, the deep ocean, and locations with high radiation levels);
• better/wider communication via smart devices8 and social media;9
• useful services — medical diagnosis; translation; voice, facial & image recognition;10 land management / crop production; security;11 climate change modelling; fraud protection; making investment decisions; health care (not least assisted surgery); delivering goods; virtual reality; robot waiters; booking appointments — the list is endless (the links are intended solely as illustrative); and
• doing away with mind-numbing work (long-distance driving; data entry; number crunching; quality control; etc.).

AI should also release people to do more interesting, creative and socially productive work, but only if mechanisms are there for them to retrain/requalify.

Box 1: The Internet of Things*

Few consumers are aware that many smart devices in their home/office/car are designed to collect and share potentially private data as part of their normal operation, and that in the process they use AI to learn about our behaviour. As more and more products come equipped with cameras, microphones, accelerometers, thermal sensors, biometric analysis and GPS, the consequences for our privacy/security are potentially enormous. Here's an example:

Google and Amazon have secured a range of patents relating to potential future functions of their home assistant products — one is a method for extracting keywords from ambient speech which would then trigger targeted advertising. Amazon has a patent allowing its virtual assistant Alexa to decipher a user's physical characteristics including accent, ethnic origin, emotion, gender, age, even background noise. One can only speculate on how data collected in this way might be applied in a 'hostile' (Home Office-type) environment — or by a criminal, terrorist or autocratic regime...

* The Internet of Things (IoT) is the network of devices such as home appliances, industrial equipment and vehicles that contain electronics, software and actuators, which are connected to the internet and can interact and exchange data.
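To make the Box 1 example more tangible, here is a deliberately simplified, hypothetical sketch of how keyword spotting in an ambient-speech transcript might be mapped to advertising categories. The trigger words, categories and function names are all invented for illustration; this is not Google's or Amazon's actual implementation, merely the general shape of the idea described in the patents.

```python
# Hypothetical sketch only: a toy illustration of the patent idea described in
# Box 1 (keyword spotting in ambient speech used to trigger targeted advertising).
# All names and data here are invented; this is not any vendor's real code.

AD_TRIGGERS = {
    "holiday": "travel_offers",
    "mortgage": "loan_products",
    "crying": "parenting_products",   # inferred household context
    "cough": "pharmacy_offers",       # inferred health context
}

def keywords_heard(transcript: str) -> set[str]:
    """Return trigger words found in a speech-to-text transcript."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return words & AD_TRIGGERS.keys()

def ad_categories(transcript: str) -> list[str]:
    """Map overheard keywords to advertising categories."""
    return sorted(AD_TRIGGERS[w] for w in keywords_heard(transcript))

if __name__ == "__main__":
    overheard = "We really need a holiday, but the mortgage is due this month."
    print(ad_categories(overheard))   # ['loan_products', 'travel_offers']
```

Even this crude version shows why the prospect worries privacy advocates: a single overheard sentence is enough to infer a household's finances and plans.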
1.3 Issues of Concern

Potential Social Impact
• how AI — and organisations controlling AI and big data services — shape and mediate our democracy and the norms and values of society may be far from desirable or acceptable;
• the unregulated micro-targeting of individuals on social media in political and advertising campaigns;12

• AI-empowered surveillance and the use of facial-recognition (FR) software without informed consent;13
• AI-assisted social media that can contribute to misunderstanding, intolerance and polarisation, and for some, heightened social isolation, self-harm and possibly worse;14
• job losses / unemployment and people having to cope with more leisure time;
• unforeseen failure of AI technology leading to financial losses, injuries and/or deaths (see Box 2). There are also questions about who decides who lives and who dies — the classic Trolley Problem.15 Just how do you teach a machine to 'think' ethically?

Box 2: Teething Problems?
• Within 24 hours of the launch of Microsoft's 'Tay' chatbot (in Mar. 2016) the bot started sending offensive comments and had to be withdrawn after people bombarded it with misogynistic, racist and Donald Trumpist tweets.
• In Aug. 2017 two experimental Facebook chatbots ('Bob' & 'Alice'), designed to negotiate with one another, started developing their own coded language that was incomprehensible to researchers.
• In Nov 2017 a problem developed with the Turkish-English version of Google Translate which led it to convert the gender-neutral pronoun 'o' into a 'he' when in the same sentence as 'doctor' or 'hard working', and a 'she' when 'lazy' or 'nurse' appeared.
• In Mar. 2018 a pedestrian was killed by an Uber autonomous vehicle in Arizona, raising concerns not only about the safety of AVs but also their ethical implications.
Many more examples can be found with a simple (AI-assisted) search on the internet.

Profiteering & Criminal Misuse
• tech companies and criminals using AI/big data to cheat the system16 or compromise people's privacy, security and/or financial wellbeing17 — without regulatory constraints, the former are likely to become even more dominant and over-powerful (to the detriment of the overall economy / increased inequality); the risk with the latter (and malign state actors and their proxies) is that they will steal or damage our assets;
• deep-fake scams (altering photos and video footage, mimicking voices, creating ultra-real 'fake news'18) and the risk of "creating a world where nothing we see or hear can be taken on trust, and where 'fake news' becomes the default rather than the outlier";19

Cyber Warfare & Military Application
• external interference in political processes to undermine/denigrate democracy, including subverting elections and encouraging dissent and polarisation — and attempting to cover up 'accidents' or 'mistakes' (the downing of MH17 over Ukraine; the Skripal poisoning, etc.) by circulating multiple fake conspiracy theories to cause maximum confusion and create 'reality apathy';
• autonomous weapon development, which promises to change warfare as we know it;

Generic Concerns
• the potential to de-anonymise data (e.g. by analysing your postcode, or even the way you walk or type);
• the use of 'black box' algorithms, resulting in unfair or biased treatment of individuals/clients, or unforeseen, potentially catastrophic error; and
• super-intelligence (Artificial General Intelligence/The Singularity) and the possibility of an AI takeover.
This last point is highly contentious.20 Tony Brewer (South East London Humanists) points out that "leaving aside any discussion of whether or when such a Singularity might be achieved it is clear that a) all AI development is moving in the direction of the Singularity, b) that if the Singularity is ever achieved it will be the last system that humans will ever develop, since an AI system could always do it quicker and better, and c) AI development will not stop there, subsequent systems will be super-intelligent. The impact of these characteristics on humanity will be profound and we should be considering them right now."21

And regardless of whether super-intelligence is ever developed, we should be aware of the possible implications of quantum computing coming of age, with its potential to crack virtually any password or encryption.22

China

It is perhaps worth noting here that China's growing dominance in the field of AI — and its ability/intent to capture and collect vast amounts of data on its citizens — also raises profound questions about how the country might deploy its knowledge/market dominance to promote its own particular brand of autocratic rule/control.23 Western values such as human rights, tolerance, liberty and reciprocal respect could well be side-lined or ignored in the face of discipline and order (which autocratic regimes sponsor and promote). Tony Brewer argues that "inter-state rivalry, in the form of political influence and economic power, will be greatly affected by developments in AI. Since 'data' is generally accepted to be the fuel powering AI systems those states with the greatest volume and greatest diversity of data will have an advantage. China, with its lack of concern for privacy and human rights, will beat the USA and the EU."

2 Ethical Issues Associated with AI

Isaac Asimov was one of the first writers to explore the ethics of intelligent machines — he published his famous Laws of Robotics in 1942.24 Today AI ethics is receiving a great deal of attention following advances in self-driving cars, autonomous weapons systems, facial recognition technology, and much more besides.25 Two Microsoft executives recently proposed that there should be a 'Hippocratic Oath' for practitioners of AI (Annex 1). Europe is particularly active in the field — a multi-stakeholder forum, AI4People,26 was launched in Feb 2018 by the Atomium European Institute for Science, Media & Democracy; and around the same time the EU set up the European AI Alliance27 and announced its intention to become a world leader in ethical AI. In Dec 2018 a High-Level Expert Group on AI (set up by the Commission) published a draft set of 'Ethics Guidelines for Trustworthy AI'. Meanwhile, in the UK, the Government published a Digital Charter in Jan 2018, and the following April the House of Lords released a major report on AI which inter alia proposed 'five overarching principles for an AI Code'.

And this is a good place to start:

1 Artificial intelligence should be developed for the common good and benefit of humanity.
2 Artificial intelligence should operate on principles of intelligibility and fairness.
3 Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
4 All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
5 The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

This is just one of the latest attempts to encapsulate the diverse range of ethical issues associated with the development of AI — see, for example, the 'Asilomar AI Principles', formulated by the Future of Life Institute in 2017 (reproduced in Annex 2), and also what the World Economic Forum has to say on the topic (encapsulated in its infographic on 'Artificial Intelligence and Robotics'). The WEF has also published a useful set of recommendations for creating an ethics code:
• When personal data is at stake, we pledge to aggregate and anonymize it to the best of our ability, treating consumers' data as we would our own.
• We pledge to enact safeguards at multiple intervals in the process to ensure the machine isn't making harmful decisions.
• We pledge to retrain all employees who have been displaced by AI in a related role.

"As the architects of the future," the author writes, "we have a responsibility to build technologies that will enhance human lives, not hurt them. We have an opportunity now to take a step back and really understand how these product decisions can impact human lives. By doing so we can collectively become stewards of an ethical future."

Troubling Questions

The development of AI raises so many questions. Here's a sample of issues that keep AI experts up at night:
• How do we distribute the wealth created by machines?
• How do machines affect our behaviour and interaction?
• How can we guard against mistakes?
• How do we eliminate AI bias?
• How do we keep AI safe from adversaries?
• How do we protect against unintended consequences?
• How do we stay in control of a complex intelligent system?
• How do we define the humane treatment of AI as systems become more complex and life-like?

Did Google cross an ethical line last year when it demonstrated a device that could chat on the phone with people so naturally that they believed they were speaking to a human operator? And what about the use of facial recognition technology without consent, especially by repressive governments?

A Caveat...

Tony Brewer comments that the problem with much of the above is one of implementation. "Although these attributes would apply, and have applied, to the development of conventional computer systems and to other systems such as nuclear weapons that are similarly developed from the bottom up, I fear they may not, or perhaps cannot, apply to the development of AI systems." Put simply, AI systems are different from conventional systems and operate in ways that may turn out to be incomprehensible even to those who developed them.28 So Tony believes that "our existing ethical rulebook may be only partly relevant, and we'll need a new rulebook. Hopefully, it will overlap to a large extent with our existing one.
But we haven't achieved it yet and we don't yet know how to make it." He also notes that the "pledge to 'retrain all employees who have been displaced by AI in a related role' (whilst a worthy aspiration) may be impossible to achieve. The knowledge, skills and attitudes needed for the jobs being replaced may not match those needed for the new jobs that are being created. A production-line operative being made redundant might have difficulty being retrained as an AI system designer. Also, these aspirations assume that being in work is a good thing. That debate is yet to be had."

3 Faith and AI

A number of religious leaders have spoken of the challenges posed by AI: Archbishop Welby has argued that AI (and gene therapy) could hand the super-rich ever more power; Pope Francis has called for AI "to work for the good of humanity" and for its use to be "properly monitored" — he reiterated this need in a message sent to the Davos Economic Forum in 2018; and former Chief Rabbi, Lord Sacks, has spoken of AI as one of "the most pressing moral issues of our time" in a Radio 4 series on 'Morality In The 21st Century'.29 Bishop Steven Croft — who sits on the House of Lords' Artificial Intelligence Committee — has also proposed a set of 'ten commandments' on AI, and in doing so expressed his concern that "every development in Artificial Intelligence raises new questions about what it means to be human... The tools offered by AI are immensely powerful for shaping ideas and debate in our society. Christians need to be part of that dialogue, aware of what is happening and making a contribution for the sake of the common good."30

In the US there is at least one coalition of faith groups concerned with AI (set up in the Pacific Northwest in early 2018) — AI and Faith is a cross-spectrum consortium of faith communities and academic institutions with the declared mission "to bring the fundamental values of the world's major religions into the emerging debate on the ethical development of Artificial Intelligence and related technologies."

I have not yet found a definitive view on AI by a senior Muslim cleric, but it seems likely that the idea of creating a computer that simulates human intelligence would be considered haram/prohibited. It is interesting to speculate on what a devout Muslim would make of calls for robots to be given rights.31

The development of AI poses some tricky conundrums for faith-based thinkers. Indeed, one commentator has noted that "AI may be the greatest threat to Christian theology since Charles Darwin's On the Origin of Species."32 He goes on to note that if humans were to create free-willed beings with emotions, consciousness, and self-awareness "absolutely every single aspect of traditional theology would be challenged and have to be reinterpreted in some capacity." Other correspondents have observed that "if we learn to digitally encode a human brain, then AI would be a digital version of ourselves. If you create a digital copy, does your digital copy also have a soul?" Then there's the question of whether Christian redemption would/should apply — would AI be expected to glorify God, attend church, sing hymns, care for the poor? Would it (or conceivably 'he' or 'she') pray? Might it sin? There are no easy answers for Christians and people of other faiths who are willing to entertain such thoughts. And quite how the world's main religions would react to the possibility of new life forms being developed in the laboratory (let alone found in space, possibly by AI) is an open question.33

It is perhaps worth mentioning here the case of Anthony Levandowski, the multi-millionaire who once led Google's autonomous car research arm but left to head up a religious non-profit, Way of the Future, to (as Wired put it) "develop and promote the realization of a godhead based on artificial intelligence." Basically, Levandowski wants to "find the consciousness button" that will make machines like humans so these creations can then "take care of the planet in a way we seem not to be able to do so ourselves."

4 Humanism and AI

Humanists don't face these particular challenges, although we do need to be aware of them if we are involved in debates or other educational work. There are, however, a number of other issues that we might well confront, for example:
• the use and abuse of the term 'humanism' when discussing AI;34
• the lack of diversity in AI and computer science research, and the desirability of (genuine) humanist input;
• the possible impact of developments in AI on current humanist campaigns;
• the question of how the big corporations that are using AI might be held to account;
• the challenge for humanists posed by populism, polarisation and 'post-truth' politics (the new 'normal'); and
• how humanists might get involved in the AI debate.

4.1 Terminology

In an article on 'Humanism and AI' Jacobsen has discussed how the term 'humanism' is being used to describe the orientation of giant technological companies in the development of AI.
He cites the following examples: Fei-Fei Li (Chief Scientist for AI research at Google Cloud) penning an op-ed on the topic in The New York Times;35 Apple's Tom Gruber describing Siri as "humanistic AI — artificial intelligence designed to meet human needs by collaborating [with] and augmenting people;" and Microsoft's Chief Executive, Satya Nadella, opining that "human-centered AI can help create a better world." In short, the rhetoric around AI involves the utilization of the terms 'humanism' and 'humanistic', or 'human-centered', to "substantiate the mission of the AI development." In reality AI may turn out to be far from 'humanistic' — we may come to rely more and more on 'AI companions' and experience a gradual erosion of human intimacy as a result.36

The use of such terms presents both a challenge and an opportunity for humanists. The challenge is that this development may (further) confuse people about the nature of humanism37 — Jacobsen asks what if these human and humanistic values, purported to represent all humankind, simply reflect the orientations of the billionaires and technology companies? Could this not tarnish our reputation?38 The opportunity is for humanists to use the platform created by such comment to explain our beliefs.

4.2 Should Humanists Get Involved in the Debate on AI?

Luis Granados argued recently in TheHumanist.com that "AI is too big to leave to the experts, be they venture capitalists, think-tank lawyers, or tech geeks. The burden of critical thinking falls especially on humanists, because relying on scripture for guidance, as some theists do, cannot yield useful results." Jeff Dean (Google Brain39) would presumably support this position: he argues that a lack of diversity in AI and computer science research is "limiting the breadth of experience needed to devise 'humanistic thinking'" and fears that because computer labs only staff computer scientists "a single world view could take shape from this bias." This is something Dean thinks "could impede the development of new ways of thinking." This idea is echoed by Li (op cit.), who says that governments should "make a greater effort to encourage computer science education, especially among young girls, racial minorities and other groups whose perspectives have been underrepresented in AI. And corporations should combine their aggressive investment in intelligent algorithms with ethical AI policies that temper ambition with responsibility." In this respect Finland's open-source course on AI is much to be welcomed.

The European Humanist Federation has recently proposed the creation of a European Observatory for Artificial Intelligence, an EU body that would (in order to properly address already identified and yet unknown concerns related to AI) "accompany user acceptance, boost public debate and implement societal control over the risks."40 Basically, AI is too important to leave to the big tech companies, venture capitalists and private software companies.

4.3 How Might AI & Misinformation Affect Our Campaigns?

One wonders how AI might impact on humanist campaigning on public ethical issues, such as assisted dying, biotechnology and genetic engineering. Given the extraordinary rate of advancement of AI, the questions are likely to start appearing sooner rather than later, and we need to be ready. Indeed, it is in areas like healthcare, where AI stands to benefit us the most, that there is the biggest potential for harm.
Healthcare is "an industry where decisions are not always black and white," where AI is currently "far from being able to make complex diagnoses or replicate the 'gut feelings' of a human." It's not difficult to imagine, at some point in the not too distant future, people talking about robot-assisted dying – a bespoke 'suicide pod' is already on the market. And how about some local authorities in France reportedly using AI to place children in schools — one wonders how faith-based schools in Britain would fare! And what about human rights and human dignity?41

4.4 Holding Big Tech to Account

This discussion raises the question of accountability and human rights, not least in respect of 'black box' algorithms, and especially where they don't employ 'human-in-the-loop' systems.42 Luis Granados (op cit.) points out that the AlphaGo computer "makes decisions in a way that is impossible for humans to trace or understand." Concern about this has led to calls for all applications of AI to be capable of explaining themselves, and for 'black box' AI that produces an untraceable result to be forbidden, if not by law, then at least by social and industry standards of acceptability.43

One possible way around life-changing decisions "happening in the dark" has been proposed by Sandra Wachter, who observes that "to arrive at their decisions, machine-learning algorithms automatically build complex models based on big data sets, so that even the people using them may not be able to explain why or how a particular conclusion is reached." Wachter (who trained as a lawyer) is working to "drag them into the light." She believes that "we should have the legal right to know why algorithms come to specific decisions about us. But there's a clash, as software owners often claim that increasing transparency could risk revealing their intellectual property, or help individuals find loopholes to game the system." Her suggestion is 'counterfactual explanations': statements of how the world would need to be different in order for a different outcome to occur.44 But Wachter goes further and warns that transparency in the decision-making process is "only one side of the coin of accountability... It gives you some idea why a decision has been made a particular way, but it doesn't mean the decision is justified or legitimate." If algorithms are building assumptions about us based on variables we're not even aware of, "it becomes a human rights problem." And we can expect consequences.45 There have for some time been calls for AI developers to be required by law to conduct Life Cycle Human Rights Impact Assessments of their products.46

4.5 The Challenge of 'Post-Truth' and the New 'Normal'

Then there's the way that facts and opinions have become interchangeable in our 'post-truth' world. A key factor in this has been the corruption of social media platforms and their use to tout all manner of rumour, false information, hate speech and conspiracy theories.47 We've also seen the spread of populism and the growth of the Alt Right (and a sharp decline in the acceptance of the rights of others in Europe and the US48); 'reflexive control' – covert foreign interference in domestic politics via disinformation;49 and a haemorrhaging of trust in traditional sources of information and authority.50 And all of the above factors are linked, with AI playing a major part in their evolution, deployment and (often viral) spread.
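To make that mechanism a little more concrete, here is a purely illustrative sketch (invented headlines and scores, not any platform's actual ranking code) of how a feed that orders posts by predicted engagement alone can end up amplifying the most provocative material.

```python
# An illustrative sketch only (invented numbers, no platform's real algorithm):
# ranking posts purely by predicted engagement tends to push the most provocative
# content to the top of a feed, even when it is a small share of what was posted.

POSTS = [
    # (headline, predicted engagement score, provocative?)
    ("Local council publishes budget consultation", 0.04, False),
    ("New footpath opens along the canal", 0.03, False),
    ("Study finds moderate exercise is good for you", 0.05, False),
    ("THEY are lying to you about vaccines!!", 0.35, True),
    ("Shocking footage 'proves' election was rigged", 0.40, True),
]

def rank_by_engagement(posts, top_n=3):
    """Order posts by predicted engagement alone and return the top of the feed."""
    return sorted(posts, key=lambda p: p[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for headline, score, provocative in rank_by_engagement(POSTS):
        flag = "PROVOCATIVE" if provocative else "ordinary"
        print(f"{score:.2f}  {flag:11}  {headline}")
    # Two of the three top slots go to the provocative 40% of posts --
    # the feed is 'optimised', but not for a balanced picture of the day.
```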
Bots, platform algorithms and content curation ensure that social media users often get a distorted/unbalanced view of what is important or popular, and are exposed to more and more extreme / graphic material.51 I think these developments raise particular concerns for humanists, not least how we should make our case in a world where our main weapons of argument, logic and reason, are daily undermined by those who conflate facts and opinions and dismiss evidence, reasoned argument and expert opinion.52 When it's a debate between what's moral/right and what's immoral/wrong, belief invariably triumphs. This sadly is the new 'normal'.53

4.6 What Might Humanists Do?

Tony Brewer argues that we should be asking many more questions about the development of AI, such as: "what sort of human do we want to be, how do we want to live together, and how do we want to work? If humanism has a purpose, and if individual humanists have agency, surely these are just the sorts of questions we humanists should be able to answer."

Here are some suggestions as to what humanist groups might want to consider. They might start by:
• accepting that AI systems are different from conventional systems and operate in ways that may turn out to be incomprehensible even to those who developed them; and
• actively working to keep abreast of developments in the field, and especially the growing body of thought concerning the ethical dilemmas that AI and big data collection raise. The analysis and references cited in this paper should help get people started;

and if groups feel inspired, they might want to go on to:
• question / treat with suspicion claims that a company's particular AI creations are 'humanistic' or 'human-centred' — we can do without the term 'humanism' being sullied or brought into disrepute;
• challenge at every opportunity Big Tech's exploitative business models, which treat users as a means to achieve commercial or political ends;
• actively lobby for developers to be required to conduct Life Cycle Human Rights Impact Assessments of their products before they, and their 'black box' algorithms, are licensed for use; and
• get involved in the growing debate about the vexed question of how to translate general ethical principles into acceptable regulations and standards to ensure that society's norms and values are (and remain) protected.

This last point is important: given the nature of AI, we may well need a new kind of ethical rulebook, one that ensures that systems are a) reliable, safe and inclusive; b) encoded for FAT (fairness, accountability & transparency); and that they c) respect our privacy, security, and basic human rights (a small sketch of what the 'transparency' strand could look like in practice follows at the end of this section). Humanists might also want to look into the practicality of working more closely with groups that are working on AI54 and those promoting critical thinking, media literacy and citizen science, or fighting 'fake news' / disinformation (my particular interest/concern). We could surely make a useful input into the work of these organisations, and their thinking on AI, and we would all doubtless learn a great deal in the process.
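As a small illustration of what the 'transparency' strand of FAT could mean in practice, here is a hedged sketch of Wachter's idea of counterfactual explanations (section 4.4): when a toy scoring model refuses an applicant, search for the smallest change to their details that would have flipped the decision. The model, feature names, weights and threshold below are all invented for illustration; this is not Wachter's actual method, nor any real lender's system.

```python
# A minimal sketch of Wachter-style 'counterfactual explanations': given a model's
# decision, report the smallest change to the inputs that would have produced a
# different outcome. The model, features and thresholds below are invented for
# illustration; this is not Wachter's actual method nor any real lender's system.

from itertools import product

# Toy 'black box': approve a loan if a weighted score clears a threshold.
WEIGHTS = {"income_k": 0.6, "years_employed": 1.5, "existing_debts_k": -0.8}
THRESHOLD = 30.0

def approved(applicant: dict) -> bool:
    score = sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return score >= THRESHOLD

def counterfactual(applicant: dict, steps=range(0, 21)) -> dict:
    """Brute-force search over nearby applications for the smallest change that
    flips a refusal into an approval; return only the fields that changed.
    (For simplicity only increases are searched; a real search would also
    consider decreases and weight the 'cost' of each change more carefully.)"""
    best = None
    for dx in product(steps, repeat=len(WEIGHTS)):
        candidate = {f: applicant[f] + d for f, d in zip(WEIGHTS, dx)}
        if approved(candidate):
            cost = sum(dx)  # crude 'distance' from the original application
            if best is None or cost < best[0]:
                best = (cost, candidate)
    if best is None:
        return {}
    return {f: v for f, v in best[1].items() if v != applicant[f]}

if __name__ == "__main__":
    applicant = {"income_k": 28, "years_employed": 2, "existing_debts_k": 5}
    print(approved(applicant))        # False: the loan is refused
    print(counterfactual(applicant))  # {'years_employed': 12} -- ten more years in work flips it
```

Even this toy version shows the appeal of the idea: the applicant learns something actionable (here, that ten more years in employment would have secured approval) without the model's owner having to publish the model itself.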


And a thought to conclude: is it too much to ask that companies that develop and promote increasingly life-like goods and services for the public might one day be required to see that such products take proper account of basic humanist values such as tolerance, understanding and compassion? This is something our increasingly polarised and fractious world desperately needs.

Concluding Remarks

Few would disagree that, despite its awesome potential, AI poses a very real threat to society/democracy, and in so many ways. We should expect the unexpected, especially if planning, coordination and/or regulation by the industry and government is inadequate or lacking — and/or if criminals, mischief-makers and autocrats have their way. These issues are not going to go away. Indeed, they are more than likely to soon start impinging on local humanist groups' agenda/campaigning. Some observers are predicting that 2019 will be "the year that AI grows up," and I think humanists should be ready.

Mike Flood
26th Feb 2019

I should like to acknowledge and thank those who sent comments on an earlier draft of this paper (Charles Baily, Tony Brewer, Susan Guiver, Elaine Lever & David Williams) and also to recognise the work of the very many authors that I have quoted or cited. Of course, I am solely responsible for any errors of fact or interpretation that may have crept into the analysis. I trust that if you come across anything suspicious or suspect, you'll let me know. Many thanks!

Mike is chair of Milton Keynes Humanists. He has spent virtually his entire professional career working in the NGO sector and is currently working (through Critical Information) to raise awareness of the threat posed by ‘fake news’ and disinformation.

Annex 1: A Hippocratic Oath for Artificial Intelligence Practitioners

In the foreword to Microsoft's recent book, The Future Computed, executives Brad Smith and Harry Shum proposed that AI practitioners highlight their ethical commitments by taking an oath analogous to the Hippocratic Oath sworn by doctors for generations. In the past, much power and responsibility over life and death was concentrated in the hands of doctors. Now, this ethical burden is increasingly shared by the builders of AI software. Future AI advances in medicine, transportation, manufacturing, robotics, simulation, augmented reality, virtual reality and military applications dictate that AI be developed from a higher moral ground today. Here's the proposed oath:

I swear to fulfil, to the best of my ability and judgment, this covenant:

I will respect the hard-won scientific gains of those scientists and engineers in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.

I will apply, for the benefit of humanity, all measures required, avoiding those twin traps of over-optimism and uninformed pessimism.

I will remember that there is an art to AI as well as science, and that human concerns outweigh technological ones.

Most especially must I tread with care in matters of life and death. If it is given me to save a life using AI, all thanks. But it may also be within AI's power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty and the limitations of AI. Above all, I must not play at God nor let my technology do so.

I will respect the privacy of humans, for their personal data are not disclosed to AI systems so that the world may know.

I will consider the impact of my work on fairness, both in perpetuating historical biases, which is caused by the blind extrapolation from past data to future predictions, and in creating new conditions that increase economic or other inequality.

My AI will prevent harm whenever it can, for prevention is preferable to cure.

My AI will seek to collaborate with people for the greater good, rather than usurp the human role and supplant them.

I will remember that I am not encountering dry data, mere zeros and ones, but human beings, whose interactions with my AI software may affect the person's freedom, family, or economic stability. My responsibility includes these related problems.

I will remember that I remain a member of society, with special obligations to all my fellow human beings.

Source: Oren Etzioni in TechCrunch (text slightly edited but emphasis in original).

Annex 2: Asilomar AI Principles

Research Issues

1 Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence. [Ed: There is no suggestion as to who it should be beneficial to!]

2 Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people's resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?

3 Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4 Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5 Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6 Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7 Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8 Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9 Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10 Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
11 Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12 Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilize that data.
13 Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
14 Shared Benefit: AI technologies should benefit and empower as many people as possible.
15 Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16 Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17 Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18 AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19 Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20 Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21 Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22 Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23 Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

References

Here are some of the main references that I've called upon in preparing this paper:
• Artificial Intelligence Committee [16 April 2018]: 'AI in the UK: ready, willing and able?', Report of Session 2017-19, House of Lords Paper 100
• Atherton, Kelsey D [6 Sep 2018]: 'Fully autonomous 'mobile intelligent entities' coming to the battlefields of the future', Defense News
• Bateman, Kayleigh [undated]: 'Google Brain notes that AI 'humanist thinking' can't be achieved without diversity', Wearethecity
• Bradbury, Danny [4 Jul 2017]: 'Why, Robot? Understanding AI ethics', The Register
• Carrier, Ryan [19 Jun 2017]: 'Faith and Artificial Intelligence — Humanity's greatest schism', Medium
• Fry, Hannah [17 Sep 2018]: 'We hold people with power to account. Why not algorithms?'
• Future of Life Institute [2017]: 'Asilomar AI Principles'
• Government of South Korea [2012]: 'South Korean Robot Ethics Charter 2012'
• Government of the United Kingdom [25 Jan 2018]: 'Digital Charter'
• Granados, Luis [19 Apr 2018]: 'Ahead of the Curve: Regulating Artificial Intelligence. Can We Expect AI to Explain Itself?', The Humanist
• Hall, Wendy & Jérôme Pesenti [15 Oct 2017]: 'Growing the Artificial Intelligence Industry in the UK', Department for Digital, Culture, Media & Sport + Department for Business, Energy & Industrial Strategy
• High-Level Expert Group on Artificial Intelligence [18 Dec 2018]: 'Draft Ethics Guidelines for Trustworthy AI', European Commission
• Hodges, Susy [30 Jan 2018]: 'Artificial Intelligence: A Brave New World?', Vatican News
• Jacobsen, Scott Douglas [20 Jun 2018]: 'Humanism & AI', Medium

• Katwala, Amit [11 Dec 2018]: 'How to make algorithms fair when you don't know what they're doing', Wired
• Latonero, Mark [Oct 2018]: 'Governing Artificial Intelligence: upholding human rights & dignity', Data & Society
• Merritt, Jonathan [3 Feb 2017]: 'Is AI a Threat to Christianity? Are you there, God? It's I, robot', The Atlantic
• Polyakova, Alina & Boyer, Spencer P [Mar 2018]: 'The Future of Political Warfare: Russia, the West, and the Coming Age of Global Digital Competition', Brookings
• Syal, Dharmesh [17 Jan 2019]: 'It's time to stop talking about ethics in AI and start doing it', World Economic Forum
• Tashea, Jason [17 Apr 2017]: 'Courts Are Using AI to Sentence Criminals. That Must Stop Now', Wired
• Tørresen, Jim [2014]: 'Future Perspectives on Artificial Intelligence', University of Oslo
• World Economic Forum [undated]: 'Artificial Intelligence and Robotics'
• Zuboff, Shoshana [31 Jan 2019]: 'The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power', published by Profile [summarised here, and beautifully reviewed by James Bridle]

End Notes

1 Artificial intelligence is "the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision. AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple's Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention." The High-Level Expert Group on Artificial Intelligence is more specific: AI "refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)."

2 Augmented intelligence is an alternative conceptualization of AI that "focuses on AI's assistive role, emphasizing the fact that it is designed to enhance human intelligence rather than replace it. The choice of the word augmented, which means 'to improve', reinforces the role human intelligence plays when using machine learning and deep learning algorithms to discover relationships and solve problems. Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the term augmented intelligence, which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them."

3 Here's a poignant quote from a recent article by Zuboff: "In these new supply chains we may find signs of the people you share our life with, your tears, the clench of his jaw in anger, the secrets your children share with their dolls, our breakfast conversations and sleep habits, the decibel levels in my living room, the thinning treads on her running shoes, your hesitation as you survey the sweaters laid out in the shop and the exclamation marks that follow a Facebook post, once composed in innocence and hope. Nothing is exempt, from 'smart' vodka bottles to internet-enabled rectal thermometers, as products and services from every sector join the competition for surveillance revenues. These are siphoned from your daily life in ways that are designed to keep you ignorant.
In the US, breathing machines used by people who suffer from sleep apnoea secretly funnel data to the beleaguered sleeper's health insurer, often to enable the company to refuse payment. Some cell phone apps record your location as often as every two seconds for sale to third parties... Once we were the subjects of our lives, now we are its objects... Just about everyone connected to the internet is crying out for an alternative path to the digital future, one that will fulfil our needs without compromising our privacy, usurping our decisions and diminishing our autonomy... The fight for a human future belongs to all of us."

4 See, for example, the Future of Life's Global AI Policy Resource and the World Economic Forum. See also NESTA, which has recently started logging global approaches to AI mapping.

5 In the UK we have: the Society for the Study of Artificial Intelligence & Simulation of Behaviour (1964), "the oldest AI society in the world"; the Alan Turing Institute (2015); the Internet of Things Security Foundation (2015); the Leverhulme Centre for the Future of Intelligence (2015); the Ada Lovelace Institute (2018); the London Cyber Innovation Centre (2018); the Centre for Data Ethics & Innovation (2018); Oxford University's Future of Humanity Institute; UCL's Department for Computing; and many more. According to a new report London is home to some 758 AI companies, and these specialise in more than 30 industries with particular strengths in insurance, finance and law. Indeed, 80 per cent of the top 50 AI companies in the UK are London-based. [For a short history of AI in the UK see this Government report, p18.]

6 Artificial Intelligence Index 2018 provides graphs on the growth in papers and patents on AI from Europe, the US and China.

7 Google CEO Sundar Pichai made a bold claim recently when he called AI "one of the most important things that humanity is working on. It's more profound than, I don't know, electricity or fire."

8 You have your smartphone on you at all times; it can learn about your interests and behaviour; and thanks to sensors, it knows what's happening with you physically and even mentally. AI allows your phone to track, interpret and respond to your patterns of behaviour. It can learn your route to and from work and warn you of an accident or disruption. It can even monitor your health and detect a heart attack. "This is the stuff of nightmares," writes Zuboff, "it's made worse by the apparent lack of empathy that now characterises the younger generation, which may come from children spending so much time on their phones that they can no longer react normally to other people (eg someone filming a person lying bleeding on the pavement without making any effort to help)."

9 An example: LinkedIn uses AI; when the platform's 562 million users log in to their accounts, they are served up unique recommendations for jobs and connections (powered by AI); and recruiters who use LinkedIn are presented with a list of ideal candidates, filtered by machine learning.

10 According to a major review of AI: "Facebook, Google, Amazon, Apple, Microsoft and Baidu all use AI to develop their principal services, using the rich, continuous data streams from user interactions continually to train AIs to improve performance in face recognition,
language interactions (Siri, Alexa, Cortana etc), and customer service. Cisco, Samsung and Huawei are all using AI to develop their core products."

11 Here is one of the more unusual (but disturbing) examples of AI use in security: the US government is backing efforts that use machine learning to detect whether a DNA sequence encodes part of a dangerous pathogen. Biologists the world over routinely pay companies to synthesize snippets of DNA for use in the laboratory or clinic. But scientists and intelligence experts worry that bioterrorists could hijack such services to build dangerous viruses and toxins — perhaps by making small changes in a genetic sequence to evade security screening without changing the DNA's function. Anyone who wants a specific piece of DNA can have the string of letters, called bases, synthesized for pennies per base. In 2006, reporters at The Guardian newspaper paid a DNA-synthesis company to make part of the smallpox virus, prompting calls from governments and scientists for stricter screening measures.

12 Over 40% of the world's population (i.e. 3.4 billion people) are 'active social media users' (including an estimated 40 million people in the UK), and over 50% are 'internet users'.

13 In the real world, FR is "often combined with other biometrics such as fingerprints or gait analysis... Privacy advocates are calling for a ban on [FR's] use by law enforcement. It is becoming more and more difficult for the average person to know when this technology is being used to track them, by whom and for what purposes. And while FR may be error-prone now, this is unlikely to stay the case for long. How comfortable the public is with the use of FR in public spaces is likely to vary depending on the context. Many people may be fine with police using FR for crowd control at major events, for example, but not with the same technology being used to track them around the supermarket aisles in an attempt to up-sell them on potatoes... The technology is developing particularly rapidly thanks to enormous resources being thrown at it by both governments and private corporations. They're not spending that money for the sake of scientific endeavour – they're investing in the technology because they plan to use it." In January a large coalition of human rights and civil liberties groups in the US wrote to Google's CEO, Sundar Pichai, asking him to commit to not selling facial recognition products to governments, reminding him of an earlier statement of his that the tech industry has a responsibility to think about the consequences of its technology, it "just can't build it and then fix it." The letter also mentioned Google's recently published AI principles and its pledge that "the company will only seek to develop AI that is socially beneficial, free from unfair bias, and tested for safety."

14 Echo-chamber effects can include confirmation bias, people thinking their views are more widely held than they are, and 'the illusory truth effect'.

15 Take self-driving cars: "A child runs into the road and there's no time to stop, but the software could choose to swerve and hit an elderly person, say. What should the car do, and who gets to make that decision?" Classic counter-arguments include: "the self-driving car wouldn't be speeding in a school zone, so it's less likely to occur. Utilitarians might argue that the number of deaths worldwide would shrink overall by eliminating distracted, drunk or tired drivers, which means society wins, even if one person loses.
You might point out that a human would have killed one of the people in the scenario too, so why are we even having this conversation?" But as this article points out, "Decisions in advance suggest ethical intent and incur others' judgement, whereas acting on the spot doesn't in the same way... The programming of a car with ethical intentions knowing what the risk could be means that the public could be less willing to view things as accidents." Or in other words, "as long as you were driving responsibly it's considered ok for you to say 'that person just jumped out at me' and be excused for whomever you hit, but AI algorithms don't have that luxury."

16 It is well known that Volkswagen figured out how to cheat on its emissions testing. What's less widely known is that the company used AI to do it (and may have caused some 1,200 premature deaths in Europe). Example cited by Luis Granados.

17 Many users recognise that social media is not 'free' and involves risks (identity theft, data leakage, fake apps and malicious links, and risks associated with social sharing, etc.). Significantly fewer will appreciate the many ways in which advertisers can invade people's privacy, for example by data scraping, using apps that leak personal data, or using cookies to facilitate online social tracking. Individuals can reduce their exposure to risk (with strong passwords, two-factor authentication, care with privacy settings, etc.), but social media platforms also have a responsibility to make their Terms & Conditions more explicit and should be required to spell out more clearly what they do with our data. At the very least they should ensure default settings are conservative so that people are not deceived or misled. We also need to consider the active role of platforms in promoting content, and establish minimum standards for their doing so.

18 We are not talking here about airbrushing a politician's photo before an election to make him/her look more attractive, nor even lip-syncing software like Face2Face (which can transpose the facial expressions of an actor onto a target video in real time); the latest technology uses deep learning and facial mapping to produce videos that appear so genuine that it's hard to spot the fakes. It seems inevitable that such videos ('deepfakes') will at some point be used to interfere in elections / threaten national security. There's more about this on the Fighting Fake website.

19 House of Lords Report, para 262.

20 A number of major personalities, such as Stephen Hawking, Bill Gates and Elon Musk, have argued that super-intelligence is a real possibility and a serious threat — Gates has said he doesn't understand people who are not troubled by the possibility that AI could grow too strong for people to control; Musk has compared building artificial intelligence to "summoning the demon"; whilst others, like Jeff Dean (the head of Google Brain), consider it a "completely made up fear." What are we ordinary mortals to make of it?

21 Would a strong AI system possess consciousness? Would it have feelings? Should it have rights, including the right to vote? What if someone were to create surrogate copies; would they too be conscious?... These issues are raised here, along with questions about how we might form emotional relationships with robots, not least with children and the elderly. Might an AI robot train children to be ideal customers for its products? And for an old person cared for (befriended?)
by an AI robot, would it be against their human rights not to be able to interact with other human beings?

22 It is suggested that quantum technologies will be highly disruptive, especially in the field of cybersecurity. "Once large-scale quantum computers become available (which at the current rate could take another ten to 15 years), they could be used to access pretty much every secret on the internet. Online banking, private emails, passwords and secure chats would all be opened up. You would be able to impersonate any person or web page online."

23 A recent report on China points out that "since Party general secretary Xi Jinping came to power in 2012, the situation has changed [from a policy of 'reform and opening to the outside world' and 'peaceful development']. Under his leadership, China has significantly expanded the more assertive set of policies initiated by his predecessor Hu Jintao. These policies not only seek to redefine China's place in the world as a global player, but they also have put forward the notion of a 'China option' that is claimed to be a more efficient developmental model than liberal democracy." A recent article (in Wired) noted that in 2018 "Chinese President Xi Jinping experienced his own Sputnik moment. This time it wasn't caused by a rocket lifting off into the stratosphere, but a game of Go – won by an AI. For Xi, the defeat of the Korean Lee Sedol by DeepMind's AlphaGo made it clear that artificial intelligence would define the 21st century as the space race had defined the 20th. The event carried an extra symbolism for the Chinese leader. Go, an ancient Chinese game, had been mastered by an AI belonging to an Anglo-American company. As a recent Oxford University report confirmed, despite China's many technological advances, in this new cyberspace race, the West had the lead. Xi knew he had to act. Within twelve months he revealed his plan to make China a
science and technology superpower. By 2030 the country would lead the world in AI, with a sector worth $150 billion. How? By teaching a generation of young Chinese to be the best computer scientists in the world.” 24 Asimov’s Three Laws of Robotics: 1) A robot may not injure a human or, through inaction, allow a human to be injured. 2) A robot must obey the orders given it by a human, except where such orders would conflict with the first law. 3) A robot must protect its own existence as long as such protection does not conflict with the first or second laws. He later introduced a 'zeroth law': 0) A robot may not injure humanity, or, by inaction, allow humanity to come to harm. Some more recent ethical codes argue that robots should be friendly and kind (like good humanists), and that they should be difficult to subvert or abuse by malicious actors (ditto!). 25 AI is one of a number of emerging technologies in the Fourth Industrial Revolution (4IR); others include robotics, nanotechnology, quantum computing, biotechnology, the Internet of Things, Blockchain, fifth-generation wireless technologies (5G), and additive manufacturing/3D printing. Note that several of these technologies are also likely to raise major ethical issues (not least quantum computing, as already mentioned). [For the record: the 1st Industrial Revolution was mechanisation; the 2nd, mass production; and the 3rd, computing & automation.] 26 In Nov 2018 AI4People produced an ‘Ethical Framework for a Good AI Society’, which was presented to the European Commission and the European Parliament. 27 The European AI Alliance is “a forum engaged in a broad and open discussion of all aspects of Artificial Intelligence development and its impacts.” It notes that “Given the scale of the challenge associated with AI, the full mobilisation of a diverse set of participants, including businesses, consumer organisations, trade unions, and other representatives of civil society bodies is essential. The European AI Alliance will form a broad multi-stakeholder platform which will complement and support the work of the AI High Level Expert Group in particular in preparing draft AI ethics guidelines, and ensuring competitiveness of the European Region in the burgeoning field of Artificial Intelligence.” 28 We saw this with AlphaGo Zero, the successor to AlphaGo (the first computer program to defeat a world champion at the ancient Chinese game of Go). AlphaGo Zero taught itself to play purely through self-play and came up with novel winning strategies that could not be explained. This paper provides more details and comments that “AlphaZero’s strength and originality truly surprised us. Chess is full of superhuman expert systems, yet AlphaZero discovered an uncharted space in which its self-taught insights were both startling and valuable. That uncharted space was so significant that AlphaZero was able to convincingly defeat the strongest expert system at the time of testing.” 29 You can find a useful list of references on AI and religion at ReligionLink. 30 Five of Croft’s ‘commandments’ are similar to the overarching principles proposed by the House of Lords. His others include: AI being used to reduce inequality of wealth, health, and opportunity; and not being used for criminal intent, nor to subvert the values of our democracy, nor truth, nor courtesy in public discourse. He says the primary purpose of AI should be to enhance and augment, rather than replace, human labour and creativity. 
It should never be developed or deployed separately from consideration of the ethical consequences of its applications; and governments should ensure that the best research and application of AI is directed toward the most urgent problems facing humanity. 31 The Robot Ethics Charter (proposed by South Korea in 2007) argues that robots should have “the right to exist without fear of injury or death” and “the right to live an existence free from systematic abuse.” It is difficult to imagine such rights being programmed into a robot sponsored by a theocracy like Saudi Arabia or Iran... And what about the question of whether the robot would only obey men’s wishes or be subject to Sharia Law? Clearly room for speculation here... 32 See for example: Carrier and Merrit (in References Section). 33 Some years ago (2010) the pioneering geneticist Dr Craig Venter and his team claimed to have made a completely new ‘synthetic’ life form from a mix of chemicals. They manufactured a new chromosome from artificial DNA in a test tube, then transferred it into an empty cell and watched it multiply – the very definition of being alive... (the work is described in this article). 34 Here, for the record, is Humanists UK’s definition of ‘humanism’: “Humanists are people who shape their own lives in the here and now, because we believe it's the only life we have. We make sense of the world through logic, reason, and evidence, and always seek to treat those around us with warmth, understanding, and respect.” 35 Li notes that when she was a graduate student in computer science in the early 2000s, “computers were barely able to detect sharp edges in photographs, let alone recognize something as loosely defined as a human face. But thanks to the growth of big data, advances in algorithms like neural networks and an abundance of powerful computer hardware, something momentous has occurred: AI has gone from an academic niche to the leading differentiator in a wide range of industries, including manufacturing, health care, transportation and retail... Despite its name, there is nothing ‘artificial’ about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.” 36 We shouldn’t forget that there are also people looking for an android partner who/that is “protective, loving, trusting, truthful, persevering, respectful, uncomplaining [and] pleasant to talk to...”, and also that there is a very real possibility of abuse — Matt Simon writes: “As AI gets smarter and smarter, it will be easier to trick people — especially children and the elderly — into thinking the relationship is reciprocal... And how does the system keep bad actors from exploiting these bonds to, say, use these robot companions to squeeze money out of the elderly?... Imagine an unscrupulous toy maker inventing a doll so sophisticated that it appears animate... and having [it] tell [a] kid to buy a personality upgrade for $50.” One couldn’t rule out the possibility that a robot could be programmed to persuade a vulnerable elderly person that he or she was ‘a nuisance’ / ’in the way’ and that they should consider killing themselves. 
37 There is even a consultancy called Humanism which appears to “combine the worlds of AI and blockchain.” It says it applies “the scientific method to investment management,” and notes ambiguously that it is “a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence (rationalism, empiricism) over acceptance of dogma or superstition.” 38 I haven’t considered the use of the term ‘transhumanism’, which was originally explored and promoted by some very eminent humanists, notably JBS Haldane among others, and has clear connections to AI. Transhumanism is the transformation of the human condition by developing and deploying technologies to greatly enhance human intellect and functioning. This could involve the use of brain-computer interfaces. The pursuit of this philosophy is highly controversial and not without its critics. Indeed, the most serious limitations to what humans will ultimately achieve are more likely to be social and cultural rather than technical. 39 Google's Brain Team focuses on ‘deep learning’, part of a broader family of machine learning methods, which can be supervised, semi-supervised or unsupervised. The Team’s mission is to "make machines intelligent and improve people's lives." 40 The EHF argues that such an EU Observatory would benefit society as a whole. Citizens would “become real actors of the development of AI and would be able to provide systematic societal feedback while being guaranteed efficient redress in case they are harmed”; developers would “benefit from society’s feedback and identify new threats or risks, find solutions that truly boost societal 
acceptance and improve their services as a whole”; and policy makers would “be able to better identify when and where government intervention is needed and select the best policy responses.” 41 Respect for human dignity is a pillar of the EU Treaties and Charter, along with freedoms, equality and solidarity, citizens’ rights and justice. This chimes with humanists’ human-centric approach to addressing social and ethical problems, and it requires that individuals be “given enough information to make an educated decision as to whether or not they will develop, use, or invest in an AI system at experimental or commercial stages” — I’m quoting here from the Commission’s Draft Ethics Guidelines. Individuals should be “free to make choices about their own lives, be it about their physical, emotional or mental wellbeing.” That said, we need to recognise that the interactions between humans and AI may require us to refine the very concept of human dignity (see e.g. Latonero). 42 Human-in-the-loop (where machines perform the work and humans assist only when there is uncertainty) allows the user to change the outcome of an event or process, or to acquire knowledge about how a new process may affect a particular event. An example of where direct human involvement would seem essential is in assessing people: US courts are already using algorithms (bought from private companies) to determine a defendant's ‘risk’, and some are using AI to sentence criminals. 43 In a new book, mathematician Hannah Fry argues that “We can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.” And she suggests that “a good place to start would be with Tony Benn’s five simple questions, designed for powerful people, but equally applicable to modern AI: ‘What power have you got?’ ‘Where did you get it from?’ ‘In whose interests do you use it?’ ‘To whom are you accountable?’ and ‘How do we get rid of you?’” 44 In essence, “counterfactual explanations give answers about why a decision was made without revealing the guts of the algorithm.” This approach finds the ‘sweet spot’ between giving meaningful information and protecting intellectual property rights and trade secrets (an illustrative sketch of the idea appears at the end of these notes). 45 Note that this is likely to be a major issue in China when its truly Orwellian social credit score system kicks in in 2020 — surely one of AI's most poisonous fruits. Here’s one chilling example: As the bullet train from Beijing to Shanghai “sets off from each station, an announcement plays in both Chinese and stilted English: ‘Dear passengers, people who travel without a ticket or behave disorderly, or smoke in public areas, will be punished according to regulations and the behaviour will be recorded in the individual credit information system... If you’re caught travelling without a ticket or smoking on the train, you’ll be put on a blacklist. You may even find yourself banned from the railways... In principle, this might seem like a good idea. But what gives the system a sinister edge is the government’s stated intention. By 2020, it wants to join up the railway blacklist with similar blacklists held by other government departments, municipalities and even private sector businesses. These records will then form part of a national ‘social credit’ system... 
In one region, the phone system was configured so that anyone calling someone on a debtor blacklist was warned that they were contacting an untrustworthy individual.” 46 The case for HR Impact Assessments is made here. 47 In a nutshell, ‘fake news’, ‘alternative facts’, lies, deception and dogma taught as ‘fact’ confuse and mislead the public and contribute to mistrust in government and mainstream organisations. They damage individuals and businesses, and destroy reputations, sometimes lives; and they incite suspicion, fear and anger, which undermines social cohesion and ultimately democracy as we know it. There is more about this on the Fighting Fake website that I set up in early 2017 to help raise public awareness of the issue (see also the related Facebook Page). My concern is that when people cannot tell fact from fiction, and politicians and mainstream institutions are no longer trusted, we risk descent into chaos. Over the last couple of years I have attended a fair number of conferences and talks on keeping the internet open, and on disinformation and digital forensics, and I have been struck by the general feeling amongst those in the know that we are losing the battle. Indeed, many prominent individuals have also spoken out publicly about their fears, not least Sir Tim Berners-Lee (inventor of the World Wide Web), Jaron Lanier (a founding father of VR), and Stephen Hawking and Elon Musk, to name but a few. 48 ‘Acceptance of the rights of others’ is one of the five pillars of the Global Peace Index [the others are: ‘High Levels of Human Capital’, ‘Free Flow of Information’, ‘Low Levels of Corruption’ & ‘Sound Business Environment’]. Between 2005 and 2013 this particular indicator was positive; however, its fortunes reversed and it dropped significantly post 2013, notably in Europe and the US [see Report pp. 65–66]. 49 In 2017 EUvsDisinfo was debunking on average three fake stories a day thought to originate in Russia’s Internet Research Agency or one of the Kremlin’s proxies. Joseph Nye has observed that “in the information age it’s not just whose army wins but whose story wins,” and right now Russia tells the best stories… 50 Public trust has been monitored for years by Edelman. The paradox — highlighted in Edelman’s 2018 Trust Barometer — is that the more democratic the country, the less trusted the media. In authoritarian countries like China, Russia and Iran, trust in the media appears to be going up! 51 Bots are automated accounts that can spread misinformation as well as amplify and distort regular conversations taking place online; platform algorithms tend to prioritise negative stories (which are shared more frequently than positive ones); and content curation involves platforms selecting and sharing material deemed to be of interest to individual users. (Here’s an example concerning anti-vaxxers.) It doesn't really matter if a toxic YouTube video is only seen by a dozen people: it matters if people watching relatively innocuous content are driven towards it by YouTube’s recommendation system, or if it is promoted by the platform’s algorithms and seen by millions. 52 I am not for a moment suggesting that we abandon logic and reason, but we do need to find another way of getting our message across, one that appeals to people’s emotions and identifies and addresses their concerns and fears (legitimate or otherwise). 
For example, I wonder if we can learn anything from the work on ‘rational irrationality’, a concept developed to explain how detrimental policies come to be implemented in a democracy (the concept has been applied to belief and is explained here). 53 Peter Pomerantsev’s splendid Analysis, ‘The War for Normal’, presents a bleak picture of our ‘speeded up’ world. As he says: “We live in a world where everyone is trying to manipulate everyone else, where social media has opened up the floodgates for a mayhem of influence. And the one thing all the new propagandists have in common is the idea that to really get to someone you have to not just spin or nudge or persuade them, but transform the way they think about the world, the language and concepts they have to make sense of things.” Pomerantsev (author of an acclaimed book on the media in Putin's Russia) “examines where this strategy began, how it is being exploited, the people caught in the middle, and the researchers trying to combat it. Because it is no longer just at the ‘fringes’ where this is happening – it is now a part of mainstream political life.” 54 Many institutes around the world have adopted ‘a positive approach to life’. Here are a couple of examples from the States: the Future of Life Institute, which aims to “catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges”; and the Center for Humane Technology (Time Well Spent Movement), which aims to “reverse the digital attention crisis and realign technology with humanity's best interests.” It would be useful to put together a list of UK organisations with similar objectives.
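
To make note 44 a little more concrete, here is a minimal, purely illustrative sketch (written in Python, and not drawn from any of the systems or guidelines discussed above) of what a ‘counterfactual explanation’ looks like in practice: the applicant is told what would have had to change for the decision to flip, without the scoring model itself being revealed. The decision rule, thresholds and figures are invented for the purpose of the example.

    # Illustrative only: a toy 'counterfactual explanation' in the spirit of note 44.
    # The scoring rule, threshold and figures below are hypothetical.

    def approve(income: float, debt: float) -> bool:
        """Stand-in for an opaque decision model whose internals stay hidden."""
        return 0.6 * income - 0.9 * debt >= 30_000

    def explain_refusal(income: float, debt: float, step: float = 500.0) -> str:
        """Report the smallest income increase that would have flipped the decision."""
        if approve(income, debt):
            return "Application approved."
        extra = step
        while not approve(income + extra, debt) and extra < 1_000_000:
            extra += step
        return (f"Application refused. It would have been approved "
                f"if your income were at least £{extra:,.0f} higher.")

    print(explain_refusal(income=28_000, debt=5_000))
    # Prints: Application refused. It would have been approved if your income were at least £29,500 higher.

The point is not the toy arithmetic but the shape of the answer: the applicant learns which factor mattered and what would need to change, while the model’s internal weights remain private, which is the ‘sweet spot’ between transparency and trade secrecy that note 44 describes.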