
DELPHI – INTERDISCIPLINARY REVIEW OF EMERGING TECHNOLOGIES
2019 | VOLUME 2 | NUMBER 2

CYBERWARFARE
Cyberwar and Mediation Theory
Nolen Gertz, Peter-Paul Verbeek and David M. Douglas

AI AND DEMOCRACY
The Impact of Disinformation, Social Bots and Political Targeting
Maja Brkan

DIGITAL PLATFORMS
Why Europe Needs Public Funding for Platform Development
Paul Nemitz and Arndt Kwiatkowski


Delphi 2|2019 Contents

Contents

Editorial
63   Cees Zweistra

Articles
66   Artificial Intelligence and Democracy: The Impact of Disinformation, Social Bots and Political Targeting
     Maja Brkan
72   Cyberwar and Mediation Theory
     Nolen Gertz, Peter-Paul Verbeek and David M. Douglas
79   Steps to Designing AI-Empowered Nanotechnology: A Value Sensitive Design Approach
     Steven Umbrello

Outlook
84   What Is the Business Value of Ethical Tech?
     Contributions by Steffen Augsberg, Siri Beerends, Ida Rust, Paul Nemitz, Nicholas Borsotto and Vuyiswa M'Cwabeni

Report
87   Understanding 'AI Made in Germany': A Report on the German Startup Landscape
     Jessica Schmeiss and Nicolas Friederici

Opinion
95   Why Europe Needs Public Funding for Platform Development
     Paul Nemitz and Arndt Kwiatkowski

Startup Digest
99   How Ethical Debates Can Enrich Data Science and Artificial Intelligence: Interviews with Elena Poughia and Zara Nanu
     Anna Laesser

Book Reviews
105  Towards a Code of Ethics for Artificial Intelligence by Paula Boddington
     Laurens Naudts
107  The Bioethics of Enhancement: Transhumanism, Disability, and Biopolitics by Melinda C Hall
     Caio Weber Abramo

Miscellaneous
III  Imprint
III  Masthead

Imprint

Publisher
Lexxion Verlagsgesellschaft mbH
Güntzelstraße 63 · 10717 Berlin · Germany
Phone: +49 30/81 45 06-0 · Fax: +49 30/81 45 06-22
www.lexxion.eu
For further information please contact [email protected]

Contributions are welcome and should be submitted according to the Delphi author guidelines. Any such contribution is accepted on the understanding that the author is responsible for the opinions expressed in it. More information available at delphi.lexxion.eu.

Typeset
Automatic typesetting by metiTEC-software, me-ti GmbH, Berlin

Photocopying
All rights reserved. No part of this journal may be reproduced in any form whatsoever, e.g. by photo print, microfilm, or any other means, without prior permission of the publisher. This journal was carefully produced in all its parts. Nevertheless, authors, editors and publisher do not warrant the information contained therein to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.

Ownership and shareholdings pursuant to Section 7 lit. a No. 2 and Section 2 No. 6 of the Berlin Press Act: Shareholder of Lexxion Verlagsgesellschaft mbH is Dr. Wolfgang Andreae, Publisher, Berlin.

Delphi annual subscription* rates 2019 (4 issues):

Regular
- printed version only: 417,00 €
- printed version + online edition (incl. archive)**: 471,00 €
- online edition only (incl. archive)**: 410,00 €

Startups
- printed version only: 165,00 €
- printed version + online edition (incl. archive)**: 198,00 €
- online edition only (incl. archive)**: 158,00 €

Students
- printed version + online edition (incl. archive)***: 98,00 €
- online edition only (incl. archive)***: 72,00 €

* Prices include postage and handling. EU Member States: VAT will be added if applicable.
** Multi-user online access via IP authentication (IP range).
*** Single-user online access via user name and password.

This journal may be cited as [2019] Delphi.

Delphi is supplied under our terms of sale and supply. Copies of our terms and conditions are available upon request. Lexxion Verlagsgesellschaft mbH. VAT Reg. No. DE 209981990.

ISSN (Print) 2626-3734 · ISSN (Online) 2626-3742

Editor-in-Chief
Woodrow Barfield, Professor Emeritus, Chapel Hill, USA

Associate Editors
Francesca Bosco, World Economic Forum, Switzerland
Florian Krausbeck, ambrite AG, Switzerland
Anna Laesser, Impact Hub Berlin, Germany
Matthias Lamping, Max Planck Institute for Innovation and Competition, Germany
Vince Madai, Charité Berlin, Germany
Ida Rust, University of Twente, the Netherlands
Yueh-Hsuan Weng, Tohoku University, Japan

Executive Editors
Clara Hausin, Lexxion Publisher, Germany, [email protected]
Jakob McKernan, Lexxion Publisher, Germany, [email protected]

Editorial Board
Steffen Augsberg, Justus-Liebig-Universität Gießen, German Ethics Council, Germany
Ciano Aydin, University of Twente, the Netherlands
SENS Foundation, USA
William Echikson, Centre for European Studies, Belgium
Paul Nemitz, European Commission, Belgium
Vuyiswa M'Cwabeni, SAP SE, Germany
Nishant Shah, ArtEZ University of the Arts, the Netherlands
Stefan Lorenz Sorgner, John Cabot University, Italy
Rob van den Hoven van Genderen, VU University Amsterdam, the Netherlands
Anna Zeiter, eBay Inc., Switzerland
Cees Zweistra, Delft University of Technology, the Netherlands

Delphi 2|2019 Editorial 63

Editorial

In 2018, Delphi was launched with the objective of encouraging and fostering interdisciplinary debate and discussion on technological progress. The focus of Delphi is specifically dedicated to stimulating exchange around the potential gains, challenges and questions provoked by emerging technologies, such as Artificial Intelligence (AI) and autonomous weapons.

The previous issue saw a variety of constructive-critical perspectives regarding emerging technologies, such as automated weapons, applications of AI in music therapy and online social networking sites. This issue welcomes interdisciplinary contributions from the fields of law and the humanities, as well as from industry representatives and policymakers. Moreover, this issue covers perspectives on a variety of technologies and related questions. From the potential dangers of cyber warfare technology to the potential gains of AI, this issue covers the full range of Delphi's central objective.

This issue also welcomes a new section: Outlook. In the Outlook section, experts from wide-ranging technology-related fields are asked to express their views regarding a central theme connected to emerging technology. For this issue we asked our contributors to examine the business value of ethical tech. The concept is as hard to define as it is interesting to investigate, which makes interdisciplinary analysis particularly fruitful.

Providing an ethicist's viewpoint, Steffen Augsberg argues that ethical technology can be seen as technology which is developed with an anticipation of ethical concerns. Examples include the growing awareness of the impact of new technologies and concerns about our increasingly data-driven environment, and the connected debates around privacy. Clearly, as he argues, there is a business model for manufacturers who incorporate such values into their technologies or develop technologies that tackle those issues. As Ida Rust puts it, it is well imaginable that products will in future carry a tag that says 'ethically approved'.

This view is also reflected in Paul Nemitz's analysis of Apple's success in the stock markets. According to Nemitz, Apple has gained a competitive advantage by heeding the ethical concerns of its customers and users in its design process. But ethical technology cannot be reduced to products which anticipate societal attitudes towards the big ethical questions of the day. As Siri Beerends remarks, it may also be useful to ask whether or not technologies, such as advanced AIs, can themselves be ethical in the way humans are. That is, could AI systems engage in the complex and multi-layered ethical situations we find ourselves in? And perhaps more crucially, could they do so in a 'human' way?

A part of the answer could be found in the Startup Digest of this volume, which features two businesses explicitly aiming to make a positive impact on society through the application of emerging technologies. Aiming to promote the 'art of making data useful', Dataconomy advocates for a better understanding of data science and looks to enhance 'data literacy'. Meanwhile, Gapsquare's goal is to eradicate unfair payment

DOI: 10.21552/delphi/2019/2/3

structures by utilising AI, and in that sense it aligns with Augsberg's definition of ethical tech. This is because it identifies payment structures in business which are unequal because they are based on, for example, gender or race. In short, it is a technology that anticipates developments in ethical attitudes. However, it is also an ethical technology in the sense that it does engender biases itself. It is not a neutral technology but reflects certain ethical dispositions and engages in ethical questions. For example, it needs to decide what equal payment is and what it is not. Gapsquare argues that it has found a way of tackling potential issues by constantly reviewing its systems. As such, and as Siri Beerends suggests, it is ultimately up to humans to make decisions on these crucial questions.

The debate regarding AI and the ethical questions it provokes continues in the Review section. What AI is, what ethical questions it provokes and how experts should deal with them is the theme for discussion in Paula Boddington's book, Towards a Code of Ethics for Artificial Intelligence. As the reviewer shows, Boddington has delivered an important contribution to the rapidly growing field of AI ethics. However, it remains somewhat unclear who the addressee of the book is. Nevertheless, as Naudts shows, Boddington's book does succeed in opening up new perspectives in a field that is becoming more and more concerned with ethical questions.

The second book review, by Caio Weber Abramo, explores yet another important domain in emerging technology: bioethics. Abramo has delivered a review of Melinda Hall's book on bioethics and enhancement, The Bioethics of Enhancement: Transhumanism, Disability, and Biopolitics. Hall's book itself is a critical assessment of transhumanist discourse and the transhumanist's focus on overcoming human embodied vulnerability.
According to Hall, we should not focus on enhancing our embodied being, but rather look for political and social ways to improve our current condition. Critical as Hall's contribution is, Abramo feels that she does not go far enough. In reading Abramo's review we find an interesting and thought-provoking argument for this position, a position that also makes clear why Hall's book is an important book to read if we seek to understand the debates in the field of bioethics.

In the Report section we return to the subject of AI. Specifically, Jessica Schmeiss and Nicolas Friederici have investigated what is needed to drive the development of AI in Germany and critically examined the German government's 'AI made in Germany' agenda, which aims to make Germany a world leader in the development of AI. What makes the contribution particularly relevant is its concreteness. The study found that there are three different areas in which Germany could become a leader in the development of AI. Firstly, startups could contribute to developing AI-based technology for other firms, which created the demand for it in the first place. Secondly, startups could develop AI technology for automating tasks that are currently performed manually. Finally, AI startups could function as solution providers. That is to say, they could provide entirely new solutions and create new products and demands. According to Schmeiss and Friederici, this field holds the most potential and should therefore receive the lion's share of available funding.

Paul Nemitz's and Arndt Kwiatkowski's article is also about assessing the future possibilities of new and emerging technologies from a practical perspective. Their article investigates whether it is feasible that large and new platforms, comparable to

WhatsApp and Facebook, could be established in Europe. They have focused on the areas of health, public administration and education, and made some concrete suggestions on how to optimise circumstances so that the establishment of platforms in these areas could be achieved. Again, this article contains some concrete suggestions in that direction and is for that matter a nice starting point for policymakers in this field.

By contrast, the article by Nolen Gertz, Peter-Paul Verbeek and David Douglas is perhaps the most philosophical one in this issue. In this thought-provoking piece, the authors use insights from Technical Mediation Theory (TMT) to show that cyber technologies do not merely produce effects in cyberspace. What happens in cyberspace does not stay there; no dualism between a 'real' and a 'virtual' world exists. Cyber warfare technologies have a profound impact on our 'real lives' as well. Once that has been shown, it becomes possible to draw an analogy between cyber warfare technologies and biological weapons technologies. The latter have been banned by the Biological Weapons Convention (BWC) as weapons of mass destruction with uncontrollable effects. But since cyber technologies belong, like biological weapons, to our everyday life, their use and further development should be banned altogether. The article is notable for the careful analysis that supports this conclusion.

Maja Brkan has offered a perspective from the field of law, focusing on a highly topical subject: the impact of AI on democracy. Her article offers an assessment of the detrimental impact of social bots (algorithmically driven social media accounts) and targeted advertisements on online social networks. These technologies could have a significant impact because of their scale and the lack of some point of (human) verification. This goes also for the emerging phenomenon of 'deep fakes', the manipulation of material which makes the content seem real or expressed by an actual person. Although it is not always clear what the effect of deep fakes and other online technologies could be, Brkan tends to conclude that these forms of AI are 'silently taking over democracy'. In her article we find an important and convincing account of the need to include some form of transparency in the AIs we use in democratic processes.

The second issue of 2019 is Delphi at its best in that it offers a wide-ranging forum in which philosophical, legal and business-inspired perspectives meet in order to reflect on the profound and manifold ways in which technologies are changing our society and modes of being.

Cees Zweistra
Associate Editor
Delft University of Technology

Artificial Intelligence and Democracy: The Impact of Disinformation, Social Bots and Political Targeting

Maja Brkan*

Free elections and democracy in Europe and globally can be detrimentally affected by a malicious use of new technologies, in particular artificial intelligence (AI). AI can be used as a tool to produce and spread disinformation or facilitate psychographic micro-targeting of voters in the run-up to elections. At the same time, AI can effectively counter such uses of technology. This article discusses the ways in which freedom of elections and democracy can be impacted through the deployment of AI.

I. Introduction

European and global democracies are under a severe threat due to the extensive spread of disinformation through social and traditional media. The use of automated accounts and bots, psychographic micro-targeting, and deepfakes to proliferate fake news during elections is making the problem even more alarming. In addition, freedom of elections in the EU and the European democracy can be detrimentally affected by the use of artificial intelligence (AI) in other ways. For example, automated social bots can be (mis)used to promote political candidates and convince the voters to vote for this candidate even if they do not spread disinformation, in particular if coupled with micro-targeting.

The increasing use of artificially intelligent tools can seriously threaten public values of democracy, rule of law, freedom of elections and prevention of manipulation of voters. Nevertheless, it is also crucial to balance these values against freedom of expression, media freedom and media pluralism. EU institutions, governments, media outlets and civil society are deploying regulatory mechanisms to ensure the balance between these public values. For example, numerous policy documents have been adopted on the level of the EU to counter disinformation1 and in response to political advertising.2 This article aims to discuss the ways in which the use of AI can impact democracy and freedom of elections. More specifically, it discusses the impact of the use of social bots, psychographic micro-targeting, disinformation and voting advice applications.

II. The Use of Artificial Intelligence to Impact Democracy

1. Social Bots

Social bots are automated or semi-automated social media accounts, primarily controlled by algorithms and programmed in a way to have the ability to interact with human social media users.3 They can automatically generate and spread content, without revealing their non-human identity.4 One of the most important features of bots is that they can achieve scalability,5 which enables them to massively spread information and hence to (artificially) enhance the importance of a particular idea or popularity of a political candidate. Social bots used for political propaganda are sometimes termed also 'political bots',6 which strive to generate 'likes' and attract followers on social media.7 These bots can also identify keywords in public posts or conversations and then populate the content with their own posts (if they act as spam bots) or conversations (if they act as chat bots).8

Consequently, social bots can impact public political opinion by promoting or discrediting political candidates. Simultaneous use of numerous bots at the same time may also convey the impression that the information is coming from highly diverse sources, and is hence more reliable. Nevertheless, social bots can spread truthful information as well as disinformation alike, depending on the (potentially manipulative) goal for which they are being used. Finally, social bots can be efficient in mobilising citizens for so-called 'astroturf' campaigns.9 Astroturf campaigns give the impression of being grassroots campaigns run by non-profit organisations or citizens, whereas in reality their driving force are businesses or politicians who do not reveal their identity.10

The use of social bots in election campaigns raised a considerable degree of controversy primarily during the 2016 US presidential elections.11 However, elections and referenda in Europe do not seem to be immune to the impact of bots either. For example, (dis)information was spread with the help of bots in Germany during debates on the UN migration pact12 and during the 2017 German elections,13 in Sweden during the 2018 elections14 and in France during the 2017 presidential elections.15 Moreover, the Brexit referendum campaign is a salient example of the use of social bots,16 where the latter were allegedly even used to massively sign a petition for a second Brexit referendum.17 Bots were reportedly spreading information also in the run-up to the Catalan independence referendum,18 within the framework of a debate on immigration in Italy19 and in the run-up to the European elections.20

From a technical perspective, the creation of social bots on social networks is becoming increasingly easy. On Twitter, for example, the creation of social bots is greatly facilitated through the open application programming interface (API)21 and it has been found that bots represent up to a quarter of all Twitter accounts.22 Facebook is equally populated with a considerable amount of fake accounts23 and bots on its Messenger service.24 Increasingly, the creation of bots does not require specific in-depth programming skills, as it is facilitated by online services such as Somiibo.

DOI: 10.21552/delphi/2019/2/4
* Associate Professor of EU Law, Faculty of Law, Maastricht University, The Netherlands.
1 For EU efforts to counter disinformation, see for example Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Tackling online disinformation: a European approach, COM(2018) 236 final; Joint Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Action Plan against Disinformation, JOIN(2018) 36 final; Commission Recommendation of 12.9.2018 on election cooperation networks, online transparency, protection against cybersecurity incidents and fighting disinformation campaigns in the context of elections to the European Parliament, C(2018) 5949 final.
2 European Parliament resolution of 25 October 2018 on the use of Facebook users' data by Cambridge Analytica and the impact on data protection (2018/2855(RSP)), points 7-8.
3 Kai-Cheng Yang et al, 'Arming the public with artificial intelligence to counter social bots' (2019) Human Behavior and Emerging Technologies 48; Naja Bentzen, 'Computational propaganda techniques' (2018) European Parliamentary Research Service, accessed 5 April 2019; Philip N Howard, Samuel Woolley and Ryan Calo, 'Algorithms, Bots, and Political Communication in the US 2016 Election: The Challenge of Automated Political Communication for Election Law and Administration' (2018) 15 Journal of Information Technology & Politics 2, 81-93.
4 See in this sense Kai-Cheng Yang et al (n 3) 48.
5 ibid.
6 See for example Robert Gorwa, Douglas Guilbeault, 'Unpacking the Social Media Bot: A Typology to Guide Research and Policy' (2018) Policy & Internet 8; Philip N Howard, Samuel Woolley and Ryan Calo (n 3) 85-87; Alessandro Bessi, Emilio Ferrara, 'Social Bots Distort the 2016 U.S. Presidential Election Online Discussion' (2016) 21 First Monday 11, 1-14.
7 Christian Grimme, Mike Preuss, Lena Adam, Heike Trautmann, 'Social Bots: Human-Like by Means of Human Control?' (2017), accessed 9 April 2019.
8 ibid.
9 Philip N Howard, Samuel Woolley, Ryan Calo (n 3) 86.
10 More on this notion from the perspective of international legal processes, see Melissa J Durkee, 'Astroturf Activism' (2017) 69 Stanford Law Review, 201-268.
11 Samuel C Woolley, Douglas R Guilbeault, 'Computational Propaganda in the United States of America' in Samuel Woolley and Philip N Howard (eds), Computational Propaganda Research Project: Working Paper No 2017.5, 1-28.
12 'Germany mulls crackdown on social media bots', DW, 16 December 2018, accessed 5 April 2019.
13 Brachten et al estimate that the impact of social bots on German elections was minimal; see Florian Brachten et al, 'Strategies and Influence of Social Bots in a 2017 German State Election – A Case Study on Twitter' (Australasian Conference on Information Systems, Hobart, 2017), accessed 5 April 2019.
14 For an analysis, see Johan Fernquist, Lisa Kaati, Nazar Akrami, Katie Cohen, Ralph Schroeder, 'Bots and the Swedish Election: A Study of Automated Accounts on Twitter' FOI Memo 6466, FSS Marknadsarbete Digitala lägesbilder valet 2018, September 2018, accessed 5 April 2019.
15 Emilio Ferrara, 'Disinformation and social bot operations in the run up to the 2017 French presidential election' (2017) 22 First Monday 8, accessed 9 April 2019.
16 Marco T Bastos, Dan Mercea, 'The Brexit Botnet and User-Generated Hyperpartisan News' (2017) 37 Social Science Computer Review 1, 38-54.
17 BBC, 'EU Referendum Petition Hijacked by Bots' (BBC, 27 June 2016), accessed 5 April 2019.
18 Mark Scott, Diego Torres, 'Catalan referendum stokes fears of Russian influence' Politico (29 September 2017), accessed 5 April 2019.
19 David Alandete and Daniel Verdú, 'How Russian Networks Worked to Boost the Far Right in Italy' EL PAÍS in English (1 March 2018).
20 Emmi Bevensee, Alexander Reid Ross and Sabrina Nardin, 'We built an Algorithm to Track Bots During the European Elections – What We Found Should Scare You' Independent (22 May 2019).
21 Robert Gorwa, Douglas Guilbeault, 'Unpacking the Social Media Bot: A Typology to Guide Research and Policy' (2018) Policy & Internet, 7.

2. Psychographic Micro-Targeting

Democracy and freedom of elections can be significantly impacted also through micro-targeting of voters with political advertisements,25 particularly if coupled with the spread of disinformation. In the social media environment, profiling for the purposes of advertising is usually based on the combination of objective criteria, such as gender, age, marital status or place of residence, and subjective criteria, such as personal interests and personal history. While micro-targeting has traditionally been used for commercial advertising, nowadays it is increasingly deployed for political advertising during election campaigns.

In particular, during the 2016 US presidential elections, micro-targeting using psychographic criteria – also termed psychographic profiling – was widely used to convince the voters to cast their vote for the Republican candidate.26 In other words, political advertisements that targeted individual voters on social media appealed to their personality type. The profiles were created by the data science company Cambridge Analytica, mainly on the basis of a personality quiz launched on Facebook that enabled to classify the users as having an open, conscientious, extravert, agreeable or neurotic personality type.27 In addition, this data was coupled with a massive amount of other Facebook data about users, who were, in turn, micro-targeted with tailored political advertisements.28 This data operation allegedly helped Donald Trump win the US elections in 2016.29 A similar data science manipulation apparently also contributed to the win of the 'leave' vote during the Brexit referendum.30

3. AI as a Tool to Create and Spread Disinformation

During the past years, democratic political systems in Europe and globally have been significantly endangered by the spreading of disinformation (fake news), particularly during the run-up to elections. The use of AI can significantly exacerbate these threats in three regards.

First, as already analysed above, the spread of disinformation through automated accounts and bots, coupled with psychographic micro-targeting, does not only reach an incomparably greater number of voters, but also appeals to their sensitivities, fears and psychological characteristics.

Secondly, while automated journalism may greatly facilitate reporting, it is also highly important that the content generated by automated means is regularly verified and that accountability for such journalism is ultimately attributed to a human.31 However, unless there is a malicious intent behind such

22 See, with reference to other literature cited therein, Tobias R Keller, Ulrike Klinger, 'Social Bots in Election Campaigns: Theoretical, Empirical, and Methodological Implications' (2018) 36 Political Communication 1, 176.
23 Jack Nicas, 'Does Facebook Really Know How Many Fake Accounts It Has?' New York Times (30 January 2019).
24 Khari Johnson, 'Facebook Messenger passes 300,000 Bots', accessed 9 April 2019.
25 Rubinstein calls this 'political direct marketing'; see Ira S Rubinstein, 'Voter Privacy in the Age of Big Data' (2014) Wis L Rev 861, 882.
26 On the 2016 US presidential elections, see for example Karl Manheim and Lyric Kaplan, 'Artificial Intelligence: Risks to Privacy and Democracy' 21 Yale J L & Tech 106, 137-145. However, micro-targeting of voters based on data collection was used in the US already well before these elections; see Chris Evans, 'It's the Autonomy, Stupid: Political Data-Mining and Voter Privacy in the Information Age' (2012) 13 Minn J L Sci & Tech 867, 884, 886.
27 Karl Manheim and Lyric Kaplan, 'Artificial Intelligence: Risks to Privacy and Democracy' 21 Yale J L & Tech 106, 139.
28 ibid 139-140.
29 ibid 140.
30 See for example Jamie Stanley, 'Meet Cambridge Analytica: The Big Data Company Responsible for Trump & Brexit' (NOTA UK, 2 February 2017), accessed 31 May 2019; for a proposal of Cambridge Analytica in this regard, see Cambridge Analytica and SCL Group, 'Leave.EU: Psychographic Targeting for Britain' (2015), accessed 31 May 2019.
31 It is currently disputed who should bear the accountability for automated journalism; see for example Matteo Monti, 'Automated Journalism and Freedom of Information: Ethical and Juridical Problems Related to AI in the Press Field' (2018) Opinio Juris in Comparatione 1, 8-9.

automated content, false information would be at best created incidentally and have limited capacity to harm freedom of elections. Nevertheless, the potential of automated journalism to create large-scale disinformation should not be neglected32 and should be included in the policy debates on disinformation.

Third, an even more significant threat to democracy could be posed by the creation of political deepfakes.33 Deepfake technology enables the creation of video or audio material that appears to be real, but is actually fake. It can take the form of interposing parts of the video with other content (such as swapping the face) or manipulating the video so that it seems that the person on the video is saying something she is actually not.

There is currently a disagreement as to whether deepfakes effectively represent a threat to democracy: some see them as a potentially severe source for political manipulation,34 whereas others consider them to be a 'false alarm' whose threat 'hasn't materialised'.35 Nonetheless, the mere potential for political manipulation is a sufficient source for concern and for an early regulatory response. Numerous examples have been put forward to depict these threats, which can be, in the framework of elections, broadly categorised in two categories: videos aimed to harm political opponents and those seeking to enhance the candidates' political popularity. The first category could include videos depicting politicians involved in corruption or another controversial or criminal activity and uttering statements with inappropriate or offensive content.36 The second category could include fake videos of politicians attending high-level international meetings they never attended, shaking hands with prominent world leaders or offering support to vulnerable societal groups, such as the homeless, sick or otherwise affected. The recent doctoring of videos of Nancy Pelosi demonstrates that deepfakes could represent a serious threat for democracy and freedom of elections.37

However deepfakes are used, they have the capacity to lead to manipulation of elections where timing is of essential importance; if such a video is released shortly before elections, it can severely damage a candidate's political reputation or even sway election results.38 These potentially harmful effects can be exacerbated by difficulties to effectively detect and debunk these quasi-realistic videos that give the audience an appearance of truth.

32 Compare Matteo Monti, 'Automated Journalism and Freedom of Information: Ethical and Juridical Problems Related to AI in the Press Field' (2018) Opinio Juris in Comparatione 1, 20-21.
33 For an in-depth analysis of deepfakes see for example Robert Chesney and Danielle Keats Citron, 'Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security' (2018) 107 California Law Review (2019, Forthcoming); U of Texas Law, Public Law Research Paper No 692; U of Maryland Legal Studies Research Paper No 2018-21, accessed 22 May 2019, 1-58.
34 '… ahead of deepfake videos before the 2020 US election' (CNN Business, 26 April 2019).
35 Russell Brandom, 'Deepfake Propaganda is Not a Real Problem' (The Verge, 5 March 2019), accessed 21 May 2019.
36 For examples, see Holly Kathleen Hall, 'Deepfake Videos: When Seeing Isn't Believing' (2018) 27 Cath U J L & Tech 51, 52; Robert Chesney and Danielle Keats Citron (n 33) 7.
37 '… "deepfake" tech' (CBS News, 25 May 2019; updated 26 May 2019), accessed 31 May 2019.
38 Robert Chesney and Danielle Keats Citron (n 33) 22; Kelly Truesdale, 'Can You Believe Your Eyes? Deepfakes and the Rise of AI-Generated Media' (2018) Georgetown Law Technology Review, accessed 22 May 2019.

4. Algorithmic Voting Advice Applications

The aim of voting advice applications is to help users take a decision on which political party corresponds best to their political opinions. This is particularly important in a multi-party system with numerous smaller or mid-size political parties whose political agendas do not differ considerably, yet which aspire for their voice to be heard. Voting advice applications have become widespread in the run-up to numerous elections, both on the national as well as on the European level. For example, in the run-up to the 2019 European elections, different voting advice applications were offered to the European citizens, not only on the national level, but also through Europe-wide recommender systems such as euandi201939 or EUvox40.

In the Netherlands, for example, voting recommender systems are widely used to help voters choose their preferred party; different websites are available that offer a specific EU version of the recommender system for European elections, such as StemWijzer41, mijnstem42, Kieskompas43, or the MVO Kieswijzer44. According to statistical data, around 10% of the Dutch population used the StemWijzer recommender system before the 2019 European elections; for the national elections, this number was much higher.45 In Slovenia, similar voter recommender systems were used before the 2019 European elections, such as the ones from the newspapers Večer46 and Delo47. In Poland, a similar recommender system, named 'Latarnik wyborczy' (meaning Election lighthouse), was used for previous elections.48

The most pressing question with regard to voting advice applications is whether the recommender algorithms do not suffer from an engrained bias that would lead to the favouring of a particular political party. Such potential presence of bias would depend also on whether the organisation that sets up such an application is an independent body or a body potentially affiliated with a certain political party. Unfortunately, for many of the abovementioned applications, this information is not available. Moreover, as with many other algorithmic decisions, transparency regarding how exactly the matching between the users' answers and those provided for by political parties works is not available. Further work therefore needs to be done to enhance the transparency of these recommender systems, to avoid potential bias.49

Moreover, the voting advice applications collect sensitive data about people's political preferences and political opinions. Even though many of the applications seemingly function on an anonymous basis, which would render the General Data Protection Regulation (GDPR) inapplicable,50 they sometimes nevertheless collect personal data that allows for the identification of users. For example, the Dutch voting advice application Kieskompas requires data about the year of birth, province, postal code and even (optionally) the e-mail address of users, which is clearly information that enables the identification of the user.51 The possibility to identify users would lead to the application of the GDPR and the requirement of processing such data on one of the legitimate grounds for processing from Article 9(2) GDPR, with the most common ground being explicit consent (Article 9(2)(a)). However, the websites offering voting advice applications that collect personal data in principle do not require the users to give their explicit consent.

However, the impact that the voting advice applications have on voters' electoral choices seems to be limited. Research suggests that the voters were mostly impacted in their electoral choice when the suggested party already coincided with the party they were already considering voting for, but little impact has

39 See accessed 23 May 2019
40 See accessed 23 May 2019
41 The version for European elections in 2019 was available on accessed 23 May 2019
42 For European elections in 2019, see accessed 23 May 2019
43 For European elections in 2019, see accessed 23 May 2019
44 See accessed 23 May 2019
45 According to the statistical data, around 1.7 million Dutch voters used StemWijzer for the 2019 European elections, which accounts for about 10% of the entire population in the Netherlands; for the national elections in 2017, this figure was much higher, 6.8 million; see ‘Bijna 1,7 miljoen gebruikers voor StemWijzer’ (ProDemos, 23 May 2019) accessed 23 May 2019
46 See accessed 23 May 2019
47 See accessed 23 May 2019
48 Krzysztof Dyczkowski and Anna Stachowiak, ‘A Recommender System with Uncertainty on the Example of Political Elections’ in Salvatore Greco et al (eds), Advances in Computational Intelligence: Information Processing and Management of Uncertainty in Knowledge-Based Systems (Springer 2012) 441, 442
49 On the question of bias, compare Clifton van der Linden and Jack Vowles, ‘(De)coding elections: the implications of Voting Advice Applications’ (2017) 27 Journal of Elections, Public Opinion and Parties 2, 3-4
50 See Recital 26 of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1
51 See accessed 23 May 2019
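The matching step that the text describes as opaque can be illustrated with a minimal sketch. The following is a hypothetical example, not the algorithm of StemWijzer, Kieskompas, euandi2019 or any other application named above; it assumes one commonly described approach, in which users and parties answer the same policy statements on a five-point scale and parties are ranked by the smallest city-block (Manhattan) distance to the user’s answers.

```python
# Hypothetical voting-advice matcher. Answers are coded 1 (fully
# disagree) to 5 (fully agree) for the same list of statements.
# Real applications differ in scale, statement weighting, and metric.

def match_scores(user, parties):
    """Return parties ranked by agreement with the user's answers.

    Agreement is 1 minus the normalised city-block distance, so 1.0
    means identical answers and 0.0 means maximal disagreement on
    every statement.
    """
    max_dist = 4 * len(user)  # per statement, answers differ by at most 4
    scores = {}
    for party, answers in parties.items():
        dist = sum(abs(u - p) for u, p in zip(user, answers))
        scores[party] = 1 - dist / max_dist
    # Highest agreement first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

user = [5, 1, 3, 4]                  # one voter, four statements
parties = {
    "Party A": [4, 2, 3, 5],         # close to the user's answers
    "Party B": [1, 5, 2, 1],         # far from the user's answers
}
for party, score in match_scores(user, parties):
    print(party, round(score, 2))
```

Even publishing this much — the coding of answers, the distance metric, and any per-statement weights — would address the transparency gap identified above, since a weighting of statements chosen behind closed doors can silently favour one party.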

been noticed when the system suggested that the user vote for a party she did not previously consider.52 This suggests that voting advice applications might not have a detrimental effect on democracy and the freedom of elections.

III. Conclusion

The use of artificial intelligence can threaten and protect democracy at the same time. As demonstrated above, numerous applications of technologies deploying AI can be detrimental to democracy. Due to this broad array of impacts, it seems appropriate to concur with Polonski who somewhat controversially, albeit truthfully, opined that AI ‘silently took over democracy’.53 In response to this ‘takeover’, the EU has already taken numerous legal and policy measures to protect democratic processes, especially through the protection of voters’ data. The European Parliament for example called for more transparency with regard to political advertising,54 and the rules regarding the funding of European political parties have been amended with the aim to prevent misuse of data to impact the outcome of elections on the EU level.55 It seems that the rules on data protection are therefore no longer serving only the protection of private individuals, but are also safeguarding public values, including democracy. Finally, it is perhaps important to note that AI itself can also play a significant role in protecting democracy and democratic values, such as freedom of expression and the fairness of elections. The detection of controversial uses of AI technologies is sometimes possible only with AI itself. Typical examples are the automated detection of deepfakes, which cannot be recognised with the naked eye, and the establishment of the existence of bots on social media. It is important to recognise that any technology is in and of itself neutral and that democracy and other public values are impacted through the human use of this technology and its purpose, as determined by humans.

52 See for example R Michael Alvarez, Ines Levin, Peter Mair and Alexander Trechsel, ‘Party Preferences in the Digital Age: The Impact of Voting Advice Applications’ (2014) 20 Party Politics; […], ‘Do Voters Follow the Recommendations of Voter Advice Application Websites? A Study of the Effects of Kieskompas.nl on its Users’ Vote Choices in the 2010 Dutch Legislative Elections’ (2014) 20 Party Politics 416, 426. Compare Jan Kleinnijenhuis, Jasper van de Pol, Anita MJ van Hoof and André PM Krouwel, ‘Genuine Effects of Vote Advice Applications on Party Choice: Filtering out Factors that Affect Both the Advice Obtained and the Vote’ (2019) 25 Party Politics 291, 299-300
53 Vyacheslav Polonski, ‘How Artificial Intelligence Silently Took Over Democracy’ (World Economic Forum, 9 August 2017) accessed 17 May 2019
54 European Parliament resolution of 25 October 2018 on the use of Facebook users’ data by Cambridge Analytica and the impact on data protection (2018/2855(RSP)), points 5, 8
55 See Article 10a(1) of Regulation (EU, Euratom) No 1141/2014 of the European Parliament and of the Council of 22 October 2014 on the statute and funding of European political parties and European political foundations [2014] OJ L 317/1

Cyberwar and Mediation Theory Nolen Gertz, Peter-Paul Verbeek and David M. Douglas*

Cyberwar (military operations conducted via computer networks) is often downplayed compared to traditional military operations because such operations are largely invisible to outside observers, difficult to convincingly attribute to a particular source, and rarely cause physical damage or obvious harm. We use mediation theory to argue that cyberwar operations cause harm by undermining trust in computerised devices and networks and by disrupting the transparency of our usage of information technologies in our daily lives. Cyberwar operations militarise and weaponise the civilian space of the Internet by co-opting and targeting civilian infrastructure and property. These operations (and the possibility of such operations occurring) fundamentally change users’ Internet experience by fostering fear and paranoia about otherwise unnoticed and transparent aspects of their lives, similarly to how biological and chemical weapons create fear and paranoia about breathing, eating, and physical exposure to the world. We argue that the phenomenological aspects of cyberwar operations offer a compelling justification for prohibiting cyberwar in the same manner in which biological and chemical weapons are prohibited.

I. What the Cyberwar Debate Reveals about our Views of Cyberspace

In response to the confusion surrounding the militarisation of cyberspace, Taddeo1 has tried to bring together the traditional just war theory (JWT) and the revolutionary information ethics (IE) to create a new framework for understanding cyberwarfare. This new framework, which Taddeo has named ‘just information warfare’ (JIW), is an attempt to merge the principles for adjudicating warfare found in JWT with the ‘ontocentric’ ethics of IE. Whereas the principles of JWT such as last resort, proportionality, right intention, legitimate authority, and discrimination are meant to be used to determine whether a war is or is not justified,2 JWT is a traditional and anthropocentric theory. For this reason, trying to simply apply JWT to cyberwarfare leads, as Taddeo argues, to confusion over how to apply traditional thinking about war to a non-traditional battlefield like cyberspace.

IE is neither traditional nor anthropocentric, as it is based on an ontology of information, an ontology that sees reality as an ‘infosphere’ populated, not by human beings and non-human beings, but by ‘informational beings’.3 By moving from a dualistic to a monistic ontology, Taddeo is able to fill in the gaps that appear when trying to apply traditional theories of warfare to cyberwarfare, particularly the gap of how to ethically account for the damage and destruction of cyber-based targets. This is achieved by focusing not on protecting human life as a casus belli, but rather on protecting the infosphere:

DOI: 10.21552/delphi/2019/2/5
* Nolen Gertz, Assistant Professor of Applied Philosophy, University of Twente. For correspondence: […]. Peter-Paul Verbeek, Professor of Philosophy of Technology, University of Twente. For correspondence: […]. David M Douglas. For correspondence: […].
1 Mariarosaria Taddeo, ‘Just Information Warfare’ (2014) Topoi 35, 213-224
2 These and other principles that can be found in JWT have a lengthy history of debate behind them that stretches from the Peloponnesian War to today. The principles were arrived at by philosophers and theologians who were concerned with trying to help political and military leaders determine when and how it is ‘just’ to fight a war. JWT can therefore be seen as an attempt to avoid what theorists in this tradition see as the dangers of the murderous history of political realism and of the suicidal defeatism of pacifism.
3 Luciano Floridi, The Ethics of Information (OUP 2013). According to Floridi, humans and websites, animals and computer programs can all be seen as information – think for example of the parallels between a human’s DNA sequence and a website’s HTML code. Floridi argues on the basis of this ‘infocentrism’ that all such informational beings or ‘clusters’ deserve to be respected, to be accorded some amount of moral value, based on an informational hierarchy.

...an entity may lose its rights to exist and flourish when it comes into conflict (causes entropy) with the rights of other entities or with the well-being of the Infosphere. It is a moral duty of the other inhabitants of the Infosphere to remove such a malicious entity from the environment or at least to impede it from perpetrating more evil.4

JIW appears to clear up the blind spots and confusion of traditional theories, but at the cost of losing one of the major principles of JWT: discrimination. The principle of discrimination requires that militaries observe a strict distinction in targeting between combatants and noncombatants. However, in cyberspace it is much more difficult to make such distinctions, not only because any user is a potential ‘malicious entity’ – ‘IW rightfully targets only malicious entities, be they military or civilian’5 – but because of the very nature of cyberspace.

This inability to maintain the principle of discrimination in cyberwarfare is typically referred to as the ‘attribution problem’.6 In traditional warfare, discrimination is maintained by requiring that militaries wear uniforms and fight as far from civilian populations as possible, both of which seem to be impossible requirements to maintain in cyberspace. While there have been attempts to address this issue, such as using a broader distinction between ‘licit targets and illicit ones’7 or calling for an international agreement to only attack using ‘digital signatures’,8 these approaches do not touch on the larger issue of what it means to turn a predominantly civilian space into a battlefield, to weaponise and militarise cyberspace. Taddeo9 believes that cyberwarfare can be justified so long as it is only intended to return the infosphere to its status quo ante, but this requires that we know what the status quo ante is.

Hence without a better understanding of the implications and consequences of cyberwarfare for both military and non-military users, we cannot successfully justify or deter cyberwarfare. It is for this reason that we argue that rather than trying to understand cyberwarfare through the lens of IE, we should instead turn to mediation theory, which can help us to analyse the human-technology relations involved in cyberspace and cyberwar. This will make it possible to show that the cyber/physical and real/virtual dualisms underlying JIW fail to recognise that cyberwarfare technologies are not merely operating in the ‘virtual’ realm of cyberspace, but play an important normatively-mediating role in the ‘real world’ as well.

II. Mediating Cyberspace

Mediation theory is a descriptive and normative theory of human-technology-world relations, of how technologies mediate the relationships humans have to the world. While its descriptive side is a continuation of postphenomenology, its normative side is an attempt to use postphenomenology to reveal how technologies already mediate ethical life, and to decide how technologies should mediate ethical life. If, as Ihde10 shows, technologies mediate human experience in the form of embodiment, hermeneutic, alterity, and background relations, then, as Verbeek11 argues, humans are not autonomous in the way that traditional ethical theories suppose, as technologies do not only participate in, but actively shape, ethical decisions and actions.

Analysing cyber technologies through the lens of mediation theory makes it possible to show that these technologies in fact cannot be understood as being part of a ‘virtual reality’. Rather, they need to be understood as mediators in the real world. First, the virtual world that is opened up by cyber technologies is a world that is experienced by ‘real humans’, via keyboards, screens, VR glasses, cameras, speakers, and microphones. And therefore, second, what happens in the virtual realm will inevitably mediate what happens in the real world: ultimately, it does not happen to virtual beings, but to real beings, whose avatars and virtual worlds are part of their lifeworld.

4 (n 1)
5 (n 1)
6 Randall R Dipert, ‘The future impact of a long period of limited cyberwarfare on the ethics of warfare’ in Luciano Floridi and Mariarosaria Taddeo (eds), The Ethics of Information Warfare (Springer 2013) 25-38; Kenneth Geers, ‘The Challenge of Cyber Attack Deterrence’ (2010) Computer Law & Security Review 26, 298-303
7 (n 1)
8 Patrick Lin et al, ‘Is warfare the right frame for the cyber debate?’ in Luciano Floridi and Mariarosaria Taddeo (eds), The Ethics of Information Warfare (Springer 2013) 39-60
9 (n 1)
10 Don Ihde, Technology and the Lifeworld (Indiana University Press 1990)
11 Peter-Paul Verbeek, Moralizing Technology (The University of Chicago Press 2011)

To carry out a mediational analysis of cyberspace, we must start by recognising that cyberspace is not one technology, but rather a complicated complex of many interconnected technologies. We might say for example that we are using our laptops to get online, but in reality we are using at least a keyboard, a mouse, and a display to use programs that are graphical user interfaces interpreting and translating codes and scripts sent by us and to us over an internet connection. That we do not refer to each of these processes, processes which of course could be even further broken down and described in ever greater detail, is indicative of the relationship between users, computers, and cyberspace. Rosenberger,12 following Ihde, describes this as ‘the deeply embodied character of human relations to technologies such as the computer.’

Embodiment relations, according to Ihde,13 occur when technologies become an extension of the user, enabling the user to carry out a project through the technology, a project that could not have been carried out without the transformative mediation of the technology. A key aspect of this relation is that we focus on what we achieve through the technology, rather than on the technological mediation itself, such as when I say I see something rather than saying I see something through my glasses, or I say I go online rather than saying I go online through my computer. This technological ‘transparency’14 is not a side-effect of using a technology such as a computer, but is rather essential to its use, for the more aware we become of the mediating role of the technology, the less the technology can affect the mediation, requiring that we focus not on our ends but instead on the means to achieve them.

As Heidegger first pointed out, when technologies cease to function properly they lose not only their functionality, but their transparency, turning from the invisibility of being ‘ready-to-hand’ to the obtrusiveness of being ‘present-at-hand.’ The malfunctioning technological artefact is no longer an extension of one’s embodiment, existing instead as a mere thing, calling attention not only to its own limitations, but to the limitations of the user as well. Though Heidegger uses a broken hammer as an example, this relational breakdown can also apply to a computer:

The user’s everyday relationship with the computer is disrupted when it acts in an unexpected way. Suddenly, aspects of computer use that had faded into the background explode again into the foreground. The experience can be jarring. The user becomes plainly aware that the options for interacting with the technology are limited, that particular keys can be pressed, that the computer responds to some things and not others. Though it had been an invisible tool a moment ago, the computer now sits as an obstacle between one and one’s work. The user again becomes aware of her or his place in front of the device. When a computer unexpectedly and abruptly ceases to work properly, a user may become explicitly conscious of the computer’s identity as a technology, and of her or his situation as a user, all of the sudden.15

When the computer functions as expected, we lose sight of it, working not at it or on it, but with it and through it. It is for this reason that we tend to use spatial metaphors when referring to the internet, as we experience not code on a computer screen, but a cyberspace, where we can go online, surf the web, and follow other users on social media, often for much longer than we realise, as we lose sight of ourselves as well. At the same time, the screens, keyboards, and other devices we embody help to shape how we go online, what cyberspace means, and how ‘following’ becomes a dimension of sociality.

When the computer functions in an unexpected way however, we regain sight of it and of ourselves, experiencing the computer no longer as an ‘invisible tool,’ but as ‘an obstacle between one and one’s work.’ To describe the malfunctioning computer as an ‘obstacle’ is to recognise that, while it may appear that there is something to be gained from a malfunction forcing one to pay more attention to one’s computer and to one’s use of it, this increased attention is often experienced not as a discovery, but as a distraction. A malfunction does not tend to lead us to new insights and learning opportunities, as instead we become irritated, angry, fixated, unable to focus on anything or anyone other than the malfunction, for which reason, as Rosenberger16 puts it, ‘the experience can be jarring.’

12 Robert Rosenberger, ‘The Sudden Experience of the Computer’ (2009) AI & Society 24, 173-180
13 (n 10)
14 (n 10); (n 12)
15 (n 12)
16 (n 12)

Part of what is so ‘jarring’ about

such an experience is that the malfunction moves the computer along ‘the human-technology continuum of relations’ from the one end of embodiment relations to the opposite end of ‘alterity relations’.17 Ihde writes:

Word processors have become familiar technologies, often strongly liked by their users… Yet in breakdown, this quasi-love relationship reveals its quasi-hate underside as well. Whatever form of ‘crash’ may occur, particularly if some fairly large section of text is involved, it occasions frustration and even rage. Then, too, the programs have their idiosyncrasies, which allow or do not allow certain movements; and another form of human-technology competition may emerge. (Mastery in the highest sense most likely comes from learning to program and thus overwhelm the [machine’s] previous brainpower. ‘Hacking’ becomes the game-like competition in which an entire system is the alterity correlate.) Alterity relations may be noted to emerge in a wide range of computer technologies that, while failing quite strongly to mimic bodily incarnations, nevertheless display a quasi-otherness within the limits of linguistics and, more particularly, of logical behaviors. Ultimately, of course, whatever contest emerges, its source lies opaquely with other humans as well but also with the transformed technofact, which itself now plays a more obvious role within the overall relational net.18

As Ihde makes clear, a malfunctioning computer can present itself as more than just an obstacle, as it can become a competitor, an object of ‘quasi-hate’ that, as Ihde goes on to point out, can provoke fantasies opposite to those of embodiment relations. With alterity relations we no longer dream of becoming one with the technology, but instead have, as we see again and again in pop culture, fears that the power of computers will soon replace human thinking, fears that political or military decisions will not only be informed by but also made by computers.

However, a malfunction does not only impact our embodiment relations to our computers, as a key element of our relationship to computers, particularly with regard to cyberspace, can be described as a ‘hermeneutic relation’.19 When we say that we explore cyberspace, go online, surf the web, and follow social media, what such shorthand is most often referring to is the act of reading. Similar to embodiment relations, the hermeneutic relation of reading requires transparency, not only of the technological device conveying the texts and images we read, but of the texts and images themselves. In reading we lose not only ourselves, but also the lines that make up the letters that make up the words that make up the sentences of the book or web-pages before our eyes. This ‘hermeneutic transparency’20 allows us to experience what we are reading in an embodied manner, enabling us to project ourselves into what we are reading without the distraction of being aware that we are interpreting symbols, moving our eyes, and turning pages with our hands.

What we are also unaware of due to hermeneutic transparency is the trust we put into what we are reading. Hermeneutic relations are, like embodiment relations, a form of technological mediation, but unlike embodiment relations, hermeneutic mediations present the world to us rather than helping us to extend our bodies towards the world. The danger of hermeneutic relations is therefore not only that of misplaced trust, for, as Ihde21 points out with regard to the example of the Three Mile Island disaster, being misled by what one is reading can have life and death consequences. Merely being made aware of the possibility of hermeneutic breakdown, of the possibility of reading something that does not reflect the truth of the world it purports to show us, can be enough to make us paranoid and to make the transparency required for reading nearly impossible. For technologies to not function as expected with regard to hermeneutic relations leads, as it does with regard to embodiment relations, to being forced to become aware of the mediating technology and our dependence on it, but it is much easier to replace a malfunctioning computer than a malfunctioning news source. Paranoia that one’s new computer will eventually break just like the previous computer may lead to irritation, but paranoia that one’s new source of information will eventually mislead just like the previous source may lead to a general mistrust of the medium itself, as for example has happened with both Wikipedia and the ‘mainstream media’ having become euphemisms in the United States for untrustworthy information.

The fourth type of human-technology relations that has a central place in Ihde’s typology is the background relation.22 Here, technologies are not experienced or embodied directly, but form the context of our experiences. The air conditioning system that is making noise all the time. The notifications our mobile devices are giving us all day, to inform us about messages and calls we receive, and the activity of other people on social media. In fact, many information technology systems have started to form the background of our daily lives. And by contextualising our existence, these technologies have a profound influence on the ways in which we live our lives – up to the point that they are in fact conditioning it, from their role in the background. Banking systems, communication systems, traffic control systems, and surveillance systems are not just neutral backgrounds, but shape the character of our daily lives by forming their context.

Beside analysing the specific relations that can arise between humans and technologies, mediation theory also addresses the normative dimension of these relations. When technologies help to shape our interpretations of the world, and the actions and practices we engage in, they in fact help us to do ethics: they inform our moral decisions and actions. MRI imaging mediates moral decisions about the lives of coma patients,23 just like drones mediate the moral engagement of operators with their victims.24 Technologies mediate morality – they have become part of the moral life of human beings. Moreover, technologies help to shape normative frameworks. Our norms regarding acceptable forms of suffering, for instance, have developed in interaction with anesthetic technologies. While anesthesia used to be highly contested in its early days, it has now become immoral to operate on people without giving them proper anesthesia. Technologies are not morally neutral, but help us to do ethics by informing moral actions, decisions, and frameworks.

From the perspective of mediation theory, then, cyber technologies should not be analysed and evaluated as technologies that primarily affect the virtual realm. As we saw, information technologies play a profoundly mediating role in our ‘real’ lives as well. By being embodied, read, and interacted with, and from their role as the contextual background of our lives, cyber technologies have become an integral part of what it means to be human in a digital era. Moreover, these technologies help to shape the moral decisions we make and the moral frameworks from which we think.

III. Mediating Cyberwar

Now that mediation theory has helped to clarify the conceptual confusion surrounding cyberspace, we can turn our focus to the conceptual confusion surrounding cyberwar, and in particular to what it means to weaponise and militarise cyberspace. If, as we have seen, cyberspace is not a virtual realm independent of the real world, but rather is composed of technologies that are essential to the fabric of our daily lives, then cyberwarfare should be thought of as being on par less with crime and espionage25 and more with chemical and biological warfare.26 Deterrence of cyberwarfare must therefore be sought not through regulation27 but through international treaties that ban the practice altogether.

Mediation theory allows us to highlight an issue for deterrence against cyber attacks, which is how the capability to employ active defenses or to retaliate transforms the Internet from a medium that is benign in itself into a militarised medium that may be employed to cause harm. This becomes apparent when we consider that many of the tools of cyber attacks are dual-use technologies that are well within the means of the technologically savvy to acquire and use. This is another aspect of the attribution problem that makes it difficult to distinguish the actions of states and non-state actors. If a non-state actor could launch a cyber attack, another non-state actor may retaliate in kind. The Internet would become an avenue through which individuals, either unwillingly or unknowingly, may become targets of states who have identified them as attackers.

17 (n 10)
18 (n 12)
19 (n 10)
20 ibid
21 ibid
22 ibid
23 (n 11)
24 Nolen Gertz, The Philosophy of War and Exile (Palgrave-Macmillan 2014)
25 Patrick Lin et al, ‘Is warfare the right frame for the cyber debate?’ in Luciano Floridi and Mariarosaria Taddeo (eds), The Ethics of Information Warfare (Springer 2013) 39-60
26 Gregory Koblentz and Brian Mazanec, ‘Viral Warfare: The Security Implications of Cyber and Biological Weapons’ (2013) Comparative Strategy 32(5), 418-434
27 (n 1)

The possibility of being targeted by states may lead individuals to reconsider how they use the Internet, and as a result, change the Internet itself. Jonathan Zittrain28 writes that an important characteristic of the Internet is its generativity: the ‘capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences.’ This is in contrast to what he29 calls ‘sterile’ systems that are under strict control that limits their ability to be used for unanticipated uses. This difference is best illustrated by comparing the current Internet to the network services of the 1980s and 1990s, such as CompuServe and AOL (America Online). Only software supplied by the service operators could access these services, and the uses of these services were strictly controlled by their operators. While such services could not be used as sources for cyber attacks, this level of security would seemingly result in a Catch-22 type paradox: the Internet could be a protected environment, but at the cost of destroying everything that makes cyberspace worthy of being a protected environment.

IV. Conclusion: Regulating vs Banning Cyberwarfare

In 1969, President Richard Nixon unilaterally renounced the possession and use of biological weapons by the United States. As he explained, ‘These important decisions have been taken as an initiative towards peace. Mankind already carries in its own hands too many seeds of its own destruction. By the examples we set today, we hope to contribute to an atmosphere of peace and understanding between nations and among men.’30 Shortly thereafter, in 1972, the Biological Weapons Convention (BWC) was opened for signature in London, Moscow, and Washington, with 110 states becoming signatories and 173 states becoming parties to the BWC. The BWC categorizes biological weapons as ‘weapons of mass destruction’ and states that the use of biological weapons ‘would be repugnant to the conscience of mankind and that no effort should be spared to minimise the risk.’31

The moral arguments that motivate this rejection of biological warfare are grounded in ‘its uncontrollable and indiscriminate effects, its insidious nature, and its deliberate perversion of medical science.’32 Koblentz and Mazanec outline33 the following parallels between biological weapons and cyber weapons:

Both types of weapons pose significant challenges to attribution; are attractive to weaker powers and nonstate actors as an asymmetric weapon; have the potential for use as a force multiplier for conventional military operations; are of questionable deterrent value; exhibit a high degree of uncertainty and unpredictability in their use and the potential for major collateral damage or unintended consequences; are based on multi-use nature of the underlying technologies; and are typically developed under highly secretive programs.

On the basis of these parallels, they conclude that cyberwarfare should not be regulated, but banned, with ‘dissuasion as the heart of a long-term strategy for managing the risks posed by cyber and biological weapons.’ Similar to the BWC, successful cyberwarfare dissuasion would require ‘a widely shared understanding among nations and societies that advances in information technology should only be used for peaceful purposes and the use of cyber weapons to attack civilian targets and critical infrastructure is unacceptable.’34 However, as Koblentz and Mazanec point out, it is easier to dissuade states from acquiring and using biological weapons as there is already a ‘taboo associated with poisons,’ while ‘cyber weapons, as relatively novel creations that operate in a new and man-made domain, lack such a similar taboo.’35

Mediation theory has helped us to see that while there are indeed discrepancies in our thinking about biological and cyber weapons, one reason for such discrepancies is the perpetuation of dualistic thinking about cyberspace, in particular that cyber weapons ‘operate in a new and man-made domain’.36 Cyber technologies – through forming embodiment, hermeneutic, and background relations with users – belong to the same domain as biotechnologies, the domain of everyday life. It is for this reason that the weaponisation and militarisation of cyberspace is the weaponisation and militarisation of everyday life.

A further parallel between biological and cyber weapons is thus that of paranoia. Releasing toxins into the atmosphere can not only result in ‘uncertain area coverage and effects’ due to ‘environmental and meteorological conditions’37 but fear and anxiety about being able to safely go outside. Mediation theory reveals how cyberspace can better be thought of as part of the atmosphere we breathe rather than as a tool we use only on occasion, for which reason the proliferation of cyber weapons can create a paranoia of the space around us, of the space we live in, just as much as can biological weapons. Gas masks and hazmat suits allow us to enter regions infected by biological weapons, but do so at the cost of requiring that the wearer be limited in activities and disrupted in one’s tasks through the diminished freedom and constant awareness of the protective gear. Antivirus software, firewalls, and anti-surveillance hardware similarly allow us to enter regions infected by cyber weapons, but again at the cost of requiring the user be limited and disrupted by the diminished freedom and constant awareness of cyberprotection.

If Koblentz and Mazanec38 are calling for a ban of cyber weapons on the basis of parallels with biological weapons, while yet maintaining dualistic thinking about cyber weapons, then mediation theory, by removing such dualistic confusion, only further strengthens the argument in favor of a cyber weapons ban. Such a ban would of course not make it impossible for states to still operate cyber weapons research programs, much like how the BWC did not prevent the Soviet Union from maintaining a secret biological weapons program. However, denouncing cyber weapons and stigmatising their use would help to create new norms and strengthen existing norms of safe Internet use, as well as help to expand our understanding of the interconnected nature of cyberspace. Attempts to regulate cyberwar will create the false impression that there are safe and manageable ways to weaponise and militarise cyberspace. A cyber weapons ban however would lead more and more people to realise that what happens in cyberspace doesn’t stay in cyberspace, which would in turn persuade more and more people to protect rather than risk endangering the cyber backbone of our daily lives.

28 Jonathan Zittrain, The Future of the Internet (Penguin Books 2008)
29 ibid
30 Jonathan B Tucker, ‘A Farewell to Germs’ (2002) International Security 27(1), 107-148
31 United Nations Office for Disarmament Affairs, ‘Convention on the prohibition of the development, production and stockpiling of bacteriological (biological) and toxin weapons and on their destruction’ (2016) accessed 31 July 2016
32 Brian Balmer, ‘Killing “Without the Distressing Preliminaries”: Scientists’ Defence of the British Biological Warfare Programme’ (2002) Minerva 40(1), 57–75
33 (n 26)
34 ibid
35 ibid
36 ibid
37 ibid
38 ibid

Steps to Designing AI-Empowered Nanotechnology: A Value Sensitive Design Approach

Steven Umbrello*

Advanced nanotechnology promises to be one of the fundamental transformational emerging technologies alongside others such as artificial intelligence, biotechnology, and other informational and cognitive technologies. Although scholarship on nanotechnology, particularly advanced nanotechnology such as molecular manufacturing, has nearly ceased in the last decade, normal nanotechnology, which is building the foundations for more advanced versions, has permeated many industries and commercial products and has become a billion-dollar industry. This paper acknowledges the sociotechnicity of advanced nanotechnology and proposes how its convergence with other enabling technologies like AI can be anticipated and designed with human values in mind. Preliminary guidelines inspired by the Value Sensitive Design approach to technology design are proposed for molecular manufacturing in the age of artificial intelligence.

DOI: 10.21552/delphi/2019/2/6

I. Reinvigorating Nanoethics

Atomically precise manufacturing (APM) is one form of advanced nanotechnology that falls within the larger category of molecular manufacturing. This theoretical mode of manufacturing was developed in greater depth in K. Eric Drexler’s 1986 book Engines of Creation. However, the concept of APM in theory dates to Richard Feynman’s 1959 talk ‘There’s Plenty of Room at the Bottom.’ Today, the majority of nanotechnology R&D is not APM per se, but instead is technology involving simpler nanometer-scale processes; this is sometimes referred to as ‘normal nanotechnology’.1 Many nanotechnology researchers likewise doubt the feasibility of APM and instead favor research on more directly promising nanotechnology directions. Despite these doubts, current investments and national interests towards the development of APM warrant investigations into how we can ensure the concept, and its convergences with other technologies, is as beneficial to humanity as possible by intervening at the design stages and incorporating the relevant values necessary to achieve a desired end.

There is a difficulty, however, in evaluating advanced nanotechnology per se: nanotechnology is part of a converging set of transformative technologies such as biotechnology, information technologies and cognitive technologies like artificial intelligence. This muddies the waters in prescribing a rigid set of values, principles or positive governance structures for its development because it is hard to demarcate discrete boundaries between nanotechnologies and these others, despite its potentially transformative impacts. This co-constructive convergence of technologies can still be guided towards beneficial ends.
The difficulty of guiding the technological design process should not be confused with technological determinism; stakeholders can and should engage with the design and development processes of technologies that involve them. It, of course, may not be feasible to account for or engage with all the processes, variables or values implicated in the design of APM; however, certain conceptual steps can be taken and adopted by design teams to nudge their design flows towards beneficial outcomes. To this end, this short ethics-in-practice piece offers a modular and reflexive set of guidelines that can provide policy experts, design teams, industry leaders and even ethicists a way of conceptualising how advanced nanotechnology design can be confronted that accounts for the values of those it may impact and how those values can be put into practice.

II. A Value Sensitive Design (VSD) Approach

In the interest of developing these guiding principles, or perhaps better termed framing tools, I avoid discussion of what constitutes APM, the debate over its feasibility, as well as expert projections for when we can expect to see the first instantiation of APM technologies. Simply put, the literature that discusses APM and how it functions envisions impacts on an astronomical scale.2 It has even been suggested that the safe development of APM, given that misaligned development could result in existential catastrophe, is contingent on the use of artificial intelligence (even artificial superintelligence). Regardless, current nanotechnology research is laying the path for eventual APM systems to emerge. However, first a brief overview of what VSD is and how it functions is warranted so that the tools used to frame APM do not seem ad hoc, but rather integrative.

The VSD methodology emerged in the field of human-computer interaction and in particular from the realisation that two principal values were missing from the design process in technological innovation: user autonomy and freedom from bias.3 Hence, VSD’s intention is to ensure that designers include the values held by stakeholders in the early design phases to not only produce a satisfactory product for the stakeholder, but one that ensures that human values are advanced in technological innovation.4 The priority on the subsumption of human values is critical given that, in the domain of converging technologies, harmony between individual, corporate, and societal values is often lacking in favour of financial and/or socio-political gain.5 Hence, this warrants a design space that considers the values of all stakeholders that the technology may potentially impact. VSD makes explicit claims to help foster a participatory space in which stakeholders can communicate both their values and desires.6

The VSD framework is predicated on the presumption that technology is something that is value-laden and thus is of significant ethical importance. The approach puts considerable emphasis on the human values of freedom, autonomy, privacy, and equality, given that these were the values distilled both during conceptual investigations as well as empirical stakeholder elicitations.7 Each of these values has the potential to be limited by technology and thus must be taken into account during the design of technologies. VSD focuses on the values of stakeholders and how those values can be reconciled with design and engineering limitations and constraints. Instead of the conventional way of appraising a technology’s moral status, ie, how it is placed, used, and construed in a societal context, VSD aims to determine the impact that technology has on the moral landscape. It aims to determine the values of stakeholders and integrate those values early on and throughout the design process. Similarly, the approach does not seek to revolutionise the engineering practices of designers in a way that requires unique or burdensome requirements; instead, VSD is an instrument that proposes ways of changing existing design and engineering methods so as to include stakeholder values and to adapt itself to the engineering environments and practices already engaged in. This is of particular importance to policy makers and industrial leaders in terms of increasing the approach’s social acceptance.

* Steven Umbrello, Managing Director at the Institute for Ethics and Emerging Technologies and graduate student at the University of Turin (Consorzio FINO), Italy. For correspondence: . This paper is built upon the author’s previous work: Steven Umbrello, ‘Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy’ (2019) 10 Int J Technoethics 2, 1-21
1 Donal O’Mathuna, Nanoethics: Big Ethical Issues with Small Technology (Continuum 2009)
2 Steven Umbrello and Seth Baum, ‘Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing’ (2018) Futures 100, 63–73
3 Alan Borning and Michael Muller, ‘Next Steps for Value Sensitive Design’ (SIGCHI Conference on Human Factors in Computing Systems, Austin, 5-10 May 2012) accessed 15 July 2019
4 Batya Friedman et al, ‘Value Sensitive Design and Information Systems’ in Neelke Doorn et al (eds), Early Engagement and New Technologies: Opening up the Laboratory (Springer 2013) 55-95
5 Langdon Winner, ‘Do Artifacts Have Politics?’ (1980) 109 Daedalus 1, 148–164
6 Steven Umbrello, ‘Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy’ (2019) International Journal of Technoethics 10
7 Batya Friedman and Peter Kahn, ‘Human Values, Ethics, and Design’ in Julie Jacko (ed), The Human-Computer Interaction Handbook (CRC Press 2003) 1177–1201

III. Framing Considerations for Safe Nano-Futures

Framing is a way of envisioning how technological developments and potential socio-ethical and cultural issues can arise given a particular technology’s development. Framing certain technological design flows (open design avenues) provides designers with principled ways to make informed design decisions.8 The social sciences have an illustrious background with eliciting stakeholders in different contexts as well as determining their values, interests, and preferences,9 such as through the use of Envisioning Cards.10 The following elements are preliminary framing considerations that can be considered throughout the design of APM and provide a potential starting point for considering ethical design flows. They are a shortlist of various values and concerns that have arisen in the literature, as well as the VSD literature for speculative technologies more specifically.11

1. Engage with Convergence Literature

One of the benefits of thinking of APM and other transformative technologies as being part of a converging technology landscape is that overlap of common values between the different technologies can arise, and those common values, such as safety, privacy, usability, effectiveness, autonomy, etc, can serve as the basis for design.12 Understanding APM as being co-constituted by AI, biotechnology, and information and communication technologies means a more holistic design workflow can be engaged in. This does not mean that foundational work in each technology cannot be done without reference to the others, but foundational work in each field can, and does, contribute to the others.

This convergence framing means that designers, when eliciting stakeholder values, should frame their elicitations in a way that acknowledges the co-constitutive, dynamic and changing nature of the technology in question. Not doing this can prove deleterious given the fecundity of the instrumental view of technology (ie, technology is just a neutral tool) and technological determinism (ie, humans have no influence on the future development of technology). Either of those two positions leads to severe blind spots. The interactional view of technology, that on which VSD is predicated, argues for the co-constitutive nature of technology. For at least the past six decades this has been the contention of the sociology and philosophy of technology.13

2. Avoid Opaque Systems

As mentioned, AI in particular may prove essential to the successful development of APM systems. Perhaps the most useful systems for simulating potential APM models and outputs are deep neural networks. However, one of the issues with these systems is the tendency for them to be black-boxed and for their decision-making structures to be opaque to both users and engineers.

Although transparency is often construed as a value, it should be balanced with things like data privacy and security. Because of this, transparency should not be envisioned as an end-goal per se, but rather as a value that can be translated into design requirements that either support or constrain other values. To this end, the AI systems used should be able to balance these issues. Policy makers and industry leaders interested in AI-enabled APM should look at the work done by the Foresight Institute, more particularly at the proposed use of AI systems such as those developed by OpenCog, or the CANDO platform designed by Christian Schafmeister.14

8 Till Winkler and Sarah Spiekermann, ‘Twenty Years of Value Sensitive Design: A Review of Methodological Practices in VSD Projects’ (2018) 21 Ethics and Information Technology 81
9 Batya Friedman et al, ‘A Survey of Value Sensitive Design Methods’ (2017) 11 Foundations and Trends in Human-Computer Interaction 2, 63–125
10 Batya Friedman and David Hendry, ‘The Envisioning Cards: A Toolkit for Catalyzing Humanistic and Technical Imaginations’ (30th International Conference on Human Factors in Computing Systems, Austin, 5-10 May 2012) accessed 15 July 2019
11 Umbrello (n 6); Umbrello and Baum (n 2)
12 Steven Umbrello, ‘Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach’ (2019) 3 Big Data and Cognitive Computing 1, 5
13 Langdon Winner, ‘Do Artifacts Have Politics?’ (1980) 109 Daedalus 1, 148–164
14 Allison Duettmann and James Lewis, Artificial Intelligence for Nanoscale Design (Foresight Institute 2017)

3. Aim for Proportionality

Designers should embrace a standard of proportionality and thus design APM systems that are physically limited so as not to manufacture explicitly deleterious substances, materials, weapons or self-replicating autonomous nanofabricators.15 This naturally must vary amongst users (ie civilian vs military). Hence, balanced APM systems should be something that is openly promoted in order to limit over-engineering, which can inadvertently open the Pandora’s box of possible materials and systems that an APM system can manufacture.16 The most notable example is perhaps the engineering of self-replicating nanomachines. Functioning similarly to biological cells (ie, the closest paragon would be the ribosome), these machines would be able to replicate more of themselves. The concept, if possible, has many potential applications for deployment, particularly in space exploration and colonisation given the ability to meet weight restriction requirements. However, this type of system can also destabilize economic structures drastically and without notice given the ability to manufacture any goods at any time with marginal costs. There are many examples of both boons and cons that can and have already been conceived of. The point here is that engineers have to take into account, early on and throughout the design process, the needs that the technology must meet given the stakeholders’ values, as well as the unintended effects that can emerge after deployment.

4. Aim for Transparency about System Security and Safety

Users of APM systems should be informed about not only the limitations of their APM systems but also the vulnerabilities of those systems. This becomes particularly relevant as nanotechnology converges with ICT, opening up a further range of converging socio-ethical issues (see point 1). Access to networks by remote means requires a minimum standard of both software and hardware security. Users of APM systems should be informed of any health threats that APM systems may pose during use, such as the deleterious effects that nanoparticles and materials can have on organic cells.17 Not only does the black-boxing of potential safety and security issues limit the social acceptance of technologies, particularly transformative ones, but it opens up liability issues that may ultimately be deleterious to the potential benefits that such systems, if designed well, can provide. What this ultimately means is that transparency must be construed as explicability and understandability to the stakeholders in question. Not only must information about how a system works be conveyed to the user, engineers, developer, etc, but it must be done so in a way that is understandable to those agents to permit effective interventions if needed.

5. Design for Accessibility

APM systems should be designed in a way that fosters ubiquitous use given the above design flows. This is intended to limit exclusions based on socioeconomic status, thus promoting a more egalitarian use of APM systems. This of course can be an innately difficult value to take into account given the many different groups that stakeholders can be sectioned off into, particularly when those individuals are members of more than one sub-group. Case studies in accessible computing provide examples of how computing systems can be tailored for different stakeholders.18

IV. Conclusions

There are already existent examples of technologies that are sensitive to the values and framing tools listed here. Industry and policy measures regarding financial technologies, for example, not only reveal a trend towards better regulation but also the inclusion of values like proportionality regarding the information that they gather from their users, as well as a tendency toward offering greater accessibility as a function of the technology itself. Applying this to APM, proportional examinations should frame technology design. This framing helps designers to seek a balance between the potential benefits that the systems could produce and some of the risks associated with the technological potential itself. In more practical terms, APM systems could include physical barriers that enable specific materials to be used and restrict the types of products to be manufactured. The benefits that arise from constraints are naturally to be weighed against the potential loss of manufacturing potential.

The FinTech world has shown that users are more willing to adopt systems if there is transparency regarding the potential hazards and vulnerabilities associated with adoption.19 Hence, transparency, and how transparency is understood and instantiated in design, is a critical factor to social acceptance. Finally, the ubiquitous adoption of APM systems may hinge on their ability to be accessed and used by any member of society regardless of socioeconomic status. Because of this, design considerations for broad-spectrum use must be accounted for during early developmental and conceptualisation stages if the technology aims to be equitable and accessible.

The VSD methodology provides a principled approach to incorporating the values of stakeholders as design requirements both early on and throughout the development of a technology. The listed framing considerations provide a potentially useful first step that can be taken as they are distilled from across the converging technology discourse and provide a set of common values that are shared by the stakeholders of these different, yet ever increasingly interconnected artefacts. Policy makers and industry leaders should consider engaging with both the VSD discourse as well as its applications to other technologies to determine how best to modify its principled approach to specific design contexts.

15 Umbrello (n 6)
16 Outlined in Umbrello and Baum (n 2)
17 ibid
18 Kristen Shinohara et al, ‘Tenets for Social Accessibility: Towards Humanizing Disabled People in Design’ (2018) ACM Transactions on Accessible Computing 11, 6
19 Hyun-Sun Ryu, ‘Understanding Benefit and Risk Framework of Fintech Adoption: Comparison of Early Adopters and Late Adopters’ (51st Hawaii International Conference on System Sciences, Hawaii, 2018); see also Chris Brummer and Daniel Gorfine, ‘FinTech: Building a 21st-Century Regulator’s Toolkit’ (2014) Milken Institute, 5

Outlook: What is the Business Value of Ethical Tech? In our Outlook section we ask a cross-disciplinary range of experts to answer important questions on how emerging technology will impact society. Each contributor will provide their unique perspective on a particular issue, thereby generating novel and important insights. In this edition we ask: What is the business value of ethical tech?

DOI: 10.21552/delphi/2019/2/7

Steffen Augsberg

Professor of Public Law at the Justus-Liebig-Universität Gießen, German Ethics Council

The digital transformation has captured and changed our informational assumptions and relationships. Specific problems are associated with this process (such as a new potential for exclusionary and discriminatory practices). At the same time, it does imply improved possibilities for individual orientation and information. In particular, with regard to customer relationship aspects, increased curiosity and greater knowledge on the part of customers is to be expected. Here, the perceptible trend towards social responsibility has a strengthening effect. People do not yet behave in all areas of life in the way they consider normatively appropriate. However, the pressure to justify one’s actions is clearly rising. This can be seen with particular clarity in the example of climate change: many still drive an SUV, fly to their holiday destinations and eat meat. But they are doing this with an increasingly bad conscience, and they are asking what possibilities there are to ‘balance the books’ – how much BBQ meat am I allowed to eat if I voted ‘green’ in the European elections? Moreover, the moral sensitisation can be observed beyond these special circumstances: for instance, people are questioning which consequences arise from a business model, whether a product bears intolerable risks for workers and/or the environment or whether less data-intensive procedures are possible.

Thus, ethical tech can be understood first and foremost as an anticipatory reaction to the social changes described. Taking ethical concerns into account at the early stages of developing and designing a product or a service enables entrepreneurs to use this consideration for self-marketing at a later stage. This might prove advantageous especially compared to competitors who do not exhibit this extrinsically motivated sensitivity. It can also help to attract employees who not only pay attention to their own work-life balance, but also place a premium on an ethically acceptable employer.

Siri Beerends

Cultural Sociologist, Writer and Researcher at medialab SETUP, PhD Candidate at the University of Twente

Business value can be about market value, investment value, societal value or other forms of value. In an ideal world, developing ethical technologies would benefit all forms of business value. Unfortunately, in a capitalistic system it doesn’t work that way. We see this with companies like Google and Facebook, where market value does not serve values like privacy and autonomy. Nevertheless, with the increasing criticism of ‘Big Tech’, I think businesses might gain commercial profit from ethical alternatives to existing models.
books’ – how much BBQ meat am I allowed to eat if However, ethical technology should not be about I voted ‘green’ in the European elections? Moreover, business value but about planet value. Although com- the moral sensitisation can be observed beyond these panies are shouting ‘people and planet over profit’, special circumstances: For instance, people are ques- we are still living in a world where business value tioning which consequences arise from a business and planet value often don’t converge. Ethical tech- model, whether a product bears intolerable risks for nology has become such a hyped term that it suffers workers and/or the environment or whether less da- from inflation. Many companies use the term and ta-intensive procedures are possible. ethical boards as a tool for getting around regulation. Thus, ethical tech can be understood first and fore- This results in ‘ethics washing’ or ‘ethics theatre’: a most as an anticipatory reaction to the social changes facade of concern for the greater good, engineered to described. Taking ethical concerns into account at the repress criticism and keep public attention away early stages of developing and designing a product from the real problems. or a service enables entrepreneurs to use this consid- eration for self-marketing at a later stage. This might prove advantageous especially compared to competi- DOI: 10.21552/delphi/2019/2/7 Delphi 2|2019 Outlook 85

Also, there is no consensus on what ethical technology is, resulting in ethical principles that are too vague to be effective. When we look for example at ‘ethical AI-systems’, it is impossible to implement ethical decision making in the ambiguous, organic and fluid way humans apply it. Ethical values differ between cultures, countries and income levels; they are not universal, static and coherent. In everyday life, ethical decisions take place in a particular situation with particular dynamics, (un)intentions, context-dependent factors and irrational aspects. Who gets to decide which moral rules and variables are general enough to encode? I’m not saying that we shouldn’t, I’m saying that AI changes the way we apply ethics. It will become more rigid; more black and white, because AI-systems cannot detect and take into account human (un)intentions when they calculate a particular decision.

Ida Rust

Consultant Strategy & Policy in Innovation, Advisor Human Factors & AI, PhD Student at the University of Twente

Ethics deals with questions about right behaviour. Ethical tech is technology that behaves according to what we have decided is right. The urgency of a debate on ethical tech is due to the increase in the utilisation of (autonomous) complex decision-making technology. Examples of emerging tech that prompt ethical questions are: does an autonomous military drone act justly when it exterminates a family to prevent the husband from carrying out terrorist activities, potentially causing a multitude of victims? Is a company allowed to use data about children’s behaviour retrieved from a kid’s toy for commercial purposes? Should we censor social networks to protect users from adverse information? The difference with previous technology is that emerging technology seems to be more radically embedded in human life – but is there really a difference when it comes to the business value?

The goal of business is to make money under the condition of the existence of a profitable market, regardless of the ethical status of the traded good: eg the sales of pricier organic food are flourishing; vehicle manufacturers are switching to the production of electric cars; the monetary value of Bitcoin is astronomical, while the driver behind the development of crypto-currencies is the creation of an honest and transparent transaction system. As long as there is demand, there is business potential. Ethical tech is not an exception. Therefore, it is just a matter of time before a smart phone, self-driving car or credit bank carries the label ‘Ethically Approved’. Like my shampoo that says ‘Cruelty Free’.

The viability of a company doesn’t solely depend on its economic value. Nevertheless, I consider it more urgent to ask if and how we should interfere in the market to guarantee human-beneficial technology.

Paul Nemitz

Principal Advisor at DG Justice, European Commission

Soon we will see investment funds focused on ethical tech. These funds will follow the examples of green funds, which only invest in environmentally sustainable projects and companies, but their focus will be technology and business models which are hyper-compliant with GDPR and still very data intensive. In the same way that there is a lot of money to be made with green tech, because it is more resource efficient, thus saving costs, more sympathetic, and more sustainable, ethical tech reduces risks to investors and will be easier to market. There are a number of reasons why Apple’s success on the stock exchange is lasting so long. One is that Apple is actively seeking to manage personal data in a way that reduces risks to the company and customers. With tech accountability journalism and whistle-blower protection gaining ground, and Facebook expecting to be fined billions for privacy breaches, the risk of those making profits in the grey zone of ethics, the shadow of half lies and half-truths, is rising on both sides of the Atlantic.
Sustainable profits require credibility Nicholas Borsotto and trust with customers and stakeholders. This is what investments in looking out for rights of individ- Founder at Archgriffin Consulting uals, democracy and the rule of law from the outset of tech and business model development brings to the Often people assume that pushing for ethics in the mark, in addition to safety and sustainability. Tech requires forgoing 'selfish' business goals, but 86 Outlook Delphi 2|2019

that doesn't have to be the case. To invest in ethical Tech means to hold your products and practices to a clear set of values, often related to transparency, privacy, consent, bias and the overall impact of your product on wider society, may those affected be users, customers or neither. To do so might hit your profits in the short run, but in the long term, these values can become real competitive advantages, for example:
– Customer Loyalty & Branding: Today, millions worry about ethical issues, such as privacy, consent, and participation in unwitting AI experiments. They have good reason to do so, and this creates an opportunity for a company to differentiate itself from the rest, to be a champion of user rights deserving of their trust.
– Hiring: In the competitive world of recruiting top tech talent, a decision often comes down to factors beyond salary. Caring about the social and ethical implications of your products signals to potential employees that your company is interested in more than the bottom line.
– Sustainability: Maybe the most valuable aspect of ethical tech is that it is sustainable. First of all, ethical concerns are a major driver of legislation, so being ethical means being ahead of the curve and preventing disruption to your business. Similarly, it provides a solid foundation on which to build your company without worrying about attracting unwanted media attention, employee walk-outs or social media boycott campaigns.

Vuyiswa M'Cwabeni

SVP, Technology and Innovation at SAP SE

Just imagine a world where we as a society could not profit from the merits of inventions like the steam engine, medical treatments or the internet. The world as we know it today would be impossible.

None of the mentioned technologies are ethical – or unethical – per se. Technology is inherently neutral, and its ethical implications depend on the people developing and using it.

Ethical tech is not a new concept. But it is gaining more traction with the rise of new technologies like artificial intelligence and deep learning. The implications of these technologies force us to consider the ethics of their usage.

Imagine banks that do not just have financial benchmarks in mind when they invest funds, but also make ethical considerations. Imagine purchasing departments that can add an ethical dimension to their supply chain operations. Both scenarios in turn will enable these organisations to deliver ethical services to their customers while at the same time transforming corporate behaviour.

At the same time, ethical tech can also be a differentiating factor in engineering. Among the most demanding challenges in the development of autonomous vehicles are the ethical questions that arise with this technology, eg the 'trolley problem'. The company that solves these issues first will have a significant advantage over its competitors. Indeed, ethical tech in business may lead to a 'winner takes all' scenario in which a company that 'solves' a particular ethical issue within a given market establishes a dominant position.

Report

Understanding ‘AI Made in Germany’: A Report on the German Startup Landscape

Jessica Schmeiss and Nicolas Friederici*

I. Introduction

In 2018, the German Federal Government released a dedicated strategy to support 'Artificial Intelligence (AI) made in Germany.'1 Underlying the strategy is the aim to fuel productivity in the German economy, where 65% of jobs could be automated using AI technology, with the potential to increase productivity and ultimately GDP growth.2 Another important aim is to develop AI-based business models 'made in Germany' that can shape the way AI is used across industries at a global scale.3

Startups play an important role in achieving both aims, as they not only develop AI technologies, products and services but also novel technology-based business models. Estimates suggest that about 60% of GDP growth from AI technology could be driven by new business models, while the remaining 40% could result from automation and transformation of existing business models through AI technologies.4 With policymakers, investors, incumbents and entrepreneurs taking increasing interest in novel AI business models, it becomes more and more important to understand what 'AI made in Germany' actually looks like. Several descriptive studies have evaluated the German AI landscape,5 some with a particular focus on startups.6 However, academic publications have so far focused on the impact of digital technologies (such as AI) in specific industries7 and on business model innovation in general.8

By contrast, little attention has been paid to the specific business models of German AI startups, in particular the types of AI technologies employed and the value propositions developed. This report thus aims to answer the following questions: (1) What types of AI technology do German AI startups develop? (2) What are the value propositions they offer to the market? (3) How do German AI startups design their business models?

DOI: 10.21552/delphi/2019/2/8
* Jessica Schmeiss, …neurship, University of Potsdam, for correspondence: [email protected]; Innovation & Entrepreneurship Group, Alexander von Humboldt Institute for Internet & Society. Dr. Nicolas Friederici, Innovation & Entrepreneurship Group, Alexander von Humboldt Institute for Internet & Society.
5 … (Fraunhofer, 2018) accessed 10 June 2019
6 Axelle Lemaire and others, 'Artificial Intelligence – A Strategy for European Startups: Recommendations for Policymakers' (Roland Berger GmbH, 2018)

To answer these questions, we employ a mixed-method approach. Firstly, we analyse a comprehensive database of German startups to answer questions 1 and 2. Through this analysis, we identify in particular the scope of German AI startups' value propositions within existing sectors. Here we derive three types of value propositions that German AI startups focus on. Secondly, we illustrate each type of value proposition with an example of a German AI startup, showing how they design their business model around AI technology.

With these findings, we aim to provide a basis for decision makers in business and politics to understand the potential impact of AI 'made in Germany' on value creation and productivity. Additionally, we contribute to current research on business models which aims to understand how business models emerge and how the underlying value architecture is designed.9

II. Artificial Intelligence

The term 'Artificial Intelligence' dates back as far as the 1950s, when computer scientists first proclaimed that machines could someday talk, solve problems and perform creative tasks – activities so far exclusively done by humans.10 In this sense, the distinction between weak and strong AI is important – weak AI is limited to performing specific predefined tasks that support human intelligence, while strong AI tries to replicate broad cognitive tasks and mimic human behaviour.11 The present study is concerned with weak AI, since the commercialisation of strong AI may still be many years away.

AI has gained wider significance thanks to the rapid development of internet and communication technologies since the early 2000s. In particular, the availability of large datasets and more powerful computers fuelled the emergence of novel AI technologies, in particular machine learning.12 Machine learning can be considered the key technology behind AI. It entails the structuring and analysis of large amounts of input data using statistical learning methods and algorithms to derive highly accurate predictions of outputs. Neural networks and deep learning as well as access to large training datasets have increased the performance of machine learning algorithms to nearly match human predictions in some areas of cognition.13 Other technologies associated with AI are data analytics, image and video recognition, text and speech processing as well as sensor technologies and robotics. All of these technologies involve non-human entities (machines, computers, robots, etc) that recognise, label, and analyse various input data. While data analytics, image, video, text and speech recognition mostly map previously digitised datasets, sensors enable the recording and processing of environmental data, and robotic systems employ machine learning to act and react in predictable physical environments.14

Its unique technical features make machine learning applicable across virtually any industry and application area. Scholars tend to agree that it is a so-called general purpose technology (GPT)15 – meaning that it is pervasive, continuously improving over time, and stimulating of complementary innovation.16 Machine learning as a GPT can drive change on three different levels – tasks, business processes and business models.17

Accordingly, AI can offer different value propositions. Firstly, AI can augment and automate existing tasks and processes to significantly reduce costs. Secondly, AI can change the way firms innovate18 by enabling new forms of insight from large sets of data that were previously unavailable. And lastly, AI can stimulate entirely new avenues for value creation.19

While the economic potential of machine learning has been widely discussed, its full potential has not been tapped yet. Startups play a central role in unlocking this potential by identifying new realms of technological innovation and application, and driving the implementation of solutions across industries.20

III. Business Model Design

A business model is an abstracted but holistic representation of how a firm does business. It describes the way a firm creates and delivers value to its customers, and how it captures a share of the created value as profits.21 Each firm uses a uniquely configured architecture of activities to create, deliver and capture value, which makes up the essence of its business model.22 Business models have been studied widely in the context of new ventures,23 for instance, as a means to commercialise new technologies24 and disrupt entire industries.25

The question of business model design, or how business models emerge through the deliberate actions of managers and entrepreneurs, is of central importance.26 Recent studies have looked conceptually at how value architectures are designed,27 proposing different tools to be used in the business model design process.28 One of the most widely used tools to describe business model design, by both practitioners and academics, is the Business Model Canvas (BMC).29 The BMC splits the business model into nine categories that detail specific activities to create, deliver and capture value and thus describes the underlying value architecture.

Firms create value by defining a clear value proposition and identifying and implementing the key activities needed to deliver on that promise. Companies source the required resources and build relationships with partners to ensure smooth production of the product or service. They deliver value by defining customer segments and understanding their specific needs. From this understanding, the company builds communication and distribution channels to reach these customers. Companies monetise the value created by optimising their cost structure and defining revenue streams.30

IV. Our Study

1. Methodology

To answer our research questions, we employed a two-fold approach. First, we analysed a sample of 139 German AI startups based on the AppliedAI31 database in January 2019.

9 Nicolai J Foss and Tina Saebi, 'Business Models and Business Model Innovation: Between Wicked and Paradigmatic Problems' (2018) 51 Long Range Planning 9; Lorenzo Massa, Christopher Tucci and Allan Afuah, 'A Critical Assessment of Business Model Research' (2017) 11 Academy of Management Annals 73; Marikka Heikkilä, Harry Bouwman and Jukka Heikkilä, 'From Strategic Goals to Business Model Innovation Paths: An Exploratory Study' (2017) 25 Journal of Small Business and Enterprise Development 107; Antonio Ghezzi and Angelo Cavallo, 'Agile Business Model Innovation in Digital Entrepreneurship: Lean Startup Approaches' (2018) Journal of Business Research, in press
10 Hecker and others (n 5)
11 Inga Döbel and others (n 5); Erik Brynjolfsson, Daniel Rock and Chad Syverson, 'Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics' (National Bureau of Economic Research, 2017) 24001 Working Paper Series accessed 10 June 2019; Ajay Agrawal, Gans Joshua and Goldfarb Avi (eds), The Economics of Artificial Intelligence: An Agenda (University of Chicago Press 2019) accessed 27 May 2019
12 Inga Döbel and others (n 5); Hecker and others (n 5)
13 Brynjolfsson, Rock and Syverson (n 11); Agrawal, Joshua and Avi (n 11)
14 Hecker and others (n 5); Erik Brynjolfsson and Andrew McAfee, 'The Business of Artificial Intelligence' [2017] Harvard Business Review
15 Brynjolfsson, Rock and Syverson (n 11); Agrawal, Joshua and Avi (n 11); Iain Cockburn, Rebecca Henderson and Scott Stern, 'The Impact of Artificial Intelligence on Innovation' (National Bureau of Economic Research, 2018) 24449 Working Paper Series accessed 10 June 2019
16 Timothy F Bresnahan and M Trajtenberg, 'General Purpose Technologies "Engines of Growth"?' (1995) 65 Journal of Econometrics 83
17 Brynjolfsson and McAfee (n 14)
18 Cockburn, Henderson and Stern (n 15); Philippe Aghion, Benjamin Jones and Charles Jones, 'Artificial Intelligence and Economic Growth' (National Bureau of Economic Research, 2017) w23928 Working Paper Series accessed 10 June 2019
19 Brynjolfsson, Rock and Syverson (n 11)
20 Aghion, Jones and Jones (n 18); Cockburn, Henderson and Stern (n 15); Agrawal, Joshua and Avi (n 11)
21 David J Teece, 'Business Models, Business Strategy and Innovation' (2010) 43 Long Range Planning 172
22 Zott and Amit (n 8); Foss and Saebi (n 9)
23 Raphael Amit and Christoph Zott, 'Value Creation in E-Business' (2001) 22 Strategic Management Journal 493; Charles Baden-Fuller and Stefan Haefliger, 'Business Models and Technological Innovation' (2013) 46 Long Range Planning 419; Teece (n 21); Raphael Amit and Xu Han, 'Value Creation through Novel Resource Configurations in a Digitally Enabled World' (2017) 11 Strategic Entrepreneurship Journal 228
24 David J Teece, 'Profiting from Innovation in the Digital Economy: Enabling Technologies, Standards, and Licensing Models in the Wireless World' (2018) 47 Research Policy 1367; Chesbrough and Rosenbloom (n 8); Marc König and others, 'Different Patterns in the Evolution of Digital and Non-Digital Ventures' Business Models' (2018) Technological Forecasting and Social Change, in press
25 Daniel Trabucchi, Luca Talenti and Tommaso Buganza, 'How Do Big Bang Disruptors Look Like? A Business Model Perspective' (2019) 141 Technological Forecasting and Social Change 330; Karimi and Walter (n 7); Teece (n 24)
26 Zott and Amit (n 8)
27 Foss and Saebi (n 9); Massa, Tucci and Afuah (n 9)
28 Ghezzi and Cavallo (n 9); Heikkilä, Bouwman and Heikkilä (n 9); Concetta Metallo and others, 'Understanding Business Model in the Internet of Things Industry' (2018) 136 Technological Forecasting and Social Change 298; Trabucchi, Talenti and Buganza (n 25)
29 Alexander Osterwalder, Yves Pigneur and Tim Clark, Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers (Wiley 2010)
30 Osterwalder and others (n 29)
31 Applied AI, 'AI Startup Landscape 2019' (Applied AI, 2019) accessed 10 June 2019

Figure 1: Types of AI technology developed by German startups. Source: Authors' elaboration

The goal was to cluster the startups in the sample into meaningful categories of AI technologies and value propositions. At the time it was drawn, the sample represented a fairly comprehensive list of all German AI startups. One of the authors and a research assistant independently coded each startup based on information available on its website, Crunchbase and other publicly available sources. To assign a startup to an AI technology, we used the categorisation proposed by Hecker and others:32 image and video recognition, text and speech processing, data analytics, sensors and robotics. To categorise value propositions, we used a literature-based distinction between (1) automation of tasks, (2) innovation processes and (3) novel forms of value creation. To finally analyse the underlying business model design processes, we interviewed three founders of AI startups. During the interviews we collected information on all dimensions of the BMC.

2. Types of AI Technology

The German AI startup landscape consisted of 139 startups as of January 2019. 40% of firms had been founded within the last three years and 84% were in seed or early venture stages. The majority (65%) of all startups were located in the two German startup hubs, Berlin and Munich.

We found that all German AI startups employ some form of machine learning. However, they differ greatly in the complementary technologies used to establish a marketable product. 36% of all German AI startups focus on data capture and analytics in areas such as financial analysis, online marketing or business process optimisation. 26% use image and video recognition in various contexts from medical diagnostics to autonomous driving, while 25% develop AI solutions based on text and speech recognition, mainly to build AI for chatbots or customer service solutions. Only 9% focus on AI solutions connected to sensors, while 4% work on advanced robotic solutions (see Figure 1).

3. Value Propositions

We found that 43% of German AI startups specialise in the development of technology which can be used as a means for product or process innovation by other firms. We call these AI startups 'Technology Developers'.

32 Hecker and others (n 5)

Table 1: Application areas that German startups focus on. Source: Authors' elaboration

Application area | Description

Technology Developers | Help existing firms to apply highly specialised AI technologies in product and process innovation, across industries

Business Transformers | Assist existing firms in augmenting or automating specific tasks and processes, across industries

Solution Providers | Design new areas of value creation for industry-specific problems

Technology Developers offer highly advanced technology to help existing firms develop AI-based products. For example, one startup is developing a technology to create real-time 3D maps of a given physical environment. These maps are essential to help autonomous vehicles or robots navigate in this environment. Hence, firms that are working to develop autonomous vehicles and robots often rely on Technology Developers to supply 3D maps for training their vehicles and robots to navigate in a given environment.

Some also offer advanced technology to allow firms to innovate their business processes. For example, another German startup develops advanced image recognition technology that can be used to capture and analyse documents. Currently, this task involves a lot of manual work, which can be automated and integrated into innovative processes with the AI-based technology. The AI technology offered by Technology Developers is highly advanced and specialised, but it can be applied across a range of industries and sectors.

28% of AI startups develop technology that helps to automate existing tasks or processes – they are 'Business Transformers'. Especially in sectors where a lot of structured data is already available, AI-based solutions can increase productivity significantly. Some of the most common application areas include customer service, marketing and human relations. Business Transformers often use speech and text recognition to automate existing processes, for example through customer service chatbots that can answer commonly asked questions very quickly. Data analytics is commonly used to optimise online marketing campaigns across different platforms, particularly in e-commerce. Since business processes can be similar across industries, many of the Business Transformer startups work across at least a few sectors.

29% of startups create entirely new modes of value creation, generating consumer-facing products or solutions – we refer to these as 'Solution Providers'. While the value propositions of Technology Developers and Business Transformers have to be integrated with existing value creation processes, Solution Providers develop industry-specific solutions that not only change the way business is done currently but create entirely new products. For example, Solution Providers develop health assistants that can give individualised information and advice about tracked health parameters, or they produce robotic solutions for logistics environments like warehouses. Table 1 provides an overview of value propositions.

4. Business Model Design

To illustrate how startups arrive at the three value propositions we identified, we interviewed founders about how they designed their business models. The purposes of the case studies were exploration and illustration; we thus interviewed only one founder per type of value proposition.

a. Technology Developer: Hypatos

Founded 2018 in Berlin, Hypatos develops advanced image recognition technologies and neural networks. In particular, it focuses on the automated capturing and classification of semi-structured documents, such as invoices, payslips or prescriptions, to reduce the manual effort required to capture such documents. The startup offers two products: pre-trained solutions for more standardised applications and a studio software where customers can design and train specialised solutions. Table 2 shows the BMC for Hypatos.

Key Partners: Hypatos develops most of the technology in-house, but often relies on open source technology during development. Important partners are cloud providers, which ensure that there is always sufficient computing power to run the models stably.

Key Activities: Hypatos focuses heavily on the development of pre-trained models for a number of application areas. Therefore, the technical development and optimisation of the models has a very high priority.

Value Proposition: Hypatos automates the capturing of semi-structured documents for internal processes in all industries. This saves costs for the customer and reduces the error rate.

Customer Relationships: Hypatos closely supports customers in order to design and integrate the models into new processes. Once the models are fully implemented, only occasional customer service is required.

Customer Segment: Hypatos focuses on all companies that rely on document management processes. The technology is very narrowly specified, but the application areas are relevant to almost all companies.

Key Resources: The central resources are highly skilled employees, computing power as well as capital for the development of the company.

Channels: New customers are acquired at industry events (eg CFO conferences). Customer service is handled mostly via email or telephone.

Cost Structure: The biggest cost factor for Hypatos is personnel and computing power (both proprietary hardware and cloud providers).

Revenue Streams: The customer pays per request to the implemented API. Hypatos offers several packages that contain a certain number of requests per month.

Table 2: BMC for Hypatos. Source: Authors' elaboration

b. Business Transformer: Adspert

Adspert is a product of the Berlin-based Bidmanagement GmbH, founded in 2010. Adspert uses advanced statistical models from the financial trading industry to optimise keywords and bids for online marketing campaigns on different platforms. The foundational statistical models have remained similar since the company was founded, but large amounts of available data (eg from e-commerce platforms) and increased computing power have boosted their predictive capabilities. Through the automation of the time- and labour-intensive process of optimising keywords and bids, Adspert promises to increase profits for its customers through increased conversions. Table 3 shows the BMC for Adspert.

c. Solution Provider: Zana

Zana was founded 2018 in Karlsruhe. The firm is developing an AI-based virtual assistant that can guide users with health-related questions and allows the user to track health parameters through wearables. A special feature is the Remote Monitoring Platform, which allows family members to check in on a relative's health statistics, for example for elderly or chronically ill persons. The data from the platform can also be aggregated and shared with doctors or providers to ensure the effective monitoring of patients in critical states, for instance after major surgeries that require close aftercare. Table 4 shows the BMC for Zana.

V. Discussion

Our findings hold some valuable insights for both academics and decision makers seeking to understand 'AI made in Germany'. Although Germany's AI startup landscape produces a variety of novel AI technologies and business models, it is apparent that startups largely focus on AI technologies like data capture & analysis, which allow for in-depth analysis of large digital datasets to derive conclusions and potential recommendations for action. So far, startups have focused less on technologies like sensors and robotics that not only work with digital datasets but also engage with environmental data. Given that the startup landscape is still very young, this trend is not surprising: environmental data is hard to acquire and structure, and it presents additional challenges in terms of data privacy and security.

Key Partners: Adspert develops the technology 100% in-house. However, to make the algorithms as efficient as possible, partnerships exist with all major online marketing platforms.

Key Activities: The central activities for Adspert are the further development of their technology as well as the targeted distribution of the technology to new customers and the support of existing customers.

Value Proposition: Adspert increases the profitability of online marketing measures by analysing large amounts of data. This usually leads to significant sales growth for the customer.

Customer Relationships: Adspert has long-standing relationships with most customers. Results and the strategic development of advertising goals are communicated on a regular basis.

Customer Segment: Adspert operates in a very well-defined niche market and primarily addresses e-commerce customers who use online marketing as the main distribution channel. The most important parameter here is the amount of advertising expenditure for online marketing, regardless of product, the size of the company or the industry.

Key Resources: The most important resources are qualified employees who work in development as well as sales and customer service. A large part of the staff is working on the continuous development of the algorithms.

Channels: New customer acquisition for Adspert works via classic B2B channels, such as conferences, trade fairs and industry events. After the successful implementation of the software, customer support is handled mainly via email and telephone.

Cost Structure: The biggest cost factor for Adspert are highly qualified employees, who are essential for the further development of the technology. Because the technology can be quickly implemented for new customers and does not need to be individualised, a large number of customers can be served with consistent resources.

Revenue Streams: Adspert is the only product of Bidmanagement GmbH and is offered as Software as a Service. Adspert receives a fee for the use of the technology, a percentage of the total advertising budget of the customer. In some cases, Adspert also licenses the technology.

Table 3: BMC for Adspert. Source: Authors' elaboration

Key Partners: Zana develops the technology entirely in-house. However, in order to establish a successful company in the healthcare industry, strategic partnerships with clinics, insurance companies, medical associations and investors are of central importance.

Key Activities: The two key activities for Zana in the growth phase are, on the one hand, the continuous development of the technology and, on the other hand, the establishment of sustainable partnerships. For this reason, Zana is present at many fairs and involved in incubator programs.

Value Proposition: Zana provides an intelligent solution to monitor questions about health and health parameters and share them with relevant people.

Customer Relationships: Zana operates in a strictly regulated industry. B2C customer relationships are managed directly through the app and the sale of wearables. B2B relationships are currently tested in the context of a clinical study.

Customer Segment: Zana focuses on B2C business, which is addressed through the app and the Zana Health Watch. In the B2B area, there is a lot of potential in the follow-up care of patients, for example in cardiology.

Key Resources: A central resource is the access to strategic partners that help Zana to establish a foothold in the market. Additionally, access to reliable data sources is key.

Channels: Zana is still in an early growth phase. An important channel is thus the appearance at fairs, start-up competitions and events in order to gain potential partners. Additionally, Zana uses online and social media channels.

Cost Structure Revenue Streams The biggest cost factor for Zana is its staff and the production and distribution of Zana is still in an early growth phase. An important channel is thus the appearance the Health Watch. at fairs, start-up competitions and events in order to gain potential partners. Additionally, Zana uses online and social media channels. Table 4: BMC for Zana Source: Authors' elaboration surprising: environmental data is hard to acquire and When examining value propositions that German structure, and it presents additional challenges in AI startups provide, we identified three types: Tech- terms of data privacy and security. nology Providers, Business Transformers, and Solu- 94 Report Delphi 2|2019

These categories are in line with current research on the role of AI as a GPT and also mirror our findings on technology types. Technology Developers and Business Transformers focus largely on areas where existing data and processes can be leveraged, typically within existing firms in sectors where data is easily accessible and structured. We conclude that both Technology Developers and Business Transformers play an essential role in making Germany a leading AI nation because they extend AI technologies deep into existing value creation and innovation processes.

However, both these value propositions focus on the transformation of existing business through automation, leading to cost reductions and productivity increases, but not to a fundamental change in the product portfolio that is available to end customers.33 To fully tap into the innovative potential of 'AI made in Germany', Solution Providers must play an even bigger role. They do not only change the way business is currently done but create entirely new areas of business. Yet, these AI startups face a different competitive setup: they may struggle to establish a functioning business in established industries where strong incumbents define the rules of the game, where data may not be as readily available and where regulation may prevent easy market access.34

Looking at the underlying business models for these three value propositions, we conclude that value creation and delivery are of particular importance for Technology Developers and Business Transformers, in particular access to highly skilled talent, computing power and structured sales cycles. In turn, for value capture, these startups often rely on proven revenue models.35 In contrast, Solution Providers' business model design relies less on proven models and instead involves much more experimentation. Aside from access to talent, partnerships and access to data become essential factors to guarantee value creation and delivery. Solution Providers capture value by experimenting with different revenue models to finally design the most effective one. The difference in value architectures reflects the challenges that Solution Providers face.

To summarise, we see a lot of potential in the German AI startup landscape, which continues to grow rapidly. At the time of publication of this paper, Applied AI counted 214 startups – a stunning 54% increase in merely six months. However, despite the landscape's dynamism, it must be stated that the full potential of AI technologies has yet to be uncovered. Technology Developers and Business Transformers are an important starting point to transform the existing economic system and realise productivity gains through automation. Still, Solution Providers may be even more important in order to establish an 'AI made in Germany' that generates novel technologies that are commercialised through sustainable business models and are competitive at a global scale.

To this end, startups need an infrastructure enabling them to deliver new business models, even when those have the potential to disrupt established industries. Strategic partnerships between startups and incumbents as well as regulators are key to ensure the development of these new business models. Another key component is to guarantee access to data while ensuring data protection and privacy. With these enablers, 'AI made in Germany' has the potential to fuel productivity and economic growth through the transformation of existing industries and the creation of entirely new industries, driven by novel AI-based business models.

33 Aghion, Jones and Jones (n 18); Cockburn, Henderson and Stern (n 15)
34 Brynjolfsson, Rock and Syverson (n 11); Agrawal, Joshua and Avi (n 11)
35 Metallo and others (n 28); Trabucchi, Talenti and Buganza (n 25)

Why Europe Needs Public Funding for Platform Development
Paul Nemitz and Arndt Kwiatkowski*

DOI: 10.21552/delphi/2019/2/9
* Paul Nemitz is Principal Advisor in the EU Commission and a Member of the German Federal Government's Data Ethics Commission. He is expressing his personal opinion. Arndt Kwiatkowski is the Co-founder and CEO of Bettermarks.com, the digital education service for mathematics in school systems.

I. Introduction

Over the last decade, Google and Facebook from the US and Tencent and Alibaba from China have established leading global platforms in the general ‘Business to Consumer’ markets. They were able to do so in huge home markets without language barriers. They also benefitted from a first-mover advantage, low regulatory hurdles, the scalability of their business model and the almost unlimited availability of capital. In their home markets these platforms seem to be nearly unchallenged by competition. In Europe, Google has a market share for search of more than 90%. After its acquisition of WhatsApp and Instagram, Facebook seems to be equally unchallenged in its strong market position. However, in their home markets these platforms have also not yet been subject to effective application of competition law.

The ‘winner takes all’ mechanism these platforms benefit from is referred to as the network effect. The value of the platform to users is really created by all the other participants (eg users, app developers, vendors) on that network: a user’s decision not to use the leading platform creates disadvantages, eg lack of choice.

The network effect enables the leading platforms to achieve massive scale and eliminate competition. Many of the platform markets tend naturally toward monopolies. The network effect makes it basically impossible to establish a competitor once a dominant platform exists, because switching costs for all users on all sides of the platform are too high and the smaller number of followers can’t populate the competing network sufficiently to make it attractive as a challenger.

That is why time is of the essence for the creation of a lively ecosystem when it comes to setting up platforms for new markets. And there is still space for other, more specialised ‘Business to Business’ (B2B) platforms to be set up in Europe which could potentially be scaled beyond Europe.

In the following, we will look at the potential for B2B platform development in Europe in three sectors. We look at sectors which are not yet covered by dominant platforms in Europe and in which, at the same time, governments have a strong position in terms of their ability to shape conditions for market development. Specifically, we examine the markets for services in public administration, education and health. Here, international innovation leaders from Europe can emerge, if the EU and Member States lower the hurdles for innovative application development and provide targeted financial support for this purpose, in line with the strong public interest in shaping these markets.

The conditions for establishing European platforms successfully in these markets are good:
– The benefits of digitisation and the platform model in these three markets are enormous and have hardly been realised so far.
– Dominant platforms do not exist in these three areas, neither in Europe nor internationally.
– Access to users and data as well as the use of platforms in these fields can be efficiently promoted through public authorities, given that they are the sole buyers for administration services and in most EU Member States have a strong hold on education and health services, both as regulators and as buyers.
– Private investment in platforms in these three markets in Europe has been scarce, due to the strong position of governments in these markets and also due to the fragmentation of requirements and languages, which is of greatest relevance in the education and administration markets.
– Investment funds needed for the support of innovative initiatives to create platforms in Europe are probably in the 2-digit millions per sector. This can

and should be mobilised by the EU and its Member States, as risk capital is lacking in these segments.
– The development of platforms for high-quality services in the fields of health, education and public administration is both in the public interest and a basis for European and possibly global scaling, and thus profitability, which in turn enables further improvement of the service at home and also secures growth and employment in Europe.

With the continuing development of ‘Industry 4.0’ (Internet of Things) more platforms will emerge in other sectors (eg agriculture, segments of mechanical engineering, transport). However, these will be sector-specific and, in comparison to the above-mentioned markets, the potential for government influence is lower. These are also markets for which more capital is available.

Platform companies are not only relevant from the point of view of ‘citizen benefit’ but also in terms of the potential economic reward. Experience has shown that platform-based services are often developed in the US and then also offered in Europe. However, it would be desirable to promote the development of platforms in Europe, as this is the only way to implement our legal and value concepts (eg data protection, transparency) from the outset and ‘by design’ and to profit economically from international market leaders for services relating to health, administration and education based in Europe. This would also allow the products to be tailored to the specific domestic requirements of the EU and its Member States.

There are a number of European projects and programmes which have already prepared the ground for successful initiatives on platforms in Europe, both in the form of sectoral initiatives (for example the ‘Digital Learning & ICT Education’ work stream of the European Commission) as well as horizontal work relating to issues like user control over personal data (Decodeproject.eu).

II. Critical Factors

The critical factors for platforms in these markets include the lowering of innovation hurdles, a rapid spread of data use and sufficient financing in the early expansion phase. Accordingly, cooperation with institutional users and the availability of data and open source code are essential for lowering innovation hurdles.

1. Health

The healthcare sector is a good illustration of this. For instance, the availability of anonymised data (eg relevant partial information from images from CTG and X-ray scans) is vital in order to train algorithms. Without these, the hurdle to innovative application development is enormous. A further hurdle is the availability of case numbers and appointment calendars. With that in mind, partnerships between developers and actors in the healthcare sector should be encouraged through targeted support and guidance as to the proper implementation of the GDPR.

The availability of APIs to conduct diagnostic functions (as developed by the app ‘Ada Health’) could be an excellent basis to develop further applications for the prioritisation of physician appointments, for example. If the results of the diagnostic tool were linked and compared with actual findings, the prognosis quality of the app-based diagnostics tool could be continuously improved. It could also be used to develop applications for the quality assurance of medical work.

More advanced applications for the evaluation of images and measurement results, such as smart radiology, could be developed if the data from hospitals were accessible, possibly as part of development partnerships, through which full compliance with the GDPR can be ensured.

2. Public Administration

Pertinent examples from public administration include the availability of mobility data from public transport, rail, air transport and private car use. With free access to this data in anonymised form, suitable mobility offers and traffic control systems could be developed that allow citizens and municipalities to incorporate the respective individual or democratically determined preferences in the region, which is not possible with global applications such as Google Maps or Waze.

It would also be desirable that more administrative services are provided to citizens online. The digitisation of more than 500 administrative processes between state and citizens is already an ongoing project of the German government, with the federal states and municipalities sharing the tasks in an organised division of labour. This would be facilitated by a digital identity such as has been developed in Estonia. Building on this, ‘Identity Guaranteed Administrative Platforms’ or ‘Trust Platforms’ can be created, which are also needed internationally in order to offer administrative services (car re-registration, extension of identity card, registration, etc). Health data can be stored for doctors to retrieve (living will, organ donation requests). The identity and administrative platforms can also serve as a secure platform for the processing of transactions of third parties, where the identification of the identity is important (account opening, car purchase, transmission of documents, etc). The legal basis for this already exists in EU law.

3. Education

In the field of education, the free availability of tools for creating adaptive learning materials is a prerequisite for the development of innovative learning media. Since the development of an ‘adaptive learning platform’ is technically complex and costs tens of millions of euros, the hurdle would be too high for most content providers if they had to develop their own proprietary platform. On the other hand, access to a common platform with easy-to-use authoring tools that is provided by the state enables educational providers to transfer their didactic knowledge into effective digital learning media without high investment cost. In addition, the competition for innovative applications would be further accelerated if the usage data were made available to the public in an anonymous format, or at least to the participants of the platform, so that experience from different learning processes and applications could be used repeatedly to improve the quality of service and ease of use.

In addition to financing the platform (provision as open source code) and access to usage datasets, policymakers could create further framework conditions to promote the ecosystem around the platform. For example, suitable platform offerings could be available as part of larger curricula, eg a financial accounting course could be recognised as equivalent to a face-to-face MBA course, thus increasing the permeability, flexibility and capacity of the education system.

A digital platform for lifelong learning would also be a logical next step based on the development of public databases of learning materials for teachers and schools.

The public school system in Europe, which is democratically controlled, needs its own publicly controlled digital learning platform for digital educational content. Such a learning platform can also become an international standard. Conversely, neither the quality nor the democratic control of learning content in the EU and its Member States can be maintained in the long term if we leave the development and provision of such a learning platform to the global market. By now we should know that only those who master the infrastructure of digital services can also control the content and its quality.

One challenge for the education system is the ‘mismatch’ of unplaced jobseekers and unfilled vacancies for skilled workers. There is an ‘education gap’ both from the individual perspective of the job seeker and from the perspective of the economy, which complains of a shortage of skilled workers. In the technical professions, the mismatch problem becomes an obstacle to investment, as SMEs in particular do not invest in new technologies if they are not sure that they can quickly hire suitably qualified employees who can use the new technologies, or that they can offer or purchase appropriate training themselves. In school education, the biggest problem is the lack of teachers and the development of students’ performance in the MINT subjects. This is where there is most experience with digital learning systems that can remedy the situation.

Europe should not wait for the next generation of platform companies for educational content from the USA or China (comparable to the or Apple App Store) to establish themselves as the market standard. It rather makes sense to enable such a platform for the European education systems through state or EU startup financing and anchoring in or close to the state educational institutions. The advantages are:
– Efficient and up-to-date learning media are developed in a wide variety in a timely manner
– Data protection aspects, which play an important role with regard to competence profiles of learners, can be integrated.

– Direct access to educational content for all citizens and education systems
– Internationalisation opportunities for platform and educational content from Europe
– Development parallel to curricula and teaching and study methods common here
– Integration in teacher training and further education

III. Conclusion

In all markets that are changed by digitalisation, digital projects are successful if they are continuously improved and their impact is measured from the point of use. In the three markets mentioned, there is great potential for policy to influence the creation of applications in these areas.

The measurable benefit and rapid dissemination in the home market is then in turn the prerequisite for successful internationalisation, and this in turn forms the basis for further improvement of the service. The positive correlation between scaling and quality development is visible across all platforms, starting with Google. Europe and its Member States can benefit directly from these effects if the state and industry work together to ensure that platforms are created in Europe for the aforementioned purposes. This requires a targeted, coordinated lead market and promotion policy on the part of the EU and the Member States’ governments. But there is also space for Member State initiatives, in particular in the fragmented education sector.

How Ethical Debates Can Enrich Data Science and Artificial Intelligence
Anna Laesser*

The Startup Digest section introduces startups and grassroots initiatives from around the world that push the boundaries of emerging technologies. Most conversations around emerging technologies are stuck in silos and are quite hyped, making it hard to understand their actual impact on businesses, society and governance. The Startup Digests aim to demystify what is happening on the ground by establishing a discourse via case studies and interviews with startups and grassroots initiatives. Each edition will take a critical look at how these movements apply emerging technologies to achieve a specific purpose – facilitating a discourse that makes the (new) thinking, the approach and the potential impact more tangible.

Introduction

Emerging technology will profoundly change how we live tomorrow. Data-based technological solutions have increasingly become part of our cultural identity, transforming the way we communicate, learn, do banking – even date. Although it is revolutionising how we live and work, we have also encountered algorithms that turned racist, misogynistic and homophobic, thereby negatively influencing society on an ethical and political level. Building impactful solutions is not just a matter of data science but one that requires greater care.

This Startup Digest takes a closer look at how two female-led startups in Bristol and Berlin are working with data science and artificial intelligence (AI) to build solutions that maximise a positive impact and minimise harm to society. Both case studies will reveal how they deal with potential benefits and risks and how they take ethical considerations into account.

Interview 1: Dataconomy – Data Science and the Art of Making Data Useful

Name of your startup/initiative: Dataconomy Media & Data Natives
Location: Berlin
Your name, position: Elena Poughia, Managing Director

Question: How do you create a global community of data-driven pioneers and what role does Dataconomy play? Why is it important and what is the vision you are aiming for?

‘Create’ is not the right word here – you don’t really ‘create’ a global community, you facilitate it, you nurture it, you belong, you are part of it. Data Natives (the community arm of Dataconomy) spot a ‘movement’, a network effect if you will, a moment of collective unconsciousness as coined by Carl Jung. There is a need for people in Europe to learn more about data science and data-driven technology and to work on projects together. Until now, 62% of our community is interested in coming to our events in order to find people to co-create and 80% are interested in learning new things. Our role is to facilitate that environment, to provide the right tools and to curate the content and the people so we can ensure that all attendees are satisfied. We aspire to be the number one community for the data-driven generation; an environment for creators and doers to work on things together. This is very important in order to foster innovation, creation and to encourage the doers to keep on doing.

DOI: 10.21552/delphi/2019/2/10
* Anna Laesser, Co-founder at Impact Hub Berlin. All views are my own. For correspondence:

Question: Can you outline how you/Dataconomy defines data science and why it has become so important? Building on this, what is the difference between digital and data experts?

Data science is the art of making data useful, and it is a difficult and tedious task – currently on the internet every 60 seconds 100 videos are uploaded on YouTube, 40K pictures are posted on Instagram and 350K tweets are shared; data science makes it actionable, provides a story and puts this in a framework. Data has always been important – indeed, information is an important evolutionary advancement of civilisation. Ancient Greeks’ stories, myths and learnings were passed onto rocks and papyrus, the earliest known paper document in Europe was found in the 11th century, and by the 15th century, documentation and knowledge passing through books brought about a renaissance in the sciences and philosophy. Computers and information technology are the ‘game-changers’ of our times. They are transforming the way we communicate, interact, solve problems, work and live our lives.

From a philosophical standpoint, if you think about it, we are all made out of data – what computers have is the information as the algorithmic DNA of their computational power; our DNA contains information we’re not even aware of and cannot even decode verbally or consciously. It provides us with the unspoken wisdom and the ‘gut feeling’ of situations to engage or not engage with.

Digital and data experts – that’s a very broad question and assumption. Not all data experts are the same, just as not all digital experts do the same thing. When it comes to developers and data scientists, simply put, you need data as a developer but it is not a crucial part of the job, and you need some coding skills as a data scientist, but unless you are an engineer you don’t need coding as a crucial part of the job.

Question: ‘Data is neutral’ – what is your response? According to you, what are the three most crucial aspects that should be taken into consideration when designing new, ethical data-based solutions?

‘Data is neutral’ is as broad and generic a term as ‘tech is good’ or ‘facts matter’. It is all relative and it all depends on what kind of data, how it has been collected, whether there were any parameters set and how it is being interpreted. Data is not neutral as long as people are not neutral, and we all have our own set of biases that have been culturally and socially constructed. How we interpret the world depends on who we are – gender, sex, age, nationality, race, class and all other cultural and social factors.

When designing new systems one should take into consideration the data being used and include an explanation to justify any given biases. Many advocate for ‘explainable’ AIs – systems that trace back how a system decided on a topic. Based on what we would like to achieve, it would be good to have diverse data samples and ensure that we have a diverse team of people assessing the results.

Question: Data is part of our new cultural identity, transforming the way we communicate, learn, do banking, even date. Data-based solutions have massive potential to change our lives. However, there is a great responsibility to design ethical solutions that maximise positive impact and minimise harm to society. Does Dataconomy address ethical debates? If yes, how do you empower people to design positive solutions for tomorrow?

We address the topic by inviting experts to discuss it in private and public forums – we create the environment for people to meet and talk about this, and by connecting them they will hopefully meet and start building solutions together or influence the decision makers in taking action.

Question: Can the ethical debate as well as data privacy, data protection and data diversity be framed more constructively? If yes, please share examples.

I am not sure there is a ‘one size fits all’ for this type of debate, and people have different ways of approaching and understanding how data is changing their lives. I know a lot of people, for example, who are not interested in their own data privacy and protection; they prefer to donate their data for research and have more personalised products. In Europe, data privacy and regulation have been important topics leading up to the GDPR and now the formation of Articles 11 and 13, which will change drastically how information is shared on the web.

Question: How does Dataconomy help bridge the gap between data science and business? What are the two main challenges each side currently faces? How can both sides be brought closer together (eg legislators,

corporations, startups, data managers, data subjects)?

We bridge the gap by conducting educational and training programs for all professionals, to explain data science to non-techies and business managers, as well as providing further training and workshops to data scientists. We also connect data scientists with career opportunities, and we assist businesses who do not currently have a data scientist in house, or do not understand the power of their data, to get consultancy and connect with the relevant stakeholders.

On the one hand, data scientists who are recruited into a business with a small or previously non-existent data science department face the challenge of structuring, cleaning and optimising the existing data – this is a tedious and long procedure. Another challenge, one that should be addressed immediately and is faced by both parties (data scientists and business professionals), is the lack of communication. Business managers and professionals – you need to include data scientists in your vision and have them buy in. Data scientists, as well as techies, are not your regular ‘working class’ workforce. They should understand and embrace the vision of your company wholeheartedly. Also, the more involved you have them, the more active and engaged they will become. This is certainly a way that both parties can come closer together.

A challenge uniquely faced by the business arm of a company is that business managers don’t speak the ‘data science’ lingo, in which case I think a data dictionary should be established and all stakeholders should become data literate.

Question: Does the underrepresentation of women in data science influence how our society will look tomorrow? Do you have examples that reveal the ‘gender-tech-gap’, especially in data science and in new data-based solutions businesses are building? How do you encourage women to work in technology?

The underrepresentation of any group and marginalised community establishes a system and an infrastructure where they are not properly represented and their input and experience is not properly factored in, so yes, it is important for any future systems that we are building to make them as inclusive and sustainable as possible. Women can be encouraged to work in tech when and if the environments will take their experience into consideration; we need to include more female heroes in our lives, more female role models and narratives, more inclusive spaces, embracing and accepting the female and feminine traits of being and existing – currently all our systems are systems of patriarchy, and we favour and encourage masculine and male characteristics and traits in the systems and infrastructures we build. We need to change our perception and mindset in order to encourage girls from a young age to code and become mathematicians, scientists and programmers. We need to encourage women by fostering an environment that celebrates and invites them.

Question: What are the most interesting data-fused technologies/applications that currently inspire you? Why?

– Hippo.AI: A foundation started by Bart de Witte with the aim to democratise healthcare through supporting open source projects. It’s something in between an NGO and an Open Source Foundation to advance humanity.
– Plan A: A data-driven platform in the fight against climate change.
– KIProtect: The programme which enables data-driven teams to easily work with and share sensitive data with guaranteed security and compliance. Unlike most encryption methods, KIProtect detects and secures your data on the fly while retaining its usefulness and readability.
– The Data Union: The Data Union protects the interests of anyone who produces data on the internet. The Data Union wants to give people more control over their data and shape strong terms and conditions for organisations who use that data.

Interview 2: Gapsquare – Revolutionise Fair Pay Through AI

Name of your startup/initiative: Gapsquare
Location: Bristol, UK
Your name, position: Dr Zara Nanu, CEO, and Ant Kennedy, CTO

Question: What does Gapsquare do and what role does AI play?

Gapsquare develops cloud-based SaaS to help businesses avoid unfair pay structures. Our flagship product, Gapsquare FairPay, provides gender pay analysis, ethnicity pay, equal pay, and equal pay for equal value. Our vision has always been to use AI when we look at all the aggregated data to understand trends, explain gaps, and help companies close gaps that should not be there. The easier part is identifying (where they are present) any equal pay issues – where employees are being paid differently, without any underlying reasons, for the same roles.

More complexities arise when we start looking at gender pay gaps and identify the reasons behind these gaps. Some issues that we are already beginning to identify are slow career progression for women, clustering of women in occupations that pay less per hour, and lack of flexible working. This is where it gets more interesting, as we bring together an interdisciplinary approach including social sciences, behavioural economics and data science. It’s a very exciting space. The World Economic Forum predicts it will take over 200 years to close the gender pay gap globally. We want to use data and tech to make this happen in 20 years.

This is where AI comes into play. By training computers to analyse large, complex data, it can identify problematic indicators in seconds and provide a foundation for scenario planning and fast identification of solutions. If done right, AI can help accelerate and make unprecedented progress.

Question: What are the benefits/potential of machines engineered to take decisions/process data? Feel free to include examples from your experience.

Initially we thought the machines were going to be the holy grail for solving pay inequalities at work. But it very quickly transpired that we are dealing with biased data, and that the algorithms themselves are also biased. If we apply these algorithms to existing data, all we will end up doing is accelerating inequality.

This is why we see cases such as the Amazon data-science-based recruitment platform. Amazon had a group of data scientists in Edinburgh develop a machine-learning-based tool to recruit talent into the company. The team created 500 computer models, and the tool was taught to recognise 50,000 terms that were found in their databases of past candidates and employees. As Amazon is a tech company, heavily reliant on a male workforce, the models began to favour male candidates and give lower scores to female candidates. The company had to scrap the program when they realised the bias.

What this shows is that companies are increasingly viewing machine learning as an obvious solution to business problems, and equality problems in particular. But when not done right, this can significantly backfire.

Question: What are the challenges for ensuring that AI operates safely, fairly and without bias in the (near) future – what potential ethical considerations arise? How do Gapsquare or other organisations in your network deal with it?

Ethics is a very important part of our work. We operate with data that is embedded with bias: data for hundreds of thousands of employees, with a large percentage of men in higher paid roles and women in lower paid roles, and with more men in roles and departments that pay more per hour. These problems don’t only affect Gapsquare but all businesses that are using AI to achieve their goals. Although this may be a scary prospect, we at Gapsquare are tackling this through constant review of our AI systems and underlying datasets to alert us where there might potentially be bias, allowing us to take corrective action before exposing the technologies to our end users.

Question: What are your machine learning algorithms based upon? What type of decision-making process do they replicate? How do you integrate the data into the decision-making process of Gapsquare?

Gapsquare FairPay allows us to augment existing human processes through continuous monitoring of salary data. This enables businesses to understand how they are tackling any issues they have in their pay structure. Using a company’s internal data and anonymised and aggregated data from all of our customers, we can make recommendations on where you might have an unexplained pay gap, which business units and job roles contribute to your pay gap, and how you can approach closing these gaps.

Where our tool really becomes exciting is when we begin to add more data from organisations, such as performance reviews, exit interviews, etc. Gapsquare FairPay can then begin to make more intelligent decisions around individuals’ career progression

with the tool working in collaboration with existing HR staff. We are not looking to replace the human component of pay and reward decision-making. We view this as a process of co-working with AI in order to free up time for existing staff to tackle higher-value problems, whilst working to ensure the right people are in the right roles and are remunerated fairly for their work.

Question: Based on your experience, how do you promote gender equality through technology?

Tech is still very much a man's world, as only 25% of the workforce in this occupation is female. Data from the World Economic Forum indicates we are far from closing the gender gap in tech (as well as in other Science, Technology, Engineering and Maths occupations). This is an important subject because tech is increasingly re-shaping the world we live in. Predominantly male teams are more likely to replicate gender bias and stereotypes when they develop AI and machine learning algorithms.

Although we are seeing progress and more young women are considering tech jobs, the rates of leaving the profession are growing. This is related to the fact that it is difficult for women to feel like they fit in in all-male environments. It will take a collective effort between the tech sector, educational systems and governments to create sustainable change. Companies can see through their data at Gapsquare where the key challenges are. Sometimes it is an issue of recruitment, other times an issue of career progression. Their data merely mirrors the global trends. A 2016 report from McKinsey shows that women made up 37% of entry-level roles in tech, and only 25% advanced to senior management roles.

Companies are starting to use tech to create fairer recruitment processes, for example by using tools that identify gender-biased language in job ads before they are advertised, or fair online recruitment platforms that remove people's names from the application process. But this is only half of the job in using tech to bring more women into the occupation. Educational establishments need to be creative in what technologies they use to attract female talent. Governments in turn have a role to play in developing and funding initiatives that bring women into tech.

Question: What traits will increasingly become important in the development of AI? List the top three according to you.

Trust. In a recent lightning Twitter poll, I asked if readers trust Artificial Intelligence. I know, it was a 'sweeping generalisation' kind of question, in a world where AI and trust mean different things to different people, so any statistical accuracy is out the window. However, it was enough to show that an overwhelming majority (nearly 70%) don't have trust in AI.

When we set up Gapsquare just over three years ago, we set off on a mission to use machine learning and Artificial Intelligence to create more diversity and inclusion in the workplace. Our aim is to use AI to create fairness in work. While we are developing AI to help create more inclusion and diversity, we find increasingly that people do not necessarily trust it. At the end of the day, to the end user, AI is this black box in which you define what success looks like, mix it with some data, and hope that it delivers insights that will help you reach that goal.

The only way to overcome this big trust issue is transparency. And in order for it to be successful, transparency will have to cut across multiple levels – from collection and use of data to defining success together and looking at outcomes with an understanding that they can be flawed. At the end of the day, AI is here to stay, and if we do not build trust with its users, it will be developed in silos and applied in AI bubbles. Partial attempts towards transparency will not suffice. Trust will come with full disclosure, and tech giants will have to lead or follow on this. Trust will be built when data holders learn to embed transparency in what data they use, what kind of algorithms they use, and what outcomes are being generated.

Question: What do we need to do to teach morality to machines? How can we design more ethically aligned machines that maximise fairness and reduce biases? From your point of view, what is needed to promote this? An interesting question connected to this is: 'Do we want to train AI to be like us, or to be more moral than us? We've seen AI turn racist, misogynistic and homophobic when we let it learn from us. And if we do decide that we want to teach an AI to be a mirror to our moral code, whose moral code should we use?'
Morality means different things to different people and cultures. And by culture we mean it in the broadest sense, as in geography, but also in the more local sense, as in company culture. These cultures can play a big role in the way data scientists understand their remit in creating AI. These are all open questions to which we are contributing every day through our thinking and our decision making.

This was also apparent in the Moral Machine Experiment, the platform that generates moral dilemmas and collects information on the decisions that people make between two destructive outcomes. There are no right or wrong answers, but the data could be used to inform the decision making built into self-driving cars. What the research team found out is that while there are generic/global preferences when it comes to prioritising certain pedestrians over others when faced with a faulty self-driving car, the preferences revealed by the Moral Machine are highly correlated to cultural and economic variations between countries.

This is why decisions around the morality of machines need careful consideration and thinking. Collectively we have to agree on an ethical framework and a set of principles created for data practitioners and researchers. The framework will have to be based on trust and transparency if we want people to buy into it.

Conclusion

Without doubt, there is a lot of potential in data science, machine learning and artificial intelligence. Both interviews reveal the complexity of creating ethical solutions with a long-term positive impact on society. However, they also show how the ethical debate can add richness to the solutions that are being developed. Three key insights based on both interviews are:
1) A machine's 'morality' mirrors its creators'. Accordingly, bias in data collection, data aggregation and decision-making processes makes it challenging to design fair and ethical solutions. Both startups emphasise the importance of critically evaluating every step of the process to reduce the risk of a solution turning bad.
2) Data science and artificial intelligence require a new form of awareness and collaboration among scientists, researchers and businesses, as well as other stakeholders such as legislators, politicians and the public. Taking these different perspectives into account is essential to create solutions that will have a long-term positive impact on society.
3) More women have to be encouraged to get involved in this growing sector. Their ideas will create greater diversity and therewith empower the next generation to build less biased and more innovative solutions – ideally shaping a better future.
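The 'unexplained pay gap' that the Gapsquare interview describes is usually estimated by comparing pay only after controlling for legitimate factors such as job role. The following is a minimal illustrative sketch of that idea; the record format, the role-only adjustment and the gender labels are simplifying assumptions made for this example, not Gapsquare's actual method:

```python
from collections import defaultdict

def pay_gaps(records):
    """Estimate raw and role-adjusted gender pay gaps.

    records: iterable of (role, gender, salary) tuples, gender in {"M", "F"}.
    Returns (raw_gap, adjusted_gap) as fractions of mean male pay:
    - raw_gap compares overall mean salary by gender;
    - adjusted_gap averages the within-role gaps, weighted by headcount,
      a crude stand-in for the 'unexplained' gap that remains after
      controlling for job role.
    """
    by_gender = defaultdict(list)
    by_role = defaultdict(lambda: defaultdict(list))
    for role, gender, salary in records:
        by_gender[gender].append(salary)
        by_role[role][gender].append(salary)

    mean = lambda xs: sum(xs) / len(xs)
    raw = (mean(by_gender["M"]) - mean(by_gender["F"])) / mean(by_gender["M"])

    weighted_sum, headcount = 0.0, 0
    for genders in by_role.values():
        # a role is only comparable when both groups are present in it
        if genders["M"] and genders["F"]:
            gap = (mean(genders["M"]) - mean(genders["F"])) / mean(genders["M"])
            n = len(genders["M"]) + len(genders["F"])
            weighted_sum += n * gap
            headcount += n
    adjusted = weighted_sum / headcount if headcount else None
    return raw, adjusted
```

On a workforce where women cluster in lower-paid roles but are paid equally within each role, the raw gap is large while the adjusted gap collapses to zero; whatever gap survives adjustment is the 'unexplained' part worth investigating.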
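The interview also mentions tools that flag gender-biased language in job ads before they are advertised. At its simplest, such a tool is a lexicon lookup; the word lists below are short illustrative samples loosely inspired by published research on gender-coded wording, not any vendor's actual lexicon:

```python
# Illustrative mini-lexicons only; real tools use much larger,
# research-derived word lists.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def flag_gender_coded(ad_text):
    """Return (masculine, feminine) coded words found in a job-ad text."""
    words = {w.strip(".,;:!?()").lower() for w in ad_text.split()}
    return sorted(words & MASCULINE_CODED), sorted(words & FEMININE_CODED)
```

An ad asking for 'a competitive, fearless ninja' would thus be flagged for masculine-coded wording, so it can be rephrased before publication.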

Towards a Code of Ethics for Artificial Intelligence
By Paula Boddington
Springer International Publishing AG; 124 pages

Laurens Naudts*

As indicated by the inclusion of the word 'towards' in the title of this book, author Paula Boddington does not intend to present the reader with the definitive code of ethics for artificial intelligence. Rather, she aims to provide a roadmap on how and under what conditions a code of ethics for AI could take form. In this short and practical book, the author offers a concise guide detailing the value of, and need for, ethics and its codification within the AI environment, whilst also highlighting the particular pitfalls and challenges that might impede their formulation. Boddington's work is pragmatic: it serves towards facilitating the creation of an end result, ie a code of ethics. Hence, it should provide methods to make a difficult procedural enterprise more workable in practice. It should moreover provide the necessary tools for non-philosophers to make more value-sensitive assessments when dealing with AI. Considering the depth of both ethics and AI, a clear and comprehensive structure is needed in order to enable the reader to understand and reflect upon the breadth of issues explored. In this regard, the author rightfully notes1 that the process of drafting a code not only requires an understanding of what ethical questions are,2 but also of what the purpose of codes and regulations might be.3 Furthermore, in order for the latter to be workable and efficient, an understanding should also be created concerning who will be involved in the production, critique and application of the code in question. In addition, one should remain mindful of the contextual factors that shape the construction of both ethics and AI.4 Through eight chapters, the author, with success, sets out to clarify the entire life cycle of code development and the associated questions that might arise. The chapters follow a natural progression, and even though a wide range of topics and considerations are covered, the reader is able to maintain oversight and comprehension. Particularly relevant is the comment, 'what this means for AI', interspersed throughout the text, in which the sometimes abstract ethical principles or challenges are applied in an effort to illustrate their specific relevance for AI. Practical examples or anecdotes are highlighted, which increases the understandability and readability of the subject matter.

One could question Boddington's ambition to tackle AI as a whole. As the author notes, though omnipresent in current-day debates, the term artificial intelligence is complex and not easily defined.5 Moreover, AI covers a broad range of applications, which each raise their own set of particular challenges. Given this disparity, how then do we establish the building blocks towards a code of ethics for artificial intelligence? Boddington acknowledges that within the debate, a focus on distinctive characteristics of AI might be needed.6 In addition, she notes that, where principles often tend to favour AI in general, 'the value issues we need to consider are much more local and contextualised.'7 If the AI debate could benefit from contextualisation, then perhaps instructions concerning the creation of a code of ethics should also follow a contextualised approach? Yet, even when considered in broad terms, AI does raise particular, overarching challenges. For example, Boddington challenges the focus on 'intelligence' as the hallmark of humanity.8 Admittedly, some challenges related to AI are shared with other rapidly evolving technologies, both past, present and future, such as the impact of 'hype' on ethical reasoning.9 Furthermore, ethical confrontations in AI might not be all that new.10 As the author notes: 'The extension, enhancement and replacement of human agency and reasoning in AI serve as the loci of many of the ethical issues that arise in its use, sometimes presenting us with vivid versions of old questions.'11 Of course, Boddington also provides sufficient attention to the challenges AI raises for professional codes of ethics in particular, including the difficulties of developing a code of ethics amidst fast technological changes.12 Even though the questions raised might often seem general, they nonetheless are pertinent.

The final chapter can be considered both a synthesis of previous chapters and a training exercise for the reader. Looking to the future, Boddington elaborates upon, and critiques, the Asilomar AI principles, a tentative approach towards governing AI research drafted by the Future of Life Institute.13 Here, Boddington's commentary on a set of existing principles allows the reader to follow an applied critique of the lessons learned throughout the book, whereas the reader, at this point in time, might have gained sufficient knowledge to insert their own vision on the principles.14

Towards a Code of Ethics arose from a project focused on professional codes of ethics for artificial intelligence researchers.15 However, the book never fully explicates who the artificial intelligence researcher might be. This makes it difficult to discern in what field, context, or setting a professional code of ethics for AI research would actually apply. Nevertheless, Boddington argues that 'given the turbulent landscape in which AI is developing, professional codes of ethics will need the backing and support of other formalised or less formal, and institutionalised ways of addressing the ethical questions confronting us.'16 Hence, she aims for a wide field of application, which could serve both research and industry. Likewise, the broad, and perhaps general, nature of the book expands the target audience. Towards a Code of Ethics not only provides insight to those tasked with drafting an actual code of ethics, it could equally be of interest for researchers dealing with AI, such as legal scholars or social scientists. Boddington has delivered a balanced product, stressing the importance of foundational underpinnings, whilst at the same time not losing track of their practical implementation. In a time where institutes and corporations are likely thinking about implementing a code of ethics for their AI research activities, her work presents a roadmap, as well as a compendium of elements to consider in AI research.

DOI: 10.21552/delphi/2019/2/11
* Laurens Naudts, Doctoral Researcher at the KU Leuven Centre for IT and IP Law, Belgium. Laurens' research interests focus on the interrelationship between artificial intelligence, ethics, justice, fairness and the law. His Ph.D. research reconsiders the concepts of equality and data protection within the context of machine learning and algorithmically guided decision-making. For correspondence: .
1 Paula Boddington, Towards a Code of Ethics for Artificial Intelligence (Springer 2017) Preface.
2 ibid chs 1-3.
3 ibid ch 4.
4 ibid chs 5-7.
5 ibid 1.
6 See inter alia ibid 91-92, 111.
7 ibid 111.
8 ibid 86.
9 ibid 31-35.
10 ibid 86.
11 ibid 29.
12 ibid 67-79.
13 It should be noted that Boddington's work was funded by the Future of Life Institute. This could explain the preference given to these principles over others that currently exist.
14 Paula Boddington, Towards a Code of Ethics for Artificial Intelligence (Springer 2017) 104-110.
15 ibid xiii.
16 ibid 40.

The Bioethics of Enhancement: Transhumanism, Disability, and Biopolitics
By Melinda C Hall
Lexington Books; 194 pages

Caio Weber Abramo*

I. The Good

Every once in a while, but still more often than it should, preposterous ideas appear and gain popularity. A great many were birthed or raised in the 20th century, falling within what Alan Sokal aptly coined 'fashionable nonsense',1 and many still insist on surviving through the 21st century, trying to associate themselves with scientific and technological developments that few yet understand. Some of them are more or less harmless (such as 'quantum spirituality'), while others carry an intrinsic danger to life and limb which, alas, has made an indelible presence in history itself.

Works such as The Bioethics of Enhancement serve the much-needed purpose of exposing such doctrines for what they really are. Melinda Hall, of Stetson University, expertly uses philosophical discourse analysis to show how two such ideas, the human enhancement discourse and certain flavours of transhumanism, are inextricably linked not only to dubious philosophical positions, but also to social and political stances of the most abhorrent type. Furthermore, she convincingly describes how these discourses may reinforce the already dreary social and political environment of persons with disabilities. Far from being interested in only abstract concerns, the book is rife with concrete examples of the dangers faced by those considered as disabled, ranging from psychological pressure to difficulties in access to healthcare to downright murder. Hall's presentation leaves little doubt of the relationship between these real events and society's conceptions of disabilities, conceptions which are repeated and reinforced by the discourses examined.

She is careful to point out that her object of study is not transhumanism as a whole nor the most philosophically sophisticated variants of it, but those which do have a developed theoretical literature and have found wide diffusion in society, be it in popular media or adoption by professional associations. Hall uses the frameworks of philosophy (particularly those developed by French political philosopher Michel Foucault, and the more recent fields of feminist theory and disability theory) to give (mostly) grounded, substantiated critiques of the mode of thinking and the philosophical implications of these branches of transhumanism and its unbridled advocacy for enhancement and 'improvement' through eugenicist practices.

Of particular interest to her is how they relate to Foucault's bio-political question: Who will live? She shows that transhumanism focuses on control over the body, and carries value judgments on what is or isn't a worthy life; namely, the life of disabled, defective bodies is deemed less valuable than abled, 'normal' ones.

This is flawed from the start. A major pillar of the transhumanist conception of the world is that individuals are defined mostly by their genetic makeup: our genes are the most important factor in establishing who we are and can even predict who we will be.2 Proponents of human enhancement suggest we correct our genes and physiology by whatever new-fangled technique is around in order to solve individual and social problems of all kinds. Some go as far as to advocate for the total abandonment of any body at all. Human bodies are bad, and 'defective' bodies are worse.

Some transhumanists are very careful to point out (and Hall does) that they do not wish by this to attack people with disabilities, but merely advocate the 'prevention' of further defective births, and by individual choice alone, not by legal imposition.3 The individual is here considered as an entity in a vacuum, immune to social pressure, error or prejudice, an island of independent thought and purely intellectually-based action; another serious misgiving the book exposes. It shows how transhumanist and human enhancement discourse ignores or dismisses that some human problems aren't bodily problems at all, but social and political. It implies a simplistic (although popular) conception of humanity in which only the individual dimension exists. Yet humans are 'social' animals in the same sense we are made of atoms.

Hall addresses some examples of the misuse of scientific tools and concepts in transhumanist discourse, correctly criticising the philosophical positions associated with adopting them uncritically. Another that could be mentioned is the ample use of pseudo-scientific lingo in order to gain a veneer of respectability, a typical trait of fashionable nonsenses; besides the blatant technology name-dropping, see for instance the employment of the language and symbols of mathematical systems theory for purely descriptive entities which are not (and perhaps can't ever be) measured or quantified.4 The use of this jargon is a mere rhetorical device, used to give readers the impression that these ideas have some basis in actual data and fact-based scientific research. No such data exists. This sits in opposition to the goal of scientific language, which is to provide clarity and precision.

II. The Bad

Hall's execution of this book is not without fault. Some faults are rather minor (for instance, Chapter 4 is described as a case study, but the case chosen, negative genetic selection, doesn't receive a clear or systematic treatment), but two in particular stand out as more serious.

First, there is no clear explanation of what the author understands by 'social' and 'political construction', even though these concepts play central roles in the book's thesis. There is a general problem of only explaining concepts much after their initial use, but mostly other fundamental ideas are either clearly (albeit briefly) explained, or reference is given to the literature the author is adopting. But 'social' and 'political construction' were left out, and the text doesn't provide enough elements to construe them implicitly. Her approach is to 'provide examples of social construction of disability',5 but the examples fail to bring clarity. Take, for instance, the assertion that illness can be 'politically constructed' by the decision to wage a war, in which soldiers are wounded. Here, 'illness' and 'disability' are used in a concrete sense, and social action is causally responsible for their real occurrence. But what is elsewhere discussed is how conditions are considered to be so, that is, how the concepts of 'illness' and 'disability' are socially constructed; the semantic or abstract sense. The relationship between these two, that is, the undeniable real occurrence and the way we socially understand it, is never addressed.

Not that an explanation would have saved the idea itself. Constructivism has been the target of serious bona fide philosophical criticism for decades, and it is quite difficult to construe it in consistent terms; some authors in fact say it is impossible.6 It isn't necessary to engage with the literature of an opposing field (though it would be salutary), but it is imperative to engage with some literature. The idea of 'social construction' is appealing and deserves study, but it can also be facile and abused, stretched beyond its philosophical ability to clarify to serve in defending inanities.7 In order for the central tenets of Hall's critique to survive, her definitions of social and political construction must be shown to be minimally coherent and not to fall into solipsism.

Second, some of the claims bearing on scientific issues are presented without evidence and can even be shown to be downright wrong. Hall's claim of a 'lack of evidence regarding the risk' of drinking alcohol during pregnancy is demonstrably false: such evidence does exist.8 Whether the claims are valid or credible is certainly worth discussing (even by philosophers), but it must be discussed (especially by philosophers). This kind of unsubstantiated claim borders on the irresponsible. It also shows a lack of engagement with actual scientific methods and literature, not a small shortcoming in a work attempting to address topics such as genetics and medical research. This is the case even when the white-washing statements happen to be true; for instance, she is correct in pointing out that genetic determinism (mentioned above in section I) is not endorsed by the near totality of geneticists today, but doesn't make reference to any in particular, which wouldn't be difficult. In fact, it is not hard either to scientifically demonstrate that strict genetic determinism is factually false, as evolutionary biologist Richard Lewontin does.9 In short, the otherwise well-constructed critique in this book becomes seriously threatened by the lack of attention to relevant literature and the (avoidable) introduction of flawed arguments and unjustified, unsupported statements and sweeping generalisations.

III. The Ugly

Popular proponents of transhumanism invariably present it in grandiose terms as a sort of panacea, selling promises not otherwise found outside works of science fiction or fantasy; Bostrom himself chose to depict the fight against death as a myth involving a dragon, and others of his non-transhumanist claims about reality have been likened to mere fiction.10 The very serious and worrying direct consequences of their ideas seem to be treated the same way, with easy and unrealistic solutions (or dismissals). Naturally they fail to adequately answer these legitimate concerns, going as far as attempting to unapologetically 'reclaim' eugenics itself as a putatively positive endeavour, refusing to address the fact that it and the 'ideal body' are at the very heart of the greatest crimes in human history.

Perhaps the greatest flaw of The Bioethics of Enhancement is simply not going far enough in its criticism. Engaging with transhumanism as simply another perfectly acceptable intellectual exercise gives legitimacy to several of its core propositions which are anything but acceptable under any standards, particularly eugenics. Authors quoted by Hall unabashedly try to revive the terminology and ideas of eugenics, and this goes in itself unchallenged throughout the book. Not only has eugenics been utterly discredited as a scientific proposition, partially due to its reliance on genetic determinism, but the whole global legal order has been re-shaped since 1945 to repel its re-introduction in practice. The adoption of eugenicist legislation, such as the so-called 'Nuremberg laws',11 has led to very concrete implications which need hardly be mentioned.12 Current proponents of re-branded eugenics are very aware of this danger,13 and claim that it should remain a personal choice of individuals, its adoption as official policy to be avoided. But no mechanism to prevent the transformation into law of the widespread social norms these authors advocate is given, and The Bioethics of Enhancement does not call them out on this glaring fault.

Granted, the focus of the book is not the overall deficiencies of transhumanism or human enhancement, but how aspects of these discourses affect people with disabilities. So perhaps one shouldn't ask of it what it didn't propose itself to do. But it does remain a work of philosophy, and we can hold it to the standards of that field, one of them being the search for the most fundamental questions, an exercise in exploring the limits and ultimate consequences or implications of ideas. Hall concludes that, despite the wish to improve quality of life by 'correcting' bodies using technological enhancement, transhumanism cannot be an ally of persons with disabilities. What seems clear enough is that transhumanist discourse is more concerned with constructing a post-human, which isn't recognisably human in any meaningful sense; in this kind of imposture, all human bodies are flawed.14 It stands to reason it isn't an ally of persons with disabilities, since it doesn't seem to be an ally of any persons at all.

DOI: 10.21552/delphi/2019/2/12
* Bachelor of Philosophy (Epistemology and Logic), Universidade de Brasília, and Master of Laws (Conflict and Security Law), Universiteit Utrecht. All views my own. For correspondence: .
1 Alan Sokal and Jean Bricmont, Intellectual Impostures (Profile Books 1999).
2 This view is called 'genetic determinism'. I will come back to this point in section II below.
3 These two points will be dealt with in section III below.
4 Melinda C Hall, The Bioethics of Enhancement: Transhumanism, Disability, and Biopolitics (Lexington Books 2017) 25.
5 ibid 48.
6 See for instance Paul Boghossian, Fear of Knowledge: Against Relativism and Constructivism (Oxford University Press 2006).
7 Examples abound, but perhaps one of the most (in)famous is Bruno Latour's assertion that the bacillus that causes tuberculosis literally didn't in fact exist until it was observed under the microscope in 1882.
8 J Williams et al, 'Fetal Alcohol Spectrum Disorders' (2015) 136 Pediatrics.
9 Richard Lewontin, The Triple Helix: Gene, Organism, and Environment (Harvard University Press 2002).
10 See interview with physicist George Ellis in Sabine Hossenfelder, Lost in Math: How Beauty Leads Physics Astray (Basic Books 2018) 215.
11 Law for the Prevention of Genetically Diseased Offspring (Gesetz zur Verhütung erbkranken Nachwuchses), enacted 14 July 1933; Law for the Protection of German Blood and German Honour (Gesetz zum Schutze des deutschen Blutes und der deutschen Ehre), enacted 15 September 1935.
12 Final Solution to the Jewish Question (Endlösung der Judenfrage), described in the Besprechungsprotokoll of the Wannsee Conference held 20 January 1942, document available at: (accessed 31 January 2019).
13 See the caveat on Bostrom in ch 1. Yet, they insist that it would work this time.
14 ibid 19-20.

CALL FOR PAPERS
www.lexxion.eu/delphi · 4 issues/year

Delphi is a pioneering interdisciplinary review of emerging technologies as seen through the perspectives of experts from the fields of science and technology, ethics, economics, business and law. Inspired by the idea to encourage inclusive, thoughtful – and sometimes unsettling – debates on the many opportunities and challenges created by technological progress, the international quarterly review brings together authors with different professional backgrounds as well as opposing views. Contributions to Delphi come in compact formats and accessible language to guarantee a lively dialogue involving both thinkers and doers.

What's in it?
Delphi reviews recent developments in artificial intelligence and robotics, digital and financial technologies, as well as in bio, health and human enhancement technologies. We invite authors from Delphi's focus disciplines of science and technology, ethics, economics, business and law to contribute articles, essays and country/thematic reports, and to critically review relevant books, art and media. The length of the contributions should range between 1000 and 3500 words. All submissions should be written in British English. Contributions will be subject to a quality check by experts before acceptance for publication.

Editor-in-Chief: Ciano Aydin, University of Twente
Associate Editors: Francesca Bosco, World Economic Forum; Anita Jwa, Stanford Law School; Florian Krausbeck, ambrite AG; Anna Laesser, Impact Hub Berlin; Matthias Lamping, Max Planck Institute for Innovation and Competition; Vince Madai, Charité Berlin; Ida Rust, University of Twente; Yueh-Hsuan Weng, Tohoku University; Cees Zweistra, Delft University of Technology

Write for Delphi – What's Your Benefit?
● Break Out of Your Silo: Present your ideas to a new audience of experts with other professional backgrounds. Contextualise your work within the broader discourse.
● Collaborate: Take advantage of our network of experts from other countries and disciplines to collaborate on cross-cutting analyses of an emerging tech topic.
● Reach Out Globally: The review is read internationally by thought leaders in business, research and policy, giving a boost to your contribution's visibility.

Editorial Board: Steffen Augsberg, German Ethics Council, Justus-Liebig-Universität Gießen; Woodrow Barfield, Professor Emeritus, Chapel Hill; Aubrey de Grey, SENS Research Foundation; William Echikson, Centre for European Policy Studies; Vuyiswa M'Cwabeni, SAP SE; Paul Nemitz, European Commission; Nishant Shah, ArtEZ University of the Arts; Stefan Lorenz Sorgner, John Cabot University; Steven Umbrello, Institute for Ethics and Emerging Technologies; Rob van den Hoven van Genderen, VU University Amsterdam; Anna Zeiter, eBay International AG

Deadlines for Submission
● Delphi 3/2019: 25 August 2019
● Delphi 4/2019: 15 October 2019

Contact
The editorial team looks forward to discussing your proposals and receiving your submissions. For further enquiries, please contact the Executive Editors: Clara Hausin ([email protected]) and Jakob McKernan ([email protected]).

Lexxion Verlagsgesellschaft mbH · Güntzelstraße 63 · 10717 Berlin · Phone +49-30-81 45 06-0 Fax: +49-30-81 45 06-22 · Mail: [email protected] · www.lexxion.eu 17th International Conference 19 29 - 30 November 2019 Brno, Czech Republic

Organised by the Faculty of Law in cooperation with the Faculty of Social Studies, Masaryk University, and the European Academy of Law and ICT. Submissions are accepted on a variety of cyber- and ICT-related topics concerning law, social sciences and […].

Papers are solicited for the following streams: Cybersecurity, Cyber-Warfare, Cybercrime, Digital Evidence, eCommerce, Digital Single Market, Government 2.0 & eJustice & ODR, Legal Informatics, Intellectual Property On-Line, International Internet Law, Privacy and Personal Data, Art & Cyberspace, New Media and Politics, Internet and Society, Psychology of Cyberspace, Liability vs. Compliance (in Engineering: Aircraft, Blockchains, Car Engines and more), Usable Security and Privacy, Automatic Detection of Online Risks and Opportunities, Medical Data, Data Trusts for AI Development, Public Sector Information and Open Data, Manipulative Techniques On-Line.

Work-in-progress papers are also welcome.

cyberspace.muni.cz

Peer-reviewed, fast-track, open access. A journal on internet regulation for academics, civil society advocates, entrepreneurs, the media and policymakers alike.

policyreview.info @PolicyR

SPECIAL ISSUE
Political micro-targeting
Vol. 6, Issue 4
Guest-edited by Balázs Bodó, Natali Helberger and Claes H. de Vreese, University of Amsterdam

SPECIAL ISSUE
Power, jurisdiction and surveillance
Upcoming
Guest-edited by Angela Daly, Chinese University of Hong Kong, and Monique Mann, Queensland University of Technology

ROLLING SUBMISSIONS
Open for submissions:
– Peer-reviewed articles
– Opinion and news pieces

CALL FOR PAPERS
What do digital inclusion and data literacy mean today?
Special issue guest-edited by Elinor Carmi and Simeon Yates, Liverpool University · Deadline for submissions: 25 Aug 2019

Published by the Alexander von Humboldt Institute for Internet and Society in cooperation with the UK Copyright and Creative Economy Centre (University of Glasgow), the Institut des sciences de la communication (CNRS / Sorbonne Université) and the Internet Interdisciplinary Institute (Open University of Catalonia).

DIGITAL SOCIETY BLOG
Making sense of our connected world. A blog that offers room for debates on the challenges of the digitalisation process from technological, judicial, sociological and economic perspectives.

hiig.de/blog @hiig_berlin

ISSUE IN FOCUS
Digital innovation and entrepreneurship
SMEs are investing in digital technologies to respond to rapid changes in all sectors. What do they pursue in terms of digital innovation? Who is successful and what are the challenges?

ISSUE IN FOCUS
Work in the digital age
How will AI change our working life? The lines between work and leisure time already blur. Who will still be working? What does the future hold for crowdworkers and trade unions?

The Digital Society Blog is a project of the Alexander von Humboldt Institute for Internet and Society (HIIG).

What Is the Business Value of Ethical Tech?

Steffen Augsberg, Justus-Liebig-Universität Gießen, German Ethics Council

The digital transformation has captured and changed our informational assumptions and relationships. Specific problems are associated with this process, such as a new potential for exclusionary and discriminatory practices. At the same time, it also implies improved possibilities for individual orientation and information. In particular, with regard to customer relationships, increased curiosity and greater knowledge on the part of customers are to be expected. Here, the perceptible trend towards social responsibility has a strengthening effect. People do not yet behave in all areas of life in the way they consider normatively appropriate, but the pressure to justify one’s actions is clearly rising. This can be seen with particular clarity in the example of climate change: many still drive an SUV, fly to their holiday destinations and eat meat. But they do so with an increasingly bad conscience, and they ask what possibilities there are to ‘balance the books’ – how much BBQ meat am I allowed to eat if I voted ‘green’ in the European elections? Moreover, this moral sensitisation can be observed beyond such special circumstances: people are questioning, for instance, what consequences arise from a business model, whether a product bears intolerable risks for workers and/or the environment, or whether less data-intensive procedures are possible. Thus, ethical tech can be understood first and foremost as an anticipatory reaction to the social changes described [...].

DELPHI – INTERDISCIPLINARY REVIEW OF EMERGING TECHNOLOGIES


ISSN 2626-3734