Decision-Making with Artificial Intelligence in the Social Context Responsibility, Accountability and Public Perception with Examples from the Banking Industry
watchit Expertenforum, 2nd publication

Decision-Making with Artificial Intelligence in the Social Context
Responsibility, Accountability and Public Perception with Examples from the Banking Industry

by Udo Milkau and Jürgen Bott
Editors: Jens Saffenreuther, Dirk Saller, Wolf Wössner
Mosbach, August 2019

Impressum:
DHBW Mosbach
Lohrtalweg 10
74821 Mosbach
www.mosbach.dhbw.de/watchit
www.digital-banking-studieren.de

Udo Milkau received his PhD from Goethe University, Frankfurt, and worked as a research scientist at major European research centres, including CERN, CEA de Saclay and GSI. He has been a part-time lecturer at Goethe University Frankfurt and the Frankfurt School of Finance and Management. He is Chief Digital Officer, Transaction Banking at DZ BANK, Frankfurt, and is chairman of the Digitalisation Working Group and a member of the Payments Services Working Group of the European Association of Co-operative Banks (EACB) in Brussels.

Jürgen Bott is professor of finance management at the University of Applied Sciences in Kaiserslautern. As visiting professor and guest lecturer, he has associations with several other universities and business schools, e.g. IESE in Barcelona. He studied business administration at the Julius Echter University of Würzburg, and statistics and operations research at Cornell University (New York).
He received his doctorate from Goethe University Frankfurt. Before starting his academic career, he worked with J. P. Morgan, Deutsche Bundesbank and McKinsey & Company. He was involved in projects with the International Monetary Fund and the European Commission (EC) and is an academic adviser to the EC, helping to prepare legislative acts and policy initiatives on banking issues.

Abstract

One can sense a perception in public and political dialogue that artificial intelligence makes decisions of its own will, with an impact on people and society. This attribution of human characteristics to technical tools appears to be a legacy of fictional ideas such as Isaac Asimov's "Three Laws of Robotics" and requires a more detailed analysis. In particular, the blending of the terms ethics, fairness and justice with artificial intelligence illustrates a perception of technology with human features, which may originate from the field's original claim of an "ability to perform similar to human beings". The current discussion about artificial intelligence falls short, however, not only because such technological agents lack free will and choice of their own, but because the debate lacks an overall model of the decision-making process, which is embedded in a social context. With such a process model, the question of an "ethics of artificial intelligence" can be discussed for the selected issues of autonomy, fairness and explainability. No tool, algorithm or robot can solve the challenges of decision-making in ethical dilemmas or redress the history of social discrimination. Nevertheless, the public discussion about technical agents executing pre-defined instructions tends towards anthropomorphism, which calls for better insight into the chain of (i) responsibility for the decision, (ii) accountability for execution, and (iii) perception in society.
Introduction

The vibrant discussion about "ethics of artificial intelligence" and "algorithmic fairness" in the media and in academic writing might veil that the key issue is the process of decision-making itself and the impact of decisions on people and society. The current debate places special emphasis on one single element of decision-making: the "instructed agent", such as a piece of software, a robot, or even a human being with a manual.
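The notion of an "instructed agent" can be made concrete with a minimal sketch. The rule, thresholds and function name below are hypothetical illustrations (they do not come from the paper or any actual bank's policy); the point is only that such an agent executes instructions fixed in advance and exercises no will or choice of its own:

```python
# Minimal sketch of an "instructed agent": a pre-defined credit-decision rule.
# All names and thresholds here are hypothetical illustrations, not an actual
# bank's policy.

def approve_loan(income: float, debt: float, score: int) -> bool:
    """Pre-defined instructions: approve only if the debt ratio and the
    credit score pass fixed, human-chosen thresholds."""
    debt_ratio = debt / income if income > 0 else float("inf")
    return debt_ratio < 0.4 and score >= 620

# Identical inputs always yield the identical decision; responsibility for
# the rule rests with whoever defined and deployed it, not with the code.
print(approve_loan(income=50_000, debt=10_000, score=700))  # True
print(approve_loan(income=50_000, debt=30_000, score=700))  # False
```

The same reading applies whether the instructions are hand-written, as here, or "trained" into a statistical model: in both cases the decision logic is fixed before the agent ever acts.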
However, it dismisses the embeddedness of any decision-making in a wider social context. Anthropomorphisation of machines may fit the Zeitgeist: Northeastern University's roundtable "Justice and Fairness in Data Use and Machine Learning", for example, discussed the relationship between Artificial Intelligence (AI) and "[...] fundamental questions about bias, fairness, and even justice still need to be answered if we are to solve this problem." (Northeastern, 2019). However, novel technology is never "neutral", but always Janus-faced with opportunities and risks1. A careful risk assessment of immediate and long-term consequences is required, based on the best available knowledge2. There is, nonetheless, a current vogue about the "bias, fairness, and even justice" of AI and algorithms3 (for an introduction see: Olhede and Wolfe, 2018). This exceeds the quantitative risk assessment of technologies (Aven, 2012), as it blends the bottom-up approach of statistical estimation of risks with a top-down dispute about ethical and social values. According to Max Weber (1922), the question about the tangible impact of any new technology in a social context relates to "Verantwortungsethik" (ethic of responsibility), whereas the discussion about generally agreed beliefs relates to "Gesinnungsethik" (ethic of conviction).

[Figure 1: Decision-making in a social context, with development of the proposed process model, which is tested with three different perspectives. The conclusion is that decision-making requires a separation between (i) the responsibility, (ii) the accountability for the execution, and (iii) the perception in the society.]

In this paper, we understand ethics as the right way of, and responsibility4 for, decision-making. With that background, we discuss the process of decision-making with "instructed agents" (a "written" software code, a "trained" piece of AI, or an "assigned" employee with a work manual5) as a basis for the gen- This