Corporate social irresponsibility: humans vs artificial intelligence

Kristijan Krkač

Kristijan Krkač is based at the Department of Marketing, Zagreb School of Economics and Management, Zagreb, Croatia.

Abstract

Purpose – The supposedly radical development of artificial intelligence (AI) has raised questions regarding its moral responsibility. In the sphere of business, these are translated into questions about AI and business ethics (BE) and corporate social responsibility (CSR). The purpose of this study is to conceptually reformulate these questions from the point of view of two possible aspect-changes, namely, starting from corporate social irresponsibility (CSI), and starting not from AI's incapability for responsibility but from its ability to imitate human CSR without performing typical human CSI.

Design/methodology/approach – The author draws upon the literature and his previous works on the relationship between AI and human CSI. This comparison aims to remodel the understanding of human CSI and AI's inability to be CSI. The conceptual remodelling is offered by taking a negative view of the relation. If AI can be made not to perform human-like CSI, then AI is at least less CSI than humans. For this task, it is necessary to remodel human and AI CSR; however, AI does not have to be CSR. It is sufficient that it can be less CSI than humans to be more CSR.

Findings – The suggested remodelling of the basic concepts in question leads to the conclusion that it is not impossible for AI to act or operate less CSI than humans, simply by not committing typical human CSIs. Strictly speaking, AI is not CSR because it cannot be responsible as humans can. Yet if it can perform actions with a significantly lesser amount of CSI in comparison to humans, it is certainly less CSI.

Research limitations/implications – This paper is only a conceptual remodelling and the suggestion of a research hypothesis. As such, it presupposes a particular morality, ethics and particular concepts of CSI and AI.

Practical implications – How this remodelling could be carried out in practice is an issue for future research.

Originality/value – The author delivers a paper on the comparison between human and AI CSI, which is not much discussed in the literature.

Keywords Business ethics, Artificial intelligence, Corporate social irresponsibility, Human irresponsibility, Robot irresponsibility

Paper type Conceptual paper

"The work of intellect is post mortem." (Dewey, 1922, p. 276)

"The practice has to speak for itself." (Wittgenstein, 1969, p. 135)

Received 1 September 2018; Revised 30 October 2018; Accepted 1 November 2018

Acknowledgements: The author would like to thank his colleague professor Borna Jalšenjak for suggestions concerning some changes applied to their previous ideas on the morality of robots and AI, Shahla Seifi for her encouragement to finish this paper, the basic ideas of which have been developing for a decade now, and Zita Csöke Mešinović for proofreading the final version of the paper.

1. Introduction

Any serious writing on the conceptual modelling of corporate social irresponsibility (CSI) by humans and an artificial intelligence (AI) requires, besides exhaustive scientific research, a precise remodelling of the human concepts of artificiality and responsibility. For a long time, humans have been becoming more and more inhuman, mostly augmented by various simple and nowadays more advanced improvements, and still they think of themselves as the only creatures on Earth capable of moral responsibility. Meanwhile, humans invented robots and AI:
There is no a priori argument as to why robots cannot be responsible as we can, or even more responsible, or at least not less irresponsible (for doubts concerning the general idea of the misguided analogy between humans and AI in function and value aspects, see Putnam, 1992). They can certainly be causally responsible in the same way a lightning bolt can be responsible for initiating a fire. They can even be morally responsible without being causally responsible if they prevent an upshot of some event, say, a car accident, by changing the sequence of traffic lights due to an increase in the probability of an accident occurring at some crossroads, etc. (Fischer, 2010, pp. 309-316). And yet, humans are deeply reluctant to call this a responsibility of AI in the same way that they themselves are responsible.

This phenomenon is in part grounded in the fact that humans have different concepts of morality, and consequently of ethics. Among many, hereafter the pragmatic concept of morality and pragmatic ethics will be presupposed (Margolis, 1996; LaFollette, 2000). It may have many internal problems, and some external ones if compared to traditional ethics such as virtue ethics, deontological ethics and utilitarianism, but concerning the present issue, it seems to have an advantage:

(b) The pragmatic ethics emphasis on habits and habitual actions is essentially quite comparable to the actions of robots and AI (if one compares a child washing its hands before a meal and a robot preparing and serving a simple meal, the habitual feature of the actions of both should be obvious).

Pragmatic ethics in its traditional and contemporary formulations from both sides of the Atlantic Ocean comes down to an emphasis on human action from the point of view of the future, a better world that is in the making through the undertaking of present actions. The emphasis on the procedural nature of actions, mainly habits, against the background of a form of life or a culture, is the feature of European pragmatist ethics (in, e.g. Wittgenstein, Karl-Otto Apel, Habermas and others), while the emphasis on creative and imaginative improvements of the future world is the feature of the American pragmatists (James, Dewey, Putnam, Rorty and others; for a summary, see Margolis, 1996; Krkač, 2006). The European school (in fact, pragmatism is historically European if its initiation is Bain's idea that a belief is a rule or a habit of action) rests on the nature of action itself, especially rule-guided actions. Therefore, it is often said, in many similar forms, that the moral correctness of an action comes down to following an established habit in a given form of life or culture (or society). Morally incorrect actions are violations, and they become possible under simple and common conditions, namely, a motive for an immoral action, the know-how to perform the action and an opportunity to perform it. Minimising opportunities for immoral actions is discussed as the primary goal of ethics (Krkač, 2006):

(c) By determining the conditions of immoral actions as violations of the established habits of a culture, the basic idea of irresponsibility is initiated. It surely needs to be perspicuously represented and applied to its social and corporate aspects (Krkač, 2011) and to AI (Krkač and Jalšenjak, 2018a, 2018b).
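Purely as an illustrative aside, and not as part of the paper's own apparatus, the conjunctive structure of the three conditions just named (motive, know-how, opportunity) and the strategy of minimising opportunity can be rendered as a minimal sketch in a few lines of code; all identifiers below are hypothetical:

# A minimal sketch, assuming nothing beyond the three conditions named above.
# All names are hypothetical illustrations, not taken from the paper.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Agent:
    has_motive: bool       # a motive for an immoral action
    has_know_how: bool     # knows how to perform the action
    has_opportunity: bool  # an occasion to perform it

def violation_possible(agent: Agent) -> bool:
    # A violation of an established habit is possible only when
    # all three conditions hold at once.
    return agent.has_motive and agent.has_know_how and agent.has_opportunity

def minimise_opportunity(agent: Agent) -> Agent:
    # The text's "primary goal of ethics": opportunity is the one
    # condition an institution can directly design away.
    return replace(agent, has_opportunity=False)

employee = Agent(has_motive=True, has_know_how=True, has_opportunity=True)
print(violation_possible(employee))                        # True
print(violation_possible(minimise_opportunity(employee)))  # False

On this toy reading, reducing CSI does not require making an agent responsible at all; removing the opportunity condition suffices, which is close to the aspect-change towards AI that the paper exploits.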
Nonetheless, this is at most a technical issue of the conceptual remodelling of the negation of the universalised phenomenon and notion of responsibility:

(d) Even a lying and cheating AI in the corporate world, which is a highly unlikely case, when compared to and contrasted with systematic human CSI, seems to be a better choice if one wants to research the overall reduction of CSI in the corporate world (for an analysis of lying, see Krkač et al., 2012). All other, more likely, cases in which AI would not be lying, cheating and the like are obviously far better choices.

However, there is also some probability that making this choice on the side of AI must count on one typical human rationalisation of human CSI, namely "blame it on the machines", or in the present case "blame it on the AI". Such obviously false accusations of AI are closely connected with the mostly irrational fear of machines (i.e. mechanophobia and technophobia), especially in cases in which AI could be given some degree of autonomy to deal with the moral aspects of its own actions without human interference in the decision-making process and in the performance of an action.

Hereafter, the following issues will be researched and discussed in order to conceptually remodel the concepts of human and artificial irresponsibility and CSI. Humans are our only candidates for responsibility. Robots and present AI cannot be responsible for their actions the way that humans can, and consequently cannot be irresponsible either. However, let us imagine a developed AI that could be responsible and even irresponsible, and let us compare such an AI with present humans. The first issue here is to describe human CSI. The second is to describe the possible CSI of a future AI. The following text is to some extent drawn from two previous materials (not published in journals or books): the first was a note on the CSI concept and the second a lecture on the relativity of the dichotomy between human and AI moralities (Krkač, 2011, pp. 78-89; Krkač and Jalšenjak, 2018a, pp. 187-204):

1. (e1) Following from (a), (b), (c) and (d), the basic hypothesis of the paper, based on the remodelled concepts, is that an AI that is less CSI than humans will be developed. This hypothesis can be formulated as a research question concerning AI and humans in business operations in further research.

2. (e2) Following from (e1), in the following parts (1 and 2), two elements of the remodelling of human and AI irresponsibility will be supplied. That is to say the following:

(e2.1) There is CSI performed by humans (as a part of general human irresponsibility) and it is not easily reduced (in terms of the opportunity to perform CSI); this indication will be analysed and remodelled in part 1.

(e2.2) There is no CSI performed by AI (as a part of general AI irresponsibility) and, even if performed, it would be easily reduced by humans and/or by AI itself (in terms of causal without moral, or moral without causal, responsibility in view of the consequences of an action); this indication will be analysed and remodelled in part 2.