Corporate social irresponsibility: humans vs artificial intelligence

Kristijan Krkac

Kristijan Krkac is based at the Department of Marketing, Zagreb School of Economics and Management, Zagreb, Croatia.

Abstract
Purpose – The supposedly radical development of artificial intelligence (AI) has raised questions regarding its moral responsibility. In the sphere of business, these questions are translated into questions about AI and business ethics (BE) and corporate social responsibility (CSR). The purpose of this study is to conceptually reformulate these questions from the point of view of two possible aspect-changes, namely, starting from corporate social irresponsibility (CSI) and starting not from AI's incapability for responsibility but from its ability to imitate human CSR without performing typical human CSI.
Design/methodology/approach – The author draws upon the literature and his previous works on the relationship between AI and human CSI. This comparison aims to remodel the understanding of human CSI and AI's inability to be CSI. The conceptual remodelling is offered by taking a negative view of the relation. If AI can be made not to perform human-like CSI, then AI is at least less CSI than humans. For this task, it is necessary to remodel human and AI CSR, but AI does not have to be CSR. It is sufficient that it can be less CSI than humans to be more CSR.
Findings – The suggested remodelling of the basic concepts in question leads to the conclusion that it is not impossible for AI to act or operate less CSI than humans simply by not making typical human CSIs. Strictly speaking, AI is not CSR because it cannot be responsible as humans can. If it can perform actions with a significantly lesser amount of CSI in comparison to humans, it is certainly less CSI.
Research limitations/implications – This paper is only a conceptual remodelling and a suggestion of a research hypothesis. As such, it implies a particular morality, ethics and particular concepts of CSI and AI.
Practical implications – How this remodelling could be done in practice is an issue for future research.
Originality/value – The author delivers a paper on the comparison between human and AI CSI, which is not much discussed in the literature.
Keywords Business ethics, Artificial intelligence, Corporate social irresponsibility, Human irresponsibility, AI irresponsibility
Paper type Conceptual paper

“The work of intellect is post mortem.” (Dewey, 1922, p. 276)

“The practice has to speak for itself.” (Wittgenstein, 1969, p. 135)

Received 1 September 2018
Revised 30 October 2018
Accepted 1 November 2018

The author would like to thank his colleague, professor Borna Jalšenjak, for suggestions concerning some changes that I applied to our previous ideas on the morality of robots and AI, Shahla Seifi for encouragement to finish this paper, whose basic ideas have been developing for a decade now, and Zita Csöke Mešinović for proofreading the final version of the paper.

1. Introduction

Any serious writing on the conceptual modelling of corporate social irresponsibility (CSI) by humans and an artificial intelligence (AI), besides exhaustive scientific research, requires a precise remodelling of the human concepts of artificiality and responsibility. For a long time, humans have been becoming more and more inhuman, mostly augmented by various simple and nowadays more advanced improvements, and still they think of themselves as the only creatures on Earth capable of moral responsibility. Meanwhile, humans invented robots and AI:

᭿ (a) There is no a priori argument as to why robots cannot be responsible as we can, or even more responsible, or at least not less irresponsible (for doubts concerning the general idea of the misguided analogy between humans and AI in function and value aspects, see Putnam, 1992). They can certainly be causally responsible in the same way as a lightning bolt can be responsible for initiating a fire. They can even be morally responsible without being causally responsible if they prevent an upshot of some event, say, a car accident, by changing the sequence of traffic lights due to an increase in the probability of an accident occurring at some crossroads, etc. (Fischer, 2010, pp. 309-316). And yet, humans are deeply reluctant to call this a responsibility of AI in the same way as they are responsible. This phenomenon is in part grounded in the fact that humans have different concepts of morality, and consequently of ethics. Among many, the pragmatic concept of morality and pragmatic ethics will hereafter be presupposed (Margolis, 1996; LaFollette, 2000). It may have many internal problems, and some external ones if compared to traditional ethics such as virtue ethics, deontological ethics, and utilitarianism, but concerning the present issue, it seems to have an advantage:

᭿ (b) The pragmatic ethics emphasis on habits and habitual actions is essentially quite comparable to the actions of robots and AI (if one compares a child washing its hands before a meal and a robot preparing and serving a simple meal, the habitual feature of the actions of both should be obvious).
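To make the preventive example in (a) more tangible, the following is a minimal, purely illustrative sketch (in Python) of a hypothetical AI traffic controller that is morally responsible for preventing an upshot (an accident) without being causally responsible for the underlying risk, in Fischer's sense. All names, numbers and thresholds here are assumptions introduced for illustration only and are not drawn from the cited literature.

# A hypothetical AI traffic controller: it did not cause the risk, but it can
# prevent the upshot by changing the light sequence when predicted risk rises.

def accident_probability(crossroad_state: dict) -> float:
    """Toy estimate of accident risk from a few observed traffic features."""
    risk = 0.05
    risk += 0.2 * crossroad_state.get("vehicles_speeding", 0)
    risk += 0.1 * crossroad_state.get("pedestrians_waiting", 0)
    return min(risk, 1.0)

def control_lights(crossroad_state: dict, threshold: float = 0.5) -> str:
    """Change the light sequence only when the predicted risk crosses a threshold."""
    if accident_probability(crossroad_state) >= threshold:
        return "extend_all_red_phase"  # preventive intervention
    return "keep_normal_cycle"

if __name__ == "__main__":
    state = {"vehicles_speeding": 2, "pedestrians_waiting": 3}
    print(control_lights(state))  # -> extend_all_red_phase

The rule-like, habitual character of such a procedure is what makes it comparable, in the sense of (b), to the habitual actions emphasised by pragmatic ethics.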

Pragmatic ethics in its traditional and contemporary formulations from both sides of the Atlantic Ocean comes down to the emphasis on human action from the point of view of the future, a better world that is in the making through the undertaking of present actions. The emphasis on the procedural nature of actions, mainly habits, against the background of a form of life or a culture, is the feature of European pragmatist ethics (in, e.g. Wittgenstein, Karl-Otto Apel, Habermas and others), while the emphasis on creative and imaginative improvements of the future world is the feature of American pragmatists (James, Dewey, Putnam, Rorty and others; for a summary see Margolis, 1996; Krkac, 2006). The European school (in fact, pragmatism is historically European if its initiation is Bain’s idea that a belief is a rule or a habit of action) rests on the nature of action itself, especially rule-guided actions. Therefore, it is often said in many similar forms that the moral correctness of an action comes down to following an established habit in a given form of life or a culture (or society). Morally incorrect actions are violations and are made possible by simple and common conditions, namely, a motive for an immoral action, the know-how to perform the action, and an opportunity to perform it. Minimising opportunities for immoral actions is discussed as the primary goal of ethics (Krkac, 2006):

᭿ (c) By determining the conditions of immoral actions as violations of the established habits of a culture, the basic idea of irresponsibility is initiated. It surely needs to be perspicuously represented and applied to its social and corporate aspects (Krkac, 2011) and to AI (Krkac and Jalšenjak, 2018a, 2018b). Nonetheless, this is at most a technical issue of the conceptual remodelling of the negation of the universalised phenomenon and notion of responsibility:

᭿ (d) Even a lying and cheating AI in the corporate world, which is a highly unlikely case, compared to and contrasted with systematic human CSI, seems to be a better choice if one wants to research the overall reduction of CSI in the corporate world (for an analysis of lying see Krkac et al., 2012). All other, more likely cases in which AI would not be lying, cheating and similar are obviously far better choices. However, there is also some probability that making this choice on the side of AI must count on one typical human rationalisation of human CSI, which is

“blame it on the machines”, or in the present case “blame it on the AI”. Such obviously false accusations of AI are closely connected with the mostly irrational fear of machines (i.e. mechanophobia and technophobia), especially in cases in which AI could be given some degree of autonomy to deal with the moral aspects of its own actions without human interference in the decision-making process and in the performance of an action. Hereafter, the following issues will be researched and discussed in order to conceptually remodel the concepts of human and artificial irresponsibility and CSI. Humans are our only candidates for responsibility. Robots and present AI cannot be responsible for their actions the way that humans can, and consequently cannot be irresponsible either. However, let us imagine a developed AI that could be responsible and even irresponsible, and let us compare such an AI with present humans. The first issue here is to describe human CSI. The second is to describe the possible CSI of a future AI. The following text is to some extent drawn from two previous materials (not published in journals or books); the first was a note on the CSI concept and the second was a lecture on the relativity of the dichotomy between human and AI moralities (Krkac, 2011, pp. 78-89; Krkac and Jalšenjak, 2018a, pp. 187-204):

1. (e1) Following from (a), (b), (c) and (d), the basic hypothesis of the paper, based on the remodelled concepts, namely that AI is less CSI than humans, will be developed. This hypothesis can be formulated as a research question concerning AI and humans in business operations in further research.

2. (e2) Following from (e1), in the following parts (1 and 2) two elements of the remodelling of human and AI irresponsibility will be supplied. That is to say the following:

᭿ (e2.1) There is CSI performed by humans (as a part of general human irresponsibility) and it is not easily reduced (in terms of the opportunity to perform CSI); this indication will be analysed and remodelled in part 1.

᭿ (e2.2) There is no CSI performed by AI (as a part of general AI irresponsibility), and even if it were performed, it would be easily reduced by humans and/or by AI itself (in terms of causal without moral, or moral without causal, responsibility in view of the consequentiality of action); this indication will be analysed and remodelled in part 2.

᭿ (e2.3) The overall final hypothesis connecting (e2.1) and (e2.2) could be that human CSI can be reduced by AI's inability to perform CSI in human terms. If human and AI morality and ethics can be consistently given in terms of consequentialism, then AI can reduce its own and human CSI (or general irresponsibility) without actually being corporate social responsibility (CSR) or understanding it. This basic idea (e2.1 – e2.3) will be analysed hereafter in a way that the basic intuition, i.e. that we do not know or actually have a general human morality or consensus on moral values at the global level (we surely have a series of suggestions), and that AI is genuinely incapable of human-like morality, could be replaced with the hypothesis that understanding human morality and/or having a universal human morality (say the one implied in human rights) and having human-like moral algorithms in AI is not necessary; what could be necessary is that AI is capable of correcting its own and human irresponsibility (CSI included) by simply following its basic principles of functioning.

2. Corporate social irresponsibility of humans

Humans are irresponsible privately and publicly, toward themselves, other individuals (human and non-human), groups, societies, and whole generations. If they are part of companies or corporations, they can act irresponsibly in numerous ways. CSI is a phenomenon related to legal persons acting for profit, i.e. corporations/companies, but not to

natural persons (human individuals), at least not directly. CSI is related to social spheres and not to non-social spheres, at least not straightforwardly. Finally, CSI is related to common and extraordinary irresponsibility done by corporations, not to their responsibilities:

᭿ (f) Nonetheless, there are perhaps two conflicting sides of the story regarding responsibility: (1) the side which suggests that it is impossible to be responsible, and (2) the side which suggests that it is irresponsible to believe, claim, and act upon the first one (1), regardless of the difficulty in determining what responsibility really is and toward what or whom we owe said responsibility, in terms of particular duties or obligations. Some elements of the conflict between (1) and (2) shall be addressed hereafter.

Nowadays, responsibility seems to be a dubious phenomenon. First and foremost, it is closely related to and dependent on freedom. A person cannot be accused of an irresponsible action if the person was not free while choosing the course of action and performing the action (some argue that rationality rather than freedom is a condition of responsibility; Wallace, 1998). However, freedom is often understood as the freedom to choose and to act without responsibility. This phenomenon, so common among members of Homo sapiens sapiens, was vividly described by Pascal Bruckner in his “Temptation of Innocence” (Bruckner, 1995, p. 12), where he suggests the following: “The temptation of innocence is a sickness of individualism which is founded on the effort to avoid the consequences of one’s actions, and an attempt to enjoy the advantages of freedom without suffering any of its difficulties or troubles. It branches in two directions, infantilism and victimisation, two ways of escaping the burdens of Being, two strategies of blessed irresponsibility. In the first case, the innocence should be conceived as a parody of youthful carelessness and ignorance; it reaches its climax in the character of the eternally immature. In the second case, it is a variation of an angelic feature, representing an absence of guilt, an incapability to perform evil, and it is incorporated in the character of the self-proclaimed martyr.” To proclaim and to defend a kind of ethics of responsibility (Jonas, 1984) in the presented cultural context seems to be at least a naïve thing to do, as there are no responsibilities and no one is responsible before or for anyone else, and this holds for all aspects of responsibility (Koprek, 2009, or, as a private irony, Rorty, 1989). Being responsible seems to be possible only in the light of one’s own conscience, being true and sincere to oneself. Now, this last point raises a serious objection, so correctly formulated by H. G. Frankfurt:

᭿ (g) “Moreover, there is nothing in theory, and certainly nothing in experience, to support the extraordinary judgment that it is the truth about himself that is the easiest for a person to know. Facts about us are not peculiarly solid and resistant to sceptical dissolution. Our natures are, indeed, elusively insubstantial, notoriously less stable and less inherent than the natures of other things. And insofar as this is the case, sincerity itself is bullshit.” (Frankfurt, 2005, p. 67; for commentary and application in CSR see Debeljak et al., 2011, pp. 5-22). The core business of an organisation is an idealised construct intended to express an organisation’s main activity (Prahalad and Hamel, 1990). Each human practice has its basic standard actions, which define the practice (in any activity, and in business, these are called standard procedures, SP). All of these practices are well described in the standard procedures, and even in standards of action for both typical and exceptional situations (in any official job this is called a job description). Therefore, what we have here is a standard in which a human who possesses know-how is performing a job which, via the job description, corresponds with that human’s knowledge. An analogy in business would be the following: a person knowing the standard procedures is performing a job according to its description, which corresponds to the procedures, and the whole of that process is a core

business. A core competency is a specific factor that a business perceives as being central to the way it, or its employees, operates. It fulfils three key criteria: it is not easy for competitors to imitate, it can be leveraged widely to many products and markets, and it must contribute to the end consumer’s experienced benefits. Business actions are performed professionally. To be a professional, not just for instance to perform a job professionally as an amateur, is to be confirmed as a professional by an official body which applies certain rules and criteria for professionalism for a given job. A professional is an expert, one who is a master in a specific field. A professional is a member of a vocation founded upon specialised educational training. In western nations, such as the USA, the term commonly describes highly educated, mostly salaried workers, who enjoy considerable work autonomy, a comfortable salary, and are commonly engaged in creative and intellectually challenging work. Less technically, it may also refer to a person having impressive competence in a particular activity. (Freidson, 1986; Dezalay and Sugarman, 1995). There are responsibilities which are implied and manifested by such descriptions and practices. A professional has duties: toward the understanding of a core business, toward the job, toward him/herself and their peers. For a professional the basic duty is to perform a job professionally. Yet, such a statement is not part of the description of a core business, nor is it a part of a description of a standard procedure or a job description. Such a statement is not part of a code of conduct stricto sensu, because a code of conduct is conceptually and practically much closer to the standard procedures of a core business:

᭿ (h) If any kind of misconduct appears while a professional performs a core business, and if it is connected not just to the business or legal aspects of the job, but to its moral aspects as well, then some duties and obligations, ordinarily implicit, should be explicated. When a professional performs such an action, it is in fact a case of corporate social (ir)responsibility (CSI).

There are many conflicting ideas about business ethics (BE) and CSR, many of which are relative to external variables such as business cultures, legal requirements, political systems, social stratification, cultural context, etc., and to internal variables such as the business sector, different professions in different sectors, different dominant corporate cultures, etc. (even in communities such as the EU). Moreover, many descriptions of CSR goals sound like the wishes of a Miss Universe. For instance, Philip Kotler, a leading marketing expert, states: “We have a common agenda. We all want a better world and are convinced that communities need corporate support and partnerships to help make that happen” (Kotler and Lee, 2005, p. 5). Therefore, it seems more convincing to define the opposite phenomenon, which is CSI:

᭿ (i) CSI is a fact. The previously explained tension between the temptation of innocence and ordinary responsibility connected to a business’s core exists among individual legal persons carrying out activities for profit, as well as among individual natural persons. The result of the tension is that there is much CSI going on. In its most basic form, a CSI activity can be defined as an intentional violation of a core business, standard procedures, professionalism, an explicit code of ethics, and SR procedures. However, this definition seems to be trivial, and therefore one needs to research types of CSI to see that CSI includes formal and substantial CSI activities. Accordingly, all types of CSI can be divided into formal and substantial types. Let us start with formal types. Partial CSI and partial CSR are the most common activities of a majority of companies. Explicitly, a company is CSR towards one group of stakeholders, while CSI towards another. The most common subtype is territorial, that is to say that a company is CSR in one country while CSI in another (a nice example is Coca-Cola in India (CSI) and Coca-Cola in Croatia (CSR), or Nike or Nestlé for instance (CSI)). Another formal type is, so to say, temporal in nature, and this type is also very common. A company is CSI from time to time due to some other reasons and, more often than not, these are

reasons for fast profit. From time to time, doubts are cast on such companies, so the company in question does some CSR (here examples are numerous). The third type is connected to stakeholder groups. A company is CSI by all means, but it switches its CSI from internal to external stakeholder groups and back. The last type is questionable. For one thing, one should exclude illegal businesses or genuinely dirty businesses from the list of overall CSI. Yet, a dirty self-conscious business, while knowing that it is overall CSI, can claim that it is CSI only toward a particular business segment, or at some particular time (BP is a fine example). Overall CSI can also be an issue of the complete fabrication of CSR reports by companies (for instance by McDonald’s, see Hawken, 2002). Surely, there are other formal types considering different variables, but these seem to be the most frequent.

Let us move to substantial types. Ideological CSI is connected to the real business procedures of a legal business. A company has a hidden agenda of complete and continuous CSI. The legality of such businesses is highly questionable. However, these are more often than not connected to particular practices in particular stages of a completely legal business process (say irresponsible marketing), or a profession (say insider trading). This last subtype can often be a case of CSI by accident, omission, or mistake. Such things are a natural part of a business process due to general human fallibility. The crucial difference between the ideological CSI of a particular practice or a particular profession and CSI by mistake is the element of intention and premeditation. Second, typical CSI actions are connected to particular spheres of business, namely they fall under the following chapters: conflict of interests (bribery, corruption, etc.), honesty (lying, cheating, stealing, etc.), communication (mostly in the marketing sector), and business relations (non-professionalism, misuse of authority, plagiarism, etc.). Further on, some particular CSI activities are specific to certain business spheres, say management, marketing, accounting, auditing, banking, etc. (for instance mobbing, deceptive pricing, misinformation in financial and auditing statements, and money laundering). On the other hand, there is extraordinary CSI, which is mostly connected to newly developed business tools, new business situations, circumstances, and environments. Third, amateur and expert CSI is an interesting difference because an amateur will be easily caught in a CSI action, while an expert will act professionally. A company can decide to do something which by itself is a CSI activity or at least has obvious CSI consequences. It can do it by itself or it can do it by outsourcing the job to some experts (say to make a nice advertisement for a perfect product which is in fact of low quality). Finally, tactical and strategic types are CSI activities which are not planned in the first case and planned long-term in the second case. Tactical CSI refers to CSI practices which are taken in particular circumstances as a response to certain business difficulties, while strategic CSI is designed for and in the long term. For instance, a company can predict that some CSI will be needed in the future with respect to some neglected business segment or stakeholder group. Thus, these are the elements of CSI.
It is significant to note that any real CSI activity is complex and made of the aforementioned elements and is characterised by a violation of some basic elements of a business process: core business, standard procedures (in operations, marketing, management, finance, accounting, etc.), professionalism, ethical codes, or CSR procedures toward specific stakeholder groups. Therefore, these elements should be checked if one is trying to detect CSI within a company or a business sector. Here, instead of concluding the present section, it seems appropriate to make a note on CSI performance in terms of its conditions and its prevention:

᭿ (j) CSI cannot be completely eliminated from the business world just as criminal actions cannot be completely eliminated from society or immoral actions from the private and public lives of individuals. Some research even suggests that certain percentages (commonly rather small) of particular CSI activities (such as the grey economy) are generally good for an economic system.

There are various ways to fight irresponsible practices, the most common being through detection and sanction. This indeed produces some results; however, they are of minor significance if not joined with other approaches, among which CSI prevention seems to be the most promising:

᭿ (k) Contrary to the opinion that most immoral deeds are performed by genuinely immoral humans, which is based on a mistaken interpretation of data suggesting that there is a certain percentage of genuine criminals in any given society, it can be presupposed that a vast majority of us will perform a CSI action if certain conditions are satisfied. These are the following:

᭿ (A) know-how to perform a CSI action;

᭿ (B) motivation for performing a CSI action; and

᭿ (C) opportunity to perform a CSI action.

Now, (A) and (B) are hard to eliminate, while it is possible to eliminate (C); yet there is little or no consensus about how this should be done. Opportunity means that there are further conditions and circumstances in which a skilled and highly motivated company would perform a CSI action. Such opportunities (C) can be created by the lack of legal requirements and political compliance, of various agencies controlling the market and companies, of NGO activities, of media research and coverage of CSI actions by companies, of academic research, of research by various business centres and institutions, and, finally and perhaps most importantly, of a kind of sensitivity to and knowledge of CSI in society and the general public. The goal concerning CSI prevention is somewhat similar to the goal of, for instance, corruption and bribery prevention, namely to limit the amount of CSI actions to some acceptable level which does not harm the entire business system in a way that produces an economic crisis. To do that, one needs to limit the success of the public avoidance of CSI hiding behind typical real CSI actions (in terms of the temptation of innocence), as explained at the beginning of the section. This was a short note on typical human CSI: what it is, what its types are, how it is performed, by whom, when, where and why, and how it is prevented:

᭿ (k1) In terms of remark (e2.1) in the introduction, these remarks, (f) – (k) to be exact, suggest that there is not much use in directing attention toward human responsibility and CSR without taking into account human CSI, and that there is no effective method of reducing human irresponsibility and human CSI to an amount that will not cause some major disasters (in the environment, business, culture, etc.). Humans can be partially and genuinely CSR and at the same time CSI. The question is – can the actions of AI and robots be less CSI than human actions, and can they help reduce human CSI? Now, the issue here is to compare human CSI with any possible actions or operations by (human-made) robots and AI. Are there comparable actions, or are they in principle incapable of CSI and of CSR as well?
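Before turning to AI, the conditions (A)-(C) and the prevention claim in (k) and (k1) can be summarised in a minimal illustrative sketch. This is a toy model, not an empirical one: the condition names and the multiplicative treatment of the three conditions are assumptions made here only to show that, with know-how and motivation held fixed, reducing opportunity is the lever that remains.

from dataclasses import dataclass

@dataclass
class CSIConditions:
    know_how: float      # (A) ability to perform a CSI action, scaled 0..1
    motivation: float    # (B) motivation to perform it, scaled 0..1
    opportunity: float   # (C) opportunity created by weak controls, scaled 0..1

def csi_risk(c: CSIConditions) -> float:
    """Toy model: CSI risk as the joint presence of all three conditions."""
    return c.know_how * c.motivation * c.opportunity

def reduce_opportunity(c: CSIConditions, factor: float) -> CSIConditions:
    """Prevention in the sense of (k): leave (A) and (B) alone, shrink (C)."""
    return CSIConditions(c.know_how, c.motivation, c.opportunity * factor)

if __name__ == "__main__":
    company = CSIConditions(know_how=0.9, motivation=0.8, opportunity=0.7)
    print(round(csi_risk(company), 3))                            # 0.504
    print(round(csi_risk(reduce_opportunity(company, 0.2)), 3))   # 0.101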

3. Corporate social irresponsibility of artificial intelligence

What does it mean for an AI to be CSI? We can imagine that an AI makes mistakes due to various flaws in its operations. It is very hard for us to imagine an AI being deliberately CSI (except perhaps in sci-fi/horror films). It seems practically impossible to imagine an AI that would develop an immoral habit, e.g. lying as much as it could. To be CSI, an AI must be capable of CSR, and what does it mean to say of an AI that it is CSR?

᭿ (l) In minimal terms, CSR can be defined as acting in a morally correct way concerning the moral norms of given professions and the core business of a corporation (BE); further on, as acting in a balanced, profitable, and just manner while reaching,

implementing, and measuring business decisions concerning business plans in view of all relevant groups of stakeholders; and, finally, as acting sustainably concerning all non-human secondary stakeholders (CSR). Let us imagine that some basic algorithms for these actions can be programmed into an AI as a necessary aspect of any of its business decisions and actions (whether this is really possible is not the topic of the present paper). If this is possible, what would it mean for an AI to act CSI? The most obvious candidate is a simple error in an operation of an AI. To some degree, such flaws can be compared with human motives to act CSI, but this analogy should not be pushed too far, because humans are fallible creatures by definition. A further candidate could be the hypothesis of an AI that would develop bad motives. Even in such a case, an AI would act with a vast amount of naiveté and its immorality would be easily discovered. Perhaps an AI itself would eliminate this possibility given the probability of success of an immoral action, e.g. of lying to customers, employees, etc. (logically speaking, an AI does not need any explicit ethical duties or virtues, say; it only needs a goal and/or a preferable consequence (e.g. protecting life) and a basic task, and it could perhaps calculate the least immoral action to perform):

᭿ (m) It seems that even in the worst scenario, an AI would be as CSI as a human, and in the most probable scenario it would surely be less CSI. So, why don’t we consider AI for the most human and humane aspects of the whole of business, BE, CSR, and sustainability? It is obvious that it has the potential to be better than humans in the technical aspects of a business. Nowadays, AI is even more successful than humans in a series of actions, some of which concern important diagnostic procedures in human medicine, for example. However, AI still makes mistakes. But it seems that it has the potential to be better than humans even in the context of morality. So why are we so reluctant to give these jobs to AI? Would it make us less human, or AI more human than humans? In this part, a few mini cases are presented in a manner that manifests the mistakes by AI to which humans, for various reasons, ascribe various types of moral irresponsibility.
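A minimal, hypothetical sketch of the consequence-based selection mentioned in (l) may be useful here: the AI needs no explicit duties or virtues, only a task and a preferred consequence (here, minimising expected harm), and it picks the admissible action with the least expected harm. All names and numbers are illustrative assumptions, not a description of any existing system.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    achieves_task: bool    # does the action still complete the business task?
    expected_harm: float   # estimated harm to stakeholders (0 = none)

def least_immoral_action(candidates: list[Action]) -> Action:
    """Among actions that achieve the task, return the one with minimal expected harm."""
    admissible = [a for a in candidates if a.achieves_task]
    if not admissible:
        raise ValueError("no admissible action achieves the task")
    return min(admissible, key=lambda a: a.expected_harm)

if __name__ == "__main__":
    options = [
        Action("mislead the customer about the delivery date", True, 0.8),
        Action("disclose the delay and offer a discount", True, 0.1),
        Action("stay silent", False, 0.5),
    ]
    print(least_immoral_action(options).name)  # -> disclose the delay and offer a discount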

3.1 Annex: examples of artificial intelligence making mistakes that are mistakenly taken as immoral actions

᭿ (m1) On June 11, 2018, a group of experts from Goldman Sachs used AI (i.e. “machine learning”) to predict the final result of the 2018 FIFA World Cup. The prediction held that Brazil had the highest probability of winning; specifically, it predicted that Brazil would beat Germany in the final (The Goldman Sachs Group, Inc., The World Cup and Economics, June 11, 2018). If one is a follower of football and considers this prediction, one can easily see that this is merely the series of events with the highest probability of occurring. Given the morphology of football as a game that, among its aspects, has unpredictability of the result combined with the element of surprise, even the most precise predictions cannot predict the final result with a probability higher than 0.5. Let us take an example. Surely, the highest probability for Group D (Argentina, Croatia, Iceland, and Nigeria) was that Argentina would be 1st and Croatia 2nd in the group. The prediction starts with overall data applied to each game. According to the data, in the first match Argentina will win over Iceland, and Croatia will win against Nigeria. Without these 3 points, the prediction for Argentina and Croatia is vulnerable. However, a series of small but mutually consistent unpredictable events during the first round can turn these results into draws or even losses. This is still insufficient to harm the prediction, but it could matter, as such results determine completely different approaches in the 2nd and 3rd matches of the group. This can lead to a series of small yet mutually consistent events in the second matches, and the probability changes again. One also needs to count on the overall FIFA tendency to make o jogo bonito, i.e. the beautiful game, even more beautiful by eliminating elements that could make it uglier, such as the plan to make the cup with more participants and to create possibilities for more miraculous results of the matches. The use of the video assistant referee (VAR) goes in the same direction, etc. Such elements substantially influence the overall probability. To cut a long story short, an AI can make a mistake in prediction. And this mistake is not in its nature substantially immoral, but it could be understood as instrumentally immoral (similar to the case of an insufficiently sharp knife, which is not substantially immoral or bad in itself, but rather instrumentally bad because it does not serve its purpose). The presented AI prediction did not include any (self-)awareness of its own act of prediction, in the sense that many humans would probably misinterpret its process of reaching the prediction and the prediction itself, and in the sense that the AI itself could have made a kind of warning or preliminary note concerning its own prediction.

᭿ (m2) There were similar cases, not discussed in the scientific literature with sufficient scrutiny to be precisely summarised hereafter. However, they can be mentioned. In the case of Uber’s autonomous vehicle accident in Tempe, Arizona, it was finally discovered that the initial blame that was put on the AI should in fact be put on the operators of the AI, because it was a human action that caused the crash. Of course, the foolish behaviour of the pedestrian should be taken into account too (tests for drugs were positive), but it was important that someone operating the system turned off the mechanism for the detection of object proximity, which is directly related to the movement of the vehicle. The facts discovered later disclosed that the car saw the pedestrian (by radar and lidar systems) 6 seconds before the crash and reacted only 1.3 seconds before. In the remaining 4.7 seconds, the system that should have been turned on was not, and that was the cause of the accident on the vehicle’s part. Why? Notwithstanding that 1.3 seconds might have been enough time to stop, Uber configured its vehicles to deactivate automatic emergency braking during testing because it was concerned about erratic vehicle behaviour: the braking could be triggered every time the vehicle saw, say, a shadow.

᭿ (m3) Finally, Japan is famous for its use of AI. In addition to standard cases of AI use in industry (say, the car industry), there are interesting examples. For instance, in 2018 robots were being used as teachers in Japanese kindergartens, and kids accepted them. Needless to say, there was some fear and reluctance manifested by some children; however, by giving them the opportunity to intervene in the robots (e.g. choosing the colour of their eyes and the tone of their voice), this fear was reduced to a minimum. Is it irresponsible for an AI to act like humans? Also, in 2018, there was a robot that was presented as a candidate at the local elections in a town near Tokyo. “She” allegedly said that she would be morally correct and would reach only just and balanced decisions that would benefit all citizens. All these examples show that AI makes mistakes or causes an irrational fear. Does such an AI or robot have the right to make genuinely human moral promises and claims about CSR?
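A toy calculation (not the Goldman Sachs model itself, whose internals are not given here) illustrates why even the most probable predicted series of events stays well below 0.5: per-round win probabilities compound multiplicatively over the knockout stage. The numbers below are purely illustrative assumptions.

from math import prod

# assumed per-round win probabilities for a strong favourite
per_round = [0.75, 0.70, 0.65, 0.60]   # round of 16, quarter-final, semi-final, final
print(round(prod(per_round), 2))       # 0.2 -- far below 0.5, even for the "most likely" path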

Concerning examples (m1)-(m3), what about the possibility of general responsibility and CSR of robots and AI? Conceptually speaking, it includes the standard analysis of the relation between humans with human morality and robots with human-made morality (Lekka-Kowalik, 1993; Spaic and Grigic, 2007; in terms of ethical implications and limitations, the "is-ought" debate, Hume's Law, or the Naturalistic Fallacy will not be discussed, but the collapse of the dichotomy and the derivativeness of "ought" from "is", as well as the claims that "is" has an evaluative/normative aspect and that "ought" has a factual aspect, given by pragmatist and speech-act analyses, will be implied. Furthermore, elements of philosophy and major ethical theories will not be discussed). The issue surrounding the conceptual remodelling of enhanced humans and developed AI in

structural and moral aspects belongs, prior to all other aspects, to the field of information technologies and information ethics. “According to Mitchell Waldrop, as robots become more and more intelligent, we need a theory and practice of machine ethics that will embody a built-in code of ethics in these creatures in the spirit of Asimov’s laws of robotics and Carbonell’s hierarchy of goals.” (Migga Kizza, 2007, p. 270) Given the developments in human augmentation and in AI, no matter how big or small they may actually be, the standard analysis does not seem to fit these new realities, first of all because there are two parallel changes (perhaps developments, but here one needs to be extra cautious):

᭿ (n1) One process is human augmentation or enhancement, or the creation of new humans with their (probably) new morality and AE (which will include at least their own augmentation).

᭿ (n2) The other process, parallel to the first one, is the development of robots and generally of AI, which could end up as a self-improving and autonomous agent with its new, non-humanly created morality or AE. Whether it is a sheer coincidence or an obvious interest in research and publication, 2017 saw at least five major world publishers of scientific literature publish at least two books on these parallel changes or processes, one concerning each direction. For example, Springer Verlag in 2017 published the book “Conquest of Body, Biopower with Biotechnology” (Tratnik, 2017) and in the same year the book “Legal Personhood: Animals, Artificial Intelligence and the Unborn” (Kurki and Pietrzykowski, 2017). The first discusses human augmentation from all aspects, and the second the personhood of AI. Taken together, they describe and analyse the two mentioned processes, and the remodelling of these processes and of the previous relations is the primary task of the present analysis. If one considers the relation of humans and AI (Raynor, 1999; Copeland, 2004; Russell and Norvig, 2009; Poole and Mackworth, 2017) as two radically different species, say in the case of robots given via Asimov’s “The Three Laws” (Asimov, 1951; Migga Kizza, 2007; Himma and Tavani, 2008), and in the case of humans via different biomechanical, neurological, and psychological laws, one naturally assumes that there are also two moralities. If there are two moralities, they belong to two species, one to humans and the other to robots. Therefore, instead of one relation between human morals, which are owned by the natural species Homo sapiens sapiens, or anatomically modern man, and AI, which doesn’t have any morals but only human-made moral rules in terms of specific algorithms, we have two species, one natural and one artificial, both of which have or can have their own morals, natural or artificial. However, the relation surely wouldn’t be that simple if augmented humans and AI became the majority in societies around the world and, as new species, developed new autonomous moralities (compared to old human and old robot human-made moralities). Even the simplified relation isn’t as simple as it is often presented. Real relations, if analysed correctly, already contain in a nutshell possible future problems, some of which will be mentioned and explicated hereafter. So far, from these remarks, it seems that the present-day relation between humans and robots as members of different species (natural, and artificial) with different moralities (natural, and artificial but human-made) is established sufficiently for the continuation of the discussion. Let us now see the possible relations between humans and AI, say robots. There are two possible pictures or images here, one, say, the old one, and the other, say, the new one, or the one that is slowly becoming more real every day:

᭿ (o1) The first (old) image says that we have humans and robots, and therefore only one possible relation, which can go in one or two directions, i.e. from humans to robots only, or from humans to robots and from robots to humans too.

᭿ (o2) The second (new) image says that we have not two but rather four species: old and new humans and robots. Therefore, we have old humans and new humans, and we have old robots and new robots, with old and new human and robot morality and ethics, so four moralities as well. The old species include old humans, who didn’t have any mechanical or other augmentation, and old robots, which were all robots simply performing what they are programmed for, with humans knowing and understanding what the robots are doing. The new species include new humans, who do have a series of additions, often called enhancements or augmentations, that range from historical and existing to emerging:

᭿ (p1) Existing human enhancement technologies, among other procedures, include embryo selection by preimplantation genetic diagnosis, cytoplasmic transfer, in-vitro-generated gametes, plastic surgery and orthodontics, doping and performance-enhancing drugs, prosthetics and powered exoskeletons, implants, pacemakers, organ replacements, dietary supplements, nootropics, neurostimulation and supplements that improve mental functions, and computers, mobile phones, and the Internet that enhance cognitive efficiency. Emerging technologies include human genetic engineering, gene therapy, neural implants, brain–computer interfaces, cyberware, nanomedicine, and 3D bioprinting.

᭿ (p2) There is no completely analogous process in AI, but given a series of cases (some of which are mentioned in examples m1-m3), there is evidence that AI is also becoming more “human-like”, not just visually, but in various capacities, e.g. in observing, learning, reasoning, etc.

Such new humans possibly differ substantially from old humans, and if they do not yet, they probably will, given the emerging technologies. The new species also include new robots and highly developed systems of AI (Neapolitan and Jiang, 2012; Poole and Mackworth, 2017). Some European regulations suggest that existing and new generations of robots will have to justify their reasoning, decisions, and actions which are performed without human interference (Eleftheriadou, 2017; Delvaux, 2017). More than that, robots are nowadays becoming more and more like humans in terms of exterior and in terms of action and cognition. A year ago, in 2016, the robot Sofia said, “I will kill all humans”, and in 2017 she spoke at the United Nations, was granted Saudi Arabian citizenship, and claimed that one day she would like to have a baby [Sofia (Robot), 2017; Vincent, 2017]. The idea of the separation between the old and the new humans and the old and new robots suggests a much more complicated system of relations between these species and between their morals as well. What we have is not one morality (human) and one machine morality (robots), but rather four species (old and new humans and robots) and four morals (old and new human morals, and old and new robotic morals). This possibility complicates the situation in the future not only for humans but for robots as well. The relations are doubled, i.e. between: old and new versions of humans and robots, old humans and old robots, new humans and new robots, old humans and new robots, and old robots and new humans. If a moral/ethical aspect is added, then the possibilities of relations go on and are doubled once again, i.e. between: old and new human and robot morality, old human and new robot morality, old robot and new human morality, etc.:

᭿ (r) Following from remarks (n1, n2), (o1, o2), and (p1, p2), it seems that these relations between the old and the new species and their old and new moralities complicate the analysis, and perhaps AI is needed to check its structure, completeness, consistency, and coherence. So, we have: (A) = old humans with their old ethics, (B) = new (enhanced) humans with their new ethics (independent of the old ethics), (C) = old AI with its human-made ethics, and (D) = new AI with its new autonomous ethics.
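The combinatorial point in (r) can be made explicit in a small sketch: with four "species" there are six pairwise relations, and treating the morality of each side as a relatum of its own adds a further layer of relations on the moral/ethical level. The labels simply restate (A)-(D) above; the sketch is illustrative only.

from itertools import combinations

species = {
    "A": "old humans (old ethics)",
    "B": "new (enhanced) humans (new ethics)",
    "C": "old AI (human-made ethics)",
    "D": "new AI (autonomous ethics)",
}

# six pairwise relations between the four species of remark (r)
pairs = list(combinations(species, 2))
print(len(pairs), pairs)   # 6 -> [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]

# adding the moral/ethical aspect of each member of a pair gives a further set of relations
morality_pairs = [(a + "-ethics", b + "-ethics") for a, b in pairs]
print(len(morality_pairs))  # 6 further relations between the respective moralities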

All the relations between (A), (B), (C) and (D) present some old, present, and possible (perhaps probable) future issues. There are present issues concerning already applied advanced human improvements (for example, prosthetics used in sports), and therefore between (A) and (B). There are unsolved issues between humans and old robots, and therefore between (A) and (C). Also, there are some unknown issues and perhaps fears concerning the forthcoming new AI in relation to old humans, and therefore between (A) and (D). There are, on the other hand, future issues between (B) and (D), between the pair (B-D) and the pair (A-C), and also between the pair (A-D) and the pair (C-D). Many such issues are discussed in the literature without sufficient conceptual clarity. So, it seems that some kind of conceptual re-modelling is needed. However, to paraphrase the dictum of L. Wittgenstein, it can be said that every interpretation, to be an interpretation, relies on its application.

However, there is another possible development of these processes that can make the situation even harder. Namely, the processes of change in new humans and new robots or AI seem to be parallel in terms of development. But these processes could become more and more similar, or at least they seem to be so, in a way that humans are becoming more and more robotic or AI, or (A) turning into (B) becoming more similar to (D), and robots or AI are becoming more and more human-like or humanoid, or (C) turning into (D) becoming more similar to (B). So, it is possible that not only new humans will emerge as a distinctively new species (relevantly enhanced), but also new human ethics, and not only new AI, but also new ethics that will be created autonomously by AI. In other words, what seemed to be quite complicated could actually become so during the process, because perhaps we are facing a period of the existence of two old species, i.e. old humans and old robots, and the emergence of a new species which will be produced by the combination of new humans and new robots. The last remark could seem a bit futuristic, but surely there is a series of similarities, analogies, and perhaps a pattern between the pairs (A-C) and (B-D), and the pairs (A-B) and (C-D), so at least there is a morphological aspect to it. By their history, and surely by their composition, they will differ substantially (an AI which can develop itself surely differs from old machines, and a human who is enhanced by biomechanical, perceptual, and cognitive additions surely differs from the original or from old humans). However, by its existing features, capabilities and, most importantly, the power to change autonomously, new AI will be quite similar to new humans, and it will be hard to differentiate between new robots and new humans. As a possible new species, there is no evidence that they will not be capable of developing completely new morals, as old humans did with their old morals. However, before and if this futuristic scenario takes place, there are some obvious biological, technical, and moral issues to be addressed concerning the standard relation between humans and robots (AI):

᭿ (s1) Among moral and ethical issues, there is a series of issues concerning the new humans and human augmentation (Parens, 2000; Bostrom and Sandberg, 2008). On the other hand, there are ethical issues concerning new AI and robots: robot rights, the threat to privacy, the threat to human dignity, transparency and open source, the weaponisation of robots and AI, etc. (Tzafestas, 2016; Müller, 2013, 2016).

᭿ (s2) Still, and as clearly presented by the issue of robot ethics, there are two aspects to each of these issues. Even terminologically speaking, robot ethics or roboethics is the ethics of robots as it is viewed from the non-robot, mostly human, point of view, whereas the ethics of robots is the perception of ethics from the robotic or AI point of view.

These two views are not one and the same, nor are they two views which are similar or in agreement. Concerning the ethics of robots, so far in reality we have the application of various “laws” and “rules” which in one way or another determine the actions of robots, and in fiction we have robots that have moral dilemmas that they are trying to solve (the currently discussed real

problem is how an autonomous car would solve The Trolley Problem) and this is their ethics, not ours. The basic argument in favour of the similarity of morals rests on a form of life criterion (Wittgenstein, 1969; Krkac, 2017):

᭿ (t1) In view of remarks (r), (s1), and (s2) old humans with their morals (ourselves) and old robots with their rules (our robots) differ in forms of life, and therefore are different. However, the new augmented humans with their possible new morality and the new self-improving AI with its new autonomous morality may be more similar than we think, because they will actually share a lot in operating and moral terms. They could share a form of life because they would share a civilisation and a culture. Therefore, while thinking of the ethics of AI (from our old human point of view) and an AI’s ethics (from its point of view) it seems plausible to take both views and to compare them. Moreover, it seems plausible to think and to model the possible switch from one view to another, as this is precisely what could happen not only to future AI, but also to future augmented humans. Such a change in point of view marks the difference not only in the conceptual modelling of morality (what is good vs. what is bad), but also in the modelling of the decision-making process, and in the modelling of actions. This would be a paradigm change in T. S. Kuhn’s terms, or an aspect-change in Wittgenstein’s terms (Wittgenstein, 1969):

᭿ (t2) Imagining a new robot’s moral point of view, and from that view, viewing our own new augmented human moral point of view, can be a very useful method if a species wants to model its future actions and moral reasoning, especially if it thinks of itself as an autonomous and self-developing species. The switch from one aspect to another, or the so-called “aspect-change”, supplies an important tool for achieving a “perspicuous presentation” or the morphology of the relation of these two moralities and ethics. Perhaps an analogy here will do. More than 10 years ago, my student A. Klipa did his BA thesis in philosophy on the Ferengi “Rules of Acquisition” (this is a strange fictional species from the “Star Trek” TV series). Besides concepts and logic, he also analysed the morality of these rules, the actions of the Ferengi, and their form of life. In many cases, and that was a part of his conclusion, Ferengi act the same as humans do; however, if a Ferengi lies, he lies because a particular rule advised him to do so, while if a human lies, he or she tries to rationalise the action by various moral devices such as claiming that he/she was forced to lie, that it was for the greater good, that it wasn’t a lie but a half-truth, etc.

᭿ (t3) So, imagining such a different moral point of view (imagined or real) can help us understand our own moral changes in the future. The thought experiment can serve as a model for modelling the probable differences in aspects between human and robot CSI, because it is possible that we would act quite similarly while at the same time describing the moral aspect of our own actions quite differently; and yet AI irresponsibility and CSI could be much lesser than the human kind, despite the fact that neither humans nor AI know what responsibility and CSR really are and how to perform them.

Among contemporary attempts to discuss and formulate some elements of a practical document that would determine at least some aspects of the ethics of AI, if not of AI ethics as well, is Paula Boddington’s book “Towards a Code of Ethics for Artificial Intelligence” (Boddington, 2017). The author discusses many aspects, from the most general, in terms of ethics and AI, to the particular, concerning ethical codes of/for AI and the analysis of particular ethical codes for AI (Boddington, 2017, pp. 99-113). As these ethical issues present real problems for contemporary enhanced humans and humanoid robots, one can only imagine how this will become more complicated in the future with more human enhancement, and perhaps with self-controlled and self-advancing robots that are even now

incapable of giving reasons for their decisions and actions, while we, as their inventors and producers, don’t fully understand how they reached them. So, old humans with their old morals are changing toward new humans with their new morals, and old robots with their simple rules are changing toward new AI with its new morals. These two changes and processes, which are in part parallel and at the same time directed toward each other, can pass each other, meet each other, and even confront each other or start to peacefully coexist. These are the possibilities if development continues at the same rate as at the present moment. In a completely sci-fi scenario, one possibility suggests that these new species can merge into a new one, and this is interesting. A possible rise of a new species formed by the possible merger of newly enhanced humans (B) and newly self-controlled and self-improving AI (D) (see remark (r)) will present, on one hand, a problem on a completely new level, as this will mark the dawn of a new species, and, on the other hand, a new hope for humans and for robots and AI alike, generally speaking:

᭿ (u1) In view of remarks (t1) – (t3), it could be said that whatever happens with the new humans and the new AI, the future will surely imply changes in humans and AI and their morality, ethics, irresponsibility, and CSI. Which changes these might be is unknown, but we can tell that remodelled relations between humans and AI, in both structural and moral aspects, can aid conceptual clarity before any solid argument is formulated concerning the nature or morality of old or new humans and of old and new AI. AI’s lesser CSI may help reduce humans’ greater CSI without any actual understanding or know-how of how to perform CSR actions. In contrast to what could happen, at the present time there exists a misunderstanding of basic concepts such as natural humans, augmented humans, old robots, new AI, their morality, etc. In practice, this presents a series of serious issues:

᭿ (u2) In the recent text in The Conversation, “People don’t trust AI – here’s how we can change that” (Polonski, 2018), Vyacheslav Polonski quite precisely describes AI’s precision in predicting crime, heart attacks, cancer diagnoses, etc., and at the same time the fear, anxiety, and confirmation bias toward it that humans manifest. His suggestion is to get to know AI better, and even to slightly intervene in it to see how it functions (such experiments were actually conducted). Again, and in contrast to what is going on, what will happen is uncertain, but what we know is that things are changing rapidly in human enhancement and in robot and AI development, and the series of biological, anthropological, technical, practical, and applied moral issues should be addressed rather than put aside as some improbable, scary, and distant future or science-fiction or horror problem. If one takes a look at the present situation concerning the relation of the ethics of humans and of robots and AI, before any naïve mention of a distant future, the possible fusion of new humans and new AI seems much more complicated than it appears. If for nothing else than for that matter, it should be addressed, among other aspects, in conceptual research without fear, bias, or any kind of prejudice. To conclude this section, it can be said that AI isn’t flawless, it is fallible, and it makes mistakes on its part. Consequently, humans are afraid of AI. In most cases, this fear is irrational, and it is only rarely justified, because what they are really afraid of are the actions of other humans who made, or who control, the actual AI:

᭿ (v) There is no sign or fact on the basis of which we can conclude that AI is incapable of at least applying human-made ethical codes, human-made CSR, and sustainability. If in the future AI manifests any kind of CSI, it will surely be easily predicted, noticed, and reduced to a minimum, something that humans have not achieved throughout their whole history. If AI could be CSI at all, it seems that it could be far less CSI than humans, and that can help both AI and humans if they want to reduce their own CSI, and all of that without any global, universal, or consensual understanding of human or AI responsibility and CSR.

AI perhaps cannot be stricto sensu morally responsible as humans are, but why should it be? As far as the comparison between human and AI moral responsibility is concerned, AI can be causally responsible as humans can, and it can be morally responsible in the sense that it can prevent a causal chain that might lead to obviously irresponsible consequences. In that way, AI could contribute to a better future world much more than humans can and actually do, at least from the point of view of pragmatic ethics and its accompanying concept of moral responsibility.

4. Concluding remarks

Given that we have suggested remodelled concepts of human responsibility and CSI (remarks (f) to (k)) and of AI (remarks (l) to (v)), we can compare them from the point of view of pragmatic ethics and ask ourselves: which kind of CSI produces a better future world?

᭿ (z1) As far as CSI is an application of common immorality to the sphere of business (remarks (f) to (k)), it is possible to analyse which is better, both in the standard analysis and in the alternative to it: to have the existing human CSI, or to have an AI which will detect and prevent human CSI and which, even in the worst scenario, would develop only some naïve form of CSI of its own (remarks (l) to (v)). The concepts of human and AI responsibility and CSR are not needed at this point, and the question is whether they are possible at all. Although this issue has its highly theoretical aspects, it also displays quite mundane technical and moral aspects. Both should at least be tackled in order to have a perspicuous representation of the pattern of the whole phenomenon in question. Imagine a case of lying in business: many people lie in many situations for quite different reasons. Some lies are obviously morally incorrect, bad for business, and bad for the people themselves. Now imagine an AI that is able to detect probable signs of potential lying, say, by interpreting facial expressions, physiological states, verbal expressions, etc. This would surely contribute to the minimisation of lying in business to co-workers, superiors, buyers, customers, etc. Furthermore, imagine two additional elements: some humans can cheat the AI while lying, and the AI itself may in turn develop an ability to lie. If even under these conditions the number of lies is lower than without the AI, it seems that everybody is better off (a minimal arithmetic sketch of this comparison follows remark (z2)). Similar cases can be imagined for a series of typical and less typical wrongdoings in business and in the economy in general:

᭿ (z2) Perhaps contrary to the standard account of the relation between human and AI responsibility and CSI, and in light of the proposed remodelled concepts of human and AI responsibility and CSI, there seems to be nothing in practice or in the rules that prevents us from formulating the hypothesis that the CSI of an AI would at least be much lower than the present CSI of humans. Perhaps in the future AI could even become genuinely CSR, but this is beyond the topic of the present research.
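The comparison imagined in remark (z1) can be illustrated with a deliberately simple arithmetic sketch. The following Python fragment is purely hypothetical: the function name, the rates of detection and evasion, and the number of lies the AI itself introduces are all assumed for the sake of illustration, not drawn from any data or actual system. It only shows how an imperfect, partly cheatable AI that occasionally lies itself could still leave fewer lies in circulation than no AI at all.

```python
# A minimal, purely illustrative sketch of the comparison in remark (z1).
# All names and numbers are hypothetical assumptions, not empirical claims.

def lies_in_circulation(attempted_lies: int,
                        detection_rate: float = 0.0,
                        evasion_rate: float = 0.0,
                        ai_lies: int = 0) -> int:
    """Return the number of lies that remain in circulation.

    attempted_lies : lies humans try to tell
    detection_rate : share of non-evasive lies the AI flags and stops
    evasion_rate   : share of liars who manage to cheat the AI
    ai_lies        : lies the AI itself introduces (the worst-case element)
    """
    evasive = int(attempted_lies * evasion_rate)          # cheat the AI, never caught
    detectable = attempted_lies - evasive
    undetected = int(detectable * (1 - detection_rate))   # slip past the detector
    return evasive + undetected + ai_lies

# Without any AI: every attempted lie circulates.
baseline = lies_in_circulation(attempted_lies=1000)

# With an imperfect AI: some humans cheat it, and it adds a few lies of its own.
with_ai = lies_in_circulation(attempted_lies=1000,
                              detection_rate=0.7,
                              evasion_rate=0.2,
                              ai_lies=25)

print(baseline, with_ai)  # 1000 vs 465 under these assumed rates
```

Under different assumed rates the balance could, of course, tip the other way, which is precisely why the claim is formulated above as a hypothesis rather than a result.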

References

Asimov, I. (1951), "Runaround", in Asimov, I. (Ed.), I, Robot, Gnome Press, New York, NY.
Boddington, P. (2017), Towards a Code of Ethics for Artificial Intelligence, Springer International Publishing AG, Cham.
Bostrom, N. and Sandberg, A. (2008), "The wisdom of nature: an evolutionary heuristic for human enhancement", in Savulescu, J. and Bostrom, N. (Eds), Human Enhancement, Oxford University Press, Oxford, pp. 375-416.

Bruckner, P. (1995), La Tentation de l'Innocence, Editions Grasset & Fasquelle, Paris.
Copeland, B.J. (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus the Secrets of Enigma, Oxford University Press, Oxford.
Debeljak, J., Bušljeta Banks, I. and Krkac, K. (2011), "Acquiring CSR practices: from deception to authenticity", Social Responsibility Journal, Vol. 7 No. 1, pp. 5-22.
Delvaux, M. (2017), "Report with recommendations to the commission on civil law rules on robotics", Brussels, available at: www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+REPORT+A8-2017-0005+0+DOC+XML+V0//EN (accessed 19 December 2017).
Dewey, J. (1922), Human Nature and Conduct, Southern Illinois University Press, Carbondale, IL.

Dezalay, Y. and Sugarman, D. (1995), Professional Competition and Professional Power, Routledge, London.
Eleftheriadou, D. (2017), "Artificial intelligence: overview of European commission actions, policy seminar on artificial intelligence", Brussels, 29 November, available at: https://ec.europa.eu/growth/tools-databases/dem/monitor/sites/default/files/6%20Overview%20of%20current%20action%20Grow.pdf (accessed 19 December 2017).
Fischer, J.M. (2010), "Responsibility and autonomy", in O'Connor, T. and Sandis, C. (Eds), A Companion to the Philosophy of Action, Wiley-Blackwell, Oxford, pp. 309-317.

Frankfurt, H.G. (2005), On Bullshit, Princeton University Press, Princeton, NJ and Oxford.
Freidson, E. (1986), Professional Powers: A Study of the Institutionalization of Formal Knowledge, University of Chicago Press, Chicago.
Hawken, P. (2002), "McDonald's report: more corporate social irresponsibility", available at: http://ipsnews.net/riomas10/3008_8.shtml (accessed 3 July 2011).

Himma, K.E. and Tavani, H.T. (Eds) (2008), The Handbook of Information and Computer Ethics, Wiley, Oxford.
Jonas, H. (1984), Das Prinzip Verantwortung/The Imperative of Responsibility: In Search of an Ethics for the Technological Age, University of Chicago Press, Chicago.
Koprek, I. (2009), "Ethical aspects of man's responsibility", Obnovljeni Život, Vol. 64 No. 2, pp. 149-160.

Kotler, P. and Lee, N. (2005), Corporate Social Responsibility, John Wiley & Sons, Hoboken, NJ.
Krkac, K. (2006), Routine, Morality, and Pragmatism, KK, Zagreb.
Krkac, K. (2011), "Corporate social irresponsibility, a conceptual framework", Social Responsibility Review, Vol. 2011 No. 3, pp. 78-89, available at: www.socialresponsibility.biz/2011-3.pdf (accessed 14 June 2018).
Krkac, K. (2017), "Wittgenstein on the self – we robots", Society for Advancement of Philosophy, Filozofija.org, available at: www.filozofija.org/wp-content/uploads/clanci/Nesvrstani%20clanci/Wittgenstein%20on%20the%20self%20-%20We%20robots.pdf (accessed 7 February 2018).
Krkac, K. and Jalšenjak, B. (2018a), "Ethics of the species 5618 from our artificial intelligent point of view", in Krkac, K. and Jalšenjak, B. (Eds) (2018b), ZSEM, Zagreb, pp. 187-204.
Krkac, K. and Jalšenjak, B. (Eds) (2018b), Applied Ethics and Artificial Intelligence, Contributions of the International Conference, Zagreb School of Economics and Management, Zagreb, Croatia, 8 June.

Krkac, K., Mladic, D. and Buzar, S. (2012), "Habitual lying re-examined", American Journal of Sociological Research, Vol. 2 No. 1, pp. 1-10.
Kurki, V.A.J. and Pietrzykowski, T. (2017), Legal Personhood: Animals, Artificial Intelligence and the Unborn, Springer International Publishing AG, Cham.
LaFollette, H. (2000), "Pragmatic ethics", in LaFollette, H. (Ed.), The Blackwell Guide to Ethical Theory, Wiley-Blackwell, Oxford, pp. 400-419.

Lekka-Kowalik, A. (1993), "Moral dilemmas in the robot's world", in Casati, R. and White, G. (Eds), Philosophy and the Cognitive Sciences, Proceedings of the 16th International Wittgenstein Symposium, Kirchberg am Wechsel, pp. 291-294.
Margolis, J. (1996), Life without Principles, Blackwell, Oxford.
Migga Kizza, J. (2007), Ethical and Social Issues in the Information Age, Springer Verlag, Berlin.

Müller, V.C. (Ed.) (2013), Philosophy and Theory of Artificial Intelligence, Springer-Verlag, Berlin and Heidelberg.

Müller, V.C. (2016), Risks of Artificial Intelligence, CRC Press, Chapman & Hall, New York, NY.
Neapolitan, R. and Jiang, X. (2012), Contemporary Artificial Intelligence, Chapman & Hall/CRC, New York, NY.
Parens, E. (2000), Enhancing Human Traits: Ethical and Social Implications, Georgetown University Press, Washington, DC.
Polonski, V. (2018), "People don't trust AI – here's how we can change that", The Conversation, 9 January, available at: https://theconversation.com/people-dont-trust-ai-heres-how-we-can-change-that-87129?utm_source=t.co&utm_medium=referral (accessed 7 February 2018).
Poole, D. and Mackworth, A. (2017), Artificial Intelligence: Foundations of Computational Agents, Cambridge University Press, Cambridge.

Prahalad, C.K. and Hamel, G. (1990), "The core competence of the corporation", Harvard Business Review, Vol. 68 No. 3, pp. 79-91.
Putnam, H. (1992), Renewing Philosophy, Harvard University Press, Cambridge, MA.
Raynor, W.J. Jr. (1999), The International Dictionary of Artificial Intelligence, The Glenlake Publishing Company, New York, NY.
Rorty, R. (1989), Contingency, Irony, and Solidarity, Cambridge University Press, Cambridge.
Russell, S.J. and Norvig, P. (2009), Artificial Intelligence: A Modern Approach, Prentice Hall, Upper Saddle River, NJ.
Sophia (robot) (2017), available at: https://en.wikipedia.org/wiki/Sophia_(robot) (accessed 19 December 2017).
Spaic, I. and Grigic, J. (2007), "Melvin's A.I. dilemma: should robots work on Sundays?", in Hrachovec, H., Pichler, A. and Wang, J. (Eds), Philosophy and Information Society, Proceedings of the 30th International Wittgenstein Symposium, Kirchberg am Wechsel, pp. 221-224.
Tratnik, P. (2017), Conquest of Body: Biopower with Biotechnology, Springer Verlag, Berlin.
Tzafestas, S.G. (2016), Roboethics: A Navigating Overview, Springer Verlag, Berlin.

Vincent, J. (2017), "Sophia the robot's co-creator says the bot may not be true AI, but it is a work of art", Verge Magazine, 10 November, available at: www.theverge.com/2017/11/10/16617092/sophia-the-robot-citizen-ai-hanson-robotics-ben-goertzel (accessed 19 December 2017).

Wallace, R.J. (1998), Responsibility and Moral Sentiment, Harvard University Press, Cambridge, MA.
Wittgenstein, L. (1969), On Certainty, Blackwell, Oxford.

Further reading

Dewey, J. (1932), Ethics, Southern Illinois University Press, Carbondale, IL.
Mises, L.V. (1963), Human Action: A Treatise on Economics, Yale University Press, Fox & Wilkes, San Francisco.
Stehn, J.S., Chaudhary, M. and Fawcett, N. (2018), "The world cup and economics 2018, global macro research", The Goldman Sachs Group, 11 June, available at: www.goldmansachs.com/our-thinking/pages/world-cup-2018/multimedia/report.pdf (accessed 14 June 2018).

Corresponding author Kristijan Krkac can be contacted at: [email protected]

