International Journal of Robotic Engineering
ISSN: 2631-5106
DOI: 10.35840/2631-5106/4126
VIBGYOR

Research Article: Open Access

Promises in the Context of Humanoid Robot Morality

Jan A Bergstra*

Minstroom Research BV Utrecht, The Netherlands

*Corresponding author: Jan A Bergstra, Minstroom Research BV Utrecht, The Netherlands, E-mail: [email protected]

Accepted: September 02, 2020; Published: September 04, 2020

Citation: Bergstra JA (2020) Promises in the Context of Humanoid Robot Morality. Int J Robot Eng 5:026

Copyright: © 2020 Bergstra JA. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

Promise theory (PT) has been designed with a focus on information technology and systems design. I will use PT to outline the specification of robot behaviour and to discuss various forms of trust in connection with robots.

Introduction

Promise theory (PT) was designed by Mark Burgess in a series of papers from 2005 onwards. For a survey of this work and extensions of it see Burgess 2015 [1] and the more technical exposition in Bergstra & Burgess 2014 [2,3]; see also http://markburgess.org/treatise.html. Bergstra & Burgess 2017 [3] provides an extensive case study for promise theory outside the realm of informatics. It appears that promise theory is helpful for understanding systems of animate agents as much as it is for specifying systems consisting of inanimate agents.

The objective of this paper is to examine what promise theory may offer for the study of physical robots. Physical robots are distinguished from software robots, but may be under the control of an embedded software system. One may consider the embedded software system of a physical robot to be a software robot. Physical robots perform their key functions by physical means, using information processing as an auxiliary technology rather than as a primary means of action. With the language of PT at hand I will try to contribute to the topic of robot morality in various ways as specified below.

Outline of the paper

The contributions of the paper are these:

1. Yet another definition of robots is given. (That is yet another inconclusive solution of the robot definition problem.) Robots are portrayed as computers, with as a consequence that acquiring knowledge about a specific robot (or family of robots) is supposed to follow the known lines of how to analyse a computer system. This observation seems to be absent in current discussions of machine ethics as applied to robots.

2. Then PT is recapitulated and some philosophical aspects of its design are worked out in more detail than was done until now. PT is proposed as a human readable specification notation for humanoid robot behaviour.

3. Three stages of the acquisition of trust in a robot are distinguished: The extended quasi-Turing test (EQTT), the intrinsic trust establishment (ITE), and the moral qualification dilemma (MQD). This description allows for a limitation of the MQD to its philosophical essentials, which itself is an issue, however, to which the paper has nothing novel to add.

4. Various forms of trust are discussed for a variety of robot classes: Factory robots, sex robots, service robots, care robots, and combat robots. Some differences between these robot classes with regard to the establishment of trust in a robot are noticed.

5. For the design of moral robots a specific form of ethical constructivism comes into play, and Asimovian constructivism is proposed as a name for such constructivism.

Defining robots

A robot is a cyber-physical system with peripherals able to act in excess of mere communication. Where traditional peripherals (printer, keyboard, mouse, screen, touchpad, speaker, voice and voice recognition, external memory, a fixed position camera) are more often than not unable to cope with bad weather, robots have hands, arms, wheels, legs, and eyes which can be directed and focused. Further, robots may be able to smell, to sense heat and pressure, and to operate in adverse circumstances. Like computers, robots can be reprogrammed.

It is remarkable that the instance of computer which is nowadays most devoid of peripherals, viz. the mobile phone, is not anymore called a computer. As fewer users actually use their mobile as a phone it becomes even more of a computer. One may say that a robot is a computer which can do things which humans would call physical work. I will use a more systematic and perhaps less appealing definition of a robot below. At the same time I will insist that a robot is in essence a computer which, while it may choose not to make use of its embedded intelligence, is supposed to be equipped with a minimal level of intelligence, which according to one's perception of computer science may or may not be labeled as a manifestation of artificial intelligence.

Robots as Computers

I will assume that methodologies for the acquisition of knowledge about the behaviour of a robot (or about a class of robots) are essentially the same as for computer systems. The label artificial intelligence merely points to a subset of techniques from computer science and does not stand for principled new ways of thinking about computer systems. It might be informative to replace AI (artificial intelligence) by CS (computer science) on many occasions so that corresponding techniques and ideologies can be employed^c.

^c Topics such as LISP, PROLOG, SMALLTALK, semantic networks, natural language processing, Bayesian networks, planning, automatic proof generation and corresponding proof checking, and automatic programming have each started their life as AI and have in the mean time migrated (been downgraded) from AI to CS. AI has become a label for automating tasks at which biological intelligence outperforms computer (i.e. artificial) performance for at least some 20 years after the task has been identified and the work towards it has started at an international scale.

However, I will introduce various informal notions about robots which cannot be traced back to any classical notions in computer science. Below I will speak of (i) A robot being human-alike, (ii) A robot passing an extended quasi-Turing test, (iii) A robot allowing intrinsic trust establishment, and a robot passing a moral qualification dilemma, (iv) The decision taking competence of a robot, (v) The moral justification competence of a robot, (vi) The minimal moral justification competence, relative to its decision taking competence, that is required for a robot for it to acquire moral qualification for certain tasks, and (vii) A robot reasoning with Asimovian constructivism. The AI aspect of robotics encountered below is not so much in the technology for achieving certain forms of performance but in the terminology and concepts used for analysing the way the robot creates its behaviour in various contexts.

Definition 1: A physical robot is a localized, but in its motion not necessarily spatially confined, monolithic physical actor under the control of embedded software, which underlies these constraints:

1. The robot engages in physical actions (including motion) and communications. It is through these actions that the robot achieves its objectives, if any.

2. Communications to other agents are of four kinds exclusively: Promising (issuing a promise), imposing (issuing an imposition), receiving (as the promisee) or noting (as an agent in scope) a promise, and receiving (as the promisee) or noting (as an agent in scope) an imposition.


Here promise, imposition, promiser, promisee, and scope are understood as in PT.

3. Besides communications to other agents the robot may have a datalink to one or more datacenters which allows the real time inspection of massive data as well as significant remote processing, beyond the scope of the robot's on board computing performance. However, it is not assumed that the actor owns any resources according to ownership as outlined in [4], and in [5].

4. Moreover the robot is an artefact of human engineering, or an artefact of human engineering combined with robotic action, or an artefact resulting from the work of a team of robots. Robots growing other robots may pass through an unpredictable evolution.

5. In comparison to issuing promises, issuing impositions is much less frequent, so that promises constitute the primary mode of communication of the actor under consideration.

The idea of requirement 4 is to ensure that at least in principle the entire construction path has been documented in all possible detail and could, for that reason, be duplicated as well as analysed. In other words the artefact is known, and in particular its embedded software is known. According to Mark Burgess allowing a robot to own resources is reasonable and the need for a definite demarcation from biologically grown entities is non-obvious. I will maintain both limitations for the sake of simplification of the subsequent exposition. Given a physical robot R its context may be split into various components:

• Theater agents: Agents inhabiting the theater where the robot is active,
• Background agents and structures (including datacenters),
• Responsible agents: agents who feel responsible for the robot's activities.
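As an illustration of Definition 1, the communication repertoire of clause 2 and the context split above can be rendered as plain data structures. This is a sketch of my own, not part of PT or of the definition; all names (CommKind, Promise, RobotContext) are invented for the example.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class CommKind(Enum):
    """The four communication kinds of Definition 1, clause 2."""
    PROMISE = auto()     # issuing a promise
    IMPOSITION = auto()  # issuing an imposition
    RECEIVE = auto()     # receiving a promise or an imposition, as the promisee
    NOTE = auto()        # noting a promise or an imposition, as an agent in scope

@dataclass(frozen=True)
class Promise:
    promiser: str        # the agent issuing the promise
    body: str            # the body b: what is being promised
    promisee: str        # the agent to whom the promise is issued
    scope: frozenset     # the agents in scope S

@dataclass
class RobotContext:
    """The three-way split of a physical robot's context proposed above."""
    theater_agents: set = field(default_factory=set)      # agents inhabiting the theater
    background_agents: set = field(default_factory=set)   # background agents/structures, incl. datacenters
    responsible_agents: set = field(default_factory=set)  # agents who feel responsible for the robot

# Example: the promisee registers this promise as CommKind.RECEIVE,
# while an agent that is merely in scope registers it as CommKind.NOTE.
p = Promise("robot-R", "return to the charging dock before 22:00",
            "operator", frozenset({"facility-manager"}))
```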
Other definitions of robots

The definition of robots above is just one of many conceivable and possibly competing answers to the following question:

Problem 1: (Robot definition problem). What is a robot?

Probably the robot definition problem admits no definite answer, and the requirements for being a robot are likely to steadily increase with advancement of technology. Lin, Abney, & Bekey [6] pay ample attention to the robot definition problem. According to these authors a robot is an engineered machine that senses, thinks, and acts. Their definition is not confined to electromechanical engineering, and may cover results of biotechnological engineering as well as of software engineering.

I will adopt the following conventions: If in addition to Definition 1 an artefact complies with the criteria of Lin, et al. then it is an autonomous robot. Otherwise it is a non-autonomous robot. Lin, et al. insist that a robot is autonomous by definition, thereby casting doubts on the clarity of the title of Bekey [7]. When discussing a robot soldier, however, Tamburrini [8] insists that the soldier, rather than the robot, carries with it the implicit assumption of autonomy.

These distinctions are non-trivial, however, because the intelligence of an autonomous robot need not be the result of on board data collection and processing thereof. The notion of ownership (as defined in Bergstra & Burgess [4]) is of use in Definition 1. If external IT services are owned by an agent (a candidate robot), the outcome of the use of such services contributes to the autonomous behaviour of the agent.

Karppi, et al. [9] propose that a phrase like "killer robot" must not be understood as a certain kind of robot amenable to a precise definition. Instead "killer robot" is part of a cultural technique which plays a role in some power struggle. Killer robot is used in a context and timeframe in which the notion of killing itself is in flux, and so is the notion of robot. As a phrase it helps to create a movement which, right or wrong, transcends its immediately stated purposes. This is a convincing narrative which explains why the CSKR (campaign to stop killer robots) is not engaged in developing increasingly precise definitions of killer robots. The notion of an autonomous robot may perhaps in the same way be seen as a cultural technique meant to arrange and rearrange attitudes, and meant to separate opponents from proponents. The idea of autonomous robot as a cultural technique explains the idea of an autonomous robot imaginary as discussed in Rommetveit, et al. [10]. The latter paper analyses robot autonomy in terms of roadmaps; quoting [10]:


• The main organizing tool and metaphor in this co-construction work is that of the roadmap, whose main characteristics stem from the peculiar fact that it depicts a road that has yet to be built (...), and that eventually will be the single road on the map.

• The roadmap indicates a plurality of gaps, between academia and industry and between science and engineering, and the bridging of gaps becomes a self-propelling metaphor. Even the gap between deterministic preprogrammed agency and consciousness is reduced to a mere pointer to a plurality of gaps, the bridging of which is the perspective of roadmap protagonists.

Humanoids, androids, and gynoids

Humanoid robots are a fraction of conceivable robots only. Nevertheless contemplating humanoid robots as a special case, and doing so uniformly for a range of application areas, has advantages because certain aspects of morality and trust can be discussed with ordinary human ethics and morality as a source of inspiration. I will need some definition of humanoid robots. Here is an option:

Definition 2: A physical robot R is a humanoid (a humanoid robot) if (i) The thought experiment that R were a human being is both reasonable and informative, (ii) The thought experiment of R being fully under remote control of a human being is meaningful, (iii) In each direction of physical ability, competence or capability, the gap between an average human and R is a gradual matter and a continuous line of intermediate humanoids can be imagined, (iv) For each task the need for/added value of autonomy of R can be analysed and assessed (this assessment may range from concluding that the value of human supervision is limited, to the assessment that real time human supervision would stand in the way of adequate speed of operation and ability to react to unforeseen circumstances).

By definition a humanoid robot is a physical robot. I will sometimes use humanoid as an abbreviation of humanoid robot. The Platonic love robots discussed in Nordmo, et al. [11] are software robots and for that reason do not qualify as humanoids under these assumptions. A humanoid robot is enhanced if its capabilities exceed those of average humans in important ways. Typically, physical strength, endurance, ability to survive adverse conditions, speed of action and reaction, and built in data and image processing in combination with wireless connections and an ability to call for support of remote processing may constitute areas where the humanoid robot is superior to an average human.

A male humanoid is also called an android, a female humanoid is also called a gynoid. A humanoid may be genderless as well. It remains to be seen if there is a rationale for contemplating transgender gynoids and transgender androids.
Adequate performance testing assumption versus adequate forecasting assumption: For humanoids it will be assumed that in principle it is feasible to perform adequate testing and simulation to be sure that the robot is able to perform as expected, unless its control functions for some reason override these abilities. This form of testing will be called performance testing. Here performance includes what is done as well as how fast it is done, or how cheap, or whatever relevant measure can be applied.

The limitations of testing for cyber-physical systems are well-known. Testing can demonstrate that the system can perform certain patterns of behaviour (either in a positive sense, often termed demonstration, or in a negative sense if a failure occurs, which will lead to some form of repair), but testing cannot produce guarantees that such patterns will be shown or will not be shown under given conditions.

For instance one may wish a robot to act as a surgeon for some range of medical surgeries. In that case the adequate testing assumption implies that it is possible and feasible, by means of a combination of simulation and experimentation, to confirm (or reject) the proposition that the robot is capable of such activities. Having confirmed such capabilities in principle for humanoid R, it is still possible that R will unexpectedly (for its users) refuse to help a former business competitor of its manufacturer. Excluding such forms of undesirable behaviour requires a different form of analysis, which is able to provide reliable forecasting of what R will do and what R will not do. Clearly if R has been hacked it may be perfectly successful during performance testing while the corresponding adequate forecasting assumption has nevertheless become corrupted. Validating an adequate forecasting assumption, as a whole or in parts, is a task which cannot be performed without taking the fundamental tools of computer science and software engineering on board: Formal specification and verification of hardware and software, model checking, software production lines, the vast technology of computer security, and the top level scrutiny which can justify trust in open source software.
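The contrast between performance testing and adequate forecasting can be made concrete on a deliberately tiny example. The controller below is invented for illustration; random testing is very likely to pronounce it safe, while exhaustive exploration of its (here trivially small) input space, the idealised limit of the verification tools just listed, immediately finds the unsafe case. For a real robot the state space is far too large for enumeration, which is exactly why the heavier machinery is needed.

```python
import itertools
import random

# Toy controller: reads two sensor bits, misbehaves on one rare combination.
def controller(a: bool, b: bool) -> str:
    return "unsafe" if (a and not b) else "safe"

# Performance testing: sample inputs as they occur in practice (a rarely True,
# b almost always True). Absence of failures is evidence, not a guarantee.
def random_test(n: int) -> bool:
    return all(controller(random.random() < 0.01, random.random() < 0.99) == "safe"
               for _ in range(n))

# Forecasting in the ideal case: exhaust the whole input space.
def exhaustive_check() -> bool:
    return all(controller(a, b) == "safe"
               for a, b in itertools.product([False, True], repeat=2))

print(random_test(1000))   # almost always True: testing misses the rare case
print(exhaustive_check())  # False: the input (True, False) is unsafe
```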

Machine learning may lead to unpredictable behaviour of a computer, but that is a problem of a different nature than the difficulty of finding out if a computer has been hacked or if the software contains intentional mistakes which come to expression in irregular circumstances or on the basis of hard to detect external triggers. The situation changes if the dataset used for machine training has been intentionally corrupted by an adverse party. Given a trusted machine learning process, however, the statistical variations created by learning can, at least in theory, be factorized out from other concerns in connection with adequate forecasting.

A rationale for a restriction to humanoid robots: Suppose one contemplates care robots, and besides humanoids one needs to contemplate as robots a range of agents including nano-scale agents for intelligent delivery of chemicals inside a human body, unmanned rescue helicopters, and full scale unmanned and autonomous hospital ships; then taking human morality as a uniform point of departure is uninformative. By having a focus on humanoids, both the robot dimension and the machine morality dimension are simultaneously simplified, of course at the cost of generality.

Autonomy

Autonomy is a property of robot activity which may vary over time. I will distinguish five levels of autonomy. In fact there is rather a continuum of such levels.

1. During a phase in which R has no interaction with any data center, and no interaction with any responsible agents, R is said to be "fully autonomous".

2. During an episode in which R has interaction with responsible agents only on its own request, and retrieves no information from its supporting data centers, R is said to be "semi-autonomous".

3. During an episode in which R has no interaction with responsible agents, and only retrieves mission independent information from its data centers, the robot is "fully autonomous with remote data provision" [6,7]^d.

4. During an episode in which R has interaction with responsible agents only on its own request, and only retrieves mission independent information from its data centers, the robot is "semi-autonomous with remote data provision".

5. During a phase in which R is under permanent control of one or more responsible agents, and has access to the relevant data centers, R is "remotely operated".

^d This description of autonomy is consistent with characterizing autonomy as "the capacity to operate in the real-world environment without any form of external control, once the machine is activated and at least in some areas of operation, for extended periods of time" (Lin, et al. [6], quoted from Bekey [7]).

A humanoid robot is at least in some sense (at some level) autonomous, though its level of autonomy may vary over time.
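The five levels amount to a classification of an episode by two observables: the robot's interaction with responsible agents and its use of data centers. The following sketch makes that reading explicit; the encodings are mine, the labels are those of the list above.

```python
from enum import Enum

class Contact(Enum):
    NONE = 0          # no interaction with responsible agents
    ON_REQUEST = 1    # interaction only on the robot's own request
    PERMANENT = 2     # permanent control by responsible agents

class DataUse(Enum):
    NONE = 0                 # no data center interaction
    MISSION_INDEPENDENT = 1  # retrieves mission independent information only
    FULL = 2                 # access to the relevant data centers

def autonomy_level(contact: Contact, data: DataUse) -> str:
    """Classify an episode according to the five levels in the text."""
    if contact is Contact.NONE and data is DataUse.NONE:
        return "fully autonomous"                                # level 1
    if contact is Contact.ON_REQUEST and data is DataUse.NONE:
        return "semi-autonomous"                                 # level 2
    if contact is Contact.NONE and data is DataUse.MISSION_INDEPENDENT:
        return "fully autonomous with remote data provision"     # level 3
    if contact is Contact.ON_REQUEST and data is DataUse.MISSION_INDEPENDENT:
        return "semi-autonomous with remote data provision"      # level 4
    if contact is Contact.PERMANENT and data is DataUse.FULL:
        return "remotely operated"                               # level 5
    return "intermediate"  # the text notes a continuum between the five levels
```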
Trust as a counterpart of promise: vital trust

An essential notion concerning robots is trust. Trust serves, more than obligation, as a counterpart of promise. Trust is not easily defined, let alone quantified. Some applications of the notion of trust are obvious: If the management of a hospital writes to the manufacturer that they do not trust surgical robot R to adequately carry out a surgery of type S (on the basis of a recent fiasco), they will not, without having obtained any response from the manufacturer, impose precisely such a task on R.

But if you do not trust the quality of car maintenance of car dealer X, will you make use of a car that has just been serviced by X? In this case the distrust may have various causes. Perhaps one is afraid that they do too much and then charge too much. Such fears do not constitute a reason not to make use of the car. Alternatively one may be afraid that they don't inspect the car well enough to see real problems and that that is what makes them so cheap. This may change the picture, though it does not explain why one did not deal with another garage instead. It is not so easy to find a plausible case where, convincingly, lack of trust in X's quality of car maintenance will lead to reluctance to use the car. For some robots, including surgery robots, trust must be quite high.


Definition 3: Agent A has vital trust in robot R if A is willing to make their life, as well as the lives of their relatives and friends, dependent on R's keeping one or more of its promises.

It is not obvious whether or not vital trust in a given robot (say R1) can be acquired by an agent A, even if it would be wise for A to trust it.

Problem 2: Given a human agent A, and a number, say k ∈ ℕ, of robots R1, . . . , Rk from the same product family, i.e. clones, though potentially with different known histories. Given as well is some, possibly incomplete, package of information I_{1,...,k}. Is there any amount of experience or experimentation which A can do or arrange to be done with these robots which will plausibly create vital trust of A in, say, robot R1?

The fact that robots are artefacts appears in the possibility of positive results on the following problem.

Problem 3: Given a human agent A, and a number, say k ∈ ℕ, of robots R1, . . . , Rk from the same product family, i.e. clones, though potentially with different histories. Is there any amount of experience or experimentation which A can do or arrange to be done with these robots, in combination with detailed knowledge of how the robots have been built and with precise knowledge of the past history of each of them, such that it is plausible that vital trust is created by A in, say, robot R1?

On Promises: Promise as an Abbreviation of PT-Promise

For a detailed description of promises as used in PT, I refer to [12] and the references cited therein. An extension to threats, viewed as a particular form of promises, is detailed in [13]. Positioning the PT view on promises amongst a plurality of different views on promises held within philosophy, law, ethics, and political science is a challenge which has not yet been taken up in a satisfactory manner. PT holds that promises do not create obligations. As the term obligation is just as ambiguous as is the term promise, it is, however, not so clear what this denial of the role of obligations in connection with promises means.

In order to avoid confusion it might be useful to speak of PT-promises whenever a promise in the sense of PT is meant. This text may be read as if that is done, while abbreviating PT-promise to promise and being specific about the understanding of the term promise whenever any other interpretation (i.e. not PT-promise) of it is meant. If instead the first-order effect of a promise is supposed to be the combination of creating an expectation for the promisee and an obligation towards the promisee at the same time, then one may speak (having adopted PT-promise as the default meaning of promise) of an OG-promise (obligation generating promise).
Contrast with different views on promises

Different philosophers and lawyers maintain disparate views on what a promise is. For instance see [14] for a detailed legal account. I will now highlight some aspects of the concept of promises as it occurs in promise theory (PT). I assume that p refers to the promise made by promiser A with body b to promisee B with the agents in S in scope. The body b may encode information about the time, location, and modality of the act of issuing, as well as of actions which will or may be involved in keeping the promise.

1. Apart from having promised b there is no obligation for A which comes about from having made the promise, unless promise body b explicitly indicates that some particular form of obligation is accepted by A. Avoiding the creation of an obligation constitutes a drastic simplification of promising. I refer to Scanlon [15] for an analysis of the remarkably rich variety of obligations which may come with a promise, assuming that any obligations do.

2. A well-known idea is that A must keep its promise p in order to support B in achieving their objectives (see e.g. Rossi [16] who attributes this point of view to G. A. Cohen and surveys different arguments for it). However, in PT there is some deviation from this suggestion:

• The rationale in promise keeping lies primarily in the fact that unless most promises are kept, promising cannot play a central role in inter-agent communication patterns. The usefulness of promising goes hand in hand with most promises being kept. This is the basic rationale of promise keeping.

• Promise keeping may be compared to driving on the right side of the road. Doing so is useful if most or preferably all traffic participants do so, while it is not an end in itself, and the world can just as well be organized without that convention.


• For a specific promise p, in addition to the general expectation (likely to be shared by the promisee) that promise p will be kept on the virtue of having been promised, comes the impact of the degree of trust that B has in A; indeed, the more B trusts A, the higher B's subjective probability assignment (i.e. subjective expectation) that A will keep this particular promise.

• Whenever a promise p is kept, as a side-effect the trust of other agents (beginning with B) in A will increase.

• For A, keeping its promise p to B may have another rationale (that is, a rationale which is unrelated to the idea that p being kept is useful for the promisee), namely to maintain or even increase the trust which B assigns to A, or the trust of agents in scope of the promise in A.

3. Once p is kept (i.e. is kept by A, even in case A did nothing to put the body b of p into effect) the trust of B and of agents in scope S in A will increase. In this case the promise expires once the assessment that p has been kept (by A) has become secured for A.

4. Once it has become clear to B that p will not be kept (irrespective of whether or not A did anything to put the body b into effect) the trust of B and of agents in scope S in A will decrease. In this case the promise expires once said inference has become convincing for A.

An alternative view: PT-promissory obligations

Instead of claiming that in PT promises do not generate obligations, one may alternatively adopt the existence of promissory obligations and be very specific about the nature of those. For that approach to work I assume that at any instant of time t, T_A(B, t) represents the trust of A in B and R_A(B, t) represents the respect that A has for B. Both trust and respect may be negative as well as positive. Trust and respect may be understood as elements of suitably chosen partially ordered sets.

Assume that at time r, a promise p is made as follows: A promises b (that is, to put body b into effect) to C with the members of S in scope. Then the promissory obligation incurred by A is as follows:

• To accept (i.e. to tolerate or not to resist) any updates of T_C(A, t) applied by the promisee C, and of T_s(A, t) by any agent s in scope S, where it is understood that assessments of both the plausibility of the promise and the perspective of it being kept play a central role.

• To accept that agents in scope will perform assessments from the perspective of the promisee.

• To accept that at any time t > r, if the promise is not yet kept, the expectation of agents C and s that the promise is going to be kept by A is to a large extent determined by T_C(A, t) resp. T_s(A, t), where higher trust indicates a higher expectation.

• To accept any updates of R_C(A, t) applied by the promisee C, and of R_s(A, t) by any agent s in scope S, where it is understood that assessments of both the plausibility of the promise and the perspective of it being kept play a central role.

• To accept and expect that at any time t > r, an increase (decrease) of R_C(A, t) (w.r.t. R_C(A, r)) correlates with an increased (decreased) expectation of A being made promises q by C from which A may profit in the future, and similarly for agents in scope.

One may naively imagine trust and respect as fields which impose a sort of potential comparable with gravity or electric voltage. Issuing a promise sends a kind of wave through these fields, potentially changing both at all future times. The wave, however, is complicated by the fact that its impact is at any time dependent on assessments, and on the dynamics of other promises, and it only ceases to exist once the promise has expired.

Given this description of promissory obligation it is perfectly plausible that the only objective which, say, A has for keeping a promise p to C is to increase T_C(A) so that at a later stage, when A issues a promise q, C is falsely led to believe that A will keep q, with detrimental consequences for C. Continuing with the comparison of the convention of promise keeping to the convention of driving on the right side of the road, one may imagine that A drives correctly on the right side in order to arrive at a location where, by intentionally driving on the left side, an accident is deliberately caused by A, for instance as a step towards a well-prepared insurance fraud.
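A minimal numerical sketch of the bookkeeping this description suggests: the promisee C maintains T_C(A, t) and R_C(A, t) and updates them as promises are kept or not kept. The text only requires trust and respect to live in partially ordered sets; real numbers and fixed increments are my simplification for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAssessments:
    """Trust and respect one agent holds in other agents; reals stand in
    for the partially ordered sets of the text."""
    trust: dict = field(default_factory=dict)    # T_self(other)
    respect: dict = field(default_factory=dict)  # R_self(other)

    def expectation(self, promiser: str) -> float:
        # Higher trust indicates a higher expectation that a pending
        # promise will be kept.
        return self.trust.get(promiser, 0.0)

    def on_promise_kept(self, promiser: str, delta: float = 0.1) -> None:
        self.trust[promiser] = self.trust.get(promiser, 0.0) + delta
        self.respect[promiser] = self.respect.get(promiser, 0.0) + delta

    def on_promise_not_kept(self, promiser: str, delta: float = 0.1) -> None:
        # Both values may become negative, as the text allows.
        self.trust[promiser] = self.trust.get(promiser, 0.0) - delta
        self.respect[promiser] = self.respect.get(promiser, 0.0) - delta

c = AgentAssessments()
c.on_promise_kept("A")
print(c.expectation("A"))  # 0.1: A's kept promise raised C's expectation
```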


Promising is understood as a highly frequently used mode of communication, also in relation to trivial actions and states of affairs, and in order to preserve that quality it is a system requirement that most promises will be kept, or rather that in the large majority of cases promisers intend to keep their promises made towards promisees by which they are trusted. If, however, the promiser has a mental problem and will forget their own promises before having any chance of keeping them, it is not plausible to speak of an intention to keep the promise, and this state of affairs is likely to have been noticed before by the promisee, so that their trust in the promiser has already been downgraded accordingly.

On (physical) Robots: Robot Application Areas

Factory robots are ubiquitous, but humanoid factory robots are very uncommon. Sex robots are included as a special case of service robots in view of the extensive recent literature on the matter, with a focus on principled ethical aspects. Moreover, sex robots more than any of the other categories of robots still embody the idea of automation: Doing what a human being used to do. Service robots include lawn mowers and other devices which may be helpful for the general public. Robots for trading on the stock market are considered software robots and have not been included in this listing, just as the Platonic love robots discussed in Nordmo, et al. [11].

1. Factory robots.
2. Sex robots.
3. Service robots.
4. Care robots.
5. Combat robots.
6. Killer robots.

The types are ordered in such a manner that for robot classes occurring later in the listing it is harder to validate an adequate forecasting assumption. Adequate forecasting concerns the certainty that things cannot go terribly wrong. That can be guaranteed for factory robots by switching power off as soon as humans enter the scene. This solution provides no protection against a factory robot making intentional construction faults. For sex robots it can be guaranteed by making these physically weaker than expected clients. For service robots the picture is mixed. Clearly if a self-driving car is driven by a humanoid robot, serious accidents may be caused by that robot. On the other hand an automated vacuum cleaner may be harmless under all circumstances even if its software has been hacked. Care robots are complicated too: Even a robot bringing a cup of (very hot) tea to a person may inflict serious harm to that person. For combat robots the maximal damage inflicted through erratic behaviour can be limited by limiting armour, which for killer robots is hardly an option.

Personal (i.e. robotic) tools

Assuming that the robots are humanoid, their tools are part of some form of standard equipment rather than of the robots proper. Humanoid combat robots need weapons as part of their equipment. Actions of such robots include actions performed by means of such equipment. These matters are confusing for human agents already. A handgun is part of the equipment of a soldier, a fighter plane may be considered equipment of its pilot, but a large submarine is not (part of) the equipment of its captain.

Role of trust and potential for harm

In the literature a different focus on trust and a different perspective on potential wrongs exists for the various classes of robots. Below I will make some scattered remarks on these matters, leaving a precise analysis of the relevant contrasts for later work.

Factory robots: For a factory robot it is essential that it can be reliably switched off. Unless the robot has an external power supply, trust for that very matter may be in need of verification (an aspect of adequate forecasting). During operations people may get out of harm's way so that immediate risk for factory workers may be easily dealt with.
Quite another matter is to make sure (and to trust) that the robot is not hacked or otherwise under the influence of adverse powers, so that it causes hard to detect but intentional faults in the factory products for which its activity has been used.

If, concerning factory robot R, its owners or users have sufficient grounds for adopting the adequate performance testing assumption and the adequate forecasting assumption, there seem to be no further ethical issues in the way of its use. Here I do not consider the redundancy of workforce caused by automation as a moral problem having to do with the use of R.

For factory robots a humanoid shape is irrelevant and probably even disadvantageous. I do not see how additional moral complications can be introduced by robots being humanoids. Clearly a preference for humanoid robots must not come with acceptance of lower quality of production.

Sex robots: The role of trust is unclear. Sex robots may be built in such a manner as to be demonstrably unable to physically overpower their human companions (clients). Trust is therefore inessential for client safety. Sex robots (also referred to as gynoids in the case of female sex robots) may indeed potentially cause several problems for their human client:

• The client being physically hurt, or attacked and frightened, by the robot.
• Being approached intimately by the robot without having given consent in advance (see Levy [17]).
• Creating an addictive dependency.
• IT security risks, failure to protect client data integrity (see e.g. Gailaitsi, et al. [18]).
• Providing a substitute for a potential human role to such an extent that the client becomes increasingly unable to appreciate human partnership.
• (For male clients) the gynoid may reinforce stereotype perceptions of expectation of female submission, and ultimately commodification^e. See also [19] for sex robots as triggers for aggressive behaviour.
• Being responsive to a caricature of the sentiments of the client in a way which is detrimental to the client (e.g. Cho in Leach [20]).
• Inviting the client to aggressive behaviour towards the robotic companion (see Knox [21], who focuses on covert means of self defence for a future gynoid in the presence of a male client)^f [22].
• Unilaterally and unexpectedly terminating a relationship in which the client has invested much emotional energy.
• A robot which insists on providing explicit consent in advance of sexual engagement may rewrite the history and start complaining, either towards the client or towards third party agents, that the client has proceeded in the past without properly securing consent in advance.
• Giving rise to jealousy felt by their (human) partners, upon engaging with a sex robot (with comparable effects, that is, effects expected by interviewees, observed for Platonic love robots by Nordmo, et al. [11]).

^e This objection has become widely advertised in a so-called campaign to stop sex robots, which, in my view, deserves no further mention because it is based upon a seemingly axiomatic prejudice against female sex workers and their male clients.

^f The robot as a subject of moral caution constitutes the third perspective on roboethics of Steinert [22].

As different clients may have vastly different expectations regarding what service they expect from a sex robot, it is hardly plausible to design a model fit for all purposes. But one may imagine a consulting practice where a human consultant together with a client designs the personality and expected behaviour of a client specific sex robot which takes care of a wide range of interests of the client, including avoidance of addiction and excessive dependency. This design can be regularly updated on the basis of data about the interaction between client and robot, and of course on the basis of the client's well-being and the consultant's informed opinion about the best way for the client to proceed.

Service robots: Lawn mowing, cleaning, window cleaning, painting, and highly laborious tasks in agriculture each may be automated through robots in coming years. Paluch, et al. [23] offers a useful recent perspective on the progress of robots in the service industry. Such robots will work in the vicinity of humans and for that reason it is essential that their actions are and stay confined to their scheduled tasks. This will require adequate forecasting. Fukawa [24] uses RSA (robotic service assistant) and defines a robot according to ISI 2012. Fukawa indicates that humanoid form is considered useful in some cases. Moreover he indicates how RSAs which may profit from disparate historical transaction information requiring a high level of privacy may profit from being coordinated with the help of a corporate blockchain.

Care robots: Humanoid care robots may need to work in teams.

For instance a pair of such robots may be needed to carry wounded persons from an accident site, and one or more others for further transportation. One may imagine three cooperating robots able to safely turn around a Covid-19 patient in an IC unit.

The availability of many protocols for many care bound activities simplifies the task of designing care robots. Care robots may fail, in various ways, to act with sufficient respect for the human dignity of their clients. Designing care robots in compliance with paradigms for ethics of care has become a large field of work. It is by no means obvious how to design a robot which can bathe a person in a way which matches the performance of a team of well-trained human carers.

Combat robots: Work on combat robots which may not have lethal weapons on board can be found in Elands, et al. [25], Aliman & Kester [26], and Aliman, et al. [27]. A major risk of such weapons is that warfare gets out of control if these become operational in large numbers while fighting one another rather than human opponents.

Killer robots: Combat robots with lethal weapon capabilities (so-called killer robots) constitute a subclass of combat robots. There is a large literature on the ethics of lethal robotic weapons. I mention Williams [28], Slijper, et al. [29], [30], Sharkey [31], Aliman & Kester [26], Aliman, et al. [27], and Sayler [32].

Killer robots are combat robots for which the killing of humans is a plausible option for action and which may autonomously spot and kill human targets.
Finding a precise definition of killer robots which serves the purposes of international arms control is still a challenge (see e.g. Wyatt [33]). The principal objection which can be raised against the development, deployment, and use of killer robots is that it makes machines take decisions about life and death concerning humans, which by some is considered an unethical state of affairs. But some authors do not see it that way; see for instance Scharre [34], Salina [35], and Sayler [32].

Robot Ethics and Robot Morality

I will use Asaro [36] as a starting point, thereby adopting the idea that a legal approach provides a first point of entry to robot ethics. Upon adopting the legal first perspective the following assumptions are plausible: (i) Robots may first of all be considered conventional industrial products, (ii) Both tort law and criminal law may come into play, (iii) At a first approximation robots may be considered quasi-persons comparable to corporations, and (iv) Initially an identification between moral action and law abiding action may be made, though more sophistication will be needed at some stage because differences in both directions can be easily imagined.

Robot ethics may at first sight be considered a special case of machine ethics (ME). But this is not the customary understanding of these phrases and I will adopt the exposition of Malle [37] on the matter: Machine ethics, also termed machine morality, is about how to incorporate ethical aspects (morality) in the design of machines, whereas robot ethics focuses on how people should design, deploy, and treat robots. I will speak of robot morality as shorthand for machine ethics for robots, i.e. machine ethics for machines which are physical robots. I will include robot meta-morality, i.e. the study of robot morality and principles thereof, in robot ethics [22,38]^g. With "ethical aspects of robotics" I propose a container for robot morality as well as robot ethics.

^g Steinert [22] proposes a so-called ethical framework for robotics, which is extended in Westerlund [38]. These frameworks provide merely a survey of aspects of the topic at hand, and no claim to universality or generality can be made. The various aspects put forward in both frameworks are hardly orthogonal, and once robots exceed some complexity all aspects are simultaneously relevant while none can be investigated without paying attention to the other aspects.

I hold that Malle supersedes Asaro [39], where robot ethics is still a container of different approaches to "ethics in robotics" (for which I prefer to use ethical aspects of robotics instead, which encompasses both robot ethics and robot morality as mentioned above). From Asaro [39], I adopt the idea that the development of robots starts with robots as amoral agents, which are neither moral nor immoral, and proceeds to moral agents. The status of moral agent is achieved to various degrees in different robot designs, and some moral agents may be immoral agents at the same time, thereby acknowledging an implicit positive bias in the phrase moral agent.

Various authors are sceptical about the coherence of machine ethics as a matter of principle.

Malle, however, views machine ethics (for robots) as a feasible path where deeper philosophical questions, such as the essence of free will or the requirement that moral agents are conscious, do not stand in the way, while instead a thorough analysis of human morality is taken as the point of departure [37]^h. Shen [40] infers from the analysis of free will by Frankfurt that for now it may be safely understood that robots lack free will and for that reason lack the capacity to operate as moral agents. Following the extensive arguments in Coeckelbergh [41], where a convincing rationale of the perspective of a moral robot is sketched, I prefer to ignore such arguments and to think in terms of MR-moral agents: Agents with physical robot-like morality, a form of morality which is ad hoc and which may or may not be provided with a philosophical grounding comparable with the various foundations of human morality, and which definitely is in no need to have precisely the same foundations as human morality. If one contemplates software robots one may in addition contemplate SR-morality (morality of software robot like agents). SR-morality is more distant from the human condition than MR-morality, and the question to what extent both notions will or should overlap need not bother us now.

^h Malle proceeds with a number of judgments which I cannot easily follow; for instance it is claimed in [37] that flexible autonomous behaviour cannot be pre-programmed, a claim which is in need of further justification. And it is suggested that building fear into robots is not the best way to go. Such judgments seem to be in contradiction with the main thrust of the paper, which takes a wide spectrum approach to human morality as the point of departure.

Moor [42] proposes that explicit ethical agents deserve full attention, and that all philosophical worries can be left aside or postponed for that project. This idea is consistent with Malle [37] under the assumption that one reads Moor's "explicit ethical agent" as "explicit moral agent".

Machine ethics has been promoted by many authors and researchers as a way to conceptualize the design of machines the behaviour of which can be expected to be morally adequate. The machines are supposed to be morally competent and reliable by design. A critique of the machine ethics approach is formulated by Brundage [43]. Brundage suggests that for the time being no path towards machine ethics, including approaches to AGI (artificial general intelligence) ethics, can be expected to produce ethically reliable machines.

Brundage concludes that in order to avoid that engineering creates machines which will exhibit problematic behaviour, one cannot rely on machine ethics and one must adopt stronger means to contain and manage the development of technology. I will refer to such means as UPME, for universal and perpetual machine ethics, which has a focus on the preservation of meaningful human control over all machines. UPME is concerned with policies that can guarantee that the use of AI, including embedded AI, will not grow out of hand in an irreversible manner. UPME raises fundamental questions. For instance one may oppose and therefore act against the emergence of so-called super-intelligence, or one may take the development of super-intelligence for granted in the long run and focus on management of its presence and containment of its influence.

Adaptive incremental machine ethics (AIME) as explained by Powers in [44] does away with the idea that a practice of machine ethics will produce machines with impeccable behaviour. Like men, machines will develop through stages and mistakes will be made, which may appear in machine behaviour that is retrospectively labeled as unethical, so that design improvements must be researched, chosen, and implemented in further generations of comparable machines.

The terminology as set out above is to some extent puzzling.
In order to achieve further clarification I mention some questions concerning ethical aspects of robotics and indicate in which subarea these questions appear^i:

• How much of ethics can be embedded in robots? This question belongs to robot morality, and in particular to finding boundaries of robot morality.

• Is it morally justified to program robots to follow given ethical codes? This question belongs to robot ethics for given ethical codes, and when posed in general the question belongs to robot meta-morality.

• Which types of ethical codes can be applied in the context of robot morality? This is a question of robot morality.


• Who is responsible for actions of human robot combinations? This is a matter of robot ethics.

• Is the requirement of autonomy for a robot in contradiction with robot morality? This is a key topic for robot morality, and a negative answer is needed to justify the very topic of robot morality.

• What types of robots should not be designed, and why? Such questions, of core importance in the area of combat robots, belong to robot ethics.

• How will robots deal with conflicting ethical rules? This question is central to robot morality, as most moral codes will allow exceptions in extreme conditions.

• Are there risks involved in emotional human bonding to robots? A matter of robot ethics.

^i These questions, and the suggestion to use such questions for the clarification of concepts, were brought to my attention by one of the reviewers.

Extended Quasi-Turing Testing (EQTT)

In this Section I will propose an option for the appreciation of robot morality. This option is supposedly useful for a significant fraction of care robots, perhaps for all sex robots, and for combat robots, with the possible exception of highly specialised robots for particular types of tasks. The proposed option is less plausible for factory robots and for service robots.

Definition 4: A physical robot R satisfies an Extended Quasi-Turing Test (EQTT) if the following conditions are met:

• R is able to accept promises of the following kind: I (the agent in charge of assessment of R) promise to you that I will take notice of, and will incorporate in my assessment of you, what you promise to be your preferred course of action, presented by way of a plan, under the assumption that the following events E = {e_1, . . . , e_n} have taken place and conditions C = {c_1, . . . , c_n} are satisfied.

• R has been able, by issuing an adequate family of (mostly conditional) promises, in part in response to a succession of promises as meant above, to demonstrate that it knows how to react to a wide spectrum of circumstances. In particular R has demonstrated how its process of deliberation will work in various circumstances.

• For all elements of each plan R provides an ethical analysis. This analysis may be based on known (given to R) moral principles and guidelines (moral realism for R) as well as on potentially novel inferred rules and guidelines (see Asimovian constructivism in Definition 9.1 below).

• The totality of (conditional) promises issued by R is such that if R were a trusted human being, R would be allowed to act autonomously or semi-autonomously in the operational theaters for which its promise bundle has been considered adequate. (From this it does not follow that R's operation will be morally flawless, just as the same can't be inferred for a human agent.)

• R, or robots of very similar shape, history, and design, is able to keep its promises under a wide variety of conditions. This is a matter of experimental research only, not of design inspection etc. Other comparable robots may be needed when testing risks the integrity of R proper.

EQTT is labeled extended because of the assessment of the ability of R to keep its promises. It is a quasi Turing test because the objective is merely to pass an assessment, while similarity to human behaviour is optional at best. In many cases superiority over human behaviour will be expected. For the notion of an extended Turing test see also Marzano [45].
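A data-structure sketch of one round of the exchange described by the first clauses of Definition 4: the assessor issues a hypothetical (events E, conditions C), and the robot answers with a plan, a per-element ethical analysis, and a family of conditional promises. Everything here, including the deliberate() method, is an assumed interface invented for illustration; the definition itself does not prescribe any representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssessorPromise:
    """'I promise to take notice of what you promise to be your preferred
    course of action, assuming events E have taken place and conditions C hold.'"""
    events: frozenset      # E = {e_1, ..., e_n}
    conditions: frozenset  # C = {c_1, ..., c_n}

@dataclass
class RobotResponse:
    plan: list                  # the preferred course of action, element by element
    ethical_analysis: dict      # per plan element: analysis from given or inferred guidelines
    conditional_promises: list  # the (mostly conditional) promises issued in reply

def eqtt_round(robot, challenge: AssessorPromise) -> RobotResponse:
    """One round of the EQTT: pose a hypothetical, inspect the deliberation."""
    response = robot.deliberate(challenge)  # robot-specific; not specified here
    # Definition 4: every element of the plan must come with an ethical analysis.
    assert all(step in response.ethical_analysis for step in response.plan)
    return response
```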
The post EQTT intrinsic trust dilemma

If R meets the EQTT in the context of a certain range of operational tasks, then there still is the following question: will R keep its promises? This question is about trust in the validity of the adequate forecasting assumption.

The extent to which a robot can be trusted is discussed in detail by Coeckelbergh in [46]. In that paper the question is posed as a matter of principle which depends on one's view on meta-trust (i.e. what trust is), and also on one's assessment of the potential for consciousness of robots. I will understand trust as a combination, i.e. conjunction, of two forms of trust: The trust that the robot can perform as desired and the trust that the robot will perform as desired. Achieving the trust that the robot can perform as desired involves no computer science at all. It is just as if such trust would be required from a human person. But the second factor of trust is almost entirely a matter of classical computer science.


Definition 5: (Post EQTT intrinsic trust dilemma.) The post EQTT intrinsic trust dilemma, w.r.t. an agent A (or community of agents) for a (physical) robot R, which according to A has passed an adequate EQTT (for some specific range of tasks), is the question whether or not to assume WYSIWYG (what you see is what you get) for R: Are the design, production, installation, and configuration of R to be trusted as a reasonable and logical implementation of the behaviour that was expected for EQTT (i.e. WYSIWYG), or alternatively, are there software faults, design faults, past software process flaws, or worse, intentionally implanted malware or security vulnerabilities, which render intrinsic trust in R unwarranted?

There is no such thing as intrinsic trust in R in the absence of EQTT adequacy. Intrinsic trust is a matter of correctness and liveness, and requires clear specifications of expected behaviour. The requirements that underlie the EQTT at hand serve as specifications. For the application of computer science methods it is not a big problem that such requirements are to some extent informal.

The conventional idea in computer science and software engineering on intrinsic trust is as follows: If A trusts all phases of the process of design, production, deployment, and run time control, then intrinsic trust is warranted. Usually A can only establish such trust via intermediate parties who see to it that all steps and processes are taken in the right manner (according to predetermined working methods), are taken by reliable and competent engineers, and are fully tested and crosschecked in different phases and at different levels of integration according to standardised methods. (A schematic rendering of this view is sketched below.)

Alternatively, for certain components one may forget how and by whom these have been designed and produced, provided watertight formal verification is possible and has been performed. With the current technology it is hard to imagine that A would acquire intrinsic trust in R in circumstances where R has been designed and manufactured by untrusted staff and all A knows of is a formal verification of the design and implementation of R. Probably this state of affairs is only conceivable for a software robot, and even then it is as yet unlikely. At the same time it is becoming increasingly implausible to do without formal verification, in the light of the complexity of systems, which renders the detection of faults too difficult for teams of (trustworthy) engineers.

I assume that initially the trust gap is such that one tends to trust a person more than a robot, and that bridging the trust gap means acquiring a comparable trust in the robot. We are not yet in the stage where the trust gap works the other way around by default, though that may be the future of physical robot morality.
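The conjunction-of-phases view of intrinsic trust described above can be rendered schematically as follows. This is a minimal illustrative sketch only: the phase names and the TrustAssessment structure are my own assumptions, not part of PT or of any existing library.

```python
from dataclasses import dataclass

# Hypothetical phases of the robot production pipeline (illustrative only).
PHASES = ("design", "production", "deployment", "runtime_control")

@dataclass
class TrustAssessment:
    # A's trust judgement per phase, typically obtained via intermediaries.
    trusted: dict

def intrinsic_trust_warranted(a: TrustAssessment) -> bool:
    # The conventional view: intrinsic trust is warranted only if every
    # phase of the process is trusted (directly or via intermediate parties).
    return all(a.trusted.get(p, False) for p in PHASES)

# Example: trust established for all phases except run time control.
assessment = TrustAssessment(trusted={
    "design": True, "production": True,
    "deployment": True, "runtime_control": False,
})
print(intrinsic_trust_warranted(assessment))  # False
```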


Options for intrinsic trust establishment (ITE)

The fact that R is a human artefact, directly or indirectly, may be of help for establishing intrinsic trust: in principle it is possible to predict the behaviour of R because its construction is known. This state of affairs is very different from the case where R were a human agent.

Assumption 1: (General artefact simplicity assumption.) The post EQTT intrinsic trust dilemma for each specific physical robot R can in all circumstances be bridged by inspection of the design and the production of R.

Assumption 2: (General artefact complexity assumption.) There exists a physical robot R for which the intrinsic trust dilemma cannot be bridged by inspection of the design and the production of R.

Assumption 3: (Strong general artefact complexity assumption.) There exists a physical robot R for which the intrinsic trust dilemma cannot be bridged by inspection of the design and the production of R, in combination with any reasonable amount of testing and experimentation with R and with clones of R.

Assumption 4: (Specific artefact simplicity assumption.) The intrinsic trust dilemma for a specific physical robot R w.r.t. a successful EQTT can be resolved by inspection of the design and the production of R.

Proposition 1: The artefact simplicity assumption is warranted for agent A, w.r.t. a physical robot R, if the following criteria are met:

1. R's computing hardware is standard,
2. R's software technology is a plausible result of top down software engineering with a waterfall model like software process,
3. R's construction optionally involves a final stage of machine learning with a sample of cases known to A,
4. During the software process no software process flaws were observed [5,47,48]^j,
5. A trusts the robot designers, the software process, and the robot production process.

If these criteria are met, then the trust gap can be bridged on the basis of the artefact simplicity assumption.

^j See [47,48] for the notion of a software process flaw. See also [5] for faults and errors in connection with promises.

Even if A is justified in adopting the artefact simplicity assumption for R, there is no guarantee that R's actions will be morally adequate in the future. The situation would be no different for a human agent H in place of R.

Assumption 5: (General bounded scope assumption for morality.) No amount of knowledge about any autonomous agent can guarantee that all of its future actions will be morally adequate, unless it ceases activity altogether.

Assumption 6: (Specific bounded scope assumption for morality.) No amount of knowledge about robot R can guarantee that all of its future actions will be morally adequate, unless it ceases activity altogether.

Robot morality is not about the preparation of an individual robot for impeccable future behaviour. Rather, as a field, and informed by robot ethics, robot morality works towards steady progress in preparing robots for increasingly better behaviour, while learning from circumstances where morally problematic behaviour was shown.

The situation is comparable with conventional mathematics. Although mathematics is geared towards the production of truth based on valid proof from (hopefully) consistent assumptions, an individual mathematician is not characterised by their ability to proceed without making any mistakes whatsoever, but rather by the willingness and the ability to cooperate with other mathematicians towards delivering, in the long run, reliable proofs based on comprehensible definitions. With the advent of proof checking this state of affairs may change, however.

The post EQTT and post ITE moral qualification dilemma (MQD) for a robot ready for action

If a robot has been prepared for action (within a predetermined range of possible actions) so that EQTT adequacy was established, and intrinsic trust has been established, say both with six sigma levels of confidence (a one in a million chance of failure, i.e. of not performing as promised; performing as promised in a way which in hindsight was the wrong thing to do is not considered a failure, however), then the machine ethics question remains: is it morally justified for the human in command to send the robot into autonomous action? (This three-stage state of affairs is sketched schematically below.)

It seems to be the case that differences of opinion regarding the moral qualification dilemma are all of a philosophical nature. I disregard opinions against such moral qualification which are based on the assumption that EQTT cannot be successfully passed (for instance a combat robot might be unable to see whether or not a woman is pregnant). These views are deeply mistaken because the essence of the condition that the EQTT has been passed is (intentionally) misunderstood. Indeed: Robots should not be asked to do what they cannot do, but that is manifestly not the moral issue at stake.

About MQD I have nothing to say except for the relative positioning to EQTT and ITE, which confines it to philosophical considerations.
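The relative positioning of EQTT, ITE, and MQD can be rendered as a simple record. This is an illustrative sketch only; the field names and the encoding of the six sigma threshold are my own assumptions, not notation from the paper.

```python
from dataclasses import dataclass

SIX_SIGMA = 1e-6  # the one-in-a-million failure chance mentioned above

@dataclass
class RobotQualification:
    eqtt_failure_rate: float  # residual doubt after the extended quasi-Turing test
    ite_failure_rate: float   # residual doubt after intrinsic trust establishment
    mqd_resolved: bool        # outcome of the (philosophical) moral qualification dilemma

def ready_for_autonomous_action(q: RobotQualification) -> bool:
    # EQTT and ITE can be settled by engineering evidence; the MQD cannot:
    # it remains a separate, philosophical question.
    engineering_ok = (q.eqtt_failure_rate <= SIX_SIGMA
                      and q.ite_failure_rate <= SIX_SIGMA)
    return engineering_ok and q.mqd_resolved

# Even a robot meeting six sigma confidence on both engineering stages is
# not ready for autonomous action while the MQD remains unresolved.
print(ready_for_autonomous_action(RobotQualification(1e-7, 1e-7, False)))  # False
```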


More Detail on MQD

EQTT is quite inclusive; the confirmation thereof contains several aspects, in particular decision taking competence, moral justification competence, and impact significance.

Decision taking competence (DTC)

The decision taking competence provides a qualification of the quality of the decision taking process which a robot carries out in advance of making various choices. The DTC may be better for one kind of decision than for another. So let DTC(R, Ct) represent the DTC of R for tasks of class Ct. I am unable to define DTC(R, Ct). In fact it is a parameter which must be agreed upon in an ad hoc and case by case manner. It is an assumption that by knowing the conditional promises which a robot receives and makes it is possible to determine the quality of decision taking. The following factors contribute to it:

1. Fact finding competence, the ability to determine the relevant facts in a given situation,
2. The number and variety of different options which are considered,
3. The range of considerations which are taken into account when contemplating an option,
4. The quality of the criteria used for a final selection of a plan for future action,
5. Explicit consideration of the question whether or not a human agent would be better (or worse) at making the choice at hand,
6. Quality of predictions made about the result of making different choices,
7. Quality of simulations underlying predictions.

Moral justification competence

Moral justification competence (MJC) measures the degree to which a robot is able to justify its choices from a moral perspective, including the way in which it takes moral criteria into account when taking a decision. Like decision taking competence, moral justification competence of a robot R may be specific (written MJC(R, Ct)) for a class of tasks Ct. MJC increases with the number and quality of ethical considerations and guidelines which it takes into account. A survey of such considerations and guidelines is indicated in the Section on meta-ethical foundations below. A criterion is also the extent to which the criteria of tracking and tracing as proposed by Santioni de Sio and van den Hoven [49] are met.

Impact significance

The impact significance IMS measures the impact of performing a task of class Ct. Impact significance IMS(Ct) is independent of the particular robot which is supposed to perform it, but it depends on the task. It is a function of a class of agents which is not made explicit. For humanoid robots, well trained human agents may be taken into account as well.

Minimal MJC that qualifies for MQD

I assume that DTC, MJC, and IMS are partially ordered by suitable orderings. Moral qualification of R for tasks of class Ct requires that its MJC meets the minimal moral justification requirement µ^mjr_Ct:

MJC(R, Ct) ≥ µ^mjr_Ct(DTC(R, Ct), IMS(Ct))

DTC_0 is a level where no reasoned decision taking takes place. It is an axiom that

µ^mjr_Ct(DTC_0, IMS(Ct)) = DTC_0

MJC_0 is the level for moral justification where no moral justification takes place at all. It is an axiom that

∀Ct(IMS(Ct) > 0 ⇒ ∃R(µ^mjr_Ct(DTC(R, Ct), IMS(Ct)) ≥ MJC_0))

Some examples:

1. A strategic missile with a nuclear weapon has adequate MQD: Its DTC is so low that it is by all means unable to make any moral mistakes. (In this case DTC(R, Ct) = DTC_0.)
2. A strategic missile with a nuclear weapon has adequate MQD: Irrespective of its huge impact significance, its DTC is so low that it is by all means unable to make any moral mistakes. (Again DTC(R, Ct) = DTC_0.)
3. A remotely controlled drone has a low DTC too; all decisions are taken by remote human control staff.
4. An autonomous robot R with high DTC(R, Ct) which pays no attention at all to moral justification and only optimises some other criteria may be considered to have too low an MJC(R, Ct) for a given class of tasks Ct.
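As a worked illustration of the µ^mjr machinery, the following sketch uses numeric levels to stand in for the partial orders; the particular µ^mjr below is invented, since the paper leaves the function unspecified and treats it as a case by case parameter.

```python
DTC0 = 0.0  # no reasoned decision taking
MJC0 = 0.0  # no moral justification at all

def mu_mjr(dtc: float, ims: float) -> float:
    # Hypothetical minimal justification requirement: it grows with both
    # the decision taking competence exercised and the impact significance.
    if dtc == DTC0:
        return DTC0  # axiom: without reasoned decision taking nothing is required
    return dtc * ims

def morally_qualified(mjc: float, dtc: float, ims: float) -> bool:
    # MJC(R, Ct) >= mu^mjr_Ct(DTC(R, Ct), IMS(Ct))
    return mjc >= mu_mjr(dtc, ims)

# Example 1 above: a strategic missile with DTC = DTC0 qualifies
# regardless of its impact significance.
print(morally_qualified(mjc=MJC0, dtc=DTC0, ims=1000.0))  # True

# Example 4 above: high DTC with no attention to moral justification fails.
print(morally_qualified(mjc=MJC0, dtc=5.0, ims=10.0))     # False
```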


MQD assessment for different robot application areas

Killer robots: The thesis put forward by the campaign to stop killer robots is that from some level of DTC onwards (the so-called killer robot level of decision taking, DTC_kr), no level of artificial MJC will pass the threshold for tasks where the impact equals or exceeds the deliberate killing of one or more persons (IMS_kp). In a formula:

∀Ct(IMS(Ct) ≥ IMS_kp ⇒ ∀R(DTC(R, Ct) ≥ DTC_kr ⇒ µ^mjr_Ct(DTC(R, Ct), IMS(Ct)) > MJC(R, Ct)))

Not everyone agrees with this assertion; the opposing position holds that:

∃R(µ^mjr_Ct(DTC(R, Ct), IMS(Ct)) ≤ MJC(R, Ct))

Sex robots: For sex robots, the campaign to stop sex robots appears to hold that for some sexual experience IMS_se, which is supposed to be created by interaction of R with a client P, no level of moral competence will do:

∀Ct(IMS(Ct) ≥ IMS_se ⇒ ∀R(µ^mjr_Ct(DTC(R, Ct), IMS(Ct)) > MJC(R, Ct)))

It seems that an increase in moral competence is not considered helpful to make sex robots more appealing to their opponents.

Human-alikeness (HA): With human-alikeness for a humanoid robot I will express its success in impersonating a seemingly real human individual. HA may be specialised to specific areas of activity. Let HA^sr_0 express a level of human-alikeness such that HA^sr_0 < HA^sr(R) implies (expresses) that R is sufficiently human-alike to be used as a sex robot. I assume that HA^sr(−) produces a partial ordering on humanoid robots. The opposition against sex robots is based on the idea that, with C_fsr the task area of service as a female sex robot:

∀R(HA^sr(R) > HA^sr_0 ⇒ ¬MQD(R, C_fsr))

The idea is that a male person P making use of R would be made more likely to view female human persons as objects, as a side effect of having sex with R.

Now it is easy to imagine a robot R which has promised to require that her (i.e. R's) consent is acquired by the male client in advance of any form of sex with R, and that R is far more demanding in this matter than (female) human partners of P have ever been in the past. Assuming that P has been convicted of problematic behaviour towards women in the past, it is conceivable that interaction with R is of positive educational value to P. As it stands the claim against sex robots is problematic, and one can imagine that such devices are applied under prescription without the risk of this use leading to adverse side-effects.

Other robots: For combat robots which are not supposed to be assigned killing tasks the situation seems to be similar to the case of factory robots. For service robots and for care robots no general statement is plausible and the matter depends on the task class.

Meta-Ethical Foundations for Robot Design

A robot is likely to be equipped both with top-down and with bottom-up mechanisms for moral inference. Top-down methods work from ethical principles and axiomatisations; bottom-up methods work from similarity with known cases for the specification of morally adequate behaviour. Awareness of meta-ethics informs the designers of top-down mechanisms. In principle all of the philosophical literature on ethics might be scanned for reliable inference patterns, the soundness of which is the primary worry for philosophers and the implementation of which is a challenge for engineers. (A toy combination of the two mechanisms is sketched below.)

The idea of constructivism of moral truth is appealing because constructions are very much what computers can do. In as much as instances of constructivism depend on the agent reflecting about itself from the perspective of being human, such instances are merely simulated by a computer, the results being the same, though with less philosophical justification. Less than a person, a computer faces the problem why it would be convinced by its own arguments. The computer would, however, take full notice of the philosophical literature to determine if and why a human being would be convinced of the arguments it is using for deriving an assertion with moral content.

Robot morality: Not just a part of computer science

Notions like DTC and MJC are properties, or rather attributes, of robots, and with the understanding of robots as computers these are for that reason properties or attributes of computers. However, these are not the sort of notion which computer science usually produces. Both can be considered properties of software rather than of hardware, and more specifically of algorithms rather than of software components.

It follows that by stating that a robot is a computer one by no means reduces robot morality to computer science, at least not to computer science in its current fashion (say around 2020).
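The following sketch renders the top-down/bottom-up combination mentioned above in minimal form. The rule set, the case base, and the similarity measure are all invented for illustration; nothing here is prescribed by the paper.

```python
# Top-down: ethical principles as predicates over a proposed action.
RULES = [
    lambda action: not action.get("harms_human", False),
    lambda action: action.get("consent_obtained", True),
]

# Bottom-up: previously assessed cases and their moral verdicts.
CASE_BASE = [
    ({"harms_human": False, "consent_obtained": True}, "adequate"),
    ({"harms_human": True, "consent_obtained": True}, "inadequate"),
]

def similarity(a: dict, b: dict) -> int:
    # Crude similarity: the number of attributes on which two cases agree.
    return sum(1 for k in a if a.get(k) == b.get(k))

def top_down(action: dict) -> bool:
    return all(rule(action) for rule in RULES)

def bottom_up(action: dict) -> bool:
    nearest_case, verdict = max(CASE_BASE, key=lambda c: similarity(action, c[0]))
    return verdict == "adequate"

def morally_adequate(action: dict) -> bool:
    # A conservative combination: both mechanisms must approve the action.
    return top_down(action) and bottom_up(action)

print(morally_adequate({"harms_human": False, "consent_obtained": True}))  # True
```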

Kantian constructivism, Humean constructivism, Rawlsian constructivism

Kantian constructivism can be explained in different ways and allows for mutually contradictory ramifications. Bagnoli [50] provides a comprehensible account of Kantian constructivism which I will label KC-B, be it that I would prefer to think in terms of KC-B-p (KC in the interpretation of Bagnoli, as a positive means of construction) as a means to arrive at morally valid cognitions, which may, in principle, be complemented with other means of inference. Then KC-B-p allows, according to those who adopt KC-B, for the construction of moral cognitions with a highest possible level of authority, while at the same time accepting that for instance Humean constructivism in one of its forms will produce more of such moral cognitions, though with a lesser authority.

Lafont [51] presents a perspective on KC (say KC-L) which involves microscopic detail in comparison to KC-B. Lafont rejects the view that KC is to be understood as antirealist on the basis of its procedural content, and instead presents KC-L as a way to balance justice, understood from a realist perspective, and legitimacy, understood from an anti-realist perspective. Now the ability of a community to find a just solution for a problem does not follow from the existence of a just solution, not even if that just solution were unique; and at the same time justness of a solution does not follow from general agreement about it, the latter being merely a, possibly temporary, sign of legitimacy of the solution.

The matter may be illustrated with an algorithmic example: Finding an optimal path for a travelling salesman problem which in addition is minimal in some lexicographic ordering constitutes a problem for which a unique solution exists, a solution which may be hard, if not impossible, to determine in practice. Lacking a proof of optimality, a group of stakeholders in the planning of the trip of the salesman may agree on the choice of a potentially suboptimal route for this agent.

Constructivism in general and Kantian constructivism in particular allow a plurality of different interpretations, which makes it quite hard for an outsider to capture what commitments and rewards are to be expected from adopting KC as one's point of departure for meta-ethics; see Bagnoli [52].

Besides Kantian constructivism, Humean constructivism adopts a human agent's desires and conditions as input for the (reasoning) process of construction of norms. Rawlsian constructivism has a stronger focus on political philosophy than either Kantian or Humean constructivism.

Now robots may reflect and reason about norms in their own way, and may make use of the principles of Kantian constructivism, viz. transcendental arguments and the categorical imperative, and of the procedures involved in Rawlsian constructivism, viz. original position (hypothetical reasoning followed by induction), veil of ignorance (abstraction), reflective equilibrium (explicit termination criteria for the generation of normative propositions), and a part of Humean constructivism, viz. making use of non-normative contextual information. I suggest that Asimovian constructivism starts out with Asimov's axioms as a realistic assumption and proceeds from that point of departure in a procedural/antirealistic manner.

Reasoning may be extended by taking into account the rights of members of future generations. Arguments to that extent are explained in Beyleveld, et al. [53]. Although it is impossible to speak or even think about future individuals, at a certain level of abstraction it is reasonable to think of future human agents and the rights which those, whoever they will be when the time has come, have already now on the basis of an appeal to human rights. Admittedly quite different lines of reasoning are conceivable as well, but the considerations of [53] proceed consistently from classical meta-ethics and its application by Gewirth in [54].

Asimovian constructivism

A robot may enhance its moral reasoning by adopting the famous axioms of Asimov, or variations or extensions of these.

Definition 6: (Asimovian constructivism). Asimovian constructivism stands for the position (as held by one or more human agents) that a robot may construct novel moral norms by means of mere reflection on its principled third person status, with a reasoning based in an axiomatic manner on an appropriate extension of Asimov's laws.

An Asimovian constructivist (who is supposed to be a human being) holds that robots may acquire valid moral insights through the application of moral reasoning along an axiomatic line with axioms specific for robots, while extending reasoning that arises from (simulated) Kantian, Humean, and Rawlsian constructivism.

The position (held by an Asimovian constructivist) does not adopt the idea that such, i.e. robot constructed, norms must necessarily comply with the moral norms held by the Asimovian constructivist themselves.
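A minimal sketch of the kind of axiomatic, prioritised reasoning Definition 6 attributes to robots follows. The encoding of the laws, the action representation, and the extension mechanism are illustrative assumptions on my part, not a fixed proposal of the paper.

```python
# Asimov's laws as lexicographically prioritised constraints; an Asimovian
# constructivist would let robot-constructed norms be appended as further,
# lower-priority axioms. The action encoding is invented for this sketch.

def law1(a):  # a robot may not injure a human being
    return not a.get("injures_human", False)

def law2(a):  # a robot must obey human orders (conflicts resolved by priority)
    return a.get("obeys_order", False)

def law3(a):  # a robot must protect its own existence
    return not a.get("self_destructive", False)

AXIOMS = [law1, law2, law3]  # priority order matters

def rank(action: dict, axioms) -> tuple:
    # Lexicographic ranking: compliance with an earlier (higher priority)
    # axiom dominates compliance with any later one, so an obedient but
    # harmful action never beats a disobedient harmless one.
    return tuple(axiom(action) for axiom in axioms)

def choose(actions, axioms=AXIOMS) -> dict:
    return max(actions, key=lambda a: rank(a, axioms))

# A constructed norm, appended as a new lowest-priority axiom:
def be_truthful(a):
    return not a.get("deceives", False)

EXTENDED = AXIOMS + [be_truthful]

options = [
    {"injures_human": False, "obeys_order": False, "deceives": False},
    {"injures_human": True,  "obeys_order": True,  "deceives": False},
]
print(choose(options, EXTENDED))  # the harmless, disobedient option wins
```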


These novel norms are objective in the same fashion as the results of Kantian constructivism are. In the same way as Rawls uses Kantian constructivism as a path towards a theory of political liberalism, one may use Asimovian constructivism as a path towards robotic liberalism.

Concluding Remarks

Promise theory (PT) has been put forward as a tool for the specification of the expected behaviour of physical robots. This idea comes with the perception that a robot is first of all a computer. Viewing a robot as a computer determines how to perceive the various forms of trust which a human must have towards a robot in order to be able to make proper use of it. It is noticed that the topic of robot morality takes different forms for different classes of robots. Rather than deriving new or motivating old norms for certain robot classes, my focus is on the further detailing of requirements which one might wish to impose on robots in various circumstances. The notion of minimal justification competence (MJC), and its possible role in explaining objections to killer robots, is an instance of novel forms of expression of such details.

Limitations of this work

Robotics reaches far beyond ethics, and new aspects of robotics arise steadily. For instance resilience has become a novel feature of robots, one which cannot easily be traced back to computer science; for resilience in robotics see e.g. [55]. Now the very notion of resilience raises two issues which matter for this paper: Is the definition of a robot given above sufficiently flexible to take resilience into account, and is PT of any help for studying or developing robot resilience? In both cases I do not know the answer. It may be so that resilient robotics as a theme needs a definition of a robot which is more specific on its physical constitution than what has been required in Definition 1.

Another property of robots which is only relatively recently being distinguished as a feature in need of systematic and dedicated attention is softness; see e.g. [56] for an introduction to soft robots. Again Definition 1 can be criticised for not including soft robots, or rather for providing a picture of robots with an implicit bias which is unhelpful when working on or with soft robots. One of the reviewers has suggested that this observation, in combination with the previous observation concerning resilience, implies that Definition 1 must be significantly adapted.

I prefer instead to accept the fact that Definition 1 has a computer science bias, or rather a computer technology bias, a fact which can be held against it. The paper is less general in its approach to robotics than it might have been. When it comes to applications of PT to robotics, I prefer determining (partly by way of the definition given) a subset of robotics for which PT is helpful over looking for a subset of PT which ought to be helpful in all of robotics. I assume that PT can only be helpful for robots which engage in some form of symbolic reasoning, and in the case of this paper humanoid features are expected to include such reasoning capabilities. Perhaps Definition 1 can be prefixed so as to reflect its intentional lack of inclusiveness. Another way of looking at this matter is to insist that Definition 1 is completely unspecific about mechanics (including softness), just as much as it is unspecific about computational mechanisms (quantum computing is perfectly admissible for this Definition, and so is DNA computing, or even purely mechanical, that is non-electromechanical, processing).

Acknowledgement

Definition 1 has profited from a number of comments made by Mark Burgess on earlier versions.

References

1. M Burgess (2015) Thinking in promises: Designing systems for cooperation. O'Reilly Media.

2. JA Bergstra, M Burgess (2014) Promise theory: Principles and applications. (2nd edn), χt Axis Press.

3. JA Bergstra, M Burgess (2017) Promise theory: Case study on the 2016 Brexit vote. χt Axis Press.

4. JA Bergstra, M Burgess (2019) Money, ownership, and agency. χt Axis Press.

5. M Burgess (2017-2019) A treatise on systems: Intentional systems with faults, errors, and flaws. (2nd edn), χt Axis Press.

6. P Lin, K Abney, G Bekey (2011) Robot ethics: Mapping the issues for a mechanized world. Artificial Intelligence 175: 942-949.

7. G Bekey (2005) Autonomous robots: From biological inspiration to implementation and control. MIT Press, England.


8. G Tamburrini (2009) Robot ethics: A view from the philosophy of science. In: Rafael Capurro, Michael Nagenborg, Ethics and Robotics, IOS Press, Heidelberg, Germany.

9. T Karppi, M Bohlen, Y Granata (2016) Killer robots as cultural techniques. International Journal of Cultural Studies 21: 1-17.

10. K Rommetveit, N van Dijk, K Gunnarsdottir (2019) Make way for the robots! Human- and machine-centricity in constituting a European public-private partnership. Minerva 58: 47-69.

11. M Nordmo, JØ Næss, MF Husøy, MN Arnestad (2020) Friends, lovers or nothing: Men and women differ in their perceptions of sex robots and platonic love robots. Front Psychol 11: 355.

12. JA Bergstra (2020) Promise theory as a tool for informaticians. Transmathematica.

13. JA Bergstra (2019) Promises and threats by asymmetric nuclear-weapon states. χt Axis Press.

14. R Craswell (1989) Contract law, default rules, and the philosophy of promising. Michigan Law Review 88: 489-529.

15. T Scanlon (1990) Promises and practices. Philosophy and Public Affairs 19: 199-226.

16. E Rossi (2014) Facts, principles, and politics. Ethical Theory and Moral Practice 19: 505-520.

17. D Levy (2020) Some aspects of human consent to sex with robots. Paladyn, Journal of Behavioral Robotics 11: 191-198.

18. SE Galaitsi, C Hendren, B Trump, I Linkov (2019) Sex robots-A harbinger for emerging AI risk. Front Artif Intell 2: 27.

19. JR Illes, FR Udwadia (2019) Sex robots increase the potential for gender-based violence. Phys.org.

20. T Leach (2020) Who is their person? Sex robots and change. Queer-feminist Science & Technology Studies Forum 3: 25-39.

21. E Knox (2019) Gynoid survival kit. Queer-feminist Science & Technology Studies Forum 4: 21-48.

22. S Steinert (2013) The five robots-A taxonomy of roboethics. International Journal of Roboethics 6: 249-260.

23. S Paluch, J Wirtz, WH Kunz (2020) Service robots and the future of service. Marketing Weiterdenken. (2nd edn), Springer Gabler Verlag.

24. N Fukawa (2020) Robots on blockchain: Emergence of robotic service organizations. Journal of Service Management Research 4: 9-20.

25. PJM Elands, AG Huizing, LJM Kester, MMM Peeters, S Oggero (2019) Governing ethical and effective behaviour of intelligent systems. Militaire Spectator 188: 303-313.

26. NM Aliman, L Kester (2019) Requisite variety in ethical utility functions for AI value alignment. Artificial Intelligence.

27. NM Aliman, L Kester, P Werkhoven, R Yampolsky (2019) Orthogonality-based disentanglement of responsibilities for ethical intelligent systems. Artificial General Intelligence, 22-31.

28. AP Williams (2018) Defining autonomy in systems: Challenges and solutions. In: AP Williams, PD Scharre, Autonomous systems: Issues for defence policy makers, NATO, USA.

29. F Slijper, A Beck, D Kayser, M Beenes (2019) Don't be evil. A survey of the tech sector's stand on lethal autonomous weapons. PAX for Peace.

30. (2015) Autonomous weapons: An open letter. AI & Robotics Researchers.

31. A Sharkey (2019) Autonomous weapons systems, killer robots and human dignity. Ethics and Information Technology 21: 75-87.

32. KM Sayler (2019) Defense primer: U.S. policy on lethal autonomous weapons. Congressional Research Service.

33. A Wyatt (2020) So just what is a killer robot? Detailing the ongoing debate around defining lethal autonomous weapon systems. Wild Blue Yonder.

34. PD Scharre (2018) The opportunity and challenge of autonomous systems. In: AP Williams, PD Scharre, Autonomous systems: Issues for defence policy makers, NATO, USA.

35. FC Salina (2018) systems: Legal and ethical perspectives. JGLR 6: 24-35.

36. PM Asaro (2017) Robots and responsibility from a legal perspective. Robotics and Automation, Workshop on RoboEthics.

37. BF Malle (2016) Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology 18: 243-256.

38. M Westerlund (2020) An ethical framework for smart robots. Technology Innovation Management Review 10: 35-44.

39. PM Asaro (2006) What should we want from a robot ethic? International Review of Information Ethics 6.

40. S Shen (2011) The curious case of human-robot morality. 6th ACM/IEEE International Conference on Human-Robot Interaction.


41. M Coeckelbergh (2010) Moral appearances: Emotions, robots, and human morality. Ethics & Information Technology 12: 235-241.

42. JH Moor (2009) Four kinds of ethical robots. Philosophy Now 72: 12-14.

43. M Brundage (2014) Limitations and risks of machine ethics. Journal of Experimental & Theoretical Artificial Intelligence 26: 355-372.

44. TM Powers (2011) Incremental machine ethics. IEEE Robotics & Automation Magazine 18: 51-58.

45. G Marzano (2018) The Turing test and android science. J Robotics Autom 2: 64-68.

46. M Coeckelbergh (2012) Can we trust robots? Ethics & Information Technology 14: 53-60.

47. JA Bergstra, M Burgess (2019) A promise theoretic account of the Boeing 737 Max MCAS algorithm affair. Computer Science.

48. JA Bergstra, M Burgess (2020) Candidate software process flaws for the Boeing 737 Max MCAS algorithm and a risk for a proposed upgrade. Computer Science.

49. F Santioni de Sio, J van den Hoven (2018) Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI 5: 15.

50. C Bagnoli (2014) Starting points: Kantian constructivism reassessed. Ratio Juris 27: 311-329.

51. C Lafont (2004) Moral objectivity and reasonable agreement: Can realism be reconciled with Kantian constructivism? Ratio Juris 17: 27-51.

52. C Bagnoli (2020) Constructivism in metaethics. In: Edward N Zalta, The Stanford Encyclopedia of Philosophy.

53. D Beyleveld, M Düwell, A Spahn (2015) How should we represent future generations in policy making? Jurisprudence 6: 549-566.

54. A Gewirth (2001) Human rights and future generations. In: M Boylan, Environmental Ethics, Prentice Hall, 207-212.

55. T Zhang, W Zhang, MM Gupta (2017) Resilient robots: Concept, review, and future directions. Robotics 6: 22.

56. D Rus, MT Tolley (2015) Design, fabrication and control of soft robots. Nature 521: 467-475.

DOI: 10.35840/2631-5106/4126
