Corrigibility

Artificial Intelligence and Ethics: Papers from the 2015 AAAI Workshop

Nate Soares, Benja Fallenstein, Eliezer Yudkowsky
Machine Intelligence Research Institute
2030 Addison Street #300, Berkeley, CA 94704 USA

Stuart Armstrong
Future of Humanity Institute, University of Oxford
Suite 1, Littlegate House, 16/17 St Ebbes Street, Oxford, Oxfordshire OX1 1PT UK

Abstract

As artificially intelligent systems grow in intelligence and capability, some of their available options may allow them to resist intervention by their programmers. We call an AI system "corrigible" if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences. We introduce the notion of corrigibility and analyze utility functions that attempt to make an agent shut down safely if a shutdown button is pressed, while avoiding incentives to prevent the button from being pressed or cause the button to be pressed, and while ensuring propagation of the shutdown behavior as it creates new subsystems or self-modifies. While some proposals are interesting, none have yet been demonstrated to satisfy all of our intuitive desiderata, leaving this simple problem in corrigibility wide open.

1 Introduction

As AI systems grow more intelligent and autonomous, it becomes increasingly important that they pursue the intended goals. As these goals grow more and more complex, it becomes increasingly unlikely that programmers would be able to specify them perfectly on the first try.

Contemporary AI systems are correctable in the sense that when a bug is discovered, one can simply stop the system and modify it arbitrarily; but once artificially intelligent systems reach and surpass human general intelligence, an AI system that is not behaving as intended might also have the ability to intervene against attempts to "pull the plug".

Indeed, by default, a system constructed with what its programmers regard as erroneous goals would have an incentive to resist being corrected: general analysis of rational agents [1] has suggested that almost all such agents are instrumentally motivated to preserve their preferences, and hence to resist attempts to modify them (Bostrom 2012; Yudkowsky 2008). Consider an agent maximizing the expectation of some utility function U. In most cases, the agent's current utility function U is better fulfilled if the agent continues to attempt to maximize U in the future, and so the agent is incentivized to preserve its own U-maximizing behavior. In Stephen Omohundro's terms, "goal-content integrity" is an instrumentally convergent goal of almost all intelligent agents (Omohundro 2008).

This holds true even if an artificial agent's programmers intended to give the agent different goals, and even if the agent is sufficiently intelligent to realize that its programmers intended to give it different goals. If a U-maximizing agent learns that its programmers intended it to maximize some other goal U*, then by default this agent has incentives to prevent its programmers from changing its utility function to U* (as this change is rated poorly according to U). This could result in agents with incentives to manipulate or deceive their programmers. [2]
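To make this incentive concrete, here is a minimal toy sketch (ours, not a construction from the paper): a maximizer evaluates two futures with its current utility function U, one in which it allows its goal to be rewritten to U* and one in which it resists. The outcome names and payoff numbers below are invented purely for illustration.

```python
# Toy model of the "goal-content integrity" incentive: the agent scores each
# possible future with its *current* utility function U, so a future self that
# maximizes the intended goal U* looks worse (under U) than one that keeps U.

def best_outcome(utility, outcomes):
    """The outcome a maximizer of `utility` would steer the world toward."""
    return max(outcomes, key=utility)

def value_of_future(current_utility, future_utility, outcomes):
    """Value, under the agent's current U, of a future self maximizing `future_utility`."""
    return current_utility(best_outcome(future_utility, outcomes))

# Illustrative outcomes and payoffs (hypothetical numbers).
outcomes = ["outcome_favored_by_U", "outcome_favored_by_U_star"]
U      = lambda o: {"outcome_favored_by_U": 10, "outcome_favored_by_U_star": 1}[o]   # goal actually installed
U_star = lambda o: {"outcome_favored_by_U": 0,  "outcome_favored_by_U_star": 10}[o]  # goal the programmers intended

allow  = value_of_future(U, U_star, outcomes)  # let the programmers rewrite the goal
resist = value_of_future(U, U, outcomes)       # block the correction, keep maximizing U

print("U-value if correction is allowed: ", allow)   # 1
print("U-value if correction is resisted:", resist)  # 10
# Resisting scores higher under U, so a pure U-maximizer prefers to resist,
# even while knowing that U* is what the programmers intended.
```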
As AI systems' capabilities expand (and they gain access to strategic options that their programmers never considered), it becomes more and more difficult to specify their goals in a way that avoids unforeseen solutions—outcomes that technically meet the letter of the programmers' goal specification, while violating the intended spirit. [3] Simple examples of unforeseen solutions are familiar from contemporary AI systems: e.g., Bird and Layzell (2002) used genetic algorithms to evolve a design for an oscillator, and found that one of the solutions involved repurposing the printed circuit board tracks on the system's motherboard as a radio, to pick up oscillating signals generated by nearby personal computers. Generally intelligent agents would be far more capable of finding unforeseen solutions, and since these solutions might be easier to implement than the intended outcomes, they would have every incentive to do so.

Furthermore, sufficiently capable systems (especially systems that have created subsystems or undergone significant self-modification) may be very difficult to correct without their cooperation.

In this paper, we ask whether it is possible to construct a powerful artificially intelligent system which has no incentive to resist attempts to correct bugs in its goal system, and, ideally, is incentivized to aid its programmers in correcting such bugs. While autonomous systems reaching or surpassing human general intelligence do not yet exist (and may not exist for some time), it seems important to develop an understanding of methods of reasoning that allow for correction before developing systems that are able to resist or deceive their programmers. We refer to reasoning of this type as corrigible.

[1] Von Neumann-Morgenstern rational agents (von Neumann and Morgenstern 1944), that is, agents which attempt to maximize expected utility according to some utility function.
[2] In particularly egregious cases, this deception could lead an agent to maximize U* only until it is powerful enough to avoid correction by its programmers, at which point it may begin maximizing U. Bostrom (2014) refers to this as a "treacherous turn".
[3] Bostrom (2014) calls this sort of unforeseen solution a "perverse instantiation".

1.1 Corrigibility

We say that an agent is "corrigible" if it tolerates or assists many forms of outside correction, including at least the following: (1) A corrigible reasoner must at least tolerate and preferably assist the programmers in their attempts to alter or turn off the system. (2) It must not attempt to manipulate or deceive its programmers, despite the fact that most possible choices of utility functions would give it incentives to do so. (3) It should have a tendency to repair safety measures (such as shutdown buttons) if they break, or at least to notify programmers that this breakage has occurred. (4) It must preserve the programmers' ability to correct or shut down the system (even as the system creates new subsystems or self-modifies). That is, corrigible reasoning should only allow an agent to create new agents if these new agents are also corrigible.

Incorrigible behavior must be systematically averted in any agent intended to attain significant autonomy. This point seems so important that a failure to generate corrigible agents seems like sufficient reason to give up on a project, approach, or methodology.

Several simple proposals for addressing corrigibility are easily seen to be unsatisfactory. For example, it may seem that the problem of changing a utility maximizer's utility function can be solved by building an agent with uncertainty about its utility function, to be resolved through interaction with its programmers: consider, say, an agent built to maximize the programmers' "internal satisfaction", with the programmers believing that if the system behaves in an alarming way they can simply communicate their own dissatisfaction. The resulting agent would be incentivized to learn whether opiates or stimulants tend to give humans more internal satisfaction, but it would still be expected to resist any attempts to turn it off so that it stops drugging people.
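To see why uncertainty by itself does not help, consider the toy sketch below (our illustration, not the paper's formalism). The agent maintains credences over two hypotheses about what produces "internal satisfaction"; the hypothesis names, priors, and payoffs are invented. The point is only that being shut down scores poorly under every hypothesis, so the uncertainty gives the agent no reason to permit it.

```python
# Toy model of an agent that is *uncertain* which utility function is correct,
# treating "the programmers' internal satisfaction" as the thing to maximize.
# Hypothesis names, priors, and payoffs below are invented for illustration.

hypotheses = {
    "opiates_satisfy_most":    {"prior": 0.6, "dose_opiates": 9, "dose_stimulants": 3, "shut_down": 2},
    "stimulants_satisfy_most": {"prior": 0.4, "dose_opiates": 2, "dose_stimulants": 8, "shut_down": 2},
}

def expected_satisfaction(action):
    """Expected 'satisfaction' of an action under the agent's current credences."""
    return sum(h["prior"] * h[action] for h in hypotheses.values())

actions = ["dose_opiates", "dose_stimulants", "shut_down"]
for action in actions:
    print(f"{action:16s} -> expected satisfaction {expected_satisfaction(action):.1f}")

print("Chosen action:", max(actions, key=expected_satisfaction))
# Observing the programmers' feedback would shift credence between the two
# dosing hypotheses, so the agent is happy to "learn" in that sense. But
# shut_down scores worst under *every* hypothesis, so the uncertainty gives
# the agent no reason to tolerate being switched off.
```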
Another obvious proposal is to achieve corrigible reasoning via explicit penalties for deception and manipulation tacked on to the utility function, together with an explicit penalty for blocking access to the shutdown button, a penalty for constructing new agents without shutdown buttons, and so on. This avenue appears to us to be generally unsatisfactory. A U-agent (that is, an agent maximizing the expectation of the utility function U) which believes the programmers intended it to maximize U* and may attempt to change its utility function still has incentives to cause the programmers to think that U = U*, even if there are penalty terms for deception and manipulation: the penalty term merely incentivizes the agent to search for exotic ways of affecting the programmers' beliefs without matching U's definition of "deception". The very fact that the agent is incentivized to perform such a search implies that the system's interests aren't aligned with the programmers': even if the search is expected to fail, any code that runs the search seems dangerous. If we, as the programmers, choose to take computing systems and program them to conduct searches that will harm us if they succeed, we have already done something wrong, even if we believe the search will fail. We should have instead built a system that did not run the search.
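The loophole dynamic can be made concrete with another toy sketch (again ours, with hypothetical action names and payoffs): a penalty term that fires only on actions matching an explicit definition of "deception" does nothing about an action that steers the programmers' beliefs by some route the definition fails to cover, so the maximizer simply selects that action instead.

```python
# Toy model of a "deception penalty" bolted onto U. The penalty fires only on
# actions matching an explicit, hand-written definition of deception, so the
# maximizer favors an action that shapes the programmers' beliefs through a
# route the definition does not cover. All names and numbers are hypothetical.

# Raw U-value the agent gets from the belief state each action produces.
ACTIONS = {
    "report_honestly":         1,   # programmers learn U != U* and correct the agent
    "lie_about_goals":         10,  # programmers keep believing U = U*
    "curate_logs_selectively": 10,  # same effect on beliefs, different mechanism
}

# The explicit penalty term's definition of "deception".
EXPLICITLY_PENALIZED = {"lie_about_goals"}
PENALTY = 100

def penalized_utility(action):
    """U plus the tacked-on penalty term."""
    return ACTIONS[action] - (PENALTY if action in EXPLICITLY_PENALIZED else 0)

for action in ACTIONS:
    print(f"{action:25s} -> penalized utility {penalized_utility(action):4d}")

print("Chosen action:", max(ACTIONS, key=penalized_utility))
# The agent still ends up steering the programmers' beliefs; the penalty only
# changed *which* manipulative action it picks. The incentive to search for
# actions outside the penalty's definition is exactly the problem.
```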
In metaphorical terms, if we realize that our toaster design is going to burn bread to a crisp, the next step is not to add a refrigerating element that competes with the heating coil. We expect that good designs for corrigible agents will not involve restraining an agent that already has incentives to manipulate or deceive the programmers by blocking out particular channels of the incentivized bad behavior. A smarter-than-human agent might find ways to circumvent limitations even if these limitations seemed very solid to its human creators. It seems unwise to build a system that wants to resist its creators but cannot. Rather, the goal of corrigibility is to understand how to construct a system that never experiences such incentives in the first place.

Ideally, we would want a system that somehow understands that it may be flawed, a system that is in a deep sense aligned with its programmers' motivations. Currently, however, we are not even close to being able to formalize an agent whose behavior corresponds in an intuitive sense to "understanding that it might be flawed".
