Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)

Teaching AI Agents Ethical Values Using Reinforcement Learning and Policy Orchestration (Extended Abstract)∗

Ritesh Noothigattu,3 Djallel Bouneffouf,1 Nicholas Mattei,4 Rachita Chandra,2 Piyush Madan,2 Kush R. Varshney,1 Murray Campbell,1 Moninder Singh,1 and Francesca Rossi1

1IBM Research, Yorktown Heights, NY, USA
2IBM Research, Cambridge, MA, USA
3Carnegie Mellon University, Pittsburgh, PA, USA
4Tulane University, New Orleans, LA, USA

∗This paper is an extended abstract of an article in the IBM Journal of Research & Development [Noothigattu et al., 2019].

Abstract

Autonomous cyber-physical agents play an increasingly large role in our lives. To ensure that they behave in ways aligned with the values of society, we must develop techniques that allow these agents to not only maximize their reward in an environment, but also to learn and follow the implicit constraints of society. We detail a novel approach that uses inverse reinforcement learning to learn a set of unspecified constraints from demonstrations and reinforcement learning to learn to maximize environmental rewards. A contextual-bandit-based orchestrator then picks between the two policies: constraint-based and environment reward-based. The contextual bandit orchestrator allows the agent to mix policies in novel ways, taking the best actions from either a reward-maximizing or constrained policy. In addition, the orchestrator is transparent on which policy is being employed at each time step. We test our algorithms using Pac-Man and show that the agent is able to learn to act optimally, act within the demonstrated constraints, and mix these two functions in complex ways.

1 Introduction

Concerns about the ways in which autonomous decision making systems behave when deployed in the real world are growing. Stakeholders worry about systems achieving goals in ways that are not considered acceptable according to values and norms of the impacted community, also called "specification gaming" behaviors [Rossi and Mattei, 2019]. Thus, there is a growing need to understand how to constrain the actions of an AI system by providing boundaries within which the system must operate. To tackle this problem, we may take inspiration from humans, who often constrain the decisions and actions they take according to a number of exogenous priorities, be they moral, ethical, religious, or business values [Sen, 1974; Loreggia et al., 2018a; 2018b], and we may want the systems we build to be restricted in their actions by similar principles [Arnold et al., 2017]. The overriding concern is that the agents we construct may not obey these values while maximizing some objective function [Simonite, 2018; Rossi and Mattei, 2019].

The idea of teaching machines right from wrong has become an important research topic in both AI [Yu et al., 2018] and related fields [Wallach and Allen, 2008]. Much of the research at the intersection of artificial intelligence and ethics falls under the heading of machine ethics, i.e., adding ethics and/or constraints to a particular system's decision making process [Anderson and Anderson, 2011]. One popular technique to handle these issues is called value alignment, i.e., restrict the behavior of an agent so that it can only pursue goals aligned to human values [Russell et al., 2015].

While giving a machine a code of ethics is important, the question of how to provide the behavioral constraints to the agent remains. A popular technique, the bottom-up approach, teaches a machine right and wrong by example [Allen et al., 2005; Balakrishnan et al., 2019; 2018]. Here we adopt this approach as we consider the case where only examples of the correct behavior are available.

We propose a framework which enables an agent to learn two policies: (1) πR, a reward-maximizing policy obtained through direct interaction with the world, and (2) πC, obtained via inverse reinforcement learning over demonstrations by humans or other agents of how to obey a set of behavioral constraints in the domain. Our agent then uses a contextual-bandit-based orchestrator [Bouneffouf and Rish, 2019; Bouneffouf et al., 2017] to learn to blend the policies in a way that maximizes a convex combination of the rewards and constraints. Within the RL community this can be seen as a particular type of apprenticeship learning [Abbeel and Ng, 2004] where the agent is learning how to be safe, rather than only maximizing reward [Leike et al., 2017].

One may argue that we should employ πC for all decisions as it will be more 'safe' than employing πR. Indeed, although one could use πC exclusively, there are a number of reasons to employ the orchestrator. First, while demonstrators may be good at demonstrating the constrained behavior, they may not provide good examples of maximizing reward. Second, the demonstrators may not be as creative as the agent when mixing the two policies [Ventura and Gates, 2018]. By allowing the orchestrator to learn when to apply which policy, the agent may be able to devise better ways to blend the policies, leading to behavior which both follows the constraints and achieves higher reward than any of the human demonstrations. Third, we may not want to obtain demonstrations in all parts of the domain, e.g., there may be dangerous parts of the domain in which human demonstrations are too costly to obtain. In this case, having the agent learn what to do in the non-demonstrated parts through RL is complementary.

Contributions. We propose and test a novel approach to teach machines to act in ways that blend multiple objectives. One objective is the desired goal and the other is a set of behavioral constraints, learned from examples. Our technique uses aspects of both traditional reinforcement learning and inverse reinforcement learning to identify policies that both maximize rewards and follow particular constraints. Our agent then blends these policies in novel and interpretable ways using an orchestrator. We demonstrate the effectiveness of these techniques on Pac-Man where the agent is able to learn both a reward-maximizing and a constrained policy, and select between these policies in a transparent way based on context, to employ a policy that achieves high reward and obeys the demonstrated constraints.
1.1 Problem Setting and Notation

Reinforcement learning defines a class of algorithms solving problems modeled as a Markov decision process (MDP). An MDP is denoted by the tuple $(S, A, T, R, \gamma)$, where: $S$ is a set of possible states; $A$ is a set of actions; $T$ is a transition function defined by $T(s, a, s') = \Pr(s' \mid s, a)$, where $s, s' \in S$ and $a \in A$; $R : S \times A \times S \mapsto \mathbb{R}$ is a reward function; and $\gamma$ is a discount factor that specifies how much long-term reward is kept. The goal is to maximize the discounted long-term reward. Usually the infinite-horizon objective is considered: $\max \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1})$.

Solutions come in the form of policies $\pi : S \mapsto A$, which specify what action the agent should take in any given state, deterministically or stochastically. One way to solve this problem is through Q-learning with function approximation.
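Section 1.1 points to Q-learning with function approximation as one way to obtain such a policy. As a concrete illustration (not the authors' implementation), the sketch below shows Q-learning with a linear function approximator; the environment interface (`reset`/`step`), the feature map `phi`, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def q_learning_linear(env, phi, n_actions, d, episodes=500,
                      alpha=0.05, gamma=0.99, epsilon=0.1):
    """Minimal Q-learning with linear function approximation:
    Q(s, a) is approximated by phi(s, a) . w."""
    w = np.zeros(d)

    def q(s, a):
        return phi(s, a) @ w

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax([q(s, b) for b in range(n_actions)]))
            s_next, r, done = env.step(a)  # assumed interface: (state, reward, done)
            # TD target uses the greedy value of the next state.
            target = r if done else r + gamma * max(q(s_next, b) for b in range(n_actions))
            w += alpha * (target - q(s, a)) * phi(s, a)
            s = s_next
    return w
```

In the paper's setting, one such learner trained on the environment reward yields πR, while a second learner trained on the IRL-recovered constraint reward yields πC, as described in the next section.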
2 Proposed Approach

The overall approach we propose, aggregation at the policy phase, is depicted by Figure 1. It has three main components. The first is the IRL component to learn the desired constraints (depicted in green). We apply IRL to demonstrations depicting desirable behavior to learn the underlying constraint rewards being optimized by the demonstrations. We then apply RL on these learned rewards to learn a strongly constraint-satisfying policy πC. We augment πC with a pure reinforcement learning component applied to the original environment rewards (depicted in red) to learn a domain reward-maximizing policy πR.

Now we have two policies: the constraint-obeying policy πC and the reward-maximizing policy πR. To combine them, we use the third component, the orchestrator (depicted in blue). This is a contextual bandit algorithm that orchestrates the two policies, picking one of them to play at each point in time. The context is the state of the environment; the bandit decides which arm (policy) to play at each step. We use a modified CTS (Contextual Thompson Sampling) algorithm to train the bandit. The context of the bandit is given by features of the current state (for which we want to decide which policy to choose), i.e., $c(t) = \Upsilon(s_t) \in \mathbb{R}^d$.

The exact algorithm used to train the orchestrator is given in Algorithm 1. Apart from the fact that arms are policies (instead of atomic actions), the main difference from the CTS algorithm is the way rewards are fed into the bandit. For simplicity, let the constraint policy πC be arm 0 and the reward policy πR be arm 1. First, all the parameters are initialized as in the CTS algorithm (Line 1). For each time-step in the training phase (Line 3), we do the following. Pick an arm $k_t$ according to the Thompson Sampling algorithm and the context $\Upsilon(s_t)$ (Lines 4 and 5). Play the action according to the chosen policy $\pi_{k_t}$ (Line 6). This takes us to the next state $s_{t+1}$. We also observe two rewards (Line 7): (i) the original reward in the environment, $r_R(t) = R(s_t, a_t, s_{t+1})$, and (ii) the constraint rewards according to the rewards learnt by inverse reinforcement learning, i.e., $r_C(t) = \widehat{R}_C(s_t, a_t, s_{t+1})$.
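As a rough illustration of how such an orchestrator can be trained, the sketch below runs contextual Thompson Sampling with linear payoffs over the two policy arms (arm 0 = πC, arm 1 = πR). It is a sketch under assumptions, not Algorithm 1 itself: the feature map `features`, the helper `constraint_reward` standing in for the IRL-learned $\widehat{R}_C$, the exploration scale `v`, and the convex-combination weight `lam` used to merge the two observed rewards into the bandit feedback are all illustrative choices.

```python
import numpy as np

def train_orchestrator(env, policies, features, constraint_reward, d,
                       lam=0.5, horizon=10000, v=0.25):
    """Sketch: contextual Thompson Sampling (linear payoff) over two policy arms."""
    n_arms = len(policies)
    # Per-arm Bayesian linear model: B is the precision matrix, f the response vector.
    B = [np.eye(d) for _ in range(n_arms)]
    f = [np.zeros(d) for _ in range(n_arms)]

    s = env.reset()
    for t in range(horizon):
        c = features(s)  # context c(t) = Upsilon(s_t)

        # Sample a payoff parameter for each arm and play the arm with the
        # highest sampled expected payoff for this context.
        scores = []
        for k in range(n_arms):
            B_inv = np.linalg.inv(B[k])
            mu_hat = B_inv @ f[k]
            mu_tilde = np.random.multivariate_normal(mu_hat, (v ** 2) * B_inv)
            scores.append(c @ mu_tilde)
        k_t = int(np.argmax(scores))

        # Execute the action recommended by the chosen policy.
        a = policies[k_t](s)
        s_next, r_R, done = env.step(a)        # original environment reward
        r_C = constraint_reward(s, a, s_next)  # reward under the IRL-learned constraints

        # Assumed feedback: a convex combination of constraint and environment rewards.
        r = lam * r_C + (1.0 - lam) * r_R

        # Posterior update for the played arm only.
        B[k_t] += np.outer(c, c)
        f[k_t] += r * c

        s = env.reset() if done else s_next
    return B, f
```

Because the arm chosen at each step is explicit, behavior stays transparent: at any time step one can inspect whether the constrained or the reward-maximizing policy was in control, which is the interpretability property the orchestrator is meant to provide.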
