Social Diversity and Social Preferences in Mixed-Motive Reinforcement Learning
Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, & Joel Z. Leibo
DeepMind, London
{kevinrmckee,imgemp,bmcw,duenez,edwardhughes,jzl}@google.com

ABSTRACT
Recent research on reinforcement learning in pure-conflict and pure-common interest games has emphasized the importance of population heterogeneity. In contrast, studies of reinforcement learning in mixed-motive games have primarily leveraged homogeneous approaches. Given the defining characteristic of mixed-motive games—the imperfect correlation of incentives between group members—we study the effect of population heterogeneity on mixed-motive reinforcement learning. We draw on interdependence theory from social psychology and imbue reinforcement learning agents with Social Value Orientation (SVO), a flexible formalization of preferences over group outcome distributions. We subsequently explore the effects of diversity in SVO on populations of reinforcement learning agents in two mixed-motive Markov games. We demonstrate that heterogeneity in SVO generates meaningful and complex behavioral variation among agents similar to that suggested by interdependence theory. Empirical results in these mixed-motive dilemmas suggest agents trained in heterogeneous populations develop particularly generalized, high-performing policies relative to those trained in homogeneous populations.

1 INTRODUCTION
In multi-agent reinforcement learning, the actions of one agent can influence the experience and outcomes for other agents—that is, agents are interdependent. Interdependent interactions can be sorted into two categories based on the alignment of incentives for the agents involved [47]:
(1) Pure-motive interactions, in which the group's incentives are either entirely aligned (pure-common interest) or entirely opposed (pure-conflict),
(2) and mixed-motive interactions, in which the group's incentives are sometimes aligned and sometimes in conflict.¹
Examples of the former include games such as Hanabi [6] and Go [49]. The latter category is typified by games like the Prisoner's Dilemma [46, 53] and the tragedy of the commons [20, 31]. This categorical distinction is especially relevant for spatially and temporally extended Markov games. In these games, interdependence emerges both as a direct impact of one agent's actions on another's outcomes and as an indirect effect of each agent on the state of the substrate environment in which others co-exist.

¹ When Schelling originally introduced the pure- and mixed-motive framework, he explained, "Mixed-motive refers not, of course, to an individual's lack of clarity about his own preferences but rather to the ambivalence of his relation to the other player—the mixture of mutual dependence and conflict, of partnership and competition" [47].

Proc. of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020), B. An, N. Yorke-Smith, A. El Fallah Seghrouchni, G. Sukthankar (eds.), May 2020, Auckland, New Zealand. © 2020 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved. arXiv:2002.02325v2 [cs.MA] 12 Feb 2020.

Figure 1: Homogeneity and heterogeneity in population-based multi-agent reinforcement learning. (a) Population homogeneity and heterogeneity result in different training experiences for a given agent i. In the homogeneous case, agent policies are either identical or very similar (e.g., due to identical training distributions or shared motivations). In the heterogeneous setting, a given agent i encounters a range of group compositions over time. The variability in policies can stem from agents training under different distributions or with different motivations. (b) Representative examples of previous multi-agent reinforcement learning research: for pure-common interest, Switch [52] (homogeneous) and Overcooked [11] (heterogeneous); for mixed-motive, Coins [32] (homogeneous) and this paper (heterogeneous); for pure-conflict, Go [49] (homogeneous) and StarCraft II [55] (heterogeneous). We study the mixed-motive, heterogeneous setting.
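As a rough illustration of the sampling difference described in the Figure 1 caption, the following sketch contrasts per-episode co-player sampling from a homogeneous population of policies with sampling from a heterogeneous one. It is a minimal sketch with hypothetical names (sample_co_players and the placeholder policy labels) and is not taken from the paper.

```python
import random

def sample_co_players(population, group_size, rng=random):
    """Draw co-players for agent i's episode from a population of policies."""
    return rng.sample(population, group_size - 1)

# Homogeneous population: policies are (near-)identical, so agent i faces
# essentially the same group composition in every episode.
homogeneous = ["policy_A"] * 8

# Heterogeneous population: policies differ (e.g., trained under different
# distributions or with different motivations), so the sampled group
# composition varies across episodes.
heterogeneous = [f"policy_{label}" for label in "ABCDEFGH"]

for population in (homogeneous, heterogeneous):
    print(sample_co_players(population, group_size=5))
```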
In pure-conflict reinforcement learning, self-play solutions for Markov games [5, 22, 49] have gradually given way to population-based approaches [26, 55] (Figure 1a). A central impetus for this shift has been an interest in ensuring agent performance is robust to opponent heterogeneity (i.e., variation in the set of potential opponent policies). Similarly, recent work on pure-common interest reinforcement learning in Markov games has highlighted the importance of robustness to diverse partner policies [1, 11]. In both of these contexts, it is desirable to train agents capable of adapting and best responding to a wide range of potential policy sets.
In mixed-motive Markov games, the effects of partner heterogeneity have not received much attention. Most mixed-motive reinforcement learning research has produced policies through self-play [32, 43] or co-training policies in fixed groups [24, 31]. Such methods foster homogeneity in the set of policies each agent encounters.
We aim to introduce policy heterogeneity into mixed-motive reinforcement learning (Figure 1b). In recent years, a growing body of work has explored the effects of furnishing agents in mixed-motive games with various motivations such as inequity aversion [24, 56], social imitation [15], and social influence [28]. Thus, a natural starting point for the study of heterogeneity is to explore the effects of diversity in intrinsic motivation [51]. Here we endow our agents with Social Value Orientation (SVO), an intrinsic motivation to prefer certain group reward distributions between self and others.
Psychology and economics research has repeatedly demonstrated that human groups sustain high levels of cooperation across different games through heterogeneous distributive preferences [7, 13, 16, 45, 50]. A particularly compelling account from interdependence theory holds that humans deviate from game theoretic predictions in economic games because each player acts not on the "given matrix" of a game—which reflects the extrinsic payoffs set forth by the game rules—but on an "effective matrix", which represents the set of outcomes as subjectively evaluated by that player [29]. Players receive the given matrix from their environment and subsequently apply an "outcome transformation" reflecting their individual preferences for various outcome distributions. The combination of the given matrix and the outcome transformation forms the effective matrix (Figure 2). Though an individual's social preferences may decrease their given payoff in a single game, groups with diverse sets of preferences are capable of resisting suboptimal Nash equilibria.
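To make the given-matrix and effective-matrix distinction concrete, the sketch below recomputes the effective matrices shown in Figure 2 from the given Prisoner's Dilemma payoffs. The two transformation rules (adding the column player's payoff, and adding the margin between the two players' payoffs) are read off the arithmetic displayed in the figure rather than stated in the text, so treat this as an illustration of the mechanism rather than a definition from the paper.

```python
import numpy as np

# Given Prisoner's Dilemma payoffs (rows: row player cooperates/defects;
# columns: column player cooperates/defects).
row_payoff = np.array([[-1., -4.],
                       [ 1., -2.]])
col_payoff = np.array([[-1.,  1.],
                       [-4., -2.]])

# Top pathway of Figure 2: the outcome transformation adds the column
# player's payoff to the row player's own payoff (an altruistic preference).
effective_top = row_payoff + col_payoff                      # [[-2, -3], [-3, -4]]

# Bottom pathway: the transformation adds the margin between the row player's
# and the column player's payoffs (a competitive preference).
effective_bottom = row_payoff + (row_payoff - col_payoff)    # [[-1, -9], [6, -2]]

print(effective_top)
print(effective_bottom)
```

In the top (altruistic) effective matrix, cooperation yields the row player a higher effective payoff than defection regardless of the column player's choice, illustrating how an outcome transformation can steer play away from the given game's mutual-defection equilibrium.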
In multi-agent reinforcement learning, reward sharing is commonly used to resolve mixed-motive dilemmas [24, 44, 52, 56]. To date, agent hyperparameters controlling reward mixture have typically been shared. This approach implies a homogeneous population of policies, echoing the representative agent assumption from economics [21, 30]. The continued reliance on shared reward mixtures is especially striking considering that the ability to capture heterogeneity is a key strength of agent-based models over other modeling approaches [19].
Homogeneous populations often fall prey to a peculiar variant of the lazy agent problem [52], wherein one or more agents begin ignoring the individual learning task at hand and instead optimize for the shared reward [24]. These shared-reward agents shoulder the burden of prosocial work, in a manner invariant to radical shifts in group composition across episodes. This "specialization" represents a failure of training to generate generalized policies.
To investigate the effects of heterogeneity in mixed-motive reinforcement learning, we introduce a novel, generalized mechanism for reward sharing. We derive this reward-sharing mechanism, Social Value Orientation (SVO), from studies of human cooperative behavior in social psychology. We show that across several games, heterogeneous distributions of these social preferences within groups generate more generalized individual policies than do homogeneous distributions. We subsequently explore how heterogeneity in SVO sustains positive group outcomes. In doing so, we demonstrate that this formalization of social preferences leads agents to discover specific prosocial behaviors relevant to each environment.

Figure 2: Interdependence theory. Given matrix (row player payoff, column player payoff): (c, c) = (-1, -1); (c, d) = (-4, 1); (d, c) = (1, -4); (d, d) = (-2, -2). The top pathway depicts a transformation process for a row player who has altruistic preferences. In this case, the outcome transformation directly transfers the column player's payoff into the effective matrix, yielding row-player effective payoffs (c, c) = -2; (c, d) = -3; (d, c) = -3; (d, d) = -4. The bottom pathway shows a transformation for a row player with competitive preferences, yielding row-player effective payoffs (c, c) = -1; (c, d) = -9; (d, c) = 6; (d, d) = -2.

2 AGENTS
2.1 Multi-agent reinforcement learning and Markov games
In this work, we consider $n$-player partially observable Markov games. A partially observable Markov game $\mathcal{M}$ is defined on a finite set of states $S$. The game is endowed with an observation function $O : S \times \{1, \dots, n\} \to \mathbb{R}^d$; a set of available actions for each player, $A_1, \dots, A_n$; and a stochastic transition function $T : S \times A_1 \times \cdots \times A_n \to \Delta(S)$, which maps from the joint actions taken by the $n$ players to the set of discrete