Algorithms in Multi-Agent Systems: A Holistic Perspective from Reinforcement Learning and Game Theory

Lu Yunlong and Yan Kai
School of EECS, Peking University

arXiv:2001.06487v3 [cs.GT] 31 Jan 2020

Abstract

Deep reinforcement learning has achieved state-of-the-art performance in single-agent games, with rapid development in its methods and applications. Recent works are exploring its potential in multi-agent scenarios. However, they face many challenges and are seeking help from traditional game-theoretic algorithms, which, in turn, show bright application promise when combined with modern algorithms and growing computing power. In this survey, we first introduce basic concepts and algorithms in single-agent RL and multi-agent systems; then, we summarize the related algorithms from three aspects. Solution concepts from game theory give inspiration to algorithms which try to evaluate the agents or find better solutions in multi-agent systems. Fictitious self-play has become popular and has a great impact on algorithms for multi-agent reinforcement learning. Counterfactual regret minimization is an important tool for solving games with incomplete information, and has shown great strength when combined with deep learning.

1. Introduction

No man is an island entire of itself. We live in a world where people constantly interact with each other, and most situations we are confronted with every day involve cooperation and competition with others. Though recent years have witnessed dramatic progress in the field of AI, its application will remain limited until intelligent agents have learned how to cope with others in a multi-agent system. The first attempt to define multi-agent systems (MAS) dates back to 2000, when Stone and Veloso defined the area of MAS and stated its open problems [1], and the number of algorithms and applications in this area has been rising ever since. In the last decade, multi-agent learning has achieved great success in games which were once considered impossible for AI to conquer, including games with a tremendous number of states like Go [2], Chess and Shogi [3], games with incomplete information like Texas Hold'em [4] and Avalon [5], and multi-player video games with extremely high complexity like Dota 2 [6] and StarCraft [7].

The main reason for the current success in multi-agent games is the combination of techniques from two main areas: deep reinforcement learning and game theory. The former provides powerful algorithms for training agents with a particular goal in an interactive environment, but it cannot be straightforwardly applied to a multi-agent setting [8]; the latter was born to analyze the behavior of multiple agents, but has been developed mainly in theory, with algorithms capable of solving problems only at a very small scale.

Deep reinforcement learning (DRL) is the combination of reinforcement learning (RL) and deep learning. RL [9] is an area of machine learning concerned with how agents ought to take actions in an environment in order to maximize some notion of cumulative reward. Deep learning [10] is a class of machine learning algorithms that uses neural networks (NN) to extract high-level features from raw input. Before the prevalence of deep learning, RL needed manually designed features to represent state information in complex games; a neural network can serve as an adaptive function approximator, allowing RL to scale to problems with high-dimensional state spaces [11] and continuous action spaces [12] [13] [14].

By adopting such approximators, DRL has proved successful in single-player games with high-dimensional input and large state spaces like Atari [15]. However, straightforward application of DRL to multi-agent scenarios has achieved only limited success, both empirically [16] and theoretically [17]. Modeling other agents as part of the environment makes the environment adversarial and no longer Markov, violating the key assumption in the original theory of RL [9].

Therefore, a mature approach has to carefully consider the nature of a multi-agent system. Compared with a single-agent game, where the only agent aims to maximize its own payoff, in a multi-agent system the environment includes other agents, all of whom aim to maximize their payoffs. There are special cases such as potential games or fully cooperative games where all agents can reach the same global optimum; however, under most circumstances an optimal strategy for a given agent no longer makes sense, since the best strategy depends on the choices of others. In game theory [18], researchers therefore focus on certain solution concepts, such as equilibria, rather than optimality [19].

Nash equilibrium [19] is one of the most fundamental solution concepts in game theory, and is widely used to modify single-agent reinforcement learning to tackle multi-agent problems. However, Nash equilibrium is a static solution concept based solely on fixed points, which is limited in describing the dynamic properties of a multi-agent system such as recurrent sets, periodic orbits, and limit cycles [20] [21]. Some researchers try to establish new solution concepts more capable of describing such properties, and use them to evaluate or train agents in a multi-agent system. The attempts include bounded rationality [22] and dynamic-system-based concepts such as the Markov-Conley Chain (MCC), proposed by α-rank [21] and applied in [23] and [24].

In addition to focusing on solution concepts, some game-theoretic algorithms have sparked a revolution in computer play of some of the most difficult games. One of them is fictitious self-play [25], a sample-based variant of fictitious play [26], a popular game-theoretic model of learning in games. In this model, players repeatedly play a game, at each iteration choosing a best response to their opponents' average strategies. The average strategy profile of fictitious players then converges to a Nash equilibrium in certain classes of games. This algorithm has laid the foundation of learning from self-play experience and has had a great impact on multi-agent reinforcement learning algorithms, producing many exciting results, including AlphaZero [3], when combined with enough computing power.
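To make the fictitious play loop concrete, the following is a minimal sketch (not from the paper) of two-player fictitious play on the matching pennies matrix game: each player best-responds to the opponent's empirical average strategy, and the average strategies converge to the mixed Nash equilibrium. The payoff matrix, variable names, and iteration budget are illustrative assumptions.

```python
import numpy as np

# Minimal fictitious play sketch on matching pennies (zero-sum matrix game).
# Each player plays a best response to the opponent's empirical average
# strategy; the empirical frequencies converge to the Nash equilibrium.
payoff = np.array([[1, -1],
                   [-1, 1]])      # row player's payoff; column player gets the negative

counts_row = np.ones(2)           # how often each row action has been played
counts_col = np.ones(2)           # how often each column action has been played

for _ in range(10000):
    avg_row = counts_row / counts_row.sum()   # opponent model held by the column player
    avg_col = counts_col / counts_col.sum()   # opponent model held by the row player

    best_row = np.argmax(payoff @ avg_col)    # row best response to the average column strategy
    best_col = np.argmin(avg_row @ payoff)    # column best response (minimizes row payoff)

    counts_row[best_row] += 1
    counts_col[best_col] += 1

print(counts_row / counts_row.sum())  # ≈ [0.5, 0.5], the equilibrium of matching pennies
print(counts_col / counts_col.sum())
```

Fictitious self-play replaces these exact best responses and averages with sampled, learned approximations, which is what makes the idea scale to large games.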
Another is counterfactual regret minimization [27], which is based on the important game-theoretic algorithm of regret matching introduced by Hart and Mas-Colell in 2000 [28]. In this algorithm, players reach equilibrium play by tracking regrets for past plays and making future plays proportional to positive regrets. This simple and intuitive technique has become the most powerful tool for dealing with games of incomplete information, and has achieved great success in games like Poker [4] when combined with deep learning. In this survey, we will focus on the above two algorithms and how they are applied in reinforcement learning, especially in competitive multi-agent RL.
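As a concrete illustration of "making future plays proportional to positive regrets", here is a minimal regret matching sketch (an assumption for illustration, not code from the paper) in self-play on rock-paper-scissors; the game, names, and iteration budget are made up for the example.

```python
import numpy as np

# Minimal regret matching (Hart and Mas-Colell) sketch in self-play on
# rock-paper-scissors. Each player accumulates, for every action, the regret
# of not having played it, then plays actions with probability proportional
# to positive regret; the average strategies approach the uniform equilibrium.
payoff = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]])   # row player's payoff (zero-sum)

def strategy_from(regrets):
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(len(regrets), 1.0 / len(regrets))

regrets = [np.zeros(3), np.zeros(3)]
avg_strategy = [np.zeros(3), np.zeros(3)]
rng = np.random.default_rng(0)

for _ in range(20000):
    strat = [strategy_from(regrets[0]), strategy_from(regrets[1])]
    a0 = rng.choice(3, p=strat[0])
    a1 = rng.choice(3, p=strat[1])
    # Regret of each alternative action = its payoff minus the realized payoff
    regrets[0] += payoff[:, a1] - payoff[a0, a1]
    regrets[1] += -payoff[a0, :] + payoff[a0, a1]
    avg_strategy[0] += strat[0]
    avg_strategy[1] += strat[1]

print(avg_strategy[0] / avg_strategy[0].sum())  # ≈ [1/3, 1/3, 1/3]
```

Counterfactual regret minimization applies this same regret-proportional update at every information set of an extensive-form game rather than over whole-game actions.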
This survey consists of four parts: Section 2 includes the basic concepts and algorithms in single-agent reinforcement learning and multi-agent systems; Section 3 summarizes the inspiration that solution concepts give to multi-agent RL; Section 4 states how fictitious self-play grows into an important tool for multi-agent RL; Section 5 gives an introduction to counterfactual regret minimization and its application in multi-agent RL.

2. Background

2.1. Single-Agent Reinforcement Learning

One of the simplest forms of single-agent reinforcement learning problems is the multi-armed bandit problem [29], where players try to earn money (maximize reward) by selecting an arm (an action) to play (interact with the environment). This is an oversimplified model, where the situation before and after each turn is exactly the same. In most scenarios, an agent may face different conditions and take multiple actions in a row to complete one turn. This introduces the distinction between states and their transitions.

More formally, a single-agent RL problem can be described as a Markov Decision Process (MDP) [9], which is a quintet ⟨S, A, R, T, γ⟩, where S represents a set of states and A represents a set of actions. The transition function T : S × A → Δ(S) maps each state-action pair to a probability distribution over states; thus, for each s, s′ ∈ S and a ∈ A, the function T determines the probability of going from state s to s′ after executing action a. The reward function R : S × A × S → ℝ defines the immediate reward that an agent receives when it transits from state s to s′ via action a; sometimes the reward can also be a probability distribution on ℝ. The discount factor γ ∈ [0, 1] balances the trade-off between the reward in the next transition and rewards in further steps; a reward gained k steps later is discounted by a factor of γ^k in the accumulation.

The solution of an MDP is a policy π : S → Δ(A), which maps states to a probability distribution over actions, indicating how an agent chooses actions when faced with each state. The goal of RL is to find the optimal policy π* that maximizes the expected discounted sum of rewards. There are different techniques for solving MDPs when all of their components are known. One of the most common is the value iteration algorithm [9], based on the Bellman equation:

v_π(s) = E_{a∼π(s)} E_{s′∼T(s,a)} [ R(s, a, s′) + γ v_π(s′) ]    (1)

This equation expresses the value (expected payoff) of a state under policy π, which can be used to obtain the optimal policy π* = argmax_π v_π(s), i.e. the one that maximizes the value function.

Value iteration is a model-based RL algorithm since it requires a complete model of the MDP.
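As a sketch of how the quintet ⟨S, A, R, T, γ⟩ and Equation (1) are used in practice, the snippet below runs value iteration on a made-up two-state, two-action MDP; it repeatedly applies the max-over-actions (optimality) form of the backup in Equation (1). The concrete numbers and names are illustrative assumptions, not from the survey.

```python
import numpy as np

# Minimal value iteration sketch on a toy MDP given as <S, A, R, T, gamma>.
# The two-state, two-action MDP below is a made-up example.
n_states, n_actions = 2, 2
gamma = 0.9

# T[s, a, s'] = probability of moving from s to s' under action a
T = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
# R[s, a, s'] = immediate reward for that transition
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.5, 0.5], [0.0, 1.0]]])

v = np.zeros(n_states)
for _ in range(1000):
    # Q(s, a) = sum_{s'} T(s, a, s') * (R(s, a, s') + gamma * v(s'))
    q = np.einsum('ijk,ijk->ij', T, R + gamma * v)
    v_new = q.max(axis=1)              # greedy (optimality) Bellman backup
    if np.max(np.abs(v_new - v)) < 1e-8:
        break
    v = v_new

policy = q.argmax(axis=1)              # greedy deterministic policy pi*(s)
print(v, policy)
```

Because the loop needs T and R explicitly, the sketch also shows why value iteration counts as model-based: without a complete model of the MDP, the backup cannot be computed and sample-based (model-free) methods are needed instead.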
