
The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)

IPOMDP-Net: A Deep Neural Network for Partially Observable Multi-Agent Planning Using Interactive POMDPs

Yanlin Han, Piotr Gmytrasiewicz
Department of Computer Science
University of Illinois at Chicago
Chicago, IL 60607

Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

This paper introduces the IPOMDP-net, a neural network architecture for multi-agent planning under partial observability. It embeds an interactive partially observable Markov decision process (I-POMDP) model and a QMDP planning algorithm that solves the model in a neural network architecture. The IPOMDP-net is fully differentiable and allows for end-to-end training. In the learning phase, we train an IPOMDP-net on various fixed and randomly generated environments in a reinforcement learning setting, assuming observable reinforcements and unknown (randomly initialized) model functions. In the planning phase, we test the trained network on new, unseen variants of the environments under the planning setting, using the trained model to plan without reinforcements. Empirical results show that our model-based IPOMDP-net outperforms the other state-of-the-art model-free network and generalizes better to larger, unseen environments. Our approach provides a general neural computing architecture for multi-agent planning using I-POMDPs. It suggests that, in a multi-agent setting, having a model of other agents benefits our decision-making, resulting in a policy of higher quality and better generalizability.

Introduction

Decision-making under partial observability is fundamentally important but computationally hard, especially in multi-agent settings. In partially observable multi-agent environments, an agent needs to deal with uncertainties from its own models as well as with the impact of other agents' actions, which are driven by those agents' models. To learn policies in such settings, one approach is to learn models and solve them through planning. If the models are known, the task reduces to a planning problem and can be formulated as an interactive partially observable Markov decision process (I-POMDP) (Gmytrasiewicz and Doshi 2005). Although approximate algorithms have made progress on mitigating the computational complexity (Doshi and Gmytrasiewicz 2009; Han and Gmytrasiewicz 2018), solving I-POMDPs exactly remains intractable in the worst case. Moreover, constructing I-POMDP models manually or learning them from observations remains difficult. An alternative approach is to learn policies directly, for example with model-free reinforcement learning methods, in which the rewards are usually assumed obtainable. While model-free policy learning can be end-to-end, it lacks the model information needed for effective generalization (Karkus, Hsu, and Lee 2017). In the multi-agent domain in particular, other agents' actions affect the environment and thereby indirectly affect our policies. We therefore prefer a model-based approach, since having a model of other agents helps us reason about their actions and consequently benefits our decision making.

The unprecedented success of deep neural networks (DNNs) in supervised learning (Ciregan, Meier, and Schmidhuber; Farabet et al. 2013; Krizhevsky, Sutskever, and Hinton 2012) provides new approaches to decision making under uncertainty. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to tasks such as Atari games (Mnih et al. 2015), robotics (Levine et al. 2016), and 2D path planning (Karkus, Hsu, and Lee 2017). In these tasks, a DNN is trained to approximate a policy function that maps an agent's observations to optimal actions. The deep Q-network (DQN), which consists of convolutional layers for feature extraction and a fully connected layer for mapping features to actions, tackles many Atari games with the same network architecture (Mnih et al. 2015). However, DQN is inherently reactive and lacks explicit planning computation (Tamar et al. 2016). The deep recurrent Q-network (DRQN) extends DQN to the partially observable domain by replacing the fully connected layer with a recurrent long short-term memory (LSTM) layer (Hochreiter and Schmidhuber 1997) to integrate temporal information (Hausknecht and Stone 2015). Furthermore, the value iteration network (VIN) embeds a specific computation structure, the value iteration algorithm, in the network architecture and solves fully observable Markov decision processes (MDPs) (Tamar et al. 2016). The QMDP-net goes further by embedding a POMDP model and a QMDP planning algorithm that solves the model in an RNN (Karkus, Hsu, and Lee 2017). However, all of these networks are either model-free or designed for single-agent settings. Unless we model other agents' impact as noise and fold it into the world dynamics, which often leads to inferior solution quality, it is not feasible to apply these networks directly to a multi-agent partially observable domain.

In this work, we propose a neural network architecture, the IPOMDP-net, for multi-agent planning under partial observability. We extend the QMDP-net to the multi-agent domain by combining an interactive POMDP (I-POMDP) model (Gmytrasiewicz and Doshi 2005) with a QMDP planning algorithm and embedding both in a recurrent neural network. We implement the IPOMDP-net on GPU-based computing devices and train it on problems of various sizes and dimensions. Training starts from randomly initialized weights and is performed in a reinforcement learning fashion, assuming the reward is obtainable. We then evaluate the trained IPOMDP-net by comparing its performance with another state-of-the-art model-free neural network. During testing, we remove the obtainable-reward assumption for all trained networks and test them on new, unseen settings of the same problems. The networks therefore use their learned policies and act based on observation and action sequences in the same way the symbolic I-POMDP does. We show empirically that our model-based IPOMDP-net outperforms the state-of-the-art model-free network on different tasks.

Our approach provides a general neural computing architecture for multi-agent planning using I-POMDPs. It combines the benefits of model-free learning and model-based planning. Compared with model-free networks, an IPOMDP-net trained on small problem sizes generalizes more effectively to larger, more difficult settings. This suggests that, in a multi-agent setting, having a model of other agents benefits our decision-making, resulting in a policy of higher quality and better generalizability.
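Since both the abstract and the approach above lean on the QMDP approximation, a brief reminder of what that planner computes may help: QMDP runs value iteration on the underlying fully observable MDP and then scores each action by the belief-weighted Q-values. The snippet below is a minimal tabular sketch of that idea for illustration only; the function name, array layouts, and the toy model are our own assumptions, not the IPOMDP-net implementation.

```python
import numpy as np

def qmdp_action(b, T, R, gamma=0.95, iters=100):
    """Tabular QMDP sketch (illustrative only).

    b: belief over states, shape (S,)
    T: transition tensor, shape (A, S, S), T[a, s, s'] = P(s' | s, a)
    R: reward matrix, shape (S, A)
    """
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)                             # V(s) = max_a Q(s, a)
        Q = R + gamma * np.einsum('asp,p->sa', T, V)  # one MDP value-iteration step
    return int(np.argmax(b @ Q))                      # argmax_a sum_s b(s) Q(s, a)

# Toy 2-state, 2-action example with arbitrary placeholder numbers.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(qmdp_action(np.array([0.7, 0.3]), T, R))
```

QMDP ignores the value of information gathering, but because it reduces planning to a small number of differentiable tensor operations it is a natural fit for embedding in a neural network, which is how the QMDP-net, and by extension the IPOMDP-net, uses it.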
Background

In this section, we briefly introduce the underlying multi-agent planning model, the interactive POMDP (Gmytrasiewicz and Doshi 2005), which is encoded in our neural network architecture.

I-POMDP Framework

I-POMDPs generalize POMDPs (Kaelbling, Littman, and Cassandra 1998) to multi-agent settings by including models of other agents in the belief state space (Gmytrasiewicz and Doshi 2005). The resulting hierarchical belief structure represents an agent's belief about the physical state, its belief about the other agents, and their beliefs about others' beliefs, and can be nested infinitely in this recursive manner. For simplicity, we consider two interacting agents $i$ and $j$; the formalism generalizes to a larger number of agents in a straightforward manner.

A computable finitely nested interactive POMDP of agent $i$, $\text{I-POMDP}_{i,l}$, is defined as

$$
\text{I-POMDP}_{i,l} = \langle IS_{i,l}, A, T_i, \Omega_i, O_i, R_i \rangle \qquad (1)
$$

where $IS_{i,l}$ is the set of interactive states, which couples the physical states $S$ with the possible models of the other agent. The intentional model of agent $j$ at level $l-1$ is defined as $\theta_{j,l-1} = \langle b_{j,l-1}, A, \Omega_j, T_j, O_j, R_j, OC_j \rangle$, where $b_{j,l-1}$ is agent $j$'s belief nested to level $(l-1)$, $b_{j,l-1} \in \Delta(IS_{j,l-1})$, and $OC_j$ is $j$'s optimality criterion. It can be rewritten as $\theta_{j,l-1} = \langle b_{j,l-1}, \hat{\theta}_j \rangle$, where $\hat{\theta}_j$ includes all elements of the intentional model other than the belief.

The set $IS_{i,l}$ can be defined inductively:

$$
\begin{aligned}
IS_{i,0} &= S, & \theta_{j,0} &= \{\langle b_{j,0}, \hat{\theta}_j \rangle : b_{j,0} \in \Delta(S)\}\\
IS_{i,1} &= S \times \theta_{j,0}, & \theta_{j,1} &= \{\langle b_{j,1}, \hat{\theta}_j \rangle : b_{j,1} \in \Delta(IS_{j,1})\}\\
&\ \ \vdots\\
IS_{i,l} &= S \times \theta_{j,l-1}, & \theta_{j,l} &= \{\langle b_{j,l}, \hat{\theta}_j \rangle : b_{j,l} \in \Delta(IS_{j,l})\}
\end{aligned}
\qquad (2)
$$

All remaining components of an I-POMDP are similar to those of a POMDP. $A = A_i \times A_j$ is the set of joint actions of all agents. $\Omega_i$ is the set of agent $i$'s possible observations. $T_i : S \times A \times S \rightarrow [0, 1]$ is the transition function. $O_i : S \times A \times \Omega_i \rightarrow [0, 1]$ is the observation function. $R_i : IS_i \times A \rightarrow \mathbb{R}$ is the reward function.
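The recursion in Eq. (2) is easier to see as a data structure: a level-0 model of $j$ is just a frame $\hat{\theta}_j$ plus a belief over physical states, while a level-$l$ model holds a belief over pairs of a physical state and a level-$(l-1)$ model of the other agent. The Python sketch below only illustrates this nesting for the two-agent case; the class and field names are ours and do not come from the paper.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass(frozen=True)
class Frame:
    """hat_theta: everything in an intentional model except the belief
    (A, Omega, T, O, R, and the optimality criterion)."""
    name: str
    params: Tuple[Any, ...] = ()

@dataclass(eq=False)  # identity-based hashing so models can appear in belief keys
class IntentionalModel:
    """theta_{j,l} = <b_{j,l}, hat_theta_j>."""
    level: int
    frame: Frame
    # Level 0: belief over physical states s.
    # Level l > 0: belief over interactive states (s, model of the other agent).
    belief: Dict[Any, float] = field(default_factory=dict)

def check_nesting(model: IntentionalModel) -> bool:
    """Check that a level-l belief ranges over (state, level l-1 model) pairs."""
    if model.level == 0:
        return all(not isinstance(k, tuple) for k in model.belief)
    for _, other in model.belief:
        if not (isinstance(other, IntentionalModel) and other.level == model.level - 1):
            return False
    return True

# A level-1 model of i whose belief ranges over (s, level-0 model of j) pairs.
j0 = IntentionalModel(level=0, frame=Frame("j_frame"), belief={"s0": 0.5, "s1": 0.5})
i1 = IntentionalModel(level=1, frame=Frame("i_frame"),
                      belief={("s0", j0): 0.6, ("s1", j0): 0.4})
print(check_nesting(i1))  # True
```

At level 0 the model collapses to an ordinary POMDP belief, which is exactly the base case $b_{j,0} \in \Delta(S)$ in Eq. (2).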
Interactive Belief Update

Given the definitions above, the interactive belief update can be performed as follows, by considering the other agent's actions and anticipated observations:

$$
\begin{aligned}
b^{t}_{i,l}(is^{t}) &= \Pr(is^{t} \mid b^{t-1}_{i,l}, a^{t-1}_{i}, o^{t}_{i})\\
&= \alpha \sum_{is^{t-1}} b^{t-1}_{i,l}(is^{t-1}) \sum_{a^{t-1}_{j}} \Pr(a^{t-1}_{j} \mid \theta^{t-1}_{j,l-1})\, T(s^{t-1}, a^{t-1}, s^{t})\\
&\quad \times O_i(s^{t}, a^{t-1}, o^{t}_{i}) \sum_{o^{t}_{j}} O_j(s^{t}, a^{t-1}, o^{t}_{j})\, \tau(b^{t-1}_{j,l-1}, a^{t-1}_{j}, o^{t}_{j}, b^{t}_{j,l-1})
\end{aligned}
\qquad (3)
$$

Conveniently, the equation above can be summarized as $b^{t}_{i,l} = SE(b^{t-1}_{i,l}, a^{t-1}_{i}, o^{t}_{i})$.

Compared with the POMDP belief update, there are two major differences. First, the probability of the other agent's actions given its models must be computed, since the state now depends on both agents' actions (the second summation). Second, the modeling agent needs to update its belief based on the anticipation of which observations the other agent might receive and how the other agent would update its own belief (the third summation).

While exact interactive belief update is intractable, there are sampling-based approximations using customized interactive versions of the particle filter (Doshi and Gmytrasiewicz 2009; ?), which will be embedded in the IPOMDP-net in a neural analogy.
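For small problems where everything can be enumerated, Eq. (3) can be written out directly as nested sums. The sketch below does that for a level-1 agent $i$ modeling a level-0 agent $j$, under several simplifying assumptions of ours: finite state, action, and observation sets; a single known frame for $j$; a caller-supplied $\Pr(a_j \mid \theta_{j,0})$; and $\tau$ implemented as $j$'s ordinary POMDP Bayes filter. It illustrates the equation itself, not the IPOMDP-net's learned, differentiable filter.

```python
import numpy as np

# Conventions assumed for this sketch (not from the paper):
#   T[s, a_i, a_j, s']      joint transition probabilities
#   O_i[s', a_i, a_j, o_i]  agent i's observation probabilities
#   O_j[s', a_i, a_j, o_j]  agent j's observation probabilities
#   b_il: dict mapping interactive states (s, b_j) -> probability, where b_j is
#         a tuple holding j's level-0 belief over S
#   policy_j(b_j): caller-supplied Pr(a_j | theta_{j,0}) as an array over A_j

def tau_j(b_j, a_i, a_j, o_j, T, O_j):
    """j's own level-0 POMDP belief update, playing the role of tau in Eq. (3)."""
    b = np.asarray(b_j)
    pred = b @ T[:, a_i, a_j, :]                 # sum_s b(s) T(s, a, s')
    post = pred * O_j[:, a_i, a_j, o_j]
    total = post.sum()
    # Degenerate-case guard for the sketch: fall back to the prediction.
    return tuple(post / total) if total > 0 else tuple(pred / pred.sum())

def interactive_belief_update(b_il, a_i, o_i, T, O_i, O_j, policy_j):
    """One step of Eq. (3) for a level-1 agent i modeling a level-0 agent j."""
    S, _, A_j, _ = T.shape
    n_oj = O_j.shape[-1]
    new_b = {}
    for (s_prev, bj_prev), p in b_il.items():
        pr_aj = policy_j(bj_prev)                            # Pr(a_j | theta_j)
        for a_j in range(A_j):
            for s in range(S):
                w0 = p * pr_aj[a_j] * T[s_prev, a_i, a_j, s] * O_i[s, a_i, a_j, o_i]
                if w0 == 0.0:
                    continue
                for o_j in range(n_oj):                      # anticipate j's observation
                    w = w0 * O_j[s, a_i, a_j, o_j]
                    if w == 0.0:
                        continue
                    bj_new = tau_j(bj_prev, a_i, a_j, o_j, T, O_j)
                    key = (s, bj_new)
                    new_b[key] = new_b.get(key, 0.0) + w
    alpha = 1.0 / sum(new_b.values())                        # normalizing constant
    return {k: alpha * v for k, v in new_b.items()}
```

Even in this tiny setting the support of the belief grows at every step, since each anticipated $o_j$ spawns a new candidate belief for $j$; this is the blow-up that motivates the interactive particle filter approximations mentioned above and their neural analogue in the IPOMDP-net.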