Inverse Reinforcement Learning of Bird Flocking Behavior

Robert Pinsler¹, Max Maag², Oleg Arenz² and Gerhard Neumann²,³

¹Engineering Department, University of Cambridge, Cambridge, UK, [email protected]
²Fachbereich Informatik, Technische Universität Darmstadt, Germany
³School of Computer Science, University of Lincoln, Lincoln, UK

Abstract— Birds within a flock are commonly assumed to be guided by simple rules, yet they show intelligent, collective behavior that is not entirely understood. We address this problem by modeling each bird as an agent of a separate Markov decision process, assuming that a bird makes decisions which maximize its own individual reward. By applying inverse reinforcement learning techniques to recover the unknown reward functions, we (1) were able to explain and reproduce the behavior of a flock of pigeons, and (2) propose a method for learning a leader-follower hierarchy. In the future, the learned reward representation could, for example, be used to teach swarms of robots how to fly in a flock.

I. INTRODUCTION

Flocks of birds can perform various complex maneuvers while maintaining highly synchronized motions. For example, in the face of predators the evasion movement of one bird can be rapidly propagated through the flock, resulting in a coordinated turning maneuver [1]. The study of such collective behavior, as seen in bird flocks or schools of fish, has spawned several mathematical models that are often inspired by biology [2], [3] or physics [4]. Various models assume that each individual only follows the basic principles of attraction, repulsion and alignment [5]–[7]. For example, Reynolds [5] was able to generate swarm-like behavior in computer simulation using these rules, suggesting that each bird follows the very same policy. If this policy is indecisive, conflicting actions are prioritized. Other models resolve this conflict by introducing different zones, within which each rule is effective [6], [8]. However, despite those attempts it is still largely unclear how exactly the different rules interplay. Furthermore, the proposed mechanisms might not suffice to model the behavior of real birds accurately. In fact, Nagy et al. [9] were able to identify additional hierarchical patterns within small flocks of homing pigeons. These findings suggest that such dynamic leader-follower relationships play an important role in explaining flocking behavior.

Understanding the way birds interact is not merely of biological interest, however. One important field of application is swarm robotics, where self-organization between different autonomous agents is needed. Such robotic swarms can be used for environmental monitoring, rescue missions or for building up communication networks [10]. Our goal is to use insights from bird flocking to improve the coordination of such multi-agent systems. We are therefore interested in finding rules that explain the decisions of birds within a flock.

Markov decision processes (MDPs) are a powerful mathematical framework for modeling such decision making problems. We assume that each bird follows a (possibly different) policy that maximizes its long-term reward under the dynamics of the MDP. For instance, birds prefer to fly in a flock because it increases their chances of survival against predators. Assuming known dynamics, the problem of explaining the behavior of the birds then reduces to finding their reward functions. By viewing each bird of a flock as an agent of a separate MDP, this problem can be formulated as an inverse reinforcement learning (IRL) problem [11], where the goal is to infer the underlying reward function of an agent from its observed actions.

Recently, there has been great interest [12]–[15] in devising IRL algorithms specifically tailored towards the multi-agent setting, often by exploiting shared structure among the agents. However, these methods usually make additional assumptions (e.g. the availability of a central controller, or the possibility to collect more data using a learned policy) that are not suited for our application.

In this paper, we apply maximum entropy IRL to recover the reward functions of pigeons within a flock, using GPS data [9] from multiple flights of flocks of up to ten pigeons as expert trajectories. Furthermore, we show how to learn a leader-follower hierarchy from the recovered reward functions. The learned reward functions serve as succinct, transferable representations of the task. This not only allows us to study the collective behavior of birds in a flock more closely, but could also be used for apprenticeship learning [16] in swarms of robots.

II. BACKGROUND

This section fixes the notation and serves as an introduction to maximum entropy IRL in continuous MDPs.

A. Preliminaries

A finite MDP is a tuple $(\mathcal{S}, \mathcal{A}, \{P_{sa}\}, r)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\{P_{sa}\}$ are the transition dynamics when taking action $a$ in state $s$, and $r(s, a)$ is the reward function. In the IRL setting, the reward function is unknown. We assume that the reward function is a linear combination of features $\phi \in \mathbb{R}^k$, i.e. $r(s, a) = \theta^\top \phi(s, a)$ with weights $\theta$. The actions of the agent are selected according to a policy $\pi(a \mid s)$. An optimal policy $\pi^*$ maximizes the expected return $J^\pi = \mathbb{E}_\pi\big[\sum_{t=0}^{T} r(s_t, a_t)\big]$, which denotes the sum of the expected rewards when following policy $\pi$, such that $\pi^* = \arg\max_\pi J^\pi$. By using the definition of the reward function, the expected return can be rewritten as $J^\pi = \theta^\top \tilde{\phi}^\pi$, where $\tilde{\phi}^\pi = \mathbb{E}_\pi\big[\sum_{t=0}^{T} \phi(s_t, a_t)\big]$ denotes the expected feature counts.
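To make the linear reward model concrete, the following minimal sketch (Python with NumPy; the trajectory format and the `features` callable are illustrative assumptions, not part of the paper) computes the empirical feature counts of a demonstrated trajectory and evaluates its return as $\theta^\top \tilde{\phi}$:

    import numpy as np

    def feature_counts(trajectory, features):
        """Sum the k-dimensional feature vectors phi(s, a) along one trajectory.

        trajectory: iterable of (state, action) pairs.
        features:   callable (s, a) -> np.ndarray of shape (k,).
        """
        return np.sum([features(s, a) for s, a in trajectory], axis=0)

    def linear_return(theta, trajectory, features):
        """Return of a trajectory under the linear reward r(s, a) = theta^T phi(s, a)."""
        return theta @ feature_counts(trajectory, features)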
B. Maximum Entropy Inverse Reinforcement Learning

Maximum entropy IRL [17] chooses the least committed distribution over behaviors that still matches the expert feature counts. Under this model, the likelihood of a trajectory $\zeta_i = \{s_1, a_1, s_2, a_2, \ldots, s_T, a_T\}$ is proportional to the exponential of the rewards obtained along the way:

P(\zeta_i \mid \theta) = \frac{1}{Z} \exp\Big( \sum_{t} r(s_t, a_t) \Big) \propto \exp\big( \theta^\top \phi_{\zeta_i} \big), \quad (1)

However, evaluating the partition function $Z$ is intractable for continuous domains. Levine and Koltun [18] therefore proposed to approximate the likelihood (1) using a Laplace approximation, yielding

P(\zeta_i \mid \theta) \approx e^{\frac{1}{2} g^\top H^{-1} g} \, \lvert -H \rvert^{\frac{1}{2}} \, (2\pi)^{-\frac{d_a}{2}},

where $g = \frac{\partial r}{\partial a}$ and $H = \frac{\partial^2 r}{\partial a^2}$ are the gradient and Hessian of the sum of rewards along trajectory $\zeta_i$ with respect to the action sequence $a = [a_0, \ldots, a_T]$. The approximation is equivalent to assuming that the expert trajectories are only locally optimal, eliminating the requirement of global optimality usually assumed in IRL. The approximate log-likelihood objective is given by

\mathcal{L} = \frac{1}{2} g^\top H^{-1} g + \frac{1}{2} \log \lvert -H \rvert - \frac{d_a}{2} \log 2\pi, \quad (2)

which is maximized using gradient-based optimization.
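For intuition, here is a minimal sketch of how objective (2) could be evaluated for a single trajectory. It is written in Python/NumPy, all names (e.g. `reward_of_actions`, `laplace_log_likelihood`) are our own, and the gradient g and Hessian H are formed by finite differences purely for brevity; this is a sketch of the Laplace-approximated objective, not the authors' implementation:

    import numpy as np

    def laplace_log_likelihood(reward_of_actions, a_seq, eps=1e-4):
        """Approximate log-likelihood (2) of one locally optimal action sequence.

        reward_of_actions: callable mapping a flattened action sequence of length d_a
                           to the summed reward along the trajectory.
        a_seq:             the observed (expert) action sequence, shape (d_a,).
        """
        d_a = a_seq.size
        eye = np.eye(d_a)

        def grad(a):
            # g_i = dr/da_i, by central differences
            return np.array([(reward_of_actions(a + eps * eye[i])
                              - reward_of_actions(a - eps * eye[i])) / (2 * eps)
                             for i in range(d_a)])

        g = grad(a_seq)
        # H_ij = d^2 r / (da_i da_j), symmetrized to remove numerical noise
        H = np.array([(grad(a_seq + eps * eye[j])
                       - grad(a_seq - eps * eye[j])) / (2 * eps)
                      for j in range(d_a)]).T
        H = 0.5 * (H + H.T)

        # L = 1/2 g^T H^{-1} g + 1/2 log|-H| - d_a/2 log(2*pi)
        return (0.5 * g @ np.linalg.solve(H, g)
                + 0.5 * np.linalg.slogdet(-H)[1]
                - 0.5 * d_a * np.log(2.0 * np.pi))

In practice, the full training objective would sum this term over all expert trajectories and be maximized with respect to the reward weights θ by gradient-based optimization, as stated above; for a linear reward, g and H would also be computed analytically rather than by finite differences.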
III. APPROACH

In this section, we present our approach towards learning the reward functions of pigeons. We use the pigeon flocking dataset of Nagy et al. [9] as training data. The position data was collected at a sampling rate of 0.2 s during two different setups: free flights around their lair and homing flights. The provided data contains position, velocity and acceleration information. The GPS positions have a reported precision of 1-2 m along the x- and y-coordinates and a substantially larger error in the z-direction.

A. Data preprocessing

Prior to the learning process, we conduct several preprocessing steps.

B. Modeling

The decision making of the observed pigeons is modeled by bird-specific MDPs that only differ in the reward function of the respective bird. The problem of learning a reward function for each pigeon is thus decomposed into separate IRL problems. Because the state and action spaces of birds are continuous, we follow [18] to approximate the log-likelihood of the maximum entropy IRL objective. As system dynamics, we assume a double integrator,

s_{t+1} = A s_t + B a_t,

where $s_t = [x_1 \; \dot{x}_1 \; x_2 \; \dot{x}_2 \; x_3 \; \dot{x}_3]^\top$ and $a_t = [\ddot{x}_1 \; \ddot{x}_2 \; \ddot{x}_3]^\top$ with position $x \in \mathbb{R}^3$. $A$ is a block-diagonal $6 \times 6$ matrix with blocks

\begin{bmatrix} 1 & dt \\ 0 & 1 \end{bmatrix}, \quad \text{and} \quad B = \begin{bmatrix} 0 & dt & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & dt & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & dt \end{bmatrix}^\top,

where $dt = 0.2$. The reward function is modeled as a linear combination of $k$ features as defined in Table I. The Back Distance and Right Distance features are based on observations from Nagy et al. [9], according to which leaders in pigeon flocks often fly in the front and to the left of the flock. Furthermore, the sum of each of the other birds' repulsions will be denoted as $\phi_{\mathrm{rep}}^P = \sum_{i=1}^{N_p} \phi_{\mathrm{rep},i}$. Note that using both $\phi_{\mathrm{attr}}$ and $\phi_{\mathrm{rep}}$ (or $\phi_{\mathrm{rep}}^P$) allows punishing the agent when its distance to a flock member is either too small or too large. Finally, we define another bird $\bar{p}$ (in addition to the existing pigeons in the data set), which represents the flock mean. Its states are calculated as $s_{\bar{p}} = \frac{1}{N} \sum_{i=1}^{N} s_{p_i}$.

C. Hierarchy Learning

After learning the reward function for every pigeon, we leverage the learned feature weights to infer a hierarchy that encodes leader-follower relationships. In order to compare the weights between pigeons, we apply the following normalization to each feature weight $\phi_{k,a}$ of bird $a$:

\hat{\phi}_{k,a} = \frac{\phi_{k,a} - \mu_{\phi_k}}{\sigma_{\phi_k}},

where $\mu_{\phi_k}$ and $\sigma_{\phi_k}$ are the weight mean and standard deviation across the flock. We assume a pigeon $a$ is following another pigeon $p$ if the feature weight of $a$ with respect to $p$ is higher than some threshold $\tau = 1.0$. Intuitively, a high weight indicates that bird $p$ has a high influence on the reward of bird $a$.
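To illustrate this thresholding step, the minimal sketch below (our own construction in Python/NumPy) assumes the learned weights are collected in an N x N array `weights`, where `weights[a, p]` is bird a's weight for the feature associated with pigeon p; this indexing is an assumption, since Table I is not reproduced here. It standardizes the weights across the flock and extracts follower-leader pairs for tau = 1.0:

    import numpy as np

    def leader_follower_edges(weights, tau=1.0):
        """Infer leader-follower relations from learned feature weights.

        weights: (N, N) array; weights[a, p] is bird a's learned weight for the
                 feature associated with pigeon p (diagonal entries are ignored).
        Returns a list of (follower, leader) index pairs.
        """
        w = np.asarray(weights, dtype=float)
        mu = w.mean(axis=0)              # weight mean across the flock, per feature
        sigma = w.std(axis=0) + 1e-12    # weight std across the flock (guard against zero)
        z = (w - mu) / sigma             # normalized weights

        edges = []
        n = w.shape[0]
        for a in range(n):
            for p in range(n):
                if a != p and z[a, p] > tau:   # a's weight w.r.t. p exceeds the threshold
                    edges.append((a, p))       # interpret as: pigeon a follows pigeon p
        return edges

The resulting pairs can then be read as edges of a directed leader-follower graph over the flock.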
