Asynchronous Evolution: Emergence of Signal-Based Swarming

Olaf Witkowski and Takashi Ikegami

University of Tokyo, Japan [email protected]

Abstract

Since Reynolds, swarming behavior has often been reproduced in artificial models, but the conditions leading to its emergence are still subject to research, with candidates ranging from obstacle avoidance to virtual leaders. In this paper, we present a multi-agent model in which individuals develop swarming using only their ability to listen to each other's signals. Our model uses an original asynchronous genetic algorithm to evolve a population of agents controlled by artificial neural networks, looking for an invisible resource in a 3D environment. The results demonstrate that agents use the information exchanged between them via signaling to form temporary leader-follower relations allowing them to flock together.

Introduction

The ability of fish schools, insect swarms or starling murmurations (Figure 1) to shift shape as one and coordinate their motion in space has been studied extensively because of its implications for the evolution of social cognition, collective animal behavior and artificial life (Couzin 2009).

Figure 1: Starling murmuration [1]

[1] Copyright Walter Baxter, licensed for reuse under this Creative Commons Licence.

Swarming is the phenomenon in which a large number of individuals organize into a coordinated motion. Using only the information at their disposal in the environment, they are able to aggregate together, move en masse or migrate towards a common direction. The movement itself may differ from species to species. For example, fish and insects swarm in three dimensions, whereas herds of sheep move only in two dimensions. Moreover, the dynamics can be quite diverse. While birds migrate in relatively ordered formations with constant velocity, fish schools change direction by aligning rapidly and keeping their distances, and insect swarms move in a messy and random-looking way (Budrene et al. 1991, Czirók et al. 1997, Shimoyama et al. 1996).

Numerous evolutionary hypotheses have been proposed to explain swarming behavior across species. These include more efficient mating, a good environment for learning, combined search for food resources, and reduced risk of predation (Zaera et al., 1996). Partridge and Pitcher (1979) also mention energy saving in fish schools by reducing drag.

In an effort to test the multiple theories, the past decades have counted several experiments involving real animals, either inside an experimental setup (Partridge, 1982; Ballerini et al., 2008) or observed in their own ecological environment (Parrish and Edelstein-Keshet, 1999). Those experiments present the inconvenience of being costly to reproduce. Furthermore, the colossal lapse of evolutionary time needed to evolve swarming makes it almost impossible to study the emergence of such behavior experimentally.

Computer modeling has recently provided researchers with new, easier ways to test hypotheses on collective behavior. Simulating individuals on machines offers easy modification of setup conditions and parameters, tremendous data generation, full reproducibility of every experiment, and easier identification of the underlying dynamics of complex phenomena.

From Reynolds' boids to recent approaches

In a massively cited paper, Craig Reynolds (1987) introduces the boids model, simulating the 3D swarming of agents called boids controlled only by three simple rules (a minimal sketch in code follows the list):

• Alignment: move in the same direction as neighbours

• Cohesion: remain close to neighbours

• Separation: avoid collisions with neighbours
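As a concrete illustration of how little machinery these rules require, the following minimal sketch implements one synchronous boids update in Python. It is not the model evolved later in this paper; the weights, neighborhood radius and minimum distance are placeholder values rather than parameters from any of the cited models.

```python
import numpy as np

def boids_step(pos, vel, radius=50.0, w_align=0.05, w_cohere=0.01,
               w_separate=0.1, min_dist=5.0):
    """One synchronous update of Reynolds' three rules.
    pos, vel: (N, 3) arrays of positions and velocities."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]                    # vectors from boid i to all others
        dist = np.linalg.norm(offsets, axis=1)
        neighbors = (dist < radius) & (dist > 0.0)
        if not neighbors.any():
            continue
        # Alignment: steer towards the neighbors' mean velocity.
        new_vel[i] += w_align * (vel[neighbors].mean(axis=0) - vel[i])
        # Cohesion: steer towards the neighbors' center of mass.
        new_vel[i] += w_cohere * offsets[neighbors].mean(axis=0)
        # Separation: steer away from neighbors that are too close.
        close = neighbors & (dist < min_dist)
        if close.any():
            new_vel[i] -= w_separate * offsets[close].sum(axis=0)
    return pos + new_vel, new_vel
```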
Various works have since reproduced swarming behavior, often by means of an explicitly coded set of rules. For instance, Mataric (1992) proposes a generalization of Reynolds' original model with an optimally weighted combination of six basic interaction primitives [2]. Hartman & Benes (2006) come up with yet another variant of the original model, adding to the alignment rule a complementary force that they call change of leadership. Unfortunately, in spite of the insight this kind of approach brings into the dynamics of swarming, it shows little about the pressures leading to its emergence. Many other approaches are based on informed agents or fixed leaders (Cucker & Huepe 2008, Su et al. 2009, Yu et al. 2010).

[2] Namely, those primitives are collision avoidance, following, dispersion, aggregation, homing, and flocking.

For that reason, experimenters have attempted to simulate swarming without a fixed set of rules, instead incorporating into each agent an artificial neural network brain that controls its movements. The swarming behavior is evolved by copy with mutations of the chromosomes encoding the neural network parameters. By comparing the impact of different selective pressures, this type of methodology eventually allows analyzing the evolutionary emergence of swarming.

Tu and Terzopoulos (1994) have swarming emerge from the application of artificial pressures consisting of hunger, libido and fear. Other experimenters have studied prey/predator systems to show the importance of the sensory system and of predator confusion in the evolution of swarming in prey (Ward et al. 2001, Olson et al. 2013).
In spite of the many pressures hypothesized to produce swarming behavior, the designed setups presented in the literature are often complex and specific. Previous works typically introduce models with very specific environments, where agents are given specialized sensors. While they bring valuable results to the community, one may wonder about systems with a simpler design.

In addition, even when studies focus on fish or insects that swarm in 3D (Ward et al. 2001), most keep their model in 2D. While the swarming can be considered similar in most cases, the mapping from 2D to 3D is found to be non-trivial (Sayama 2012). Indeed, the addition of a third degree of freedom may enable agents to produce significantly distinct and more complex behaviors.

Signaling agents in a resource finding task

This paper studies the emergence of swarming in a population of agents using a basic signaling system, while performing a simple resource gathering task. Simulated agents move around in a three-dimensional space, looking for a vital but invisible food resource randomly distributed in the environment. The agents emit signals that can be perceived by other individuals' sensors within a certain radius. Both the agents' motion and signaling are controlled by an artificial neural network embedded in each agent, evolved over time by an asynchronous genetic algorithm. Agents that consume enough food are enabled to reproduce, whereas those whose energy drops to zero are removed from the simulation.

During the first phase, we observe that agents progressively coordinate into clustered formations, which are preserved through the second phase. Such patterns do not appear in control experiments having the simulation start directly from the second phase, with the absence of resource locations. If at any point the signaling is switched off, the agents immediately stop swarming together. They start swarming again as soon as the communication is turned back on. Furthermore, it is observed that simulations with signaling lead to agents gathering very closely around food patches, whereas the control simulations with silenced agents end up with them wandering around erratically.

The main contribution of this work is to demonstrate that collective motion can originate, without explicit central coordination, from the combination of a generic communication system and a simple resource gathering task. A specific genetic algorithm with an asynchronous reproduction scheme is developed and used to evolve the agents' neural controllers. In addition, the search for resources is shown to improve as the agents cluster, eventually leading to the agents gathering closely around goal areas. An in-depth analysis shows increasing information transfer between agents throughout the learning phase, and the development of leader/follower relations that eventually push the agents to organize into clustered formations.

The rest of the paper is organized as follows. The next section describes the details of our model. Then simulation settings and results are discussed, before finally drawing a conclusion in the last section.

Model

Agents in a 3D world

We simulate a group of agents moving around in a cubic, toroidal arena of 600 × 600 × 600. The agents rely on energy to survive. If at any point an agent's energy drops to zero, it is immediately removed from the environment. The task for the agents is to get as close as possible to a preset resource spot. By getting close to one of those spots, agents can gain more energy, allowing them to counterbalance the energy losses due to movement and signaling. In this regard, the energy also represents each agent's fitness, and in this paper both terms are used interchangeably.

The agent's position is determined by three floating point coordinates between 0.0 and 600.0. Each agent is positioned randomly at the start of the simulation, and then moves at a fixed speed of 1 unit per iteration. The direction of motion is decided by two motors controlling Euler angles ψ for pitch (i.e. elevation) and θ for yaw (i.e. heading).

Communication among agents

Every agent is also provided with one communication actuator capable of sending signals of varying intensity (signals are encoded as floating point values ranging from 0.0 to 1.0), and six communication sensors allowing it to detect signals produced by other agents up to a distance of 100, from 6 directions, namely frontal (0, 1, 0), rear (0, −1, 0), left (1, 0, 0), right (−1, 0, 0), top (0, 0, 1) and bottom (0, 0, −1). The communication sensors are implemented so that every source point in a 100-radius sphere around the agent is linked to one and only one of its sensors. The sensor whose direction is closest to the signaling source receives one float value, equal to the sum of every signal emitted within range, divided by the distance, and normalized between 0.0 and 1.0.

Figure 2: Architecture of the agent's controller, composed of 6 input neurons (I1 to I6), 10 hidden neurons (H1 to H10), 10 context neurons (C1 to C10) and 3 output neurons (O1 to O3).
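To make the motion and sensing model concrete, the sketch below implements the position update and the six-way sensor binning described above. It makes several assumptions not pinned down in the text: one standard spherical convention for the pitch/yaw heading, sensor directions fixed in the world frame (as the direction vectors above suggest), a simple clip to [0, 1] in place of the unspecified normalization, and no toroidal wrapping of sensor offsets.

```python
import numpy as np

SENSOR_DIRS = np.array([
    [0, 1, 0], [0, -1, 0],   # frontal, rear
    [1, 0, 0], [-1, 0, 0],   # left, right
    [0, 0, 1], [0, 0, -1],   # top, bottom
], dtype=float)

ARENA, SPEED, RANGE = 600.0, 1.0, 100.0

def step_position(pos, pitch, yaw):
    """Move one unit along the heading given by the Euler angles,
    wrapping around the toroidal 600^3 arena."""
    direction = np.array([np.cos(pitch) * np.cos(yaw),
                          np.cos(pitch) * np.sin(yaw),
                          np.sin(pitch)])
    return (pos + SPEED * direction) % ARENA

def sense_signals(pos, others_pos, others_signal):
    """Six sensor activations: each in-range emitter contributes
    signal / distance to the sensor whose axis is closest."""
    sensors = np.zeros(6)
    for p, s in zip(others_pos, others_signal):
        offset = p - pos              # toroidal wrap ignored for brevity
        d = np.linalg.norm(offset)
        if 0.0 < d <= RANGE:
            k = np.argmax(SENSOR_DIRS @ (offset / d))  # closest direction
            sensors[k] += s / d
    return np.clip(sensors, 0.0, 1.0)  # stand-in for the normalization
```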

A neural network inside each agent

The agent's neural controller is implemented by an Elman artificial neural network with 6 input neurons, encoding the activation states of the corresponding 6 sensors, fully connected through a 10-neuron hidden layer to 3 output neurons controlling the two motors and the communication signal emitted by the agent. The hidden layer is given a form of memory feedback by a 10-neuron context layer, containing the values of the hidden layer from the previous time step.

All nodes in the neural network take values between 0.0 and 1.0. All output values are also floating point values between 0.0 and 1.0; the motor outputs are then converted to angles between −π and π. The activation state of internal neurons is updated according to a sigmoid function.

An asynchronous reproduction scheme

Our model differs from the usual genetic algorithm paradigm in that it designs variation and selection in an asynchronous way. The reproduction takes place continuously throughout the simulation, creating overlapping generations of agents. This allows for a more natural, continuous model, as no global clock is defined that could bias or weaken the model.

Every new agent is born with an energy equal to 2.0. In the course of the simulation, each agent can gain or lose a variable amount of energy. At iteration t, the fitness function f_i for agent i is defined by f_i(t) = r / d_i(t), where r is the reward value and d_i is the agent's distance to the goal. The reward value is controlled by the simulation such that the population remains between 100 and 500 agents. All the way through the simulation, the agents also spend a fixed amount of energy for movement (0.01 per iteration) and a variable amount of energy for signaling (0.001 × signal intensity per iteration).

The weights of every connection in the neural network (apart from the links from hidden to context nodes, which have fixed weights) are encoded in a genotype and evolved through generations of agents. Each weight is represented by a unique floating point value in the genotype vector, such that the size of the vector is simply equal to the total number of connections in the neural network. The simulation uses a genetic algorithm with overlapping generations to evolve the weights of the neural networks. Whenever an agent accumulates 10.0 in energy, a replica of itself (with a 5% mutation in the genotype) is created and added at a random position in the arena. The agent's energy is decreased by 8.0 and the new replica's energy is set to 2.0. The choice of random initial positions avoids biasing the proximity of agents, so that reproduction does not become a way for agents to create local clusters.

Indeed, a local reproduction scheme (i.e. giving birth to offspring close to their parents) rapidly leads to an explosion in population size, as the agents that are close to the resource create many offspring that will be very fit too, and thus able to replicate very fast as well. This is why the newborn offspring is placed randomly in the environment.

For our genetic algorithm to be effective, the number of agents must be maintained above a certain level. At the same time, the available computational power limits the population size. The fitness allowed to the agents is therefore adjusted in order to maintain an acceptable number of agents (between 100 and 1000) alive throughout the simulation. In addition, agents above a certain age (5000 time steps) are removed from the simulation, to keep the evolution from standing still.
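The controller and the reproduction rule can be summarized in a few dozen lines. The sketch below is one plausible reading of the description above: logistic units without biases, a context layer that is a fixed copy of the previous hidden state, and mutation implemented as Gaussian noise applied to 5% of the genes (the mutation operator is not specified in the text). The Agent object is assumed to expose .energy and .genome.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ElmanController:
    """6 inputs -> 10 hidden (with 10-unit context copy-back) -> 3 outputs."""
    N_IN, N_HID, N_OUT = 6, 10, 3

    def __init__(self, genome):
        # Genome: input->hidden, context->hidden, hidden->output weights.
        g = iter(genome)
        take = lambda r, c: np.array([next(g) for _ in range(r * c)]).reshape(r, c)
        self.w_in = take(self.N_HID, self.N_IN)
        self.w_ctx = take(self.N_HID, self.N_HID)
        self.w_out = take(self.N_OUT, self.N_HID)
        self.context = np.zeros(self.N_HID)

    def step(self, sensors):
        hidden = sigmoid(self.w_in @ sensors + self.w_ctx @ self.context)
        self.context = hidden.copy()         # fixed-weight hidden->context link
        out = sigmoid(self.w_out @ hidden)   # all outputs in [0, 1]
        pitch, yaw = (2.0 * out[:2] - 1.0) * np.pi   # map motors to [-pi, pi]
        return pitch, yaw, out[2]            # out[2] is the emitted signal

GENOME_LEN = 10 * 6 + 10 * 10 + 3 * 10   # 190 evolvable weights, no biases

def maybe_reproduce(agent, rng):
    """Asynchronous reproduction: at 10.0 energy, spawn a mutated replica
    at a random position; the parent keeps 2.0 of its energy."""
    if agent.energy < 10.0:
        return None
    child_genome = agent.genome.copy()
    mutate = rng.random(GENOME_LEN) < 0.05          # assumed: 5% of genes
    child_genome[mutate] += rng.normal(0.0, 0.1, mutate.sum())
    agent.energy -= 8.0
    return child_genome   # the child starts with energy 2.0, random position
```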
Results

Emergence of swarming

We observe agents coordinating in clustered groups. As shown in Figure 3, the simulation goes through three distinct phases. In the first one, agents wander in an apparently random way across the space. Then, during the second phase, agents progressively cluster into a rapidly changing shape, reminiscent of animal flocks [3]. In the third phase, towards the end of the simulation, the flocks get closer and closer to the goal [4], forming a compact ball around it.

Figure 3: Visualization of the development of a typical run with a single resource spot. The agents start off in random motion (left), then progressively come to coordinate in a dynamic cluster (middle), and finally flock more and more closely to the goal (right).

[3] As mentioned in the introduction, swarming can take multiple forms depending on the situation and/or the species. In this case, the clustering resembles in some aspects mosquito or starling flocking.

[4] Even though results with one goal are presented in the paper, the same behaviors are obtained in the case of two or more resource spots.

Figure 4: Visualization of the swarming behavior occurring in the second phase of the simulation.

Figure 4 shows in more detail the swarming behavior taking place in the second phase. The agents coordinate in a dynamic, quickly changing shape, continuously extending and compressing, while each individual executes fast-paced rotations on itself. Note that this fast looping seems to be necessary for the emergence of swarming, as trials with slower rotation settings never achieved this kind of dynamics. One regularly notices some agents reaching the border of a swarm cluster, leaving the group, and most of the time ending up coming back into the heart of the swarm.

In spite of the agents needing to pay a cost for signaling (cf. the description of the model), the signal keeps an average value between 0.2 and 0.5 during the whole experiment (in the case with signaling activated).

It is also noted that a minimal rotation speed is necessary for the evolution of swarming. Indeed, it allows the agent to react faster to the environment, as each turn making one sensor face a particular direction allows a reaction to the signals coming from that direction. The faster the rotation, the more evenly the information gathered by the agent about its environment is balanced across directions.

Neighborhood analysis

We choose to measure swarming behavior by looking at the average number of neighbors within a radius of 100 around each agent. Figure 5 shows the evolution of the average number of neighbors, over 10 different runs, with signaling turned on and off respectively. A much higher value is reached around time step 10^5 in the signaling case, while the value remains low for the silent control.
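The neighbor count itself reduces to a few operations on the pairwise distance matrix; a minimal sketch, assuming positions stored in an (N, 3) array and ignoring toroidal wrapping for brevity:

```python
import numpy as np

def average_neighbor_count(pos, radius=100.0):
    """Mean number of other agents within `radius` of each agent."""
    diff = pos[:, None, :] - pos[None, :, :]      # (N, N, 3) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)
    within = (dist < radius) & (dist > 0.0)       # exclude self-distance
    return within.sum(axis=1).mean()
```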

Figure 5: Comparison of the average number of neighbors (averaged over 10 runs of 10^6 iterations) with signaling turned on versus off.

We also want to measure the influence of each agent on its neighborhood. To do so, the inward average transfer entropy on agents' velocities is calculated [5] between each neighbor within a distance of 100 and the agent itself. We refer to this measure as the inward neighborhood transfer entropy (NTE). It can be considered a measure of how much an agent is "following" its neighborhood at a given time step. The values rapidly take off in the regular simulation (with signaling switched on), while they remain low for the silent control, as can be seen for example in Figure 6.

[5] The calculations are analogous to Wibral et al. (2013).

Figure 6: Plot of the average inward neighborhood transfer entropy for signaling on (red curve) and off (blue curve).
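Transfer entropy admits many estimators; as a minimal illustration, the sketch below computes TE from a source series X to a destination series Y with history length one, over binned values. The calculations in this paper follow Wibral et al. (2013), which additionally accounts for interaction delays, so this is a deliberate simplification.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y, bins=4):
    """TE(X -> Y) = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ],
    estimated from two equal-length sequences after uniform binning."""
    lo, hi = min(min(x), min(y)), max(max(x), max(y))
    sym = lambda v: min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
    xs, ys = [sym(v) for v in x], [sym(v) for v in y]
    triples = Counter(zip(ys[1:], ys[:-1], xs[:-1]))   # (y_next, y, x)
    pairs_yx = Counter(zip(ys[:-1], xs[:-1]))          # (y, x)
    pairs_yy = Counter(zip(ys[1:], ys[:-1]))           # (y_next, y)
    singles = Counter(ys[:-1])                         # (y)
    n = len(ys) - 1
    te = 0.0
    for (y_next, y_prev, x_prev), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y_prev, x_prev)]
        p_cond_self = pairs_yy[(y_next, y_prev)] / singles[y_prev]
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te
```

Under this reading, the inward NTE of an agent is the average of transfer_entropy(v_j, v_i) over all neighbors j within a distance of 100, where v denotes a velocity series; the outward NTE used below swaps source and destination.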

Similarly, we can calculate the outward neighborhood transfer entropy (i.e. the average transfer entropy from an agent to its neighbors). We may look at the evolution of this value through the simulation, in an attempt to capture the appearance of local leaders in the swarm clusters. Even though the notion of leadership is hard to define, the study of the flow of information is essential in the study of swarms. The individual agents' outward NTE shows a succession of bursts coming each time from different agents, as illustrated in Figure 7. This frequent switching of the origin of information flow can be interpreted as a continual change of leadership in the swarm. The agents tend to follow a small number of agents, but this subset of leaders is not fixed over time.

Figure 7: Plot of the individual outward neighborhood transfer entropy. Each color corresponds to a distinct agent.

On the upper graph in Figure 8, between iterations 10^5 and 2 × 10^5, we see the average distance to the goal drop to values oscillating roughly between 50 and 300; that is, the best agents reach 50 units away from the goal, while other agents remain about 300 units away. On the control experiment graph (Figure 8, bottom), we observe that the distance to the goal remains around 400.

Figure 8: Average distance of agents to the goal with signaling (top) and a control run with signaling switched off (bottom).

Swarming, allowed by the signaling behavior, lets agents stick close to each other. That ability allows for a winning strategy in the case where some agents are already successful at remaining close to a resource area. Swarming may also help agents find goals, in that it constitutes an efficient search pattern. Whilst an agent alone is subject to basic dynamics making it drift away, a bunch of agents is more able to stick to a goal area once it finds it, since natural selection will increase the density of surviving agents around those areas. In the control experiments without signaling, it is observed that the agents, unable to form swarms, do not manage to gather around the goal in the same way as when the signaling is active.

Controller response

After the simulation, we test the neural network of each swarming agent and qualitatively compare it to those of non-swarming agents. We observed that the characteristic response curves obtained with swarming agents presented a similarity (see Figure 9, top), and differed from the patterns of non-swarming agents (see Figure 9, bottom), which were also more diverse. In swarming individuals' neural networks, patterns were observed leading to higher motor output responses in the case of higher signal inputs. This is characteristic of almost every swarming individual, whereas non-swarming agents present a wide range of response functions. A higher motor response may allow the agent to slow down its course across the map by executing quick rotations around itself, therefore keeping its position nearly unchanged. If this behavior is adopted when the signal is high, that is, in the presence of signaling agents, the agent is able to remain close to them.
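A qualitative probe of this kind can be reproduced by clamping the evolved controller's inputs and context to constant levels and recording the motor response over a grid, in the spirit of Figure 9. ElmanController refers to the earlier sketch; summarizing the two motor outputs by their mean absolute angle is an illustrative choice, not the exact procedure used for the figure.

```python
import numpy as np

def response_surface(controller, steps=20):
    """Average motor response over a grid of (signal input, context) levels.
    The same signal level is fed to all six sensors, and the context layer
    is clamped to a uniform level before a single forward step."""
    levels = np.linspace(0.0, 1.0, steps)
    surface = np.zeros((steps, steps))
    for i, sig in enumerate(levels):
        for j, ctx in enumerate(levels):
            controller.context = np.full(controller.N_HID, ctx)
            pitch, yaw, _ = controller.step(np.full(controller.N_IN, sig))
            surface[i, j] = (abs(pitch) + abs(yaw)) / (2 * np.pi)  # in [0, 1]
    return levels, surface
```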
Figure 9: Plots of evolved agents' motor responses to a range of values in input and context neurons. The three axes represent signal input average values (right horizontal axis), context unit average level (left horizontal axis), and average motor responses (vertical axis). The top two graphs correspond to the neural controllers of swarming agents, and the bottom ones to non-swarming agents.

Phylogeny

To study the heterogeneity of the population, the phylogenetic tree is visualized (Figure 10). At the center of the graph is the root of the tree, which corresponds to time zero of the simulation, from which the 200 initial branches start. As those branches progress outward, they create ramifications that represent the descendance of each agent. The time step scale is preserved, and the segment drawn below serves as a reference for 10^5 iterations. Every fork corresponds to a newborn agent [6]. Therefore, every "fork burst" corresponds to a period of high fitness for the agents concerned.

[6] The parent forks counterclockwise, and the newborn forks clockwise.

In Figure 11, one can observe another phylogenetic tree, represented horizontally in order to compare it to the average number of neighbors throughout the simulation. The neighborhood becomes denser around iteration 400k, showing a higher proportion of swarming agents. This leads to an initially strong selection of the agents able to swarm together over the other individuals, a selection that is soon relaxed as the signaling pattern becomes widely spread, resulting in a heterogeneous population, as we can see in the upper plot, with numerous branches towards the end of the simulation.

The phylogenetic tree thus shows some heterogeneity, while the average number of neighbors serves as a measure of swarming in the population. The swarming takes off around iteration 400k, where there seems to be a genetic drift, but the signaling helps agents form and keep swarming.

Figure 10: Phylogenetic tree of agents created during a run. The center corresponds to the start of the simulation. Each branch represents an agent, and every fork corresponds to a reproduction process.

Figure 11: Top: average number of neighbors during a single run. Bottom: agent phylogeny for the same run. The roots are on the left, and each bifurcation represents a newborn agent.

Discussion

In our simulation, agents progressively evolve the ability to flock through communication in order to perform a foraging task. We observe a dynamical swarming behavior, including coupling/decoupling phases between agents, allowed by the only interaction at their disposal, that is, signaling. Eventually, agents come to react to their neighbors' signals, which is the only information they can use to improve their foraging. This can lead them to either head towards or move away from each other. While moving away from each other has no special effect, moving towards each other, on the contrary, leads to swarming. Swarming with each other may lead agents to slow down their pace, which for some of them may keep them closer to a food resource. This creates a beneficial feedback loop, since the fitness brought to the agents will allow them to reproduce faster, and eventually multiply this type of behavior within the total population.

In this scenario, agents do not need extremely complex learning to swarm and eventually get more easily to the resource, but rather rely on dynamics emerging from their communication system to create inertia and remain close to goal areas.
It should be noted that the simulated population has strong heterogeneity due to the asynchronous reproduction scheme, which can be visualized in the phylogenetic tree (Figure 10). Such heterogeneity may suppress swarming, but the evolved signaling helps the population to form and keep swarms. The simulations do not exhibit strong selection pressures to adopt specific behavior apart from the use of the signaling. Without high homogeneity in the population, the signaling alone allows for interaction dynamics sufficient to form swarms, which in turn proves beneficial for gaining extra fitness, as mentioned above.

The results presented in this paper can be compared to many works in the literature. Ward et al. (2001) and Olson et al. (2013) also show the emergence of swarming without explicit fitness, though those are based on a predator-prey model. The type of swarming obtained with simple pressures is usually similar to the one obtained in this study, which presents the advantage of being based on a very simple system built on resource finding and signaling/sensing. Models such as Blondel et al. (2005), Cucker & Huepe (2008) and Su et al. (2009) achieve swarming behavior based on explicit exchange of information from leaders. Our simulation improves on this kind of research in the sense that agents naturally switch leadership and followership by exchanging information over a very limited channel of communication.

Finally, our results also show the advantage of swarming for resource finding (it is only through swarming, enabled by the signaling behavior, that agents are able to reach and remain around the goal areas), comparable to the advantages of particle swarm optimization (Kennedy et al. 1995), here emerging in a model with a simplistic set of conditions.

Conclusion

In this work we have shown that swarming behavior can emerge from a communication system in a resource gathering task. We implemented a three-dimensional agent-based model with asynchronous evolution through mutation and selection. The results show that from decentralized leader-follower interactions, a population of agents can evolve collective motion, in turn improving its fitness by reaching invisible target areas.

Our results represent an improvement on models using hard-coded rules to simulate swarming behavior, as the behaviors here are evolved from very simple conditions. Our model also does not rely on any explicit information from leaders, as previously used in part of the literature (Cucker & Huepe 2008, Su et al. 2009). It does not impose any explicit leader-follower relationship beforehand, simply letting leader-follower dynamics emerge and self-organize. In spite of being theoretical, the swarming model presented in this paper offers a simple, general approach to the emergence of swarming behavior, which was once approached via hand-coded boids rules.

References

Ballerini, M., Cabibbo, N., Candelier, R., Cavagna, A., Cisbani, E., Giardina, I., Lecomte, V., Orlandi, A., Parisi, G., Procaccini, A., Viale, M., and Zdravkovic, V. (2008). Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study. Proceedings of the National Academy of Sciences, 105(4):1232–1237.

Blondel, V., Hendrickx, J. M., Olshevsky, A., and Tsitsiklis, J. (2005). Convergence in multiagent coordination, consensus, and flocking. In IEEE Conference on Decision and Control, volume 44, page 2996. IEEE.

Budrene, E. O., Berg, H. C., et al. (1991). Complex patterns formed by motile cells of Escherichia coli. Nature, 349(6310):630–633.

Couzin, I. D. (2009). Collective cognition in animal groups. Trends in Cognitive Sciences, 13(1):36–43.

Cucker, F. and Huepe, C. (2008). Flocking with informed agents. Mathematics in Action, 1(1):1–25.

Czirók, A., Barabási, A.-L., and Vicsek, T. (1997). Collective motion of self-propelled particles: Kinetic phase transition in one dimension. arXiv preprint cond-mat/9712154.

Huth, A. and Wissel, C. (1992). The simulation of the movement of fish schools. Journal of Theoretical Biology, 156(3):365–385.

Kennedy, J., Eberhart, R., et al. (1995). Particle swarm optimization. In Proceedings of IEEE International Conference on Neural Networks, volume 4, pages 1942–1948. Perth, Australia.

Mataric, M. J. (1992). Integration of representation into goal-driven behavior-based robots. IEEE Transactions on Robotics and Automation, 8(3):304–312.

Olson, R. S., Hintze, A., Dyer, F. C., Knoester, D. B., and Adami, C. (2013). Predator confusion is sufficient to evolve swarming behaviour. Journal of The Royal Society Interface, 10(85):20130305.

Parrish, J. K. and Edelstein-Keshet, L. (1999). Complexity, pattern, and evolutionary trade-offs in animal aggregation. Science, 284(5411):99–101.

Partridge, B. L. (1982). The structure and function of fish schools. Scientific American, 246(6):114–123.

Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. In ACM SIGGRAPH Computer Graphics, volume 21, pages 25–34. ACM.

Sayama, H. (2012). Morphologies of self-organizing swarms in 3D swarm chemistry. In Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation, pages 577–584. ACM.

Shimoyama, N., Sugawara, K., Mizuguchi, T., Hayakawa, Y., and Sano, M. (1996). Collective motion in a system of motile elements. Physical Review Letters, 76(20):3870.

Su, H., Wang, X., and Lin, Z. (2009). Flocking of multi-agents with a virtual leader. IEEE Transactions on Automatic Control, 54(2):293–307.

Tu, X. and Terzopoulos, D. (1994). Artificial fishes: Physics, loco- motion, perception, behavior. In Proceedings of the 21st an- nual conference on Computer graphics and interactive tech- niques, pages 43–50. ACM.

Ward, C. R., Gobet, F., and Kendall, G. (2001). Evolving collective behavior in an artificial ecology. Artificial life, 7(2):191–209.

Wibral, M., Pampu, N., Priesemann, V., Siebenhühner, F., Seiwert, H., Lindner, M., Lizier, J. T., and Vicente, R. (2013). Measuring information-transfer delays. PloS ONE, 8(2):e55809.

Yu, W., Chen, G., and Cao, M. (2010). Distributed leader– follower flocking control for multi-agent dynamical systems with time-varying velocities. Systems & Control Letters, 59(9):543–552.

Zaera, N., Cliff, D., and Bruten, J. (1996). (Not) evolving collective behaviours in synthetic fish. In Proceedings of the International Conference on the Simulation of Adaptive Behavior. Citeseer.