
Automated Systems - Computer Engineering

Bibliographic Report

Simulation of a complex system: Behaviors

Jury: Pr. Laurent Hardouin, Dr. Nicolas Delanoue, Dr. Remy Guyonneau, Pr. Jean-Baptiste Fasquel, Pr. David Rousseau

Author: M. Clément Macadré

Draft August 22, 2020

Acknowledgements

I would like to thank all the people responsible for the abundance of material and scientific papers available for free all over the Internet, which made the development of this report enjoyable and very interesting. Thanks as well for the guidance of my tutor, Pr. Laurent Hardouin.


Contents

Introduction
0.1 Context
0.2 Outline of the research

1 Flocking simulation: State of the art
1.1 Craig W. Reynolds: Flocks, Herds, and Schools, A Distributed Behavioral Model
1.1.1 Flocking Behaviors
1.1.2 The environment's pressure
1.1.3 Application of the flock model
1.2 James Shannon, BSc: Exploring the real world applications of cellular automata and its application to the simulation of flocking behavior
1.2.1 Cellular automaton
1.2.2 Cellular automaton applications
1.3 James Kennedy and Russell Eberhart: Particle Swarm Optimization
1.3.1 Summary of the PSO's development
1.3.2 Multidimensional search and Neural Networks

2 Different approaches to the flocking simulation
2.1 Flock simulation using Reynolds' "Boids theory"
2.1.1 A few details on the laws that govern the flock
2.2 Cellular Automata
2.2.1 Game of life
2.2.2 Flocking simulation


2.3 Particle Swarm Optimization
2.3.1 PSO's Algorithm
2.3.2 Use in training a neural network

3 Outcome of the study
3.1 Results of the simulations and the application
3.1.1 Flock simulation using Reynolds' "Boids theory"
3.1.2 Cellular Automata
3.1.3 Particle Swarm Optimization
3.1.4 PSO's use in training a neural network

Conclusion
3.2 Evaluation
3.2.1 Future work

List of Figures

1.1 Complex patterns emerge from the flocking of birds
1.2 Pattern emerging from a Rule 30 cellular automaton with a specific initial state
1.3 A Conus textile shell similar in appearance to Rule 30

2.1 Alignment rule applied to the Boid
2.2 Separation rule applied to the Boid
2.3 Cohesion rule applied to the Boid
2.4 The board can be seen as a torus of revolution
2.5 Cell's 2-layer neighborhood
2.6 CA grid
2.7 CA Steering
2.8 PSO's search strategy
2.9 I/O of a single neuron
2.10 XOR neural network layout
2.11 XOR Output Computation

3.1 Flock Centering and Collision Avoidance implementation
3.2 Velocity Matching rule implementation
3.3 Flock of Boids
3.4 Conway's Game of life
3.5 Cellular automaton: migration of a flock of cells
3.6 Particle system's displacement
3.7 Optimizing with a particle system


3.8 Neural network trained to solve the XOR logic
3.9 Neural network: Bias and weights values

List of acronyms

CA Cellular Automaton

PSO Particle Swarm Optimization

NN Neural Network

XOR The exclusive-or logic


Introduction

0.1 Context

This bibliographical report is a popularization and synthesis exercise which aims to provide a first experience of scientific research on a subject of my choice. Thus, based on the knowledge of complex-system simulation acquired during my first year of engineering studies and a certain curiosity for biomimicry [5], I decided to present and then carry out several approaches to the simulation of a bird flock, as well as an application to neural networks [2].

0.2 Outline of the research

The first chapter of this report presents three scientific papers covering theories about the simulation of flocks of birds, schools of fish and any system exhibiting emergent behaviors. The first approach concerns the work of Craig Reynolds [16] in 1986, who developed rules between individuals called "Boids" in order to create an autonomous simulation. The second paper addresses the notion of cellular automata with the work of James Shannon [18] in 2013 and its application to the simulation of flocking behaviors. The third paper focuses on particle swarm optimization, developed by James Kennedy and Russell Eberhart [9] in 1995, which was originally inspired by the behavior of a flock of birds. In the second chapter we will present three simulations based on the theories previously detailed. We start with a 3D simulation of a flock of birds based on Reynolds' Boids theory, then create a cellular automaton based on Shannon's theory, with a small digression on Conway's game of life [6]. Finally we present the foraging simulation of a flock using a particle system that exhibits an optimizing behavior, which we then use to train a neural network. In the last chapter we comment on the final rendering of the simulations, as well as on the performance of the application mentioned previously.


Chapter 1

Flocking simulation: State of the art

It is easy to imagine, by observing the movements of a flock of birds or a school of fish, that we are dealing with a single giant entity with a will of its own. Indeed, one is quickly absorbed by the lightning synchronicity with which hundreds of members can gather, change direction or escape from a predator. And yet it is indeed the individual actions of the elements that make up the flock that allow such behaviors to emerge [17]. A flock of birds is therefore considered to be a complex system [13]. Under this name are grouped systems composed of a multitude of members whose individual interactions bring out global properties. One property of such a system is its ability to self-organize through a collection of simple mechanisms. And at the origin of these mechanisms are the interactions of each element with its environment.

1.1 Craig W. Reynolds : Flocks, Herds, and Schools, A Distributed Behavioral Model

Synthesis of Craig W. Reynolds's publication [16]

Reynolds explains in his publication that it is possible to reproduce the behavior of a pack, herd, swarm or flock by applying a minimal number of rules to the individuals that make up this system. Each individual is called a "Boid" and behaves according to its own perception of the system. In fact, one could have the illusion that a flock of birds moves as a single coordinated unit, whereas in reality it is the reaction of each individual to its neighbors that causes the behavior we observe to emerge (fig. 1.1).

Figure 1.1: Complex patterns emerge from the flocking of birds.

Reynolds notes that a flocking simulation is similar to a particle system, a graphical technique used to simulate many natural phenomena such as fire, explosions or smoke. The similarity lies in the fact that the system is composed of a large number of particles, each with its own behavior, interacting with neighboring particles. Except that in our simulation the particles are replaced by individuals with a geometric model; these individuals must therefore manage their orientation to match the direction in which they are moving. It is important to consider that the formation of a flock is an asset for the survival of its members [3]. Indeed, there is safety in numbers: the flock offers multiple advantages to escape predators, save energy by taking advantage of group dynamics, find food more quickly, and benefit from social and reproductive interactions. We can therefore deduce that the flock aspires to be as massive as possible. We also know that the integrity of a flock depends on the interactions of its members and their ability to respect certain behaviors. We can therefore observe a willingness of individuals to group together while avoiding collisions.

1.1.1 Flocking Behaviors

Reynolds therefore sought to identify and simplify as much as possible the rules that, applied to each individual, allow a flock to emerge spontaneously. He identified that these rules are based on two parameters: the distance between two individuals and their velocities. The rules are therefore as follows: "Collision Avoidance", "Velocity Matching" and "Flock Centering", as well as environmental pressures. This collection of rules prevents the members of the flock from touching each other, forces individuals to adapt their speeds to those of their neighbors and encourages the grouping of individuals in a flock. In the next chapter we will provide details on these rules in order to implement them in a simulation. Each of the three rules will thus produce a suggestion of direction and speed to adopt to take part in the flock. And depending on the situation, these suggestions will be more or less important; for example, if a collision is imminent, it will be necessary to favour an avoidance manoeuvre rather than adapting the speed of movement to those of the surroundings. These subtleties will therefore have to be taken into account when designing a simulation.

1.1.2 The environment’s pressure

In order to create a convincing simulation, the swarm must interact with its environment. It must be able to break up and reform to avoid an obstacle or flee from a predator. Without pressure from the environment (avoiding an obstacle, countering gravity, etc.), the rules applied to our system will bring the flock to a state of rest: each Boid tries to position itself at a distance respecting the rules of separation and cohesion, and without disturbance the flock comes to a standstill in a balance that the environment must upset. The tuning of these disturbances is therefore an important element in creating a credible model. One solution is to create a repulsive force field emanating outward from an obstacle. This gradually repels a Boid as it approaches the obstacle, provided its trajectory is not exactly perpendicular to the action of the repulsion field; otherwise the Boid is simply slowed down and does not turn. Not avoiding an obstacle and coming to a standstill in full flight is obviously not a natural behavior. Fortunately, the rest of the flock will be deflected, so the Boid will be able to reorient itself according to the alignment rule, and thus to the general direction of the flock, and keep going.

1.1.3 Application of the flock model

Reynolds observes that dense car traffic follows essentially the same rules as a flock, in the sense that cars obviously try to avoid collisions and stay together, although the latter constraint is imposed by the capacity of the highway rather than sought by motorists. Nevertheless, the efficiency of new infrastructure could be tested before it is even built by subjecting it to a "stress test" in simulation. Finally, in the next chapter, we will create a 3D simulation of flocking behavior using Reynolds's Boid theory.

1.2 James Shannon, BSc: Exploring the real world applications of cellular automata and its application to the simulation of flocking behavior

1.2.1 Cellular automaton

Synthesis of James Shannon's publication [18]

Shannon explains that a cellular automaton (CA) is an n-dimensional arrangement of cells. These cells can be of variable geometry and arranged in a discrete or infinite repetitive pattern. For example, the squares of a chess board could be seen as a two-dimensional automaton. These cells have a given number of states, usually 1 or 0, living or dead. They also have a specific number of neighboring cells, depending on the structure and dimensions of the board. In these simulations the states of the cells are all updated simultaneously, according to the states of the neighboring cells and the rules that determine the evolution of the individuals. To better grasp the idea, let us focus on a one-dimensional automaton, where each cell has a Boolean state, 0 or 1. A cell thus has two neighbors, one on the left and one on the right. We then have a neighborhood of 3 cells (counting the one whose state is updated). Therefore, this one-dimensional CA has 2³ (8) possible configurations. In two dimensions with a neighborhood of 8 cells, there are 2⁹ (512) configurations, and thus as many cases for which the rule must specify an outcome. These rules are classified in four categories according to their effect on the evolution of the cells, which Shannon details as:

Category 1: Nearly all layouts evolve quickly into stable and homogeneous structures. All the initial random elements of these layouts disappear.

Category 2: Almost all arrangements evolve into stable or oscillating structures. Some of the initial random arrangements may dissipate, but usually a small portion of them remains. Finally, local changes in the original structures do not spread to neighboring structures.

Category 3: Almost all models evolve in a pseudo-random or chaotic manner. All structures that appear in these models are susceptible to destruction by ambient noise. Any changes in the local models will generally spread throughout the simulation space indefinitely.

Category 4: Almost all models evolve into complex structures that behave in fascinating ways. Local structures tend to form and can survive for long periods of time.
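As an illustration (not taken from Shannon's paper), the short Java sketch below evolves such a one-dimensional, two-state automaton from a single living cell. With RULE set to 30 it produces the kind of triangular, chaotic pattern shown in fig. 1.2; the rule number simply encodes the next state for each of the 8 neighborhood configurations.

```java
// Minimal sketch of a one-dimensional, two-state cellular automaton.
// The 8-bit rule number gives the next state for each of the 2^3 = 8
// neighborhood configurations (left, center, right).
public class ElementaryCA {
    static final int RULE = 30;        // try 110 for a Category 4 rule
    static final int WIDTH = 64;
    static final int GENERATIONS = 32;

    public static void main(String[] args) {
        int[] cells = new int[WIDTH];
        cells[WIDTH / 2] = 1;          // single living cell as the initial state

        for (int gen = 0; gen < GENERATIONS; gen++) {
            print(cells);
            int[] next = new int[WIDTH];
            for (int i = 0; i < WIDTH; i++) {
                // Wrap around the edges: the line behaves like a ring.
                int left   = cells[(i - 1 + WIDTH) % WIDTH];
                int center = cells[i];
                int right  = cells[(i + 1) % WIDTH];
                int config = (left << 2) | (center << 1) | right;  // 0..7
                next[i] = (RULE >> config) & 1;    // look up the rule bit
            }
            cells = next;
        }
    }

    static void print(int[] cells) {
        StringBuilder sb = new StringBuilder();
        for (int c : cells) sb.append(c == 1 ? '#' : '.');
        System.out.println(sb);
    }
}
```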

1.2.2 Cellular automaton applications

Shannon is particularly interested in Category 4 rules, some of which are capable of transforming a cellular automaton into a universal computing system [15]. This has been proven for Rule 110 and for Conway's game of life. This implies that, with enough resources and time, a cellular automaton associated with the right rule can solve any computational problem based on an algorithmic procedure. One can also mention the Category 3 Rule 30, which induces a chaotic and aperiodic behavior of a cellular automaton. This rule is considered to be the key to understanding how simple rules can eventually exhibit emergent and chaotic behavior in nature:

Figure 1.2: Pattern emerging from a Rule 30 cellular automaton with a specific initial state

Figure 1.3: A Conus textile shell similar in appearance to Rule 30

Above (fig. 1.3) is the appearance of a Rule 30-like pattern on the shell of a conical snail species known as Conus textile.

In short, we have a dynamic system capable of evolving complex structures in space using very simple rules applied to members of this system. There are obvious similarities with the modeling of a flock of birds using Reynolds’s Boids theory. Therefore, in the next chapter, we will use those concepts to create the simulation of a flock of birds using a cellular automaton.

1.3 James Kennedy and Russell Eberhart : Particle Swarm Optimization

Synthesis of James Kennedy’s and Russell Eberhart’s Publication [9] Kennedy and Eberhart present the origins of particle system optimization (PSO) through the simulation of a simplified social behavior: the search for food of a school of fish and evolutionary algorithms. This type of algorithm is inspired by natural mechanisms to solve problems of all types. As for the PSO, the idea is to make a swarm of solutions evolve towards an optimal solution. They are stochastic algorithms, i.e. they use iteratively ran- dom processes. The PSO is an algorithm that solves basic mathematical operations and thus, is inexpensive in terms of memory and computing speed. This algorithm is based on simple concept and rules, yet it solves complex problems. Indeed, to find an optimal solution, one just needs to correctly understand the question and then identify the right parameters to be applied to the model. The authors summarize this situation elegantly: "This algorithm [...] allows wisdom to emerge rather than trying to impose it". The authors point out that, until now, the methods use to simulate flocks of birds have been based on sets of rules that maintain an optimal distance between individuals during their journey. They also observe that, according to sociobiologist E. 0. Wilson [20], a school of fish takes advantage of the large number of members to systematically find food, thanks to individual discoveries and their experiences. In other words, a member solves the problem of "finding food" through social and cognitive factors.

1.3.1 Summary of the PSO’s development

Kennedy and Eberhart remind us that the particle system was originally used to simulate the foraging of a flock of birds using rules that are now familiar to us. At each iteration in the original algorithm, the members of the flock would copy the speed of their neighbors, as in Reynolds' "Velocity Matching" rule. This rule has the effect of homogenizing the speed and direction of movement of the flock, giving it an unrealistic behavior. To overcome this problem, a random variation of the speed and direction of the agents was introduced to disturb the flock. Then a new approach was introduced, called the "Cornfield Vector", which aims to make the swarm gravitate and converge towards a point symbolizing food. The principle is very close to the final PSO algorithm: at each iteration the individuals compare their position with that of the target using a cost function. A cost function aims at minimizing or maximizing a parameter towards an optimum solution; here we try to minimize the distance separating the particles from their objective. The particles remember their best placement (Pbest), and this information is pooled within the flock to obtain the best overall position (Gbest). The best personal position is obtained from cognitive factors specific to the particle, while the best global position is acquired socially. The particle will then move a random distance in the general direction of the best global position while staying in the vicinity of its best personal position; keep in mind that both values may change at each iteration. It is also possible to apply weights to the Pbest and Gbest information to moderate their respective contributions to the movement of the particle. Using this algorithm, a convergence of the flock towards the target is observed as expected. Then a last element was added to the algorithm, called "Acceleration by Distance", which consists in varying the range of the displacement of a particle as a function of its distance from the target. Indeed, if the particle is far from the objective we want it to move quickly to cover a larger distance. As it gets closer to the target it must slow down to explore smaller areas and refine the best overall position. This example of foraging is a particle swarm optimization because Gbest can be interpreted as the solution to the "feeding" problem. Indeed, the particle swarm combined with the algorithm presented earlier allows the position of the food source to be approximated.

1.3.2 Multidimensional search and Neural Networks

We know from the previous example that particle swarm optimization is able to solve a linear problem in two dimensions. The authors indicate that it is possible to handle N-dimensional problems by simply changing the dimensions of the position vectors. Indeed, our position vectors have for the moment two components, the x and y positions of the particle in space, and we can easily imagine adding the z component to move to a 3-dimensional problem. Kennedy and Eberhart then introduce us to an application of the PSO: the training of a neural network (NN). The algorithm was used to train a NN to solve the XOR logic using the solutions obtained by a particle swarm evolving in a 13-dimensional search space, allowing the neural network to achieve its goal with a success rate of 95%. In the next chapter we will implement a simulation of the behavior of a flock of birds migrating towards a goal, and then we will reproduce in detail the neural network application mentioned previously.

Chapter 2

Different approaches to the flocking simulation

This chapter will present the design of three different flocking simulations inspired by the articles discussed previously.

2.1 Flock simulation using Reynolds "Boids theory"

To model a flock of birds, I used the Unity graphics engine, well known for its performance in 3D modeling and the ease of use of its tools thanks to the abundance of online documentation. Moreover, the modeling of a system composed of objects lends itself particularly well to an object-oriented programming language, in this case C#. Reynolds observed that the only constraint on swarm size is the space available to move around. Thus a large number of individuals must not influence the behavior of the group, i.e. there is no limit on the number of individuals after which the swarm would be forced to explode and form smaller clusters. Therefore, it is already known that in terms of programming it will be necessary to ensure that the Boids communicate only with their neighbors in a small area of perception, and not with all the individuals that make up the swarm. Otherwise the complexity of the algorithm would be quadratic (O(n²)), i.e. the amount of work required would increase with the square of the number of individuals. Unity makes it possible to attach a "sphere of perception" to each individual. This reduces the computational load to an almost linear complexity (O(n)), increasing performance considerably.

2.1.1 A few details on the laws that govern the flock

Presentation of the rules that govern behavior:


Reynolds identified three rules applied to each member of a flock: "Separation", "Alignment" and "Cohesion".

Alignment: To avoid collisions, each Boid should try to move at the same speed as its neighbors, to avoid overtaking them or getting caught up. The velocity is represented as a 3-dimensional vector $V = (V_x, V_y, V_z)^T$ describing the distance travelled each second along the x, y and z axes of the global reference frame. Thus, by collecting the velocity vectors of the neighbors, we can deduce not only the speed to be adopted but also the direction.

Figure 2.1: Alignment rule applied to the Boid

To apply this rule, we will collect the velocity vector of each Boid present in the sphere of perception of the Boid to be steered. Then we will add these vectors (in black in fig. 2.1). We will divide the result by the number of neighbors to obtain an average velocity (red dotted vector). Finally, we won't directly assign this vector to the Boid, otherwise it will behave unnaturally since it will change direction and velocity immediately. To steer our Boid correctly, we subtract its own velocity from the average velocity to obtain a steering vector (in green). This way, the Boid will be subjected to an acceleration that will progressively redirect it towards the direction and general speed of the group.

$$a_{\text{Alignment}} = \frac{1}{N}\begin{pmatrix} V_{x1} + \dots + V_{xN} \\ V_{y1} + \dots + V_{yN} \\ V_{z1} + \dots + V_{zN} \end{pmatrix} - \begin{pmatrix} V_x \\ V_y \\ V_z \end{pmatrix} \qquad (2.1)$$

Separation: The separation rule will be used to move the Boids away from each other, according to the distance between them. Indeed, a Boid will try to move further away from a close neighbor than from a more distant one.

Figure 2.2: Separation rule applied to the Boid

Referring to the fig.2.2, we collect the distance between each neighbor and the Boid to be oriented (A). A repulsive force is applied to (A), inversely proportional to the distance between the Boid and its counterpart (red dotted arrow). The average of these forces (in orange) represents the ideal direction and speed to keep a sufficient distance between the Boids. But applying it directly to the Boid would not be natural, therefore we subtract the speed vector of A (black dotted arrow) from the repulsion vector to form a second steering vector (green). This vector will redirect the Boid to the desired trajectory.

N X  X V  1 X 1 i x aSeparation = ( ( Yi − Y )) − Vy (2.2) p 2 2 2 2 2 2       N (xi − x ) + (yi − y ) + (zi − z ) i=1 Zi Z Vz

Cohesion:

Each Boid will seek to place itself at the center of mass of the formation to which it belongs. Indeed, the average position where the individual will be attracted will therefore depend on the number of individuals in its sphere of perception. As Reynolds explains, if a Boid is located deep in the swarm, the density of neighbors will be homogeneous and it will place itself at an equivalent distance from its counterparts: the center of mass. But if an individual is at the edge of the flock, neighboring Boids will be more concentrated on one side. The centre of gravity of the neighboring Boids will therefore be shifted towards the body of the swarm. In this case, the need to get closer to the flock is stronger and the flight path will be slightly deviated towards the center of the overall flock.

Figure 2.3: Cohesion rule applied to the Boid

In the simulation (fig. 2.3), we will therefore collect the position of each Boid in the sphere of perception of (A) in order to obtain the center of gravity of the local cluster. Then we create a vector emanating from (A) towards the center of gravity (red arrow). Again, this vector is not directly assigned to the Boid (A), for the sake of a realistic simulation. So we create a third steering vector (in green).

N X  X V  1 X i x a = ( Y − Y ) − V ) (2.3) Cohesion N  i     y i=1 Zi Z Vz

Finally, the addition of the three steering vectors (2.1, 2.2, 2.3) derived from the rules developed by Craig Reynolds allows a Boid to move while avoiding collisions and remaining grouped with its swarm. We will discuss the results of the simulation in the next chapter.
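As an illustration of how these three steering vectors can be combined in code, here is a minimal sketch in plain Java. The actual project is written in C# for Unity, so the Vec3 helper class, the method names and the neighbor list passed in are assumptions made for this example, not the project's code. Each method returns the corresponding steering vector for one Boid, given the neighbors found inside its sphere of perception.

```java
import java.util.List;

// Minimal sketch of the three steering rules for a single Boid.
// Vec3, Boid and the neighbor list are illustrative assumptions.
class Vec3 {
    double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    Vec3 add(Vec3 o)     { return new Vec3(x + o.x, y + o.y, z + o.z); }
    Vec3 sub(Vec3 o)     { return new Vec3(x - o.x, y - o.y, z - o.z); }
    Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
    double length()      { return Math.sqrt(x * x + y * y + z * z); }
}

class Boid {
    Vec3 position, velocity;

    // (2.1) Average neighbor velocity minus our own velocity.
    Vec3 alignment(List<Boid> neighbors) {
        Vec3 sum = new Vec3(0, 0, 0);
        for (Boid b : neighbors) sum = sum.add(b.velocity);
        return sum.scale(1.0 / neighbors.size()).sub(velocity);
    }

    // (2.2) Repulsion inversely proportional to the distance, minus our velocity.
    // The vector points from the neighbor towards this Boid (repulsive direction
    // described in the text).
    Vec3 separation(List<Boid> neighbors) {
        Vec3 sum = new Vec3(0, 0, 0);
        for (Boid b : neighbors) {
            Vec3 away = position.sub(b.position);
            double d = Math.max(away.length(), 1e-6);   // avoid division by zero
            sum = sum.add(away.scale(1.0 / (d * d)));   // magnitude ~ 1/d
        }
        return sum.scale(1.0 / neighbors.size()).sub(velocity);
    }

    // (2.3) Vector towards the local center of gravity, minus our velocity.
    Vec3 cohesion(List<Boid> neighbors) {
        Vec3 center = new Vec3(0, 0, 0);
        for (Boid b : neighbors) center = center.add(b.position);
        center = center.scale(1.0 / neighbors.size());
        return center.sub(position).sub(velocity);
    }
}
```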

2.2 Cellular Automata

2.2.1 Game of life

Shannon familiarizes us with the concept of cellular automatons by introducing us to the game of life [6], designed by the British mathematician John Horton Conway in 1970. This game takes place on a two-dimensional grid similar to a chess board and progresses through generations. At each iteration, a cell counts how many of its 8 neighboring cells are alive, and this information is then subjected to the rules that govern the game. We can already notice similarities with Reynolds's Boids theory: each element evolves in the world according to a limited "sphere of perception", and most importantly, the rules governing its evolution are exceedingly simple.

1. A cell that is currently dead will be reborn in the next generation if exactly 3 of its neighbors are alive in the current generation.

2. A living cell will only stay alive in the next generation if 2 or 3 of its neighbors are alive, otherwise it dies.

To get used to cellular automatons, I reproduced the game of life in a console application written in C. The task requires creating a grid in the form of a "torus of revolution" (fig. 2.4), i.e. each pair of symmetrically opposite edges is linked together. That kind of grid is a feature that we will reuse in the upcoming flocking simulation.

Figure 2.4: The board can be seen as a torus of revolution

In short, this means that when an object leaves the screen on one side it reappears on the other, essentially creating an infinite board. Once the grid has been created, a reasonable proportion of "living" cells appear randomly. These cells will then count their living neighbors and follow the rules explained above. As explained in the first chapter, the game of life is a Category 4 cellular automaton, i.e. local structures can form and persist over time. Since the game of life is very famous, many structures have been discovered [19]. For example, periodic structures called spaceships, which move through the space of the game by copying themselves to an adjacent position after a number of iterations. There are also structures that, starting from a seed, bloom for a certain time to form explosions, flowers or complex patterns. We can also mention guns, oscillating structures capable of periodically emitting spaceships. In short, it is established that a cellular automaton is a system capable of producing a tangible simulation of a flock of birds [1].
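As a minimal illustration of these two rules and of the toroidal board, here is a sketch of one generation of the game in plain Java (the console version described above was written in C, so the names here are illustrative only).

```java
// One generation of Conway's Game of Life on a toroidal (wrap-around) grid.
// Plain-Java sketch; the report's console version was written in C.
public class GameOfLife {

    static int[][] step(int[][] grid) {
        int rows = grid.length, cols = grid[0].length;
        int[][] next = new int[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int alive = countLivingNeighbors(grid, r, c);
                if (grid[r][c] == 1) {
                    // Rule 2: a living cell survives only with 2 or 3 living neighbors.
                    next[r][c] = (alive == 2 || alive == 3) ? 1 : 0;
                } else {
                    // Rule 1: a dead cell is reborn with exactly 3 living neighbors.
                    next[r][c] = (alive == 3) ? 1 : 0;
                }
            }
        }
        return next;
    }

    static int countLivingNeighbors(int[][] grid, int r, int c) {
        int rows = grid.length, cols = grid[0].length, count = 0;
        for (int dr = -1; dr <= 1; dr++) {
            for (int dc = -1; dc <= 1; dc++) {
                if (dr == 0 && dc == 0) continue;
                // The modulo links opposite edges: the board is a torus of revolution.
                count += grid[(r + dr + rows) % rows][(c + dc + cols) % cols];
            }
        }
        return count;
    }
}
```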

2.2.2 Flocking simulation

Based on the work of Shannon [18], I tried to reproduce a flock of birds using the graphical programming tool Processing and the Java language. This object-oriented language allows us to create classes composed of parameters and methods. A common popularization is to compare a class to an everyday object, like a car. Thus the car class is a model for all the other cars that will be derived from it. These cars have wheels (parameters) and can be driven (method). In the case of our cellular automaton we will try to simplify as much as possible the parameters assigned to the cells, in this case an alive or dead state and a count of living neighbors. The state of the cell is binary: 0 or 1, dead or alive. Our system is subjected to a variant of the "Collision Avoidance", "Velocity Matching" and "Flock Centering" rules detailed in the previous simulation of a flock according to Reynolds. Indeed, the "Velocity Matching" rule is not necessary since the moving cells travel only one cell each turn, and thus at the same speed.

Setting up a cell: A cell symbolizes a member of the flock that will have to interact with its neighbors by respecting a small number of rules. This cell will be assigned a reduced area of perception, in accordance with the fact that a bird cannot coordinate with all the members of a flock, but only with a few neighbors. In our cellular automaton, this area of perception, and thus the number of possible neighbors, depends on the geometry of the board. At each iteration, this neighborhood will be explored in search of living cells. Moreover, the neighborhood is composed of two layers (see fig. 2.5): the first one (in red) is used to detect neighbors that are too close, for the collision avoidance rule, and the second one (in orange) is used to detect neighbors for the flock centering rule.

Setting up the board: Shannon established in his research that a grid with hexagonal cells allows the agents to take less rigid trajectories than a grid with square cells. Moreover, Processing allows us to easily draw polygons. Thus the grid will be composed of lines of hexagons nested in staggered (quincunx) rows, which will create an offset that will have to be taken

into account in the design of the simulation’s algorithms.

Figure 2.5: Cell’s 2-layer neighborhood

Setting up the environment: As we observed earlier with the simulation of a flock of Boids, the environment plays an important role in the simulation. Without external disturbances the swarm would find a resting position where the rules of separation and clustering are simultaneously respected. Thus it is necessary to establish a migration point that the cells will try to reach. In our simulation this point will be the mouse cursor. Below is a preview of the grid:

Figure 2.6: CA grid

Basics of the algorithm: The evolution of a cellular automaton is generational, i.e. all cells update themselves simultaneously. We can therefore create an algorithm that updates the position of a cell according to its environment and the rules established by Shannon. The choice of the best displacement is made in five steps (refer to fig. 2.7):

1. First, the cell explores its neighborhood and counts the number of living cells in its first neighborhood layer (in blue) and then in the second one (in purple). Here we have two neighbors.

2. If there are neighbors in the first layer, we record their positions and steer the cell away from them (blue arrow).

3. If there are neighbors in the second layer, we record their positions and steer the cell closer to them (purple arrow).

4. A last steering suggestion is made to move the cell closer to the migration point (red arrow).

5. The final step is to sum these suggestions to obtain a displacement sensitive to the rules of collision avoidance, flock centering and environmental disturbance (here, the need to migrate). The black cell will then be instructed to move to the grey cell.

Figure 2.7: CA Steering
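To make the five steps concrete, here is a simplified sketch in plain Java. It flattens the idea onto a square grid instead of the hexagonal Processing grid, so the helper types and the layer lists are assumptions for this example: steps 2 to 4 each contribute a steering suggestion, and step 5 sums them and quantizes the result to one of the adjacent cells.

```java
import java.util.List;

// Simplified sketch of the five-step displacement choice, on a square grid
// instead of the hexagonal Processing grid (helper names are illustrative).
class CellSteering {

    // A grid coordinate; on the real hexagonal grid the row offset
    // would have to be taken into account here.
    record Pos(int row, int col) {}

    // Returns the move as {rowDelta, colDelta}, each in {-1, 0, 1}.
    static int[] chooseMove(Pos cell, List<Pos> layer1, List<Pos> layer2, Pos migrationPoint) {
        double dx = 0, dy = 0;

        // Step 2: steer away from neighbors in the first (too close) layer.
        for (Pos p : layer1) { dx += cell.col() - p.col(); dy += cell.row() - p.row(); }

        // Step 3: steer towards neighbors in the second layer (flock centering).
        for (Pos p : layer2) { dx += p.col() - cell.col(); dy += p.row() - cell.row(); }

        // Step 4: steer towards the migration point (environmental pressure),
        // normalised so migration does not drown out the other suggestions.
        double mx = migrationPoint.col() - cell.col();
        double my = migrationPoint.row() - cell.row();
        double len = Math.max(Math.hypot(mx, my), 1e-9);
        dx += mx / len;
        dy += my / len;

        // Step 5: quantize the summed suggestion to one of the 8 adjacent cells.
        return new int[] { (int) Math.signum(dy), (int) Math.signum(dx) };
    }
}
```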

The results of this simulation will be discussed in the next section.

2.3 Particle Swarm Optimization

Let us now turn our attention to particle swarms, which use very simple displacement rules to converge towards a global extremum, hence the notion of optimization. Indeed, an optimization is the search for an extremum of a function; in our simulation we use a swarm of particles to discover the location of a randomly placed migration point [8]. At the beginning, the particles appear at random positions in the search space. Each particle's position is a potential solution to our problem, and we will try to converge these solutions to the global minimum, which is the location of our goal, the migration point. In our example, we know the position of this point, so we will trivially use a simple distance test to make the algorithm evolve. On the other hand, this global extremum will not be known for the optimization of the parameters of a neural network that we will present later. As explained in the previous chapter, the search for an optimal position is based on three concepts: the speed of the particle, the best position reached individually (Pbest) and the best global position (Gbest).

2.3.1 PSO’s Algorithm

Each particle starts its search by moving in a random direction, and then tests whether its new position is closer to the migration point than before. If so, this position will become the best personal position reached, the Pbest. Then the swarm will update, if necessary, the best global position Gbest based on the best positions reached by individual particles. Finally the speed and thus the direction is updated according to the following formula:

$$v_{i,d} = \omega v_{i,d} + \varphi_p r_p (p_{i,d} - x_{i,d}) + \varphi_g r_g (g_d - x_{i,d}) \qquad (2.4)$$

and then we can update the position of the particles:

$$x_i = x_i + v_i \qquad (2.5)$$

Let’s break down these formulas with the fig:2.8 below as a support, we can see that (2.4) is composed of 3 members [10]:

$\omega v_{i,d}$ : The inertia vector (orange arrow). $v_{i,d}$ corresponds to the current velocity of the particle and $\omega$ is the inertia weight. This weight decreases with each iteration in order to slow the particle down as it gets closer to the objective and refine the PSO's solution.

$\varphi_p r_p (p_{i,d} - x_{i,d})$ : The cognitive vector (blue arrow), based on the Pbest value (blue drop), with $\varphi_p$ a weight adjusted in order to increase or decrease the particle's personal search. $r_p$ is a random value between 0 and 1, which varies the length of the cognitive vector. Finally, $(p_{i,d} - x_{i,d})$ orients the cognitive vector towards the position of Pbest.

$\varphi_g r_g (g_d - x_{i,d})$ : The social vector (red arrow), based on the Gbest value (red drop), with $\varphi_g$ a weight adjusted in order to attract the swarm more or less strongly towards the best solution found globally. $r_g$ is a random value between 0 and 1. Finally, $(g_d - x_{i,d})$ orients the social vector towards the Gbest location.

The addition of these vectors creates a zone of potential displacement (in green), since the lengths of the three vectors vary according to $\omega$, $r_p$ and $r_g$. In (2.5) the position of the particle is updated using the velocity previously computed with (2.4).

Figure 2.8: PSO’s search strategy

A swarm of particles subjected to this algorithm is then able to converge towards a common migration point [14].
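A compact sketch of equations (2.4) and (2.5) in plain Java, applied to the two-dimensional migration example: the swarm minimizes the distance to a fixed target point. The parameter values (ω, φp, φg, swarm size, number of iterations) are illustrative choices, not the ones used in the report.

```java
import java.util.Random;

// Minimal particle swarm minimizing the distance to a fixed migration point,
// following the velocity update (2.4) and the position update (2.5).
public class PSODemo {
    static final Random RNG = new Random();
    static final double[] TARGET = { 7.5, -3.2 };     // migration point (illustrative)
    static final double OMEGA = 0.7, PHI_P = 1.5, PHI_G = 1.5;

    static double cost(double[] x) {                  // distance to the target
        return Math.hypot(x[0] - TARGET[0], x[1] - TARGET[1]);
    }

    public static void main(String[] args) {
        int n = 20, dims = 2, iterations = 100;
        double[][] x = new double[n][dims], v = new double[n][dims], pBest = new double[n][dims];
        double[] pBestCost = new double[n];
        double[] gBest = new double[dims];
        double gBestCost = Double.MAX_VALUE;

        for (int i = 0; i < n; i++) {                  // random starting positions
            for (int d = 0; d < dims; d++) x[i][d] = RNG.nextDouble() * 20 - 10;
            pBest[i] = x[i].clone();
            pBestCost[i] = cost(x[i]);
            if (pBestCost[i] < gBestCost) { gBestCost = pBestCost[i]; gBest = x[i].clone(); }
        }

        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < n; i++) {
                for (int d = 0; d < dims; d++) {
                    double rp = RNG.nextDouble(), rg = RNG.nextDouble();
                    // (2.4): inertia + cognitive term (Pbest) + social term (Gbest)
                    v[i][d] = OMEGA * v[i][d]
                            + PHI_P * rp * (pBest[i][d] - x[i][d])
                            + PHI_G * rg * (gBest[d] - x[i][d]);
                    x[i][d] += v[i][d];                // (2.5)
                }
                double c = cost(x[i]);
                if (c < pBestCost[i]) { pBestCost[i] = c; pBest[i] = x[i].clone(); }
                if (c < gBestCost)    { gBestCost = c;    gBest = x[i].clone(); }
            }
        }
        System.out.printf("Best position found: (%.3f, %.3f)%n", gBest[0], gBest[1]);
    }
}
```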

2.3.2 Use in training a neural network

A neural network (NN) is used to solve problems that are not linearly separable, like the XOR logic [4]. A NN is organized in several layers of interconnected neurons; there are several types of network configurations, but in our application we will use a fully connected multilayer perceptron, whose dimensions depend on the problem to be solved. Here, solving the XOR logic requires 3 layers: an input, a hidden and an output layer. The first layer is composed of as many neurons as there are inputs; for the XOR logic we have two inputs, which take the values 0 or 1. As another example, for the use of a NN in image processing we would have as many input neurons as there are pixels in the image. The number of neurons in the hidden layer depends on Kolmogorov's law [7]:

hidden = 2n + 1, where n is the number of inputs. (2.6)

Referring to the diagram below (fig. 2.9), each neuron of the hidden and output layers is assigned a weighted sum of its inputs plus a bias. This sum is then fed into an activation function, which shapes the output. In our example we use the sigmoid function, which maps the sum to an output in the range ]0 ; 1[.

Figure 2.9: I/O of a single neuron

This type of system is trained to solve a specific task with the help of examples for which the desired input and output are known. In the case of the XOR logic, it is known that the output must be 1 if and only if the two inputs are different from each other. In the case of animal and object image recognition, labeled image banks are therefore used to train the NNs. During the training phase the network configuration is tested and the parameter values are corrected according to the performance of the network. Here we find the analogy with the brain's neurons since, for example, when recognizing a cat, attributes such as whiskers will be associated with the activation of a group of neurons, which simulates associative memory. On the other hand, an NN can be trained to recognize handwritten digits at a very acceptable success rate (95%) [11], but if a cat image is submitted to it, this NN will be certain that the image represents, for example, a 4. An NN will only perform the task for which it has been trained, and will only output responses related to that task. Finally, our network will have the following structure:

Figure 2.10: XOR neural network layout

Each line of color represents the transfer of a weighted value. The output results from the following calculation (refer to fig. 2.10 above for the colors):

Figure 2.11: XOR Output Computation
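As a complement to fig. 2.11, here is a minimal sketch of the forward computation for the 2-5-1 layout of fig. 2.10, written in plain Java; it is not taken from the report's own code, so the names and array shapes are assumptions for this example. The parameter count matches the 21 dimensions used later: 10 hidden weights, 5 hidden biases, 5 output weights and 1 output bias.

```java
// Forward pass of the 2-5-1 XOR network (fig. 2.10 / 2.11) in plain Java.
// 21 parameters in total: 10 + 5 for the hidden layer, 5 + 1 for the output layer.
public class XorNetwork {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // hiddenW[j][k]: weight from input k to hidden neuron j; outW[j]: hidden j to output.
    static double forward(double[] input, double[][] hiddenW, double[] hiddenB,
                          double[] outW, double outB) {
        double[] hidden = new double[hiddenW.length];
        for (int j = 0; j < hidden.length; j++) {
            double sum = hiddenB[j];
            for (int k = 0; k < input.length; k++) sum += hiddenW[j][k] * input[k];
            hidden[j] = sigmoid(sum);                 // weighted sum fed to the activation
        }
        double out = outB;
        for (int j = 0; j < hidden.length; j++) out += outW[j] * hidden[j];
        return sigmoid(out);                          // single output in ]0 ; 1[
    }
}
```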

We can therefore see that with our architecture, two inputs eventually produce a single output, and that all the weights and biases of the network influence this output. The classical learning method for an NN is error backpropagation [12]: we submit a known dataset to the network and, by comparing the output produced to the expected output, we can compute the error committed by the system. One can then deduce which weights and biases are most responsible for the error and correct them little by little. We can imagine the configuration of the NN as a point evolving in a 21-dimensional space (the number of parameters) that is slowly steered towards the optimal configuration.

Now we will experiment with a different learning method, particle swarm optimization. As a reminder, the PSO is an algorithm which, using a swarm of particles, is able to approximate the extremum of a function. Here our function is the error committed by the NN. We will assign to each particle of a swarm of 20 a set of 21 random parameter values, and then subject the NN to a known dataset. The NN will take as parameters the coordinates of each particle in turn, and the resulting output will be used to assess the efficiency of each configuration, i.e. to compute an error to be minimized. This way, the swarm of particles gradually moves in a 21-dimensional solution space to approximate the optimal configuration for our problem [2]. We will comment on the results of this experiment in the next chapter.
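To connect this with the previous sketches: a particle's 21-dimensional position can be unpacked into the weights and biases of the XOR network and scored on the four known examples; this cost is what the swarm minimizes in place of backpropagation. The unpacking order and the mean-squared-error choice are assumptions made for this illustration (it reuses the forward method sketched above).

```java
// Cost function used when training the XOR network with the PSO: a particle's
// 21-dimensional position is unpacked into weights and biases, then scored
// on the four known XOR examples (mean squared error to be minimized).
public class XorFitness {
    static final double[][] INPUTS  = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
    static final double[]   TARGETS = {    0,      1,      1,      0  };

    static double cost(double[] particle) {            // particle.length == 21
        double[][] hiddenW = new double[5][2];
        double[] hiddenB = new double[5], outW = new double[5];
        int p = 0;
        for (int j = 0; j < 5; j++)                     // 10 hidden weights
            for (int k = 0; k < 2; k++) hiddenW[j][k] = particle[p++];
        for (int j = 0; j < 5; j++) hiddenB[j] = particle[p++];   // 5 hidden biases
        for (int j = 0; j < 5; j++) outW[j]    = particle[p++];   // 5 output weights
        double outB = particle[p];                                 // 1 output bias

        double error = 0;
        for (int i = 0; i < INPUTS.length; i++) {
            double y = XorNetwork.forward(INPUTS[i], hiddenW, hiddenB, outW, outB);
            error += (y - TARGETS[i]) * (y - TARGETS[i]);
        }
        return error / INPUTS.length;                   // value the swarm tries to minimize
    }
}
```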

Chapter 3

Outcome of the study

3.1 Results of the simulations and the application

3.1.1 Flock simulation using Reynolds "Boids theory"

As a reminder, the first simulation presented in this report is based on the rules identified by Reynolds in 1986, namely "Collision Avoidance", "Velocity Matching" and "Flock Centering", as well as environmental pressures. To create the simulation I used the Unity game engine and subjected a flock of 3D objects to Reynolds's rules. In fig. 3.1 we can see the effect of only two mechanisms, the Flock Centering and Collision Avoidance rules: the flock spontaneously and homogeneously accumulates around the center of gravity of the group and stays there for lack of environmental pressure. In fig. 3.2 the flock is subjected only to the Velocity Matching rule, and we can see that all the Boids move in the same direction; indeed, the velocity combines the notions of speed and direction of movement.


Figure 3.1: Flock Centering and Collision Avoidance implementation

Figure 3.2: Velocity Matching rule implementation

The space in which the flock moves is pseudo-infinite: when a particle crosses a wall, it is teleported (while keeping its trajectory) to the symmetrically opposed wall. In addition, I gave each Boid a fourth acceleration vector oriented towards the center of the space; this acceleration nudges the flock so that it tends to stay in the visible space. Once the three Reynolds rules are applied to the system (fig. 3.3), we see, as expected, a swarm-like behavior that results from the individual interactions between particles.

Figure 3.3: Flock of Boids

3.1.2 Cellular Automata

Like many, I grasped the notion of cellular automatons with Conway's game of life. In the image below we can see a grid with living or dead cells scattered across it; the number in the centre of each cell represents the number of neighboring living cells. Once again, the board is infinite in the manner of a torus of revolution. This game of life is coded in C as a console application (fig. 3.4). Later, to create the simulation of a flock of birds, I turned to Processing (Java) in order to achieve a better graphic rendering.

Figure 3.4: Conway’s Game of life 28 Chapter 3. Outcome of the study

Below is a capture of the cellular automaton simulating the migration of a flock of birds (fig. 3.5). The grid is composed of hexagonal cells in accordance with Shannon's experiments. The migration point (in red) follows the movement of the mouse, and we can observe the flock respecting the Collision Avoidance and Flock Centering rules (similar to those of Reynolds) during its movements. In this application, using a grid of cells restricts the possibilities of movement, and some situations may impose equal forces in opposite directions on a cell, forcing it to remain static. This is a relatively rare problem in the previous simulation due to the presence of a third dimension. There are various solutions to this problem, such as removing one of the rules (Flock Centering) or increasing the cell neighborhood from two to three levels so that the flock centering and collision avoidance rules apply on different neighborhood layers, with a buffer layer in between, to avoid conflicts. With these solutions a swarm-like migration behavior can be observed using a cellular automaton.

Figure 3.5: Cellular automaton: migration of a flock of cells

3.1.3 Particle Swarm Optimization

The final simulation concerns the foraging of a flock of birds, which inspired the creation of the particle swarm optimization detailed in the previous chapter. We can see in fig. 3.6 that the particles build their displacement vector (in green) by adding other vectors (yellow, blue and red) that correspond to the three terms of the velocity update equation (2.4). All the green arrows point in the general direction of the target in red, suggesting their future convergence towards it.

Figure 3.6: Particle system’s displacement

In the fig.3.7 we can see that the flock progressively approximates the position of the objective, with an error that decreases with successive iterations. The search is stopped after a certain number of iterations or when it is estimated that the objective has been found with a given precision. In this test the flock found the migration point in 45 iterations and we can see that the approximate coordinates are very close to the solution. This performance is dependent on particle velocity, the number of particles and the number of dimensions of the problem. Their values should therefore not be left to random selection and are found empirically [7].

Figure 3.7: Optimizing with a particle system

3.1.4 PSO’s use in training a neural network

Similarly, when training a neural network using the PSO, we can see that as the iterations go by, the average error between the network's output and the expected output decreases.

In the image below (fig. 3.8), the cube on the right is used to visualize the progress of the NN's learning: the outputs of the network are projected onto the edges of the cube. We can see for example that for the input pair [0, 0] the NN outputs a value close to 0 (in blue), in accordance with the XOR logic. On the other hand, the pair [0, 1] produces a value close to 1 (in red), and we know that two inputs different from each other must output a 1 in XOR logic.


Figure 3.8: Neural network trained to solve the XOR logic

To achieve these results, the weights and biases of the network evolved with the PSO algorithm, until the values below (fig.3.9) were reached at the 66th iteration.

Figure 3.9: Neural network: Bias and weights values

Thus, with this configuration and the "transfer function" of an NN detailed earlier (fig. 2.11), the NN is trained to reproduce the XOR logic.

If you want to experiment with these simulations yourself, you can head over to my website, where you will also find their source code in JavaScript using p5.js.

Conclusion

3.2 Evaluation

Through this report we established that the behavior of a complex system such as a flock of birds is in fact governed by simple interactions between members of the system. Indeed, they obey only a few simple laws to move and interact with each other, creating a system with emergent behaviors. Thanks to Reynolds's work we found that a good approximation for simulating a flock of birds could be achieved using only three rules and some environmental pressure. Then, we were able to familiarize ourselves with the notion of cellular automatons with Conway's game of life and with the realization of a simulation based on Shannon's work. Finally, we approached the optimization capability of a particle swarm using the publication of Kennedy and Eberhart, by creating a visualization of the progression of this algorithm and applying this technique to the training of a neural network. In conclusion, producing this report was very valuable: the diversity of the new topics covered brought challenging and exciting difficulties which were interesting to overcome and left me with valuable knowledge.

3.2.1 Future work

This report paves the way for several improvements in my simulations and, especially, for the resolution of countless non-linear problems using a neural network and the PSO learning algorithm. For example, the Unity simulation of Reynolds's Boids could be used to help a flock travel from one point to another with obstacles in the way. The flock would therefore have to perform avoidance maneuvers without any human supervision in order to reach its objective. In the same way, the visualization of the PSO algorithm could be used to find the shortest trajectory for a displacement problem, for example to compute the best path for a robotic manipulator in industry. The PSO can also be used to optimize the parameters of a multi-variable problem, as was done in neural network training. The problems that can be solved using a NN are numerous and are at the source of artificial intelligence and machine learning, which suggests a very large number of applications.

Summary — In this bibliographic report, we will focus on the simulation of flocks of birds, schools of fish or any system capable of demonstrating emergent behaviors. We will use theoretical approaches such as Reynolds's Boid theory, Shannon's cellular automata model or Kennedy's particle swarms to create autonomous simulations. In fact, we will discover that the creation of a Unity simulation of a flock of birds in 3D can be achieved with only three rules and some environmental pressure. We will also learn about cellular automata, starting with Conway's game of life and progressing to the creation of our own cellular automaton to simulate flock behaviors. Next, we will discuss an optimization technique using particle swarms that was originally based on the behavior of a flock foraging for food. Finally, we will visualize this optimization technique through a simulation and then use it to train a neural network.

Key words: Complex systems, Emergent behaviors, Boids, Cellular automata, Particle swarm optimization, Neural networks

Bibliography

[1] DB Bahr and M Bekoff. "Predicting flock vigilance from simple passerine interactions: modelling with cellular automata". In: Animal Behaviour 58.4 (Oct. 1999), pp. 831–839. issn: 0003-3472. doi: 10.1006/anbe.1999.1227. url: https://doi.org/10.1006/anbe.1999.1227.

[2] Marcio Carvalho and Teresa Bernarda Ludermir. "Particle Swarm Optimization of Neural Network Architectures and Weights". In: 2007.

[3] Colin W. Clark and Marc Mangel. "Foraging and Flocking Strategies: Information in an Uncertain Environment". In: The American Naturalist 123.5 (1984), pp. 626–641. doi: 10.1086/284228.

[4] Siddhartha Dutta. Implementing the XOR Gate using Backpropagation in Neural Networks. 2019. url: https://towardsdatascience.com/implementing-the-xor-gate-using-backpropagation-in-neural-networks-c1f255b4f20d.

[5] Gary Flake. The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation. Vol. 3. June 2000. isbn: 0262561271. doi: 10.2307/2589369.

[6] Mathematical Games. "The fantastic combinations of John Conway's new solitaire game "life" by Martin Gardner". In: Scientific American 223 (1970), pp. 120–123.

[7] Haza Nuzly Abdull Hamed, Siti Mariyam Shamsuddin, and Naomie Salim. "Particle Swarm Optimization For Neural Network Learning Enhancement". In: Jurnal Teknologi 49 (Dec. 2008), pp. 13–26. doi: 10.11113/jt.v49.194.

[8] Xiaohui Hu. PSO Tutorial. 2006. url: http://www.swarmintelligence.org/tutorials.php.

[9] J. Kennedy and R. Eberhart. "Particle swarm optimization". In: Proceedings of ICNN'95 - International Conference on Neural Networks. Vol. 4. 1995, pp. 1942–1948.

[10] Yong-Hyuk Kim, Kang Hoon Lee, and Yourim Yoon. "Visualizing the search process of particle swarm optimization". In: July 2009, pp. 49–56. doi: 10.1145/1569901.1569909.


[11] S. Knerr, L. Personnaz, and G. Dreyfus. "Handwritten digit recognition by neural networks with single-layer training". In: IEEE Transactions on Neural Networks 3.6 (1992), pp. 962–968.

[12] Kamil Krzyk. Coding Deep Learning for Beginners — Linear Regression (Part 2): Cost Function. 2018. url: https://towardsdatascience.com/coding-deep-learning-for-beginners-linear-regression-part-2-cost-function-49545303d29f.

[13] James Ladyman, James Lambert, and Karoline Wiesner. "What is a complex system?" In: European Journal for Philosophy of Science 3 (June 2013). doi: 10.1007/s13194-012-0056-8.

[14] Gonçalo Pereira. "Particle Swarm Optimization". In: (May 2011).

[15] Paul Rendell. Turing Machine implemented in Conway's Game of Life. 2015. url: http://rendell-attic.org/gol/tm.htm.

[16] Craig Reynolds. "Flocks, Herds and Schools: A Distributed Behavioral Model". In: Computer Graphics 21 (Jan. 1987), pp. 25–34. doi: 10.1145/37402.37406.

[17] Craig Reynolds. "Interaction with Groups of Autonomous Characters". In: Game Developers Conference 21 (July 2000).

[18] James Shannon. "Exploring the real world applications of cellular automata and its application to the simulation of flocking behaviour". In: 2013.

[19] Wikipedia. Conway's Game of Life. 2020. url: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Examples_of_patterns.

[20] Edward O. Wilson. Sociobiology: The New Synthesis. Belknap Press, Cambridge, MA, 1975.

Polytech Angers 62, avenue Notre Dame du Lac 49000 Angers