
Error Cascades in Collective Behavior: A Case Study of the Gradient Algorithm on 1000 Physical Agents

Melvin Gauci* (Harvard University, Cambridge, MA, USA, [email protected])
Monica E. Ortiz* (Harvard University, Cambridge, MA, USA, [email protected])
Michael Rubenstein (Northwestern University, Evanston, IL, USA, [email protected])
Radhika Nagpal (Harvard University, Cambridge, MA, USA, [email protected])

*Co-first authors.

Appears in: Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017), S. Das, E. Durfee, K. Larson, M. Winikoff (eds.), May 8-12, 2017, São Paulo, Brazil.

ABSTRACT

The gradient, or hop count, algorithm is inspired by natural phenomena such as the morphogen gradients present in multi-cellular development. It has several applications in multi-agent systems and sensor networks, serving as a basis for self-organized coordinate system formation and for finding shortest paths for message passing. It is a simple and well-understood algorithm in theory. However, we show here that in practice it is highly sensitive to specific rare errors that emerge at larger scales. We implement it on a system of 1000 physical agents (Kilobot robots) that communicate asynchronously via a noisy wireless channel. We observe that spontaneous, short-lasting rare errors made by a single agent (e.g. due to message corruption) propagate spatially and temporally, causing cascades that severely hinder the algorithm's functionality. We develop a mathematical model for temporal error propagation and validate it with experiments on 100 agents. This work shows how multi-agent algorithms that are believed to be simple and robust from theoretical insight may be highly challenging to implement on physical systems. Systematically understanding and quantifying their current limitations is a first step in the direction of improving their robustness for implementation.

Figure 1: The gradient algorithm running on 1000 physical agents (Kilobot robots, inset). Increasing gradient values are represented by the repeating cycle of colors red, magenta, blue, cyan, green, and yellow.

CCS Concepts

• Computing methodologies → Multi-agent systems; • Hardware → Transient errors and upsets; System-level fault tolerance; • Networks → Network performance modeling;

Keywords

Complex systems; spatio-temporal error propagation; multi-agent systems in hardware; reality gap; Kilobots

1. INTRODUCTION

The biological world abounds with self-organizing systems where large numbers of relatively simple agents use local interactions to produce impressive global behaviors, such that the system as a whole is greater than the sum of its parts. These systems exhibit several properties that are highly desirable from an engineering perspective, such as robustness, scalability, and flexibility [3, 4, 12]. Presently, however, there exists a substantial gap between the large body of literature on mathematical and computational models of self-organizing systems and the small body of work on realizable physical systems that could verify the predictions made by such models. These models necessarily involve simplifications, and can fail to manifest emergent behaviors that arise through the intricate physical interactions and variability that exist in real systems. As such, their utility is limited if their fidelity to the physical world remains untested. This leaves a wide gap in our scientific understanding: we critically need physical experimental systems to discover and develop effective mathematical tools for predicting how such complex systems will scale, adapt, or fail.
Copyright © 2017, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

In this paper we take a well-known collective behavior, the gradient algorithm, and study how it behaves in a physical system as we increase the size of the collective (Figure 1). This algorithm is inspired by biological phenomena such as the morphogen gradients present in multi-cellular development [13]. It is used as a component behavior for many higher-level collective behaviors, such as coordinate system formation [8], message routing and programming in sensor networks [3, 5, 6], and self-assembly in modular robots [3, 10, 11, 12]. Unlike many other collective behaviors, it is well understood analytically in models of multi-agent systems and has attractive self-stabilizing features [7, see asynchronous breadth-first search]. While theoretical models incorporate some non-idealities (e.g. asynchrony, message loss, agent failure/removal), how the algorithm behaves in real large-scale systems has not yet been tested.

We tested the basic gradient algorithm using the Kilobot robot [9], which allows us to create networks of 10 to 1000 agents that communicate wirelessly with their local neighbors. We observed the emergence of error cascades, where a single, short-lived agent error is amplified by correct application of the algorithm to cause large-scale errors and instabilities in the system. Small-scale experimental systems behave as predicted by analytical models; however, as the network size goes up, the system shows a high level of instability, and a 1000-agent network never achieves a stable gradient. We show that these cascades are caused by rare errors, not previously considered in analytical models, that exploit a vulnerability of this algorithm.
The rare error is amplified by an agent's neighbors, resulting in an error 'infection' that rapidly spreads both temporally and spatially in a way that is exponentially related to system scale. These error cascades/infections occur in small systems (e.g. 100 agents) but are quickly self-corrected. However, as the system size increases, the entire system exhibits instability due to amplification and overlapping of cascades, a phenomenon which is counter to expectations from previous analytical models. We present a mathematical model that demonstrates what properties a rare error must have to create a cascade in the gradient algorithm, and how local connectivity amplifies the errors spatially while local asynchrony amplifies them temporally. Using this model, we are able to predict the probability of a particular agent being in error as a function of both the probability of such rare errors and the network size. We validate our new model with controlled experiments where errors are purposefully injected into a simple line network.

Our experimental case study highlights the need for intermediate systems that can close the gap between existing theory and large-scale implementation by uncovering emergent behaviors and providing a basis for new theory models.

2. AGENT AND GRADIENT MODEL

Consider a network of spatially distributed agents, in which each agent can communicate with its neighbors within a fixed distance threshold [2, 3, 5, 6]. Within this context, the gradient algorithm assumes the presence of a designated seed agent (see Figure 2, right) from which the gradient emanates. The seed agent has a fixed gradient value of zero. Each non-seed agent computes its estimated 'distance' from the seed agent in the form of a hop count: the agent considers its neighbors within a limited visibility range, and computes its own gradient value as the minimum gradient value among them, plus one. This value represents the minimum number of hops that a message originating from the seed must make in order to reach that agent.

In an ideal scenario, each agent continuously transmits the minimum of the gradient values received from its neighbors, plus one. Under these conditions, the algorithm is provably correct, and converges to a stable equilibrium in linear time with respect to the network's diameter. It is also self-repairing; that is, if there is a change in the network topology (e.g., in the location of the seed), the system re-converges in finite time to correctly reflect this change [7].

In a practical scenario, such as a wireless sensor network, agents communicate with each other by passing discrete messages and do not operate in lock step (i.e., they are asynchronous) [5, 6]. Under these conditions, the gradient algorithm is typically implemented with agents having a polling period during which they collect as many neighbor messages as possible. During each period, an agent collects messages from its neighbors, and transmits its gradient value as computed at the end of the previous period. The period duration is chosen to be large enough to minimize the chance of message loss (i.e. of not receiving any message from a neighbor in a period), but not so large that it slows down the gradient formation time. If an agent receives no messages (from any neighbor) during a period, it retains its gradient value from the previous period. The algorithm as used in this paper is presented below:

    own_value = random number (≥ 1)
    min_neighbor_value = ∞
    last_updated = timer
    while (1) do
        if message_received = 1 then
            if neighbor_value < min_neighbor_value then
                min_neighbor_value = neighbor_value
            end
            message_received = 0
        end
        if timer > last_updated + PERIOD then
            if min_neighbor_value < ∞ then
                own_value = min_neighbor_value + 1
                min_neighbor_value = ∞
            end
            last_updated = timer
        end
    end

Algorithm 1: The gradient algorithm as implemented in this work. It is assumed that a "timer" variable is available that increases in the background at a known, fixed rate. It is also assumed that whenever a message from a neighbor arrives, an interrupt handler stores the "neighbor_value" and sets the "message_received" flag to 1.

3. THE KILOBOT ROBOT

We implemented the gradient algorithm using the Kilobot robot [9] as a physical agent. The Kilobot is a low-cost robot designed to be operated in large collectives, with no need for individual attention. Here we use the Kilobot as a static agent in a wireless sensor network, and will therefore only describe its communication capabilities. Kilobots communicate with each other via an infrared transceiver located at the bottom of the robot, with a communication distance of up to three body lengths.
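To make the polling-period logic of Algorithm 1 concrete, the following is a minimal, self-contained C simulation of the algorithm on an idealized line network. It is an illustrative sketch under simplifying assumptions, not the authors' Kilobot implementation: the line topology, synchronous tick-based timing, loss-free message delivery, and the constants N_AGENTS, PERIOD, and N_TICKS are all hypothetical choices made for the example.

    /*
     * Minimal simulation of Algorithm 1 on a line network of agents.
     * Illustrative sketch only: topology, timing, and message delivery
     * are idealized, unlike the asynchronous, lossy Kilobot setting.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <limits.h>

    #define N_AGENTS 10       /* agents 0..N-1 in a line; agent 0 is the seed */
    #define PERIOD   5        /* polling period, in simulation ticks */
    #define N_TICKS  200      /* total simulation length */
    #define INF      INT_MAX  /* stands in for the "infinity" of Algorithm 1 */

    typedef struct {
        int own_value;          /* gradient value transmitted to neighbors */
        int min_neighbor_value; /* minimum value heard in the current period */
        int last_updated;       /* tick at which the current period started */
    } agent_t;

    int main(void) {
        agent_t a[N_AGENTS];

        /* Initialization, as in Algorithm 1: non-seed agents start with a
         * random value >= 1; the seed is fixed at 0. */
        for (int i = 0; i < N_AGENTS; i++) {
            a[i].own_value = (i == 0) ? 0 : 1 + rand() % 100;
            a[i].min_neighbor_value = INF;
            a[i].last_updated = 0;
        }

        for (int t = 1; t <= N_TICKS; t++) {
            /* Message phase: every tick, each non-seed agent hears its
             * immediate line neighbors (no loss, unlike real hardware). */
            for (int i = 1; i < N_AGENTS; i++) {
                int left  = a[i - 1].own_value;
                int right = (i + 1 < N_AGENTS) ? a[i + 1].own_value : INF;
                int heard = (left < right) ? left : right;
                if (heard < a[i].min_neighbor_value)
                    a[i].min_neighbor_value = heard;
            }

            /* Update phase: at the end of each polling period, adopt the
             * minimum heard value plus one, then reset for the next period. */
            for (int i = 1; i < N_AGENTS; i++) {
                if (t > a[i].last_updated + PERIOD) {
                    if (a[i].min_neighbor_value < INF) {
                        a[i].own_value = a[i].min_neighbor_value + 1;
                        a[i].min_neighbor_value = INF;
                    }
                    a[i].last_updated = t;
                }
            }
        }

        /* After convergence, agent i in the line should report value i. */
        for (int i = 0; i < N_AGENTS; i++)
            printf("agent %d: gradient %d\n", i, a[i].own_value);
        return 0;
    }

In this synchronous, loss-free setting the simulation converges so that agent i reports gradient value i, matching the ideal behavior described in Section 2; the cascades studied in this paper arise precisely from the asynchrony, message corruption, and rare errors that such a simplification leaves out.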