
Differentiable Agent-Based Simulation for Gradient-Guided Simulation-Based Optimization

Philipp Andelfinger
University of Rostock, Institute for Visual and Analytic Computing, Rostock, Germany

ABSTRACT
Simulation-based optimization using agent-based models is typically carried out under the assumption that the gradient describing the sensitivity of the simulation output to the input cannot be evaluated directly. To still apply gradient-based optimization methods, which efficiently steer the optimization towards a local optimum, gradient estimation methods can be employed. However, many simulation runs are needed to obtain accurate estimates if the input dimension is large. Automatic differentiation (AD) is a family of techniques to compute gradients of general programs directly. Here, we explore the use of AD in the context of time-driven agent-based simulations. By substituting common discrete model elements such as conditional branching with smooth approximations, we obtain gradient information across discontinuities in the model logic. On the example of microscopic traffic models and an epidemics model, we study the fidelity and overhead of the differentiable models, as well as the convergence speed and solution quality achieved by gradient-based optimization compared to gradient-free methods. In traffic signal timing optimization problems with high input dimension, the gradient-based methods exhibit substantially superior performance. Finally, we demonstrate that the approach enables gradient-based training of neural network-controlled simulation entities embedded in the model logic.

[Figure 1: Our approach: by substituting agent model elements with differentiable counterparts, automatic differentiation can be used to enable gradient-based optimization. (a) Gradient-free simulation-based optimization: the simulation model's per-agent update logic maps an input X to an output f(X) consumed by a gradient-free optimization method. (b) Gradient-based simulation-based optimization: the operations of the differentiable model are recorded, and automatic differentiation supplies ∇f(X) to a gradient-based optimization method.]

1 INTRODUCTION
Simulation-based optimization comprises methods to determine a simulation input parameter combination that minimizes or maximizes an output statistic [9, 30], with applications in a vast array of domains such as supply chain management [35], transportation [49], building planning [62], and health care [64]. The problem can be viewed as a special case of mathematical optimization in which an evaluation of the objective function is reflected by the execution of one or more simulation runs. Many mathematical optimization methods evaluate not only the objective function itself but also its partial derivatives to inform the choice of the next candidate solution. Given a suitable initial guess, gradient-based methods efficiently steer the optimization towards a local optimum, with provable convergence under certain conditions [53].

In contrast, simulation-based optimization using agent-based models usually relies either on surrogate models, which typically abandon the individual-based level of detail of the original model [2], or on gradient-free methods such as genetic algorithms [8]. While gradient-free simulation-based optimization is a time-tested approach, the hypothesis underlying the present paper is that the targeted local search carried out by gradient-based methods may achieve faster convergence or higher-quality solutions for certain agent-based models. An existing method to obtain gradients is Infinitesimal Perturbation Analysis (IPA) [27], which the literature applies by determining derivative expressions by a manual model analysis (e.g., [11, 20, 31]), limiting its applicability to relatively simple models. Alternatively, gradients can be estimated based on finite differences. However, the number of required simulation runs grows linearly with the input dimension, rendering this approach prohibitively expensive in non-trivial applications.
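As a concrete illustration of this cost, the following sketch (our illustration, not code from the paper; the simulate function, step size, and toy inputs are hypothetical) estimates the gradient of a scalar simulation output by forward finite differences, which requires one additional simulation run per input dimension:

```python
import numpy as np

def finite_difference_gradient(simulate, x, h=1e-4):
    # One baseline run plus one perturbed run per input dimension,
    # i.e., len(x) + 1 simulation runs in total.
    base = simulate(x)
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        x_perturbed = x.copy()
        x_perturbed[i] += h
        grad[i] = (simulate(x_perturbed) - base) / h
    return grad

# Toy stand-in for a simulation run: the output statistic is a sum of squares.
def toy_simulate(x):
    return float(np.sum(x ** 2))

print(finite_difference_gradient(toy_simulate, np.array([1.0, 2.0, 3.0])))
```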
A similar problem, as well as an efficient solution, exists in the field of deep learning, where the ability to optimize neural networks with millions of parameters within tolerable timeframes rests on gradient information determined using the backpropagation algorithm [54]. Backpropagation is a special case of automatic differentiation, a family of methods to compute derivatives of computer programs written in general-purpose programming languages [45].

In this paper, we explore the use of automatic differentiation for gradient-based optimization of agent-based simulations. The main obstacle towards this goal is given by discontinuous model elements, which are considered among the constituent features of agent-based models [7]. To enable differentiation across such elements, we propose substitutes constructed from known smooth approximations of basic operations (cf. Fig. 1). In contrast to classical surrogate modeling approaches, our approach retains the per-agent logic of the original model. The resulting agent-based models are differentiable regarding some or all model aspects, depending on the degree to which discontinuous elements are substituted. We refer to this approach as differentiable agent-based simulation.
To evaluate the approach, we implement models from the transportation and epidemics domains in a differentiable fashion and study 1. the fidelity of the results as compared to purely discrete reference models, 2. the overhead introduced by relying on smooth model building blocks, and 3. the relative convergence behavior and solution quality in simulation-based optimization problems as compared to gradient-free methods. To further showcase the potential of the approach, we extend the traffic simulation model by embedding neural network-controlled traffic signals in the differentiable model logic, which enables their training using gradient-based methods.

Our main contributions are as follows:
• Differentiable agent-based simulation, i.e., the construction of agent-based models from differentiable building blocks to enable automatic differentiation and gradient-based optimization. An initial set of building blocks to construct differentiable model implementations is presented.
• Differentiable model implementations based on well-known models from the literature to demonstrate the approach. We measure the fidelity and performance compared to discrete reference implementations.
• Comparison of convergence and solution quality in a simulation-based traffic signal timing optimization using gradient-free and gradient-based methods.
• Embedding and training of neural network-controlled entities, demonstrating their training based on simulation gradient information on the example of dynamic traffic lights.

The remainder of the paper is structured as follows: In Section 2, we briefly introduce automatic differentiation, which forms the basis for our approach. In Section 3, the concept of differentiable agent-based simulation is introduced and building blocks are provided to construct differentiable models. In Section 4, we describe the differentiable models we implemented to demonstrate the approach. In Section 5, experiment results are presented to evaluate the performance and fidelity of the model implementations as well as the benefits of our approach in simulation-based optimization problems. In Section 6, we discuss limitations of the approach and various directions for future research. Section 7 describes related work.

2 AUTOMATIC DIFFERENTIATION
[...] In forward-mode AD, a separate pass over the program is required per input variable. Conversely, reverse-mode AD computes the derivatives of one output variable w.r.t. arbitrarily many inputs in a single pass. Given our use case of simulation-based optimization, where we expect the input dimension to be larger than the output dimension, the remainder of the paper will rely on reverse-mode AD.

During the execution of the program, the computational operations and intermediate results are recorded in a graph. At termination, the graph is traversed in reverse, starting from the output. The chain rule is applied at each operation to update the intermediate derivative calculation. When arriving at an input variable, the computed value is the partial derivative of the simulation output w.r.t. the respective input.

[Figure 2: Simple example function sin(x²y). Automatic differentiation propagates values reflecting the sensitivities of the intermediate results w.r.t. the inputs along the nodes v_i.]

We give a brief example of reverse-mode AD of a program implementing the function $f(x, y) = \sin(x^2 y)$ (cf. Figure 2). The intermediate results during execution (denoted by $v_i$) and during reverse-mode AD (denoted by $\bar{v}_i$) are as follows:

$$
\begin{aligned}
v_1 &= x, & \bar{v}_5 &= 1,\\
v_2 &= y, & \bar{v}_4 &= \tfrac{\partial v_5}{\partial v_4}\,\bar{v}_5 = \cos(v_4) \cdot 1 = \cos(x^2 y),\\
v_3 &= v_1^2 = x^2, & \bar{v}_3 &= \tfrac{\partial v_4}{\partial v_3}\,\bar{v}_4 = v_2\,\bar{v}_4 = y \cos(x^2 y),\\
v_4 &= v_3 v_2 = x^2 y, & \bar{v}_2 &= \tfrac{\partial v_4}{\partial v_2}\,\bar{v}_4 = v_3\,\bar{v}_4 = x^2 \cos(x^2 y),\\
v_5 &= \sin(v_4) = \sin(x^2 y), & \bar{v}_1 &= \tfrac{\partial v_3}{\partial v_1}\,\bar{v}_3 = 2 v_1\,\bar{v}_3 = 2xy \cos(x^2 y).
\end{aligned}
$$

The partial derivatives of the function are $\partial f / \partial x = \bar{v}_1 = 2xy \cos(x^2 y)$ and $\partial f / \partial y = \bar{v}_2 = x^2 \cos(x^2 y)$.
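The same derivatives can be reproduced with an off-the-shelf reverse-mode AD tool; the short check below uses PyTorch's autograd purely as an illustration (the paper itself does not prescribe this particular library):

```python
import torch

x = torch.tensor(1.5, requires_grad=True)
y = torch.tensor(0.5, requires_grad=True)

f = torch.sin(x ** 2 * y)  # forward pass: the operations (v_3, v_4, v_5) are recorded
f.backward()               # reverse pass: the adjoints v_bar_i are accumulated

# Compare against the closed-form derivatives from the example above.
print(x.grad.item(), (2 * x * y * torch.cos(x ** 2 * y)).item())  # 2xy cos(x^2 y)
print(y.grad.item(), (x ** 2 * torch.cos(x ** 2 * y)).item())     # x^2 cos(x^2 y)
```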
Mature implementations of AD are available for programming languages such as C and C++ [22, 29], Java [56], and Julia [34]. Modern AD tools rely on expression templates [29] or source-to-source transformation [34] to generate efficient differentiation code. The backpropagation algorithm used to train neural networks is a special case of AD [3], where the computation [...]