
8 STOCHASTIC SIMULATION

Whereas in optimization we seek a set of parameters \vec{x} to minimize a cost or maximize a reward function J(\vec{x}), here we pose a related but different question. Given a system S, we wish to understand how variations in the defining parameters \vec{x} lead to variations in the system output. We focus on the case where \vec{x} is a set of random variables that can be considered unchanging: they are static. In the context of robotic systems, these unknown parameters could be masses, stiffnesses, or geometric attributes. How does the system behavior depend on variations in these physical parameters? Such a calculation is immensely useful, because real systems have to be robust against modeling errors.

At the core of this question, the random parameters x_i in our discussion are described by distributions; for example, each could have a pdf p(x_i). If a variable is known to be normally or uniformly distributed, then of course it suffices to specify the mean and variance, but in the general case more information may be needed.

8.1 Monte Carlo Simulation

Suppose that we make N simulations, each time drawing the needed random parameters x_i from a random-number "black box" (about which we will give more details in the next section). We define the high-level output of our system S to be g(\vec{x}). For simplicity, we will say that g(\vec{x}) is a scalar. g(\vec{x}) can be virtually any output of interest, for example: the value of one state at a given time after an impulsive input, or the integral over time of the trajectory of one of the outputs, with a given input. In what follows, we will drop the vector notation on x for clarity.

Let the estimator G of g(x) be defined as

G = \frac{1}{N} \sum_{j=1}^{N} g(x_j).

You will recognize this as a straight average. Indeed, taking the expectation on both sides,

E(G) = \frac{1}{N} \sum_{j=1}^{N} E(g(x_j)),

it is clear that E(G) = E(g).
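A minimal sketch of this estimator in Python; the particular choices of g and of the sampling distribution here are illustrative assumptions, not from the text:

```python
import random

def mc_estimate(g, draw, N):
    """Monte Carlo estimator G = (1/N) * sum_j g(x_j), with each x_j ~ draw()."""
    return sum(g(draw()) for _ in range(N)) / N

# Illustration: g(x) = x^2 with x uniform on [0, 1], so E(g) = 1/3.
random.seed(0)
G = mc_estimate(lambda x: x ** 2, random.random, 100_000)
```

With this many draws, G should land very close to E(g) = 1/3, anticipating the convergence argument developed next.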
At the same time, however, we do not know E(g); we calculate G understanding that, with a very large number of trials, G should approach E(g). Now let's look at the variance of the estimator. This conceptually results from an infinite number of estimator trials, each one of which involves N evaluations of g according to the above definition. It is important to keep in mind that such a variance involves samples of the estimator (each involving N evaluations), not the underlying function g(x). We have

\sigma^2(G) = \sigma^2\!\left[ \frac{1}{N} \sum_{j=1}^{N} g(x_j) \right]
            = \frac{1}{N^2} \, \sigma^2\!\left[ \sum_{j=1}^{N} g(x_j) \right]
            = \frac{1}{N^2} \sum_{j=1}^{N} \sigma^2(g)
            = \frac{1}{N} \, \sigma^2(g).

This relation is key. The second equality follows from the fact that \sigma^2(cx) = c^2 \sigma^2(x) when c is a constant. The third equality is true because \sigma^2(x + y) = \sigma^2(x) + \sigma^2(y) when x and y are independent random variables. The major result is that \sigma^2(G) = \sigma^2(g) if only one-sample trials are considered, but that \sigma^2(G) \to 0 as N \to \infty. Hence, with a large enough N, we can indeed expect that our G will be very close to E(g).

Let us take this a bit further, to get an explicit estimate of the error in G as we go to large N. Define a nondimensional estimator error

q = \frac{G - E(g)}{\sigma(G)} = \frac{(G - E(g))\sqrt{N}}{\sigma(g)},

where the second equality comes from the result above. We call the factor \sigma(g)/\sqrt{N} the standard error. Invoking the central limit theorem, which guarantees that the distribution of G becomes Gaussian for large enough N, we have

\lim_{N \to \infty} \text{prob}(a < q < b) = \int_a^b \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, dt = F(b) - F(a),

where F(x) is the cumulative probability function of the standard Gaussian variable:

F(a) = \int_{-\infty}^{a} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, dt.

Looking up some values of F(x), we see that the magnitude of the nondimensional error is less than one in 68.3% of trials, less than two in 95.4% of trials, and less than three in 99.7% of trials. The 99.7% confidence interval corresponds with

-3 \le (G - E(g))\sqrt{N}/\sigma(g) \le 3, \quad \text{or}
-3\,\sigma(g)/\sqrt{N} \le G - E(g) \le 3\,\sigma(g)/\sqrt{N}.

In general, quadrupling the number of trials improves the error by a factor of two.

So far we have been describing a single estimator G, which recovers the mean. The mean, however, is in fact an integral over the random domain:

E(g) = \int_{x \in X} p(x) g(x) \, dx,

where p(x) is the pdf of the random variable x. So the Monte Carlo estimator G is in fact an integrator:

G \simeq \int_{x \in X} p(x) g(x) \, dx.

We can just as easily define estimators of the statistical moments:

G_n = \frac{1}{N} \sum_{j=1}^{N} x_j^n \, g(x_j) \simeq \int_{x \in X} x^n p(x) g(x) \, dx,

which follow the same basic convergence trends as the mean estimator G. These moments can all be calculated using the same N evaluations of g(x). The above equation gives another point of view on how the Monte Carlo approach works: the effect of the probability density function in the integral is replaced by the fact that the random variables in MC are drawn from that same distribution. In other words, a high p(x) in a given area of the domain X amplifies g(x) there. MC does the same thing, because more of the x values are in fact drawn from this area in making the N evaluations.

8.2 Making Random Numbers

The Monte Carlo method requires that we fire into our evaluation g(x) a group of N random numbers (or sets of random numbers), drawn from a distribution (or a set of distributions for more than one element in x). Here we describe how to generate such data from simple distributions. Note that both the normal and the uniform distributions are captured in standard MATLAB commands. We describe in particular how to generate samples of a given distribution from random numbers taken from an underlying uniform distribution.
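As a concrete sketch of where this section is headed, here is a minimal Python version of the inverse-transform idea, using the two closed-form transformations derived below for the exponential and Rayleigh distributions; `random.random()` supplies the underlying uniform draw:

```python
import math
import random

def sample_exponential(lam):
    """Draw from p(x) = lam * exp(-lam x): solve P(x) = 1 - exp(-lam x) = w for x."""
    w = random.random()                       # w uniform on [0, 1)
    return -math.log(1.0 - w) / lam

def sample_rayleigh():
    """Draw from p(x) = x * exp(-x^2 / 2) via x = sqrt(-2 log(1 - w))."""
    w = random.random()
    return math.sqrt(-2.0 * math.log(1.0 - w))

# Sanity check: the exponential distribution has mean 1/lam.
random.seed(0)
xs = [sample_exponential(0.5) for _ in range(200_000)]
mean = sum(xs) / len(xs)                      # should approach 1/0.5 = 2
```

The sample mean converging to 1/λ is itself a Monte Carlo estimate, tying this back to Section 8.1.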
First, we note that the cumulative probability function of the uniform distribution is

P(w) = 0 \text{ for } w \le 0; \quad P(w) = w \text{ for } 0 < w < 1; \quad P(w) = 1 \text{ for } w \ge 1.

If x = r(w), where r is the transformation we seek, recall that the cumulative probabilities satisfy

P(x) = P(r(w)) = P(w) = w,

and the result we need is that

w = P(x) = \int_{-\infty}^{x} p(t) \, dt.

Our task is to come up with an x that goes with the uniformly distributed w; it is not as hard as it would seem. As an example, suppose we want to generate a normal variable x (zero mean, unity variance). We have

P(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, dt = w, \quad \text{i.e., } F(x) = w, \quad \text{or} \quad x = F^{-1}(w),

where F(x) is the cumulative probability function of a standard Gaussian variable (zero mean, unity variance), and can be looked up or calculated with standard routines. Note that F(x) is related within a scale factor to the error function (erf). As another example, consider the exponential distribution

p(x) = \lambda e^{-\lambda x};

this distribution is often used to describe the time to failure in complex systems. We have

P(x) = \int_0^x \lambda e^{-\lambda t} \, dt = w, \quad \text{so} \quad 1 - e^{-\lambda x} = w, \quad \text{or} \quad x = -\frac{\log(1 - w)}{\lambda}.

Similarly, this procedure applied to the Rayleigh distribution

p(x) = x e^{-x^2/2}

gives x = \sqrt{-2 \log(1 - w)}. In these formulas, we can replace (1 - w) with w throughout; since w is uniformly distributed on the interval [0, 1], the two are equivalent.

8.3 Grid-Based Techniques

As noted above, moment calculations on the output variable are essentially an integral over the domain of the random variables. Given this fact, an obvious approach is simply to focus on a high-quality integration routine that uses some fixed points x; in the Monte Carlo method, these were chosen at random. In one dimension, we have the standard trapezoid rule:

\int_a^b g(x) \, dx \approx \sum_{i=1}^{n} w_i g(x_i), \quad \text{with}

w_1 = w_n = (b - a)/[2(n - 1)],
w_2 = \cdots = w_{n-1} = (b - a)/(n - 1),
x_i = a + (i - 1)(b - a)/(n - 1).

Here the w_i are simply weights, and g is to be evaluated at the different abscissas x_i.
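These weights translate directly into code; a minimal Python sketch (the helper name `trapezoid` is our own):

```python
def trapezoid(g, a, b, n):
    """Trapezoid rule on [a, b] with n points:
    w_1 = w_n = (b - a) / (2 (n - 1)), interior w_i = (b - a) / (n - 1)."""
    h = (b - a) / (n - 1)
    ws = [h / 2.0] + [h] * (n - 2) + [h / 2.0]     # endpoint and interior weights
    xs = [a + i * h for i in range(n)]             # evenly spaced abscissas
    return sum(w * g(x) for w, x in zip(ws, xs))

# Example: integrate x^2 over [0, 1]; the exact value is 1/3.
I = trapezoid(lambda x: x ** 2, 0.0, 1.0, 101)
```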
This rule has error of order 1/n^2, meaning that a doubling of the number of points n gives a fourfold improvement in the error. To make an integration in two dimensions, we take the tensor product of two single-dimension calculations:

\int_a^b \!\! \int_c^d g(\vec{x}) \, dx_2 \, dx_1 \approx \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} w_i w_j \, g(\vec{x}_{ij}).

Here the abscissas \vec{x}_{ij} have two elements, taken according to the grids in the two directions. In many applications, the trapezoid rule in multiple dimensions works quite well and can give a good understanding of the various moments of the given system. Extensions such as Romberg integration, combined with the tensor product, can be applied to improve the results. A particularly powerful technique involving orthogonal polynomials is also available; it gives truly remarkable accuracy for smooth functions.
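The tensor product can be sketched by nesting two one-dimensional rules; a minimal Python version (helper names are our own), applied to a moment calculation of the kind this section motivates:

```python
def trapezoid(g, a, b, n):
    """1-D trapezoid rule with the usual endpoint/interior weights."""
    h = (b - a) / (n - 1)
    ws = [h / 2.0] + [h] * (n - 2) + [h / 2.0]
    return sum(w * g(a + i * h) for i, w in enumerate(ws))

def trapezoid2d(g, a, b, c, d, n1, n2):
    """Tensor product of two 1-D rules: sum_ij w_i w_j g(x_i, y_j)."""
    return trapezoid(lambda x: trapezoid(lambda y: g(x, y), c, d, n2), a, b, n1)

# Moment example: E[x1 * x2] for independent uniforms on [0, 1],
# i.e. p(x) = 1 on the unit square; the exact value is 1/4.
m = trapezoid2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0, 51, 51)
```

Here the grid-based integrator plays the same role as the Monte Carlo estimator of Section 8.1, but with deterministic abscissas and weights in place of random draws.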