Constrained Nonlinear Programming

CONSTRAINED NONLINEAR PROGRAMMING

We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories:

1. TRANSFORMATION METHODS: In this approach the constrained nonlinear program is transformed into an unconstrained problem (or, more commonly, a series of unconstrained problems) and solved using one (or some variant) of the algorithms we have already seen. Transformation methods may further be classified as
   (1) Exterior Penalty Function Methods
   (2) Interior Penalty (or Barrier) Function Methods
   (3) Augmented Lagrangian (or Multiplier) Methods

2. PRIMAL METHODS: These are methods that work directly on the original problem.

We shall look at each of these in turn; we first start with transformation methods.

Exterior Penalty Function Methods

These methods generate a sequence of infeasible points whose limit is an optimal solution to the original problem. The original constrained problem is transformed into an unconstrained one (or a series of unconstrained problems). The constraints are incorporated into the objective by means of a "penalty parameter" which penalizes any constraint violation. Consider the problem

   Min f(x)
   st  gj(x) ≤ 0, j = 1,2,…,m,
       hj(x) = 0, j = 1,2,…,p,   x∈R^n

Define a penalty function p as follows:

   p(x) = ∑j [Max{0, gj(x)}]^α + ∑j |hj(x)|^α

where α is some positive integer. It is easy to see that

   If gj(x) ≤ 0 then Max{0, gj(x)} = 0
   If gj(x) > 0 then Max{0, gj(x)} = gj(x)   (violation)

Now define the auxiliary function as P(x) = f(x) + µp(x), where µ >> 0. Thus if a constraint is violated, p(x) > 0 and a "penalty" of µp(x) is incurred.

The (UNCONSTRAINED) problem now is:

   Minimize P(x) = f(x) + µp(x)
   st x∈R^n

EXAMPLE: Minimize f(x) = x, st g(x) = 2 - x ≤ 0, x∈R. (Note that the minimum is at x* = 2.)

Let p(x) = [Max{0, g(x)}]², i.e.,

   p(x) = (2 - x)²  if x < 2
   p(x) = 0         if x ≥ 2

If p(x) = 0, then the optimal solution to Minimize f(x) + µp(x) is at x* = -∞, and this is infeasible; so

   f(x) + µp(x) = x + µ(2 - x)²

Minimize P(x) = f(x) + µp(x) has optimum solution x* = 2 - 1/(2µ). As µ → ∞, x* = 2 - 1/(2µ) → 2.

[Figure: Penalty Function (p, µ1p, µ2p) and Auxiliary Function (f + µ1p, f + µ2p), plotted for µ1 = 0.5 and µ2 = 1.5.]

In general:
1. We convert the constrained problem to an unconstrained one by using the penalty function.
2. The solution to the unconstrained problem can be made arbitrarily close to that of the original one by choosing a sufficiently large µ.

In practice, if µ is very large, too much emphasis is placed on feasibility. Often, algorithms for unconstrained optimization would stop prematurely because the step sizes required for improvement become very small. Usually, we solve a sequence of problems with successively increasing µ values; the optimal point from one iteration becomes the starting point for the next problem.

PROCEDURE: Choose the following initially:
1. a tolerance ε,
2. an "increase" factor β,
3. a starting point x^1, and
4. an initial µ1.

At Iteration i:
1. Solve the problem Minimize f(x) + µi p(x), st x∈X (usually R^n). Use x^i as the starting point and let the optimal solution be x^(i+1).
2. If µi p(x^(i+1)) ≤ ε then STOP; else let µ(i+1) = βµi and start iteration (i+1).
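As a concrete illustration of this procedure, here is a minimal Python sketch (not part of the original notes): it assumes SciPy's default unconstrained minimizer for step 1 and uses α = 2; the function name exterior_penalty and the default parameter values are illustrative choices.

```python
# Minimal sketch of the exterior penalty procedure, assuming SciPy's default
# BFGS routine as the unconstrained solver and alpha = 2.
import numpy as np
from scipy.optimize import minimize

def exterior_penalty(f, gs, hs, x0, mu=0.1, beta=10.0, eps=1e-3, max_iter=25):
    """Minimize f(x) s.t. g_j(x) <= 0 and h_j(x) = 0 via exterior penalties."""
    def p(x):  # penalty: sum of squared violations
        return (sum(max(0.0, g(x)) ** 2 for g in gs)
                + sum(h(x) ** 2 for h in hs))

    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Step 1: solve the unconstrained subproblem, starting from the previous point
        x = minimize(lambda z: f(z) + mu * p(z), x).x
        # Step 2: stop once the penalty term is negligible, else increase mu
        if mu * p(x) <= eps:
            break
        mu *= beta
    return x

# One-variable example from above: min x  s.t.  2 - x <= 0  (optimum x* = 2)
print(exterior_penalty(lambda x: x[0], [lambda x: 2 - x[0]], [], x0=[0.0]))
```

Each subproblem's minimizer is x* = 2 - 1/(2µ) for the current µ, so the printed value approaches 2 as µ is increased.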
EXAMPLE: Minimize f(x) = x1² + 2x2²
   st g(x) = 1 - x1 - x2 ≤ 0;   x∈R²

Define the penalty function

   p(x) = (1 - x1 - x2)²  if g(x) > 0
   p(x) = 0               if g(x) ≤ 0

The unconstrained problem is

   Minimize x1² + 2x2² + µp(x)

If p(x) = 0, then the optimal solution is x* = (0,0) — INFEASIBLE!

∴ p(x) = (1 - x1 - x2)², so that P(x) = x1² + 2x2² + µ(1 - x1 - x2)², and the necessary conditions for the optimal solution (∇P(x) = 0) yield the following:

   ∂P/∂x1 = 2x1 + 2µ(1 - x1 - x2)(-1) = 0,   ∂P/∂x2 = 4x2 + 2µ(1 - x1 - x2)(-1) = 0

Thus x1* = 2µ/(2 + 3µ) and x2* = µ/(2 + 3µ).

Starting with µ1 = 0.1, β = 10 and x^1 = (0,0), and using a tolerance of 0.001 (say), we have the following:

   Iter. (i)   µi      x^(i+1)             g(x^(i+1))   µi p(x^(i+1))
   1           0.1     (0.087, 0.043)      0.87         0.0757
   2           1.0     (0.4, 0.2)          0.40         0.16
   3           10      (0.625, 0.3125)     0.0625       0.039
   4           100     (0.6622, 0.3311)    0.0067       0.00449
   5           1000    (0.666, 0.333)      0.001        0.001

Thus the optimal solution is x* = (2/3, 1/3).

INTERIOR PENALTY (BARRIER) FUNCTION METHODS

These methods also transform the original problem into an unconstrained one; however the barrier functions prevent the current solution from ever leaving the feasible region. These require that the interior of the feasible set be nonempty, which is impossible if equality constraints are present. Therefore, they are used with problems having only inequality constraints.

A barrier function B is one that is continuous and nonnegative over the interior of {x | g(x) ≤ 0}, i.e., over the set {x | g(x) < 0}, and approaches ∞ as the boundary is approached from the interior.

Let φ(y) ≥ 0 if y < 0, with lim(y→0⁻) φ(y) = ∞. Then

   B(x) = ∑j φ[gj(x)]

Usually,

   B(x) = -∑j 1/gj(x)   or   B(x) = -∑j log(-gj(x))

In both cases, note that lim(gj(x)→0⁻) B(x) = ∞.

The auxiliary function is now f(x) + µB(x), where µ is a SMALL positive number.

Q. Why should µ be SMALL?
A. Ideally we would like B(x) = 0 if gj(x) < 0 and B(x) = ∞ if gj(x) = 0, so that we never leave the region {x | g(x) ≤ 0}. However, B(x) is then discontinuous. This causes serious computational problems during the unconstrained optimization.

Consider the same problem as the one we looked at with (exterior) penalty functions:

   Minimize f(x) = x,   st -x + 2 ≤ 0

Define the barrier function B(x) = -1/(-x + 2). So the auxiliary function is

   f(x) + µB(x) = x - µ/(-x + 2),

and it has its optimum solution at x* = 2 + √µ; as µ → 0, x* → 2.

[Figure: Barrier Function (B, µ1B, µ2B) and Auxiliary Function (f + µ1B, f + µ2B), plotted for µ1 = 0.5 and µ2 = 1.5.]

Similar to exterior penalty functions, we don't just choose one small value for µ; rather we start with some µ1 and generate a sequence of points.

PROCEDURE: Initially choose a tolerance ε, a "decrease" factor β, an interior starting point x^1, and an initial µ1.

At Iteration i:
1. Solve the problem Minimize f(x) + µiB(x), st x∈X (usually R^n). Use x^i as the starting point and let the optimal solution be x^(i+1).
2. If µiB(x^(i+1)) < ε then STOP; else let µ(i+1) = βµi and start iteration (i+1).

Consider the previous example once again…

EXAMPLE: Minimize f(x) = x1² + 2x2²
   st g(x) = 1 - x1 - x2 ≤ 0;   x∈R²

Define the barrier function

   B(x) = -log(-g(x)) = -log(x1 + x2 - 1)

The unconstrained problem is

   Minimize x1² + 2x2² + µB(x) = x1² + 2x2² - µ log(x1 + x2 - 1)

The necessary conditions for the optimal solution (∇(f + µB)(x) = 0) yield the following:

   2x1 - µ/(x1 + x2 - 1) = 0,   4x2 - µ/(x1 + x2 - 1) = 0

Solving, we get

   x1 = (1 ± √(1 + 3µ))/3   and   x2 = (1 ± √(1 + 3µ))/6

Since the negative signs lead to infeasibility, we take

   x1* = (1 + √(1 + 3µ))/3   and   x2* = (1 + √(1 + 3µ))/6

Starting with µ1 = 1, β = 0.1 and x^1 = (0,0), and using a tolerance of 0.005 (say), we have the following:

   Iter. (i)   µi       x^(i+1)              g(x^(i+1))   µiB(x^(i+1))
   1           1        (1.0, 0.5)           -0.5         0.693
   2           0.1      (0.714, 0.357)       -0.071       0.265
   3           0.01     (0.672, 0.336)       -0.008       0.048
   4           0.001    (0.6672, 0.3336)     -0.0008      0.0070
   5           0.0001   (0.6666, 0.3333)     -0.0001      0.0009

Thus the optimal solution is x* = (2/3, 1/3).
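The barrier iteration can be sketched the same way on this example. This is an illustrative assumption rather than the notes' prescription: SciPy's Nelder-Mead solver is used because it tolerates the +∞ values returned outside the interior, and the interior starting point (1, 1) is chosen arbitrarily.

```python
# Minimal sketch of the barrier (interior penalty) procedure on the example
# above, assuming a log-barrier and SciPy's Nelder-Mead unconstrained solver.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] ** 2 + 2 * x[1] ** 2

def g(x):
    return 1 - x[0] - x[1]                      # constraint g(x) <= 0

def B(x):
    # log-barrier: finite only in the interior {x | g(x) < 0}
    return -np.log(-g(x)) if g(x) < 0 else np.inf

mu, beta, eps = 1.0, 0.1, 0.005
x = np.array([1.0, 1.0])                        # interior point: g(x) = -1 < 0
for _ in range(25):
    x = minimize(lambda z: f(z) + mu * B(z), x, method="Nelder-Mead").x
    if mu * B(x) < eps:
        break
    mu *= beta
print(x)                                        # tends to (2/3, 1/3)
```

Because each subproblem's minimizer is strictly interior, it is a valid starting point for the next subproblem with the smaller µ.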
Penalty Function Methods and Lagrange Multipliers

Consider the penalty function approach to the problem below:

   Minimize f(x)
   st  gj(x) ≤ 0;  j = 1,2,…,m
       hj(x) = 0;  j = m+1,m+2,…,l
       x∈R^n

Suppose we use the usual exterior penalty function described earlier, with α = 2. The auxiliary function that we minimize is then given by

   P(x) = f(x) + µp(x) = f(x) + µ{∑j [Max(0, gj(x))]² + ∑j [hj(x)]²}
        = f(x) + ∑j µ[Max(0, gj(x))]² + ∑j µ[hj(x)]²

The necessary condition for this to have a minimum is that ∇P(x) = ∇f(x) + µ∇p(x) = 0, i.e.,

   ∇f(x) + ∑j 2µ[Max(0, gj(x))]∇gj(x) + ∑j 2µ[hj(x)]∇hj(x) = 0   (1)

Suppose that the solution to (1) for a fixed µ (say µk > 0) is given by x^k. Let us also designate

   2µk[Max{0, gj(x^k)}] = λj(µk);   j = 1,2,…,m    (2)
   2µk[hj(x^k)] = λj(µk);           j = m+1,…,l    (3)

so that for µ = µk we may rewrite (1) as

   ∇f(x) + ∑j λj(µk)∇gj(x) + ∑j λj(µk)∇hj(x) = 0   (4)

Now consider the Lagrangian for the original problem:

   L(x,λ) = f(x) + ∑j λjgj(x) + ∑j λjhj(x)

The usual K-K-T necessary conditions yield

   ∇f(x) + ∑j=1..m λj∇gj(x) + ∑j=m+1..l λj∇hj(x) = 0   (5)

plus the original constraints, complementary slackness for the inequality constraints, and λj ≥ 0 for j = 1,…,m.

Comparing (4) and (5), we can see that when we minimize the auxiliary function using µ = µk, the λj(µk) values given by (2) and (3) estimate the Lagrange multipliers in (5).
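The estimate in (2) is easy to verify numerically on the earlier example f(x) = x1² + 2x2², g(x) = 1 - x1 - x2 ≤ 0, whose exact KKT multiplier is λ* = 4/3. The short check below (an illustration added here, reusing the closed-form penalty minimizer derived for that example) shows the estimate approaching 4/3:

```python
# Multiplier estimate lambda(mu_k) = 2 * mu_k * max(0, g(x^k)) for the example
# min x1^2 + 2*x2^2  s.t.  1 - x1 - x2 <= 0, whose exact multiplier is 4/3.
mu_k = 100.0
x_k = (2 * mu_k / (2 + 3 * mu_k), mu_k / (2 + 3 * mu_k))  # closed-form penalty minimizer
g_k = 1 - x_k[0] - x_k[1]
lam = 2 * mu_k * max(0.0, g_k)
print(lam)   # about 1.32, approaching lambda* = 4/3 as mu_k grows
```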