ABS Algorithms for Linear Equations and Optimization


Journal of Computational and Applied Mathematics 124 (2000) 155–170

Emilio Spedicato (Department of Mathematics, University of Bergamo, 24129 Bergamo, Italy; corresponding author, e-mail [email protected]), Zunquan Xia and Liwei Zhang (Department of Applied Mathematics, Dalian University of Technology, Dalian, China)

Received 20 May 1999; received in revised form 18 November 1999

Abstract. In this paper we review basic properties and the main achievements obtained by the class of ABS methods, developed since 1981 to solve linear and nonlinear algebraic equations and optimization problems. (c) 2000 Elsevier Science B.V. All rights reserved.

Keywords: Linear equations; Optimization; ABS methods; Quasi-Newton methods; Linear programming; Feasible direction methods; KT equations; Interior point methods

1. The scaled ABS class: general properties

ABS methods were introduced by Abaffy et al. [2] in a paper considering the solution of linear equations via what is now called the basic or unscaled ABS class. The basic ABS class was later generalized to the so-called scaled ABS class and subsequently applied to linear least squares, nonlinear equations and optimization problems, as described by Abaffy and Spedicato [4] and Zhang et al. [43]. Recent work has also extended ABS methods to the solution of Diophantine equations, with possible applications to combinatorial optimization. There are presently over 350 papers in the ABS field; see the bibliography by Spedicato and Nicolai [24]. In this paper we review the basic properties of ABS methods for solving linear determined or underdetermined systems and some of their applications to optimization problems.

Let us consider the general linear (determined or underdetermined) system, where rank(A) is arbitrary,

    $Ax = b, \quad x \in \mathbb{R}^n, \; b \in \mathbb{R}^m, \; m \le n,$   (1)

or

    $a_i^T x - b_i = 0, \quad i = 1, \dots, m,$   (2)

where

    $A = (a_1, \dots, a_m)^T.$   (3)

The steps of the scaled ABS class algorithms are the following:

(A) Let $x_1 \in \mathbb{R}^n$ be arbitrary, $H_1 \in \mathbb{R}^{n \times n}$ be arbitrary nonsingular, and $v_1$ be an arbitrary nonzero vector in $\mathbb{R}^m$. Set $i = 1$.

(B) Compute the residual $r_i = A x_i - b$. If $r_i = 0$, stop ($x_i$ solves the problem). Otherwise compute $s_i = H_i A^T v_i$. If $s_i \ne 0$, go to (C). If $s_i = 0$ and $v_i^T r_i = 0$, set $x_{i+1} = x_i$, $H_{i+1} = H_i$ and go to (F). Otherwise stop (the system has no solution).

(C) Compute the search vector $p_i$ by

    $p_i = H_i^T z_i,$   (4)

where $z_i \in \mathbb{R}^n$ is arbitrary save for the condition

    $v_i^T A H_i^T z_i \ne 0.$   (5)

(D) Update the estimate of the solution by

    $x_{i+1} = x_i - \alpha_i p_i,$   (6)

where the stepsize $\alpha_i$ is given by

    $\alpha_i = v_i^T r_i / v_i^T A p_i.$   (7)

(E) Update the matrix $H_i$ by

    $H_{i+1} = H_i - H_i A^T v_i w_i^T H_i / w_i^T H_i A^T v_i,$   (8)

where $w_i \in \mathbb{R}^n$ is arbitrary save for the condition

    $w_i^T H_i A^T v_i \ne 0.$   (9)

(F) If $i = m$, stop ($x_{m+1}$ solves the system). Otherwise define $v_{i+1}$ as an arbitrary vector in $\mathbb{R}^m$, linearly independent of $v_1, \dots, v_i$. Increment $i$ by one and go to (B).

The matrices $H_i$ appearing in step (E) are generalizations of (oblique) projection matrices. They probably first appeared in a book by Wedderburn [34].
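To make steps (A)–(F) concrete, here is a minimal NumPy sketch of the recursion for the basic (unscaled) class, i.e. with $v_i = e_i$, and with the illustrative parameter choices $H_1 = I$ and $z_i = w_i = s_i$. These choices are one admissible selection (they satisfy conditions (5) and (9) whenever $s_i \ne 0$), not the paper's prescribed method, and the function name abs_solve is ours.

```python
import numpy as np

def abs_solve(A, b, tol=1e-12):
    """Basic (unscaled) ABS iteration, steps (A)-(F), with the
    illustrative choices v_i = e_i, H_1 = I, z_i = w_i = s_i."""
    m, n = A.shape
    x = np.zeros(n)        # (A) x_1 arbitrary; the zero vector is convenient
    H = np.eye(n)          # (A) H_1 arbitrary nonsingular; here the identity
    for i in range(m):     # v_i = e_i, the ith unit vector in R^m
        r = A @ x - b      # (B) residual r_i
        if np.linalg.norm(r) <= tol:
            return x       # x_i already solves the whole system
        s = H @ A[i]       # s_i = H_i A^T e_i = H_i a_i
        if np.linalg.norm(s) <= tol:
            if abs(r[i]) <= tol:
                continue   # equation i is redundant: x_{i+1} = x_i, H_{i+1} = H_i
            raise ValueError("the system has no solution")
        z = w = s          # one admissible choice satisfying (5) and (9)
        p = H.T @ z                                # (C) search vector, eq. (4)
        x = x - (r[i] / (A[i] @ p)) * p            # (D) eqs. (6) and (7)
        H = H - np.outer(s, H.T @ w) / (w @ s)     # (E) Abaffian update, eq. (8)
    return x               # (F) after the mth step, x_{m+1} solves Ax = b
```

For a consistent system the loop terminates after at most $m$ updates with $Ax = b$ satisfied (in exact arithmetic), so a quick check is np.allclose(A @ abs_solve(A, b), b).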
They have been named Abaffians since the First International Conference on ABS Methods (Luoyang, China, 1991), and this name will be used here. It should be noted that there are several alternative formulations of the linear algebra of the above process, using, e.g., vectors instead of the square matrix $H_i$ (see [4]). Such alternative formulations differ in storage requirement and arithmetic cost, and the choice of the most convenient formulation may depend on the relative values of $m$ and $n$.

The above recursion defines a class of algorithms, each particular method being determined by the choice of the parameters $H_1, v_i, z_i, w_i$. The basic ABS class is obtained by taking $v_i = e_i$, $e_i$ being the $i$th unit vector in $\mathbb{R}^m$. The parameters $w_i$, $z_i$ and $H_1$ were introduced, respectively, by Abaffy, Broyden and Spedicato, whose initials give the class its name.

It is possible to show that the scaled ABS class is a complete realization of the so-called Petrov–Galerkin iteration for solving a linear system, where the iteration has the form $x_{i+1} = x_i - \alpha_i p_i$ with $\alpha_i, p_i$ chosen so that $r_{i+1}^T v_j = 0$, $j = 1, \dots, i$, holds, the vectors $v_j$ being arbitrary and linearly independent. It appears that all deterministic algorithms in the literature having finite termination on a linear system are members of the scaled ABS class. Moreover, the quasi-Newton methods of the Broyden class, which (under some conditions) are known to terminate in at most $2n$ steps, can be embedded in the ABS class: the iterate of index $2i - 1$ generated by Broyden's iteration corresponds to the $i$th iterate of a certain algorithm in the scaled ABS class.

Referring to the monograph of Abaffy and Spedicato [4] for proofs, we give some properties of methods of the scaled ABS class, assuming, for simplicity, that $A$ has full rank:

• Define $V_i = (v_1, \dots, v_i)$ and $W_i = (w_1, \dots, w_i)$. Then $H_{i+1} A^T V_i = 0$ and $H_{i+1}^T W_i = 0$, meaning that the vectors $A^T v_j$ and $w_j$, $j = 1, \dots, i$, span the null spaces of $H_{i+1}$ and of its transpose, respectively.

• The vectors $H_i A^T v_i$ and $H_i^T w_i$ are nonzero if and only if $a_i$ and $w_i$ are linearly independent of $a_1, \dots, a_{i-1}$ and of $w_1, \dots, w_{i-1}$, respectively.

• Define $P_i = (p_1, \dots, p_i)$. Then the implicit factorization $V_i^T A P_i = L_i$ holds, where $L_i$ is nonsingular lower triangular. From this relation, if $m = n$, one obtains the following semiexplicit factorization of the inverse, with $P = P_n$, $V = V_n$, $L = L_n$:

    $A^{-1} = P L^{-1} V^T.$   (10)

For several choices of the matrix $V$ the matrix $L$ is diagonal, hence formula (10) gives a fully explicit factorization of the inverse as a byproduct of the ABS solution of a linear system, a property that does not hold for the classical solvers. It can also be shown that all possible factorizations of form (10) can be obtained by proper parameter choices in the scaled ABS class, another completeness result. (A numerical check of this factorization is sketched after this list.)

• Define $S_i$ and $R_i$ by $S_i = (s_1, \dots, s_i)$ and $R_i = (r_1, \dots, r_i)$, where $s_i = H_i A^T v_i$ and $r_i = H_i^T w_i$. Then the Abaffian can be written in the form $H_{i+1} = H_1 - S_i R_i^T$, and the vectors $s_i$, $r_i$ can be built via a Gram–Schmidt-type iteration involving the previous vectors (the search vector $p_i$ can be built in a similar way). This representation of the Abaffian in terms of $2i$ vectors is computationally convenient when the number of equations is much smaller than the number of variables. Notice that there is also a representation in terms of $n - i$ vectors.
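The implicit factorization $V_i^T A P_i = L_i$ and the inverse formula (10) are easy to observe numerically. The sketch below reuses the same illustrative parameter choices as before (basic class, so $V = I$), takes $m = n$, collects the search vectors into $P$, and checks that $V^T A P$ is lower triangular and that $P L^{-1} V^T$ reproduces $A^{-1}$; the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))   # a generic (almost surely nonsingular) test matrix
b = rng.standard_normal(n)

H = np.eye(n)
x = np.zeros(n)
cols = []
for i in range(n):                # basic class: v_i = e_i, hence V = I
    s = H @ A[i]
    z = w = s                     # same illustrative choice as in abs_solve
    p = H.T @ z
    x = x - ((A[i] @ x - b[i]) / (A[i] @ p)) * p
    H = H - np.outer(s, H.T @ w) / (w @ s)
    cols.append(p)

P = np.column_stack(cols)
L = A @ P                         # V^T A P with V = I
print(np.allclose(np.triu(L, 1), 0, atol=1e-8))        # lower triangular: True
print(np.allclose(P @ np.linalg.solve(L, np.eye(n)),   # eq. (10): A^{-1} = P L^{-1} V^T
                  np.linalg.inv(A), atol=1e-6))
```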
A compact formula for the Abaffian in terms of the parameter matrices is the following:

    $H_{i+1} = H_1 - H_1 A^T V_i (W_i^T H_1 A^T V_i)^{-1} W_i^T H_1.$   (11)

Letting $V = V_m$, $W = W_m$, one can show that the parameter matrices $H_1$, $V$, $W$ are admissible (i.e., are such that condition (9) is satisfied) if and only if the matrix $Q = V^T A H_1^T W$ is strongly nonsingular (i.e., is LU factorizable). Notice that this condition can always be satisfied by suitable exchanges of the columns of $V$ or $W$, equivalent to a row or a column pivoting on the matrix $Q$. If $Q$ is strongly nonsingular and we take, as is done in all algorithms considered so far, $z_i = w_i$, then condition (5) is also satisfied.

It can be shown that the scaled ABS class corresponds to applying (implicitly) the unscaled ABS algorithm to the scaled (or preconditioned) system $V^T A x = V^T b$, where $V$ is an arbitrary nonsingular matrix of order $m$. Therefore, the scaled ABS class is also complete with respect to all possible left preconditioning matrices, which in the ABS context are defined implicitly and dynamically (only the $i$th column of $V$ is needed at the $i$th iteration, and it can also be a function of the previous column choices).

2. Subclasses of the ABS class

In the monograph of Abaffy and Spedicato [4], nine subclasses of the scaled ABS class are considered. Here we mention only two of them.

(a) The conjugate direction subclass. This class is obtained by setting $v_i = p_i$. It is well defined under the condition (sufficient but not necessary) that $A$ is symmetric and positive definite. It contains the implicit Cholesky algorithm and the ABS versions of the Hestenes–Stiefel and Lanczos algorithms (i.e., algorithms that generate the same sequence $x_i$ as these classical methods in exact arithmetic). This class generates all possible algorithms whose search directions are $A$-conjugate. The vector $x_{i+1}$ minimizes the energy or $A$-weighted Euclidean norm of the error over $x_1 + \mathrm{Span}(p_1, \dots, p_i)$. If $x_1 = 0$, then the solution is approached monotonically from below in the energy norm.
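As an illustration of the conjugate direction subclass, the sketch below runs the recursion with $v_i = p_i$ on a symmetric positive definite matrix, using $H_1 = I$ and $z_i = w_i = e_i$ (a parameter selection that, in the ABS literature, is associated with the implicit Cholesky algorithm; we take that correspondence as an assumption here rather than verify it), and checks that the search directions come out $A$-conjugate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)     # symmetric positive definite test matrix
b = rng.standard_normal(n)

H = np.eye(n)
x = np.zeros(n)
dirs = []
for i in range(n):
    z = w = np.eye(n)[:, i]     # z_i = w_i = e_i (assumed implicit-Cholesky choice)
    p = H.T @ z                 # search vector; the scaling vector is v_i = p_i
    r = A @ x - b
    x = x - ((p @ r) / (p @ (A @ p))) * p       # stepsize (7) with v_i = p_i
    s = H @ (A @ p)             # s_i = H_i A^T v_i; A^T = A here
    H = H - np.outer(s, H.T @ w) / (w @ s)      # Abaffian update, eq. (8)
    dirs.append(p)

P = np.column_stack(dirs)
G = P.T @ A @ P                 # diagonal iff the directions are A-conjugate
print(np.allclose(G, np.diag(np.diag(G)), atol=1e-8))   # True
print(np.allclose(A @ x, b))                             # x_{n+1} solves the system
```

Because $A$ is symmetric positive definite and each $p_i$ is nonzero, the denominators $p_i^T A p_i$ and $w_i^T H_i A p_i$ stay positive, so the iteration is well defined, consistent with the sufficiency remark above.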