Sums of Squares, Moment Matrices and Optimization Over Polynomials


SUMS OF SQUARES, MOMENT MATRICES AND OPTIMIZATION OVER POLYNOMIALS

MONIQUE LAURENT∗

Updated version: February 6, 2010

Abstract. We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory of sums of squares of polynomials. We present these hierarchies of approximations and their main properties: asymptotic/finite convergence, optimality certificate, and extraction of global optimum solutions. We review the mathematical tools underlying these properties, in particular, some sums of squares representation results for positive polynomials, some results about moment matrices (in particular, of Curto and Fialkow), and the algebraic eigenvalue method for solving zero-dimensional systems of polynomial equations. We try whenever possible to provide detailed proofs and background.

Key words. positive polynomial, sum of squares of polynomials, moment problem, polynomial optimization, semidefinite programming

AMS(MOS) subject classifications. 13P10, 13J25, 13J30, 14P10, 15A99, 44A60, 90C22, 90C30

∗ Centrum Wiskunde & Informatica (CWI), Science Park 123, 1098 XG Amsterdam, Netherlands. Email: [email protected].

Contents

1 Introduction
  1.1 The polynomial optimization problem
  1.2 The scope of this paper
  1.3 Preliminaries on polynomials and semidefinite programs
    1.3.1 Polynomials
    1.3.2 Positive semidefinite matrices
    1.3.3 Flat extensions of matrices
    1.3.4 Semidefinite programs
  1.4 Contents of the paper
2 Algebraic preliminaries
  2.1 Polynomial ideals and varieties
  2.2 The quotient algebra R[x]/I
  2.3 Gröbner bases and standard monomial bases
  2.4 Solving systems of polynomial equations
    2.4.1 Motivation: the univariate case
    2.4.2 The multivariate case
    2.4.3 Computing V_C(I) with a non-derogatory multiplication matrix
    2.4.4 Root counting with Hermite's quadratic form
  2.5 Border bases and commuting multiplication matrices
3 Positive polynomials and sums of squares
  3.1 Some basic facts
  3.2 Sums of squares and positive polynomials: Hilbert's result
  3.3 Recognizing sums of squares of polynomials
  3.4 SOS relaxations for polynomial optimization
  3.5 Convex quadratic optimization
  3.6 Some representation results for positive polynomials
    3.6.1 Positivity certificates via the Positivstellensatz
    3.6.2 Putinar's Positivstellensatz
    3.6.3 Representation results in the univariate case
    3.6.4 Other representation results
    3.6.5 Sums of squares and convexity
  3.7 Proof of Putinar's theorem
  3.8 The cone of sums of squares is closed
4 Moment sequences and moment matrices
  4.1 Some basic facts
    4.1.1 Measures
    4.1.2 Moment sequences
    4.1.3 Moment matrices
    4.1.4 Moment matrices and (bi)linear forms on R[x]
    4.1.5 Necessary conditions for moment sequences
  4.2 Moment relaxations for polynomial optimization
  4.3 Convex quadratic optimization (revisited)
  4.4 The moment problem
    4.4.1 Duality between sums of squares and moment sequences
    4.4.2 Bounded moment sequences
  4.5 The K-moment problem
  4.6 Proof of Haviland's theorem
  4.7 Proof of Schmüdgen's theorem
5 More about moment matrices
  5.1 Finite rank moment matrices
  5.2 Finite atomic measures for truncated moment sequences
  5.3 Flat extensions of moment matrices
    5.3.1 First proof of the flat extension theorem
    5.3.2 A generalized flat extension theorem
  5.4 Flat extensions and representing measures
  5.5 The truncated moment problem in the univariate case
6 Back to the polynomial optimization problem
  6.1 Hierarchies of relaxations
  6.2 Duality
  6.3 Asymptotic convergence
  6.4 Approximating the unique global minimizer via the moment relaxations
  6.5 Finite convergence
  6.6 Optimality certificate
  6.7 Extracting global minimizers
  6.8 Software and examples
7 Application to optimization - Some further selected topics
  7.1 Approximating positive polynomials by sums of squares
    7.1.1 Bounds on entries of positive semidefinite moment matrices
    7.1.2 Proof of Theorem 7.2
    7.1.3 Proof of Theorem 7.3
  7.2 Unconstrained polynomial optimization
    7.2.1 Case 1: p attains its minimum and a ball is known containing a minimizer
    7.2.2 Case 2: p attains its minimum, but no information about minimizers is known
    7.2.3 Case 3: p does not attain its minimum
  7.3 Positive polynomials over the gradient ideal
8 Exploiting algebraic structure to reduce the problem size
  8.1 Exploiting sparsity
    8.1.1 Using the Newton polynomial
    8.1.2 Structured sparsity on the constraint and objective polynomials
    8.1.3 Proof of Theorem 8.9
    8.1.4 Extracting global minimizers
  8.2 Exploiting equations
    8.2.1 The zero-dimensional case
    8.2.2 The 0/1 case
    8.2.3 Exploiting sparsity in the 0/1 case
  8.3 Exploiting symmetry
9 Bibliography

Note. This is an updated version of the article Sums of Squares, Moment Matrices and Polynomial Optimization, published in Emerging Applications of Algebraic Geometry, Vol. 149 of IMA Volumes in Mathematics and its Applications, M. Putinar and S. Sullivant (eds.), Springer, pages 157-270, 2009.

1. Introduction. This survey focuses on the following polynomial optimization problem: given polynomials $p, g_1, \ldots, g_m \in \mathbb{R}[x]$, find

\[
p^{\min} := \inf_{x \in \mathbb{R}^n} p(x) \quad \text{subject to} \quad g_1(x) \ge 0, \ldots, g_m(x) \ge 0, \tag{1.1}
\]

the infimum of $p$ over the basic closed semialgebraic set

\[
K := \{ x \in \mathbb{R}^n \mid g_1(x) \ge 0, \ldots, g_m(x) \ge 0 \}. \tag{1.2}
\]

Here $\mathbb{R}[x] = \mathbb{R}[x_1, \ldots, x_n]$ denotes the ring of multivariate polynomials in the $n$-tuple of variables $x = (x_1, \ldots, x_n)$. This is a hard, in general nonconvex, optimization problem. The objective of this paper is to survey relaxation methods for this problem that are based on relaxing positivity over $K$ by sums of squares decompositions, and on the dual theory of moments. The polynomial optimization problem arises in numerous applications. In the rest of the Introduction, we present several instances of this problem, discuss the scope of the paper, and give some preliminaries about polynomials and semidefinite programming.
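To fix ideas before the formal development in Sections 3.4 and 6.1, the basic sum-of-squares bound in the unconstrained case $K = \mathbb{R}^n$ reads

\[
p^{\mathrm{sos}} := \sup \{ \lambda \in \mathbb{R} \mid p - \lambda \text{ is a sum of squares of polynomials} \} \;\le\; p^{\min},
\]

since $p - \lambda$ being a sum of squares certifies $p(x) \ge \lambda$ for all $x \in \mathbb{R}^n$. Computing $p^{\mathrm{sos}}$ amounts to a semidefinite program: write $p - \lambda = z^T Q z$ for a vector $z$ of monomials and a positive semidefinite Gram matrix $Q$, and match coefficients. The following is a minimal sketch of this computation for a univariate quartic, assuming the cvxpy package with its bundled semidefinite solver; the example polynomial and all variable names are illustrative and not taken from the survey.

```python
import cvxpy as cp

# Example: p(x) = x^4 - 4x^3 + 4x^2 + 1 = (x^2 - 2x)^2 + 1,
# so p_min = p_sos = 1. Coefficients p0..p4 of 1, x, x^2, x^3, x^4.
p = [1.0, 0.0, 4.0, -4.0, 1.0]

# Gram matrix Q in the monomial basis z = (1, x, x^2):
# p(x) - lam = z^T Q z  with Q positive semidefinite.
Q = cp.Variable((3, 3), symmetric=True)
lam = cp.Variable()

constraints = [
    Q >> 0,                         # Q positive semidefinite
    Q[0, 0] == p[0] - lam,          # constant term
    2 * Q[0, 1] == p[1],            # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == p[2],  # coefficient of x^2
    2 * Q[1, 2] == p[3],            # coefficient of x^3
    Q[2, 2] == p[4],                # coefficient of x^4
]

cp.Problem(cp.Maximize(lam), constraints).solve()
print(lam.value)  # approximately 1.0
```

In the univariate case nonnegative polynomials are sums of squares (cf. Section 3.6.3), so here $p^{\mathrm{sos}} = p^{\min}$; in several variables the inequality $p^{\mathrm{sos}} \le p^{\min}$ can be strict.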
1.1. The polynomial optimization problem. We introduce several instances of problem (1.1).

The unconstrained polynomial minimization problem. This is the problem

\[
p^{\min} = \inf_{x \in \mathbb{R}^n} p(x), \tag{1.3}
\]

of minimizing a polynomial $p$ over the full space $K = \mathbb{R}^n$. We now mention several problems which can be cast as instances of the unconstrained polynomial minimization problem.

Testing matrix copositivity. An $n \times n$ symmetric matrix $M$ is said to be copositive if $x^T M x \ge 0$ for all $x \in \mathbb{R}^n_+$; equivalently, $M$ is copositive if and only if $p^{\min} = 0$ in (1.3) for the polynomial $p := \sum_{i,j=1}^n x_i^2 x_j^2 M_{ij}$. Testing whether a matrix is not copositive is an NP-complete problem [111].

The partition problem. The partition problem asks whether a given sequence $a_1, \ldots, a_n$ of positive integer numbers can be partitioned, i.e., whether $x^T a = 0$ for some $x \in \{\pm 1\}^n$. Equivalently, the sequence can be partitioned if $p^{\min} = 0$ in (1.3) for the polynomial $p := \big(\sum_{i=1}^n a_i x_i\big)^2 + \sum_{i=1}^n (x_i^2 - 1)^2$; a small worked instance appears at the end of this subsection. The partition problem is an NP-complete problem [45].

The distance realization problem. Let $d = (d_{ij})_{ij \in E} \in \mathbb{R}^E$ be a given set of scalars (distances), where $E$ is a given set of pairs $ij$ with $1 \le i < j \le n$. Given an integer $k \ge 1$, one says that $d$ is realizable in $\mathbb{R}^k$ if there exist vectors $v_1, \ldots, v_n \in \mathbb{R}^k$ such that $d_{ij} = \|v_i - v_j\|$ for all $ij \in E$. Equivalently, $d$ is realizable in $\mathbb{R}^k$ if $p^{\min} = 0$ for the polynomial $p := \sum_{ij \in E} \big(d_{ij}^2 - \sum_{h=1}^k (x_{ih} - x_{jh})^2\big)^2$ in the variables $x_{ih}$ ($i = 1, \ldots, n$, $h = 1, \ldots, k$). Checking whether $d$ is realizable in $\mathbb{R}^k$ is an NP-complete problem, already for dimension $k = 1$ (Saxe [142]).

Note that the polynomials involved in the above three instances have degree 4. Hence the unconstrained polynomial minimization problem is a hard problem already for degree 4 polynomials, while it is polynomial time solvable for degree 2 polynomials (cf. Section 3.2). The problem (1.1) also contains (0/1) linear programming.

(0/1) Linear programming. Given a matrix $A \in \mathbb{R}^{m \times n}$ and vectors $b \in \mathbb{R}^m$, $c \in \mathbb{R}^n$, the linear programming problem can be formulated as

\[
\min \; c^T x \quad \text{s.t.} \quad Ax \le b,
\]

thus it is of the form (1.1) where the objective function and the constraints are all linear (degree at most 1) polynomials. As is well known, it can be solved in polynomial time (cf. e.g. [146]). If we add the quadratic constraints $x_i^2 = x_i$ ($i = 1, \ldots, n$), we obtain the 0/1 linear programming problem:

\[
\min \; c^T x \quad \text{s.t.} \quad Ax \le b, \; x_i^2 = x_i \;\; \forall i = 1, \ldots, n,
\]

well known to be NP-hard.

The stable set problem. Given a graph $G = (V, E)$, a set $S \subseteq V$ is said to be stable if $ij \notin E$ for all $i, j \in S$.
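Returning to the partition encoding above, here is the promised worked instance (the numbers are chosen for illustration and do not appear in the survey). For $a = (1, 2, 3)$ and the sign vector $x = (1, 1, -1)$,

\[
p(x) = \big(1 \cdot 1 + 2 \cdot 1 + 3 \cdot (-1)\big)^2 + \sum_{i=1}^{3} (x_i^2 - 1)^2 = 0,
\]

so $p^{\min} = 0$ and the sequence is partitioned as $\{1, 2\}$ versus $\{3\}$. By contrast, for $a = (1, 1, 3)$ the sum $x_1 + x_2 + 3x_3$ is odd for every $x \in \{\pm 1\}^3$, so $x^T a$ never vanishes and $p^{\min} > 0$: the polynomial detects that no partition exists.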