Permuted Orthogonal Block-Diagonal Transformation Matrices for Large Scale Optimization Benchmarking
Ouassim Ait Elhara, Anne Auger, Nikolaus Hansen


To cite this version: Ouassim Ait Elhara, Anne Auger, Nikolaus Hansen. Permuted Orthogonal Block-Diagonal Transformation Matrices for Large Scale Optimization Benchmarking. GECCO 2016, Jul 2016, Denver, United States. pp. 189-196, 10.1145/2908812.2908937. hal-01308566v3

HAL Id: hal-01308566, https://hal.inria.fr/hal-01308566v3. Submitted on 11 May 2016.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Ouassim Ait ElHara (LRI, Univ. Paris-Sud; TAO Team, Inria; Université Paris-Saclay), Anne Auger (Inria; LRI, Univ. Paris-Sud; Université Paris-Saclay), Nikolaus Hansen (Inria; LRI, Univ. Paris-Sud; Université Paris-Saclay)

ABSTRACT
We propose a general methodology to construct large-scale testbeds for the benchmarking of continuous optimization algorithms. Our approach applies to raw functions an orthogonal transformation that involves only a linear number of operations, in order to obtain large-scale optimization benchmark problems. The orthogonal transformation is sampled from a parametrized family of transformations that are the product of a permutation matrix times a block-diagonal matrix times a permutation matrix. We investigate the impact of the different parameters of the transformation on its shape and on the difficulty of the problems for separable CMA-ES. We illustrate the use of the above-defined transformation in the BBOB-2009 testbed as a replacement for the expensive orthogonal (rotation) matrices. We also show the practicability of the approach by studying its computational cost and its applicability in a large-scale setting.

Keywords
Large Scale Optimization; Continuous Optimization; Benchmarking.
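To make the abstract's "linear number of operations" claim concrete, the following sketch contrasts the cost of applying a full orthogonal matrix with that of a block-diagonal one. It is an illustration only: d = 640 matches the largest dimension targeted below, while the block size s = 40 is a hypothetical choice, not a value prescribed by the paper.

```python
import numpy as np

d, s = 640, 40   # dimension (targeted below); block size (hypothetical)
rng = np.random.default_rng(42)
x = rng.standard_normal(d)

# Full orthogonal matrix: d*d multiplications per application -> O(d^2).
Q_full, _ = np.linalg.qr(rng.standard_normal((d, d)))
z_full = Q_full @ x                       # 640 * 640 = 409,600 multiplications

# Block-diagonal orthogonal matrix with d/s orthogonal blocks of size s:
# (d/s) * s^2 = d*s multiplications -> linear in d for a fixed block size.
blocks = [np.linalg.qr(rng.standard_normal((s, s)))[0] for _ in range(d // s)]
z_block = np.concatenate([B @ x[i*s:(i+1)*s] for i, B in enumerate(blocks)])
                                          # 16 * 40^2 = 25,600 multiplications
```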
1. INTRODUCTION
The general context of this paper is numerical optimization in a so-called black-box scenario. More precisely, one is interested in estimating the optimum of a function,

    x_opt = arg min f(x) ,   (1)

with x ∈ S ⊆ R^d and d ≥ 1 the dimension of the problem. The black-box scenario means that the only information on f that an algorithm can use is the objective function value f(x) at a queried point x. There are two conflicting objectives: (i) estimating x_opt as precisely as possible with (ii) the least possible cost, that is, the smallest number of function evaluations.

Benchmarking is a compulsory task to evaluate the performance of optimization algorithms designed to solve (1). In the context of optimization, it consists in running an algorithm on a set of test functions and extracting different performance measures from the generated data. To be meaningful, the functions on which the algorithms are tested should relate to real-world problems. The COCO platform is a benchmarking platform for comparing continuous optimizers [3]. In COCO, the philosophy is to provide meaningful and quantitative performance measures. The test functions are comprehensible, modeling well-identified difficulties encountered in real-world problems such as ill-conditioning, multi-modality, non-separability, skewness, and the lack of a structure that can be exploited.

In this paper, we are interested in designing a large-scale test suite for benchmarking large-scale optimization algorithms. For this purpose we build on the BBOB-2009 testbed of the COCO platform. We consider a continuous optimization problem to be of large scale when the number of variables is in the hundreds or thousands. Here, we start by extending the dimensions used in COCO (up to 40) and consider values of d up to at least 640.

There have already been some attempts at large-scale continuous optimization benchmarking. The CEC special sessions and competitions on large-scale global optimization have extended some of the standard functions to the large-scale setting [9, 8]. In the CEC large-scale testbed, four classes of functions are introduced depending on their level of separability: (a) separable functions; (b) partially separable functions with a single group of dependent variables; (c) partially separable functions with several independent groups, each group comprised of dependent variables; and (d) fully non-separable functions. On the partially separable functions, variables are divided into groups and the overall fitness is defined as the sum of the fitness of one or more functions applied to each of these groups independently. Dependencies are obtained by applying an orthogonal matrix to the variables of a group (a minimal sketch of this construction follows at the end of this introduction). By limiting the sizes of the groups, the orthogonal matrices remain of reasonable size, which allows their application in a large-scale setting.

The approach we consider in this paper is to start with the BBOB-2009 testbed [6] from the COCO platform and replace the transformations that do not scale linearly with the problem dimension d with more efficient variants, whilst trying to conserve these functions' properties.

The paper is organized as follows: in Section 2 we motivate and introduce our particular transformation matrix for large-scale benchmarking and identify its different parameters. Section 3 treats the effects of the different parameters on the transformed problem and its difficulty. We choose possible default values for the parameters in Section 4 and give some implementation details and complexity measures in Section 5, where we also illustrate how the transformation can be applied in COCO. We sum up in Section 6.
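The CEC group-based construction described above lends itself to a compact sketch. What follows is a minimal, hypothetical rendition of class (c) — several independent groups of dependent variables, each with its own fixed orthogonal matrix; the group size, the QR-based sampling of the matrices, and the ellipsoid base function are illustrative choices, not the CEC definitions.

```python
import numpy as np

def random_orthogonal(k, rng):
    # QR decomposition of a Gaussian matrix yields an orthogonal factor Q
    Q, R = np.linalg.qr(rng.standard_normal((k, k)))
    return Q * np.sign(np.diag(R))        # sign fix: uniformly random rotation

def ellipsoid(z):
    # Raw ellipsoid on a group of size k (k > 1), cf. Section 2.1 below
    k = len(z)
    return sum(10 ** (6 * i / (k - 1)) * z[i] ** 2 for i in range(k))

def make_class_c(d, group_size, base_fn, rng):
    """Class (c): fitness is the sum of base_fn over independent groups;
    a fixed per-group orthogonal matrix creates the dependencies."""
    starts = range(0, d, group_size)
    mats = {s0: random_orthogonal(min(group_size, d - s0), rng) for s0 in starts}
    def f(x):
        return sum(base_fn(R @ x[s0:s0 + R.shape[0]]) for s0, R in mats.items())
    return f

rng = np.random.default_rng(0)
f = make_class_c(d=1000, group_size=50, base_fn=ellipsoid, rng=rng)
print(f(rng.standard_normal(1000)))       # one black-box evaluation
```

Because each matrix is only group_size × group_size, one evaluation costs O(d · group_size) rather than O(d^2), which is what keeps this construction usable in the large-scale setting.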
2. BBOB-2009 RATIONALE AND THE CASE FOR LARGE SCALE
Our building of large-scale functions within the COCO framework is based on the BBOB-2009 testbed [6]. We explain in this section how this testbed is built and how we intend to make it large-scale friendly.

2.1 The BBOB-2009 Testbed
The BBOB-2009 testbed relies on a number of raw functions from which 24 different problems are generated. Here, the notion of raw function designates a function in its basic form, applied to a non-transformed (canonical basis) search space. For example, we call the raw ellipsoid function the convex quadratic function with a diagonal Hessian matrix whose ith element equals 2 × 10^{6(i-1)/(d-1)}, that is,

    f_Elli(x) = sum_{i=1}^{d} 10^{6(i-1)/(d-1)} x_i^2 .

On the other hand, the separable ellipsoid function used in BBOB-2009 (f2) sees the search space transformed by a translation and an oscillation T_osz (4); that is, it is called on z = T_osz(x - x_opt) instead of x, with x_opt ∈ R^d. Table 1 shows normalized and generalized examples of such widely used raw functions.

The 24 BBOB-2009 test functions are obtained by applying a series of transformations to such raw functions. These transformations serve two main purposes: (i) obtain non-trivial problems that cannot be solved by simply exploiting some of their properties (separability, optimum at a fixed position, ...), and (ii) allow different instances, ideally of similar difficulty, of a same problem to be generated.

In order to introduce non-separability and coordinate-system independence, another transformation consists in applying an orthogonal matrix to the search space:

    x ↦ z = Rx ,

with R ∈ R^{d×d} an orthogonal matrix.

If we take the Rastrigin function (f15) from [6] (and normalize it in our case), it combines all the transformations defined above:

    f15(x) = f_raw^{Rastrigin}(z) + f_opt ,   (5)

where z = R Λ^{10} Q T_asy^{0.2}(T_osz(R(x - x_opt))), f_raw^{Rastrigin} is as defined in Table 1, and R and Q are two d × d orthogonal matrices.

2.2 Extension to Large Scale
Complexity-wise, all the transformations above, bar the application of an orthogonal matrix, can be computed in linear time, and thus remain applicable in practice in a large-scale setting. The problems in COCO are defined, in most cases, by combining the above-defined transformations. Additional transformations are of linear cost and can be applied in a large-scale setting without affecting the computational complexity.

Our idea is to derive a computationally feasible large-scale optimization test suite from the BBOB-2009 testbed [6], while preserving the main characteristics of the original functions. To achieve this goal, we replace the computationally expensive transformations identified above, namely full orthogonal matrices, with orthogonal transformations of linear computational complexity: permuted orthogonal block-diagonal matrices.

2.2.1 Objectives
The main properties we want the replacement transformation …
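The construction announced in Section 2.2 can be sketched as follows. This is a simplified, assumption-laden version — equal block sizes, uniformly random permutations, and QR-sampled orthogonal blocks; the paper parametrizes the blocks and permutations more finely (Sections 3-5).

```python
import numpy as np

def random_orthogonal(k, rng):
    Q, R = np.linalg.qr(rng.standard_normal((k, k)))
    return Q * np.sign(np.diag(R))

def permuted_block_diagonal(d, s, rng):
    """R = P_left B P_right: permutation x block-diagonal x permutation.
    Equal block sizes and uniform permutations are simplifying assumptions."""
    B = np.zeros((d, d))
    for i in range(d // s):
        B[i*s:(i+1)*s, i*s:(i+1)*s] = random_orthogonal(s, rng)
    P_left = np.eye(d)[rng.permutation(d)]    # permutes the rows of B
    P_right = np.eye(d)[rng.permutation(d)]   # permutes the columns of B
    return P_left @ B @ P_right

rng = np.random.default_rng(3)
R = permuted_block_diagonal(d=8, s=2, rng=rng)
assert np.allclose(R @ R.T, np.eye(8))   # product of orthogonal factors stays orthogonal

# In practice one would never form the dense matrix: storing the blocks and the
# two permutation vectors lets z = Rx be computed in O(d*s) operations.
```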