Anisotropic Scaling Matrices and Subdivision Schemes


Anisotropic Scaling Matrices and Subdivision Schemes

Mira Bozzini (1), Milvia Rossini (2), Tomas Sauer (3) and Elena Volontè (4)

1 Università degli studi Milano-Bicocca, Italy, [email protected]
2 Università degli studi Milano-Bicocca, Italy, [email protected]
3 Universität Passau, Germany, [email protected]
4 Università degli studi Milano-Bicocca, Italy, [email protected]

Abstract

Unlike wavelets, shearlets have the capability to detect directional discontinuities together with their directions. To achieve this, the scaling matrices considered have to be not only expanding, but also anisotropic. Shearlets allow for the definition of a directional multiple multiresolution analysis, where perfect reconstruction of the filterbank can easily be ensured by choosing an appropriate interpolatory multiple subdivision scheme. A drawback of shearlets is their relatively large determinant, which leads to substantial complexity. The aim of this paper is to find scaling matrices in Z^{d×d} which share the properties of shearlet matrices, i.e. anisotropic expanding matrices with the so-called slope resolution property, but with a smaller determinant. The proposed matrices provide a directional multiple multiresolution analysis whose behaviour is illustrated by some numerical tests on images.

1 Introduction

Shearlets were introduced by [6] as a continuous transformation and quite immediately became a popular tool in signal processing (see e.g. [7]) due to the possibility of detecting directional features in the analysis process. In the shearlet context, the scaling is performed by matrices of the form

    M = D_2 S,   D_2 = \begin{pmatrix} 4I_p & 0 \\ 0 & 2I_{d-p} \end{pmatrix} \in R^{d×d},   0 < p < d,   (1)

and

    S = \begin{pmatrix} I_p & * \\ 0 & I_{d-p} \end{pmatrix}.   (2)

The diagonal matrix D_2 results in the anisotropy of the resulting shearlets, while the so-called shear matrix S is responsible for the directionality of the shearlets.
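As a quick numerical illustration (not from the paper), the d = 2, p = 1 instance of (1) and (2) with the shear entry * set to 1 can be assembled and checked in a few lines of Python with NumPy; the names D2, S and M are ad hoc:

```python
import numpy as np

# Shearlet scaling matrix for d = 2, p = 1: M = D2 S, with shear entry 1.
D2 = np.diag([4, 2])
S = np.array([[1, 1],
              [0, 1]])
M = D2 @ S  # [[4, 4], [0, 2]]

det_M = round(np.linalg.det(M))          # 8 = 2^(p+d): the large determinant
eigenmoduli = np.abs(np.linalg.eigvals(M))

print(M.tolist())  # [[4, 4], [0, 2]]
print(det_M)       # 8
print(eigenmoduli) # moduli 4 and 2: all exceed 1, so M is expanding
```

The eigenvalues 4 and 2 differ in modulus, which is exactly the anisotropy of D_2 surviving the (unimodular) shear factor.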
With a proper mix of these two properties, shearlets give a wavelet-like construction that is suited to anisotropic problems such as the detection of singularities along curves or wavefront resolution [7, 8, 3]. In contrast to other approaches, such as curvelets or ridgelets, shearlets admit, besides a transform, also a suitably extended version of the multiresolution analysis and therefore an efficient implementation in terms of filterbanks. This is the concept of a Multiple Multiresolution Analysis (MMRA) [8]. As for the classical multiresolution analysis, the MMRA can be connected to a convergent subdivision scheme, or more precisely to a multiple subdivision scheme [13]. The refinement in a multiple subdivision scheme is controlled by choosing different scaling matrices from a finite dictionary (M_j : j ∈ Z_s), where we use the convenient abbreviation Z_s := {0, ..., s−1}. If those scaling matrices are products of a common anisotropic dilation matrix with different shears, the resulting shearlet matrices yield a directionally adapted analysis of a signal. In particular, if we set M_j = D_2 S_j, j ∈ Z_s, an n-fold product of such matrices,

    M_ε := M_{ε_n} ··· M_{ε_1},   ε ∈ Z_s^n,

allows us to explore singularities along a direction directly connected to the sequence ε. In the shearlet case, this connection is essentially a binary expansion of the slope [8], while for more general matrices the relationship can be more complicated [2]. The key ingredient for the construction of a multiple subdivision scheme and its associated filterbanks [12] is the dictionary of scaling matrices. It turns out that, in order to have a well-defined (interpolatory) multiple subdivision scheme and perfect reconstruction filterbanks, we need to consider matrices (M_j : j ∈ Z_s) that are expanding individually and also jointly expanding when considered all together.
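To make the n-fold products M_ε concrete, here is a small Python sketch (illustrative, not the paper's code) with s = 2 matrices M_j = D_2 S_j, taking S_0 as the identity and S_1 as the unit shear, and multiplying along a digit sequence ε:

```python
import numpy as np
from itertools import product

D2 = np.diag([4, 2])

def shear(k):
    """2x2 shear matrix with off-diagonal entry k."""
    return np.array([[1, k],
                     [0, 1]])

# Dictionary of s = 2 scaling matrices M_j = D2 S_j, j in Z_2.
Ms = [D2 @ shear(0), D2 @ shear(1)]

def M_eps(eps):
    """n-fold product M_eps = M_{eps_n} ... M_{eps_1} for eps = (eps_1, ..., eps_n)."""
    out = np.eye(2, dtype=int)
    for j in eps:          # later digits multiply from the left
        out = Ms[j] @ out
    return out

for eps in product(range(2), repeat=2):
    print(eps, M_eps(eps).tolist())
```

Different digit sequences ε produce products with different off-diagonal entries, i.e. different effective slopes, which is the mechanism behind the directional adaptation described above.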
Shearlet scaling matrices quite automatically satisfy these properties, and they are also able to catch all the directions in space due to the so-called slope resolution property. This means that any possible direction can be generated by applying M_ε to a given reference direction. Therefore, any transform that analyses singularities across this reference direction in the unsheared case analyses singularities across arbitrary lines for an appropriate ε.

The drawback of the discrete shearlets is the huge determinant of the scaling matrix, which is at least 2^{p+d} and increases dramatically with the dimension d. The determinant of the scaling matrix gives the number of analysis and synthesis filters needed in the critically sampled filterbank, so it is related to the efficiency and the computational cost of any implementation. Therefore, it is worthwhile to try to reduce, or even better minimize, it. The aim of this paper is to present a family of matrices with lower determinant that in any dimension d enjoys all the valuable properties of shearlets by essentially just giving up the requirement that the scaling has to be parabolic. This was also the aim of [2], where the authors present a set of matrices in Z^{2×2} with minimum determinant among anisotropic integer valued matrices. In contrast to that, here we consider matrices M ∈ Z^{d×d} which are the product of an anisotropic diagonal matrix of the form

    D = \begin{pmatrix} aI_p & 0 \\ 0 & bI_{d-p} \end{pmatrix},   a ≠ b > 1,   0 < p < d,   (3)

with a shear matrix. The choice to work with matrices of this form is motivated by the desire to preserve another feature of the shearlet matrices: the so-called pseudo-commuting property [1]. Even if this property is not necessary to construct a directional multiresolution analysis, it is very useful for giving an explicit expression for the iterated matrices M_ε, ε ∈ Z_s^n, and the associated refined grid M_ε^{-1} Z^d.
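Since a shear is unimodular, the determinant of such a product is just det D = a^p b^{d−p}. A tiny hedged sketch (function name mine) comparing the shearlet filter count 2^{p+d} with this alternative family:

```python
# det M = det D = a^p * b^(d-p), since the shear factor has determinant 1.
def filter_count(a, b, p, d):
    """Number of filters in a critically sampled filterbank: |det M|."""
    return a**p * b**(d - p)

d, p = 2, 1
print(filter_count(4, 2, p, d))  # shearlet scaling (1): 8 filters
print(filter_count(3, 2, p, d))  # anisotropic choice a = 3, b = 2: 6 filters
```

Already in dimension 2 the gap is 8 versus 6 filters, and it widens as d grows.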
In this setting, the matrices with minimal determinant are

    D = \begin{pmatrix} 3I_p & 0 \\ 0 & 2I_{d-p} \end{pmatrix}.   (4)

In this paper, we consider exactly this type of matrices and we prove that this new family has all the desired properties: they are expanding, jointly expanding, and provide the slope resolution property as well as the pseudo-commuting property. Furthermore, it is possible to generate convergent multiple subdivision schemes and filterbanks related to these matrices.

The paper is organized as follows. In Section 2, we recall some background material. Sections 2.2 and 2.3 introduce the concepts of MMRA and the related construction of multiple subdivision schemes and filterbanks for a general set of jointly expanding matrices. In Section 2.4 we study the properties of shearlet scaling matrices, and in Section 2.5 we recall some theorems by [1] that show which unimodular matrices pseudo-commute with an anisotropic diagonal matrix in dimension d. This allows us to find pseudo-commuting matrices. Then, in Section 3, we introduce our choice of scaling matrices for a general dimension d. These matrices are pseudo-commuting matrices and we prove that they satisfy all the requested properties. Finally, in Section 4, a few examples illustrate how our bidimensional matrices work on images.

2 Background material

In this section, we recall some of the fundamental concepts needed for MMRAs.

2.1 Basic notation and definitions

All concepts that follow are based on expanding matrices, which are the key ingredient and fix the way to refine the given data, hence to manipulate them. A matrix M ∈ Z^{d×d} is called a dilation or expanding matrix for Z^d if all its eigenvalues are greater than one in absolute value or, equivalently, if ‖M^{−n}‖ → 0 for some matrix norm as n increases. These properties imply that |det M| is an integer ≥ 2. A matrix is said to be isotropic if all its eigenvalues have the same modulus, and is called anisotropic if this is not the case.
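The eigenvalue criteria above translate directly into code; a minimal numerical check (function names mine, using NumPy eigenvalues) for the expanding and anisotropy properties:

```python
import numpy as np

def is_expanding(M):
    """All eigenvalues of M have modulus > 1 (dilation matrix criterion)."""
    return bool(np.all(np.abs(np.linalg.eigvals(M)) > 1))

def is_isotropic(M, tol=1e-9):
    """All eigenvalues of M share the same modulus."""
    moduli = np.abs(np.linalg.eigvals(M))
    return bool(moduli.max() - moduli.min() < tol)

M = np.array([[3, 3],
              [0, 2]])  # diag(3, 2) from (4) times a unit shear
print(is_expanding(M))  # True: eigenvalues 3 and 2 both exceed 1
print(is_isotropic(M))  # False: different moduli, hence anisotropic
```

The same checks applied to diag(2, 2) would return True for both, confirming that isotropic dilations are a genuinely different class.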
In this paper we consider a set of s expanding matrices, denoted by (M_j : j ∈ Z_s), where Z_s := {0, ..., s − 1}.

In order to recover the lattice Z^d from the shifts of the sublattice MZ^d, we have to consider the cosets

    Z_ξ := ξ + MZ^d := {ξ + Mk : k ∈ Z^d},   ξ ∈ M[0, 1)^d ∩ Z^d,

where ξ is called the representer of the coset. Any two cosets are either identical or disjoint, and the union of all cosets gives Z^d; the number of cosets is |det M|.

For the definition of a subdivision scheme, we have to fix some notation for infinite sequences. The space of real valued bi-infinite sequences indexed by Z^d or, equivalently, the space of all functions Z^d → R, is denoted by ℓ(Z^d). By ℓ_p(Z^d), 1 ≤ p < ∞, we denote the sequences such that

    ‖c‖_{ℓ_p(Z^d)} := ( ∑_{α ∈ Z^d} |c(α)|^p )^{1/p} < ∞,

and we use ℓ_∞(Z^d) for all bounded sequences. Finally, ℓ_00(Z^d) is the set of sequences from Z^d to R with finite support.

2.2 Multiple subdivision schemes and MMRA

The concept of multiresolution analysis provides the theoretical background needed in the discrete wavelet context. Unlike curvelets and ridgelets, shearlets allow us to define and use a fully discrete transform based on cascaded filterbanks using the concept of MMRA. Here we recall the basic aspects of MMRA and show when and how it is possible to generate a multiple multiresolution analysis with different families of scaling matrices.

A subdivision operator S : ℓ(Z^d) → ℓ(Z^d) based on a mask a ∈ ℓ_00(Z^d) maps a sequence c ∈ ℓ(Z^d) to the sequence given by

    Sc = ∑_{α ∈ Z^d} a(· − Mα) c(α),

which is then interpreted as a function on the finer grid M^{−1}Z^d.
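To see the subdivision operator in action, here is a sketch of one subdivision step for finitely supported data stored as dicts over Z^d; the hat mask, the dict layout and all names are mine, not from the paper:

```python
import numpy as np

def subdivision_step(c, a, M):
    """One step (S c)(beta) = sum_alpha a(beta - M alpha) c(alpha).

    c and a are dicts mapping integer tuples in Z^d to floats; the result
    lives on the finer grid M^{-1} Z^d, indexed again by Z^d.
    """
    out = {}
    for alpha, c_val in c.items():
        Malpha = M @ np.array(alpha)
        for mu, a_val in a.items():
            beta = tuple(int(x) for x in Malpha + np.array(mu))
            out[beta] = out.get(beta, 0.0) + a_val * c_val
    return out

# Toy 1D example: dyadic refinement (M = (2)) with the interpolatory hat mask.
M = np.array([[2]])
hat = {(-1,): 0.5, (0,): 1.0, (1,): 0.5}
delta = {(0,): 1.0}
print(subdivision_step(delta, hat, M))  # {(-1,): 0.5, (0,): 1.0, (1,): 0.5}
```

Iterating the step refines the data on ever finer grids M^{-n} Z^d; for an interpolatory mask the values on the coarse grid are reproduced exactly, as the delta example shows at index 0.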
An algebraic tool for studying subdivision operators is the symbol, defined as the Laurent polynomial whose coefficients are the values of the mask a,

    a^*(z) = ∑_{α ∈ Z^d} a(α) z^α,   z ∈ (C \ {0})^d.

As shown by [13], it is always possible to generate an MMRA for an arbitrary family of jointly expanding scaling matrices (M_j : j ∈ Z_s) in the shearlet context for an arbitrary dimension d, based on a generalization of subdivision schemes: multiple subdivision.
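As a small illustration (the evaluation function and the reuse of the 1D hat mask are mine), the symbol of a finitely supported mask can be evaluated directly from its coefficients:

```python
def symbol(a, z):
    """Evaluate a*(z) = sum_alpha a(alpha) z^alpha at a point z in (C \\ {0})^d."""
    total = 0.0 + 0.0j
    for alpha, coeff in a.items():
        term = complex(coeff)
        for z_i, a_i in zip(z, alpha):
            term *= z_i ** a_i  # z^alpha = z_1^{alpha_1} ... z_d^{alpha_d}
        total += term
    return total

# 1D interpolatory hat mask for dyadic refinement (M = (2)).
hat = {(-1,): 0.5, (0,): 1.0, (1,): 0.5}
print(symbol(hat, (1.0 + 0j,)))   # a*(1) = 2, the sum of the mask
print(symbol(hat, (-1.0 + 0j,)))  # a*(-1) = 0 for this mask
```

The values at the points ±1 reflect the usual sum rules for a convergent dyadic scheme: the mask coefficients over each of the |det M| = 2 cosets sum to 1.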