1 Useful Background Information


In this section of the notes, various definitions and results from calculus, linear/matrix algebra, and least-squares regression will be summarized. I will refer to these items at various times during the semester.

1.1 Taylor Series

1. Let $\eta^{(k)}(x)$ denote the $k$th derivative of the function $\eta(x)$. For a function $\eta$ and a point $x_0$ in some interval $I$, define
$$P_n(x, x_0) = \eta(x_0) + \eta^{(1)}(x_0)(x - x_0) + \eta^{(2)}(x_0)\frac{(x - x_0)^2}{2!} + \cdots + \eta^{(n)}(x_0)\frac{(x - x_0)^n}{n!}$$
$$R_n(x, c) = \eta^{(n+1)}(c)\,\frac{(x - c)^{n+1}}{(n + 1)!}.$$
Then there exists some number $z$ between $x$ and $x_0$ such that $\eta(x) = P_n(x, x_0) + R_n(x, z)$.

2. Taylor series for functions of one variable: If $\eta$ has derivatives of all orders throughout an interval $I$ containing $x_0$ and if $\lim_{n \to \infty} R_n(x, x_0) = 0$ for every $x$ in $I$, then $\eta(x)$ can be represented by the Taylor series about $x_0$ for any $x$ in $I$. That is,
$$\eta(x) = \eta(x_0) + \sum_{k=1}^{\infty} \eta^{(k)}(x_0)\,\frac{(x - x_0)^k}{k!}.$$

3. Note that $P_n(x, x_0)$ is a polynomial of degree $n$. Thus, $P_n(x, x_0)$ is an $n$th-order Taylor series approximation of $\eta(x)$ because $R_n(x, x_0)$ vanishes as $n$ increases.

4. Practically, this means that even if the true form of $\eta(x)$ is unknown, we can use a polynomial $f(x) = P_n(x, x_0)$ to approximate it, with the approximation improving as $n$ increases.

5. In statistics, we may fit a linear model
$$f(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_n x^n.$$
What we are actually doing is fitting
$$f(x) = P_n(x, 0) = \eta(0) + \eta^{(1)}(0)\,x + \eta^{(2)}(0)\,\frac{x^2}{2!} + \cdots + \eta^{(n)}(0)\,\frac{x^n}{n!},$$
where $\beta_0 = \eta(0)$ and $\beta_i = \eta^{(i)}(0)/i!$ for $i = 1, 2, \ldots, n$, and we assume the remainder $R_n(x, 0)$ is negligible. (A numerical sketch of this idea follows item 13 below.)

6. Taylor series can be generalized to higher dimensions. I will only review the 2-dimensional case.

7. For a function $\eta(x, y)$, let $\dfrac{\partial^n \eta}{\partial x^k \, \partial y^{n-k}}$ denote the $n$th-order partial derivative with differentiation taken $k$ times with respect to $x$ and $(n - k)$ times with respect to $y$.

8. If $\eta$ is a function of $(x, y)$ that has partial derivatives of all orders inside a ball $B$ containing $p_0$ and if $\lim_{n \to \infty} R_n(p, p_0) = 0$ for every $p$ in $B$, then $\eta(p)$ can be represented by the 2-variable Taylor series about $p_0$ for any $p$ in $B$.

9. For a function $\eta(x, y)$ and $p_0 = (x_0, y_0)$ in some open ball $B$, define $p = (x, y)$ and
$$\begin{aligned}
P_n(p, p_0) ={}& \eta(p_0) + \frac{(x - x_0)}{1!}\left.\frac{\partial \eta}{\partial x}\right|_{p_0} + \frac{(y - y_0)}{1!}\left.\frac{\partial \eta}{\partial y}\right|_{p_0} \\
&+ \frac{(x - x_0)^2}{2!}\left.\frac{\partial^2 \eta}{\partial x^2}\right|_{p_0} + \frac{(x - x_0)(y - y_0)}{1!\,1!}\left.\frac{\partial^2 \eta}{\partial x\,\partial y}\right|_{p_0} + \frac{(y - y_0)^2}{2!}\left.\frac{\partial^2 \eta}{\partial y^2}\right|_{p_0} \\
&+ \cdots \\
&+ \sum_{k=0}^{n-1} \frac{(x - x_0)^k (y - y_0)^{n-1-k}}{k!\,(n-1-k)!}\left.\frac{\partial^{n-1} \eta}{\partial x^k \, \partial y^{n-1-k}}\right|_{p_0} + \sum_{k=0}^{n} \frac{(x - x_0)^k (y - y_0)^{n-k}}{k!\,(n-k)!}\left.\frac{\partial^{n} \eta}{\partial x^k \, \partial y^{n-k}}\right|_{p_0}
\end{aligned}$$
$$R_n(p, p^*) = \sum_{k=0}^{n+1} \frac{(x - x_0)^k (y - y_0)^{n+1-k}}{k!\,(n+1-k)!}\left.\frac{\partial^{n+1} \eta}{\partial x^k \, \partial y^{n+1-k}}\right|_{p^*}$$
where $p^*$ is a point on the line segment joining $p$ and $p_0$.

10. Taylor series for functions of two variables: There exists some point $p_z$ on the line segment joining $p$ and $p_0$ such that $\eta(p) = P_n(p, p_0) + R_n(p, p_z)$.

11. Note that $P_n(p, p_0)$ is a polynomial of degree $n$ in the variables $x$ and $y$. Thus, $P_n(p, p_0)$ is an $n$th-order Taylor series approximation of $\eta(p)$ because $R_n(p, p_0)$ vanishes as $n$ increases.

12. Practically, this means that even if the true form of $\eta(p)$ is unknown, we can use a polynomial $f(p) = P_n(p, p_0)$ to approximate it, with the approximation improving as $n$ increases.

13. In statistics, we may fit a linear model
$$f(x, y) = \sum_{i=0}^{n} \sum_{j=0}^{n-i} \beta_{ij}\, x^i y^j.$$
What we are actually doing is fitting $f(p) = P_n(p, (0, 0))$, where $\beta_{0,0} = \eta(0, 0)$ and
$$\beta_{ij} = \frac{1}{i!\,j!}\left.\frac{\partial^{i+j} \eta}{\partial x^i \, \partial y^j}\right|_{(0,0)} \quad \text{for } i + j = 1, 2, \ldots, n,$$
and we assume the remainder $R_n(p, (0, 0))$ is negligible.
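The short sketch below (added here for illustration; it is not part of the original notes) makes items 3-5 concrete, assuming Python/NumPy as the language. It uses $\eta(x) = e^x$, so $\eta^{(k)}(0) = 1$ for every $k$; all function and variable names are arbitrary choices for the example.

```python
import math
import numpy as np

# Items 3-5: the Taylor polynomial P_n(x, 0) of a smooth function improves as n
# grows, and a least-squares polynomial fit recovers coefficients close to
# eta^(k)(0)/k! when the remainder is negligible.  Here eta(x) = exp(x), so the
# "true" coefficients are 1/k!.

def taylor_poly(x, n, x0=0.0):
    """Degree-n Taylor polynomial P_n(x, x0) for eta(x) = exp(x)."""
    return sum(math.exp(x0) * (x - x0) ** k / math.factorial(k) for k in range(n + 1))

xs = np.linspace(-1.0, 1.0, 201)
eta = np.exp(xs)

# Items 3-4: the maximum remainder on the interval shrinks as the order n increases.
for n in (1, 2, 4, 8):
    approx = np.array([taylor_poly(xi, n) for xi in xs])
    print(f"n = {n}: max error on [-1, 1] = {np.max(np.abs(eta - approx)):.2e}")

# Item 5: least-squares fitting of a degree-4 polynomial to (noiseless) data from
# eta gives beta_k close to eta^(k)(0)/k! = 1/k!.
betas = np.polynomial.polynomial.polyfit(xs, eta, deg=4)
print("fitted beta_k:", np.round(betas, 4))
print("1/k!:        ", [round(1.0 / math.factorial(k), 4) for k in range(5)])
```

The printed errors shrink as $n$ grows, and the fitted coefficients are close to, but not exactly, $1/k!$, since least squares minimizes average error over the interval rather than matching derivatives at $0$.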
14. On the following page, $f_{12} = \dfrac{\partial^2 f}{\partial x\,\partial y}$, $f_{11} = \dfrac{\partial^2 f}{\partial x^2}$, and $f_{22} = \dfrac{\partial^2 f}{\partial y^2}$. Thus,
$$\Delta = \left(\frac{\partial^2 f}{\partial x\,\partial y}\right)^2 - \frac{\partial^2 f}{\partial x^2}\,\frac{\partial^2 f}{\partial y^2}.$$

1.2 Matrix Theory Terminology and Useful Results

15. If
$$X = \begin{bmatrix}
x_{11} & x_{12} & x_{13} & \cdots & x_{1k} \\
x_{21} & x_{22} & x_{23} & \cdots & x_{2k} \\
x_{31} & x_{32} & x_{33} & \cdots & x_{3k} \\
\vdots & \vdots & \vdots &        & \vdots \\
x_{n1} & x_{n2} & x_{n3} & \cdots & x_{nk}
\end{bmatrix},$$
then the symmetric matrix $X'X$ can be written as
$$X'X = \begin{bmatrix}
\sum_{p=1}^{n} x_{p1}^2 & \sum_{p=1}^{n} x_{p1}x_{p2} & \sum_{p=1}^{n} x_{p1}x_{p3} & \cdots & \sum_{p=1}^{n} x_{p1}x_{pk} \\
 & \sum_{p=1}^{n} x_{p2}^2 & \sum_{p=1}^{n} x_{p2}x_{p3} & \cdots & \sum_{p=1}^{n} x_{p2}x_{pk} \\
 &  & \sum_{p=1}^{n} x_{p3}^2 & \cdots & \sum_{p=1}^{n} x_{p3}x_{pk} \\
\text{symmetric} &  &  & \ddots & \vdots \\
 &  &  &  & \sum_{p=1}^{n} x_{pk}^2
\end{bmatrix}.$$

16. Transpose of a product of two matrices: $(AB)' = B'A'$.

17. Transpose of a product of $k$ matrices: If $B = A_1 A_2 \cdots A_{k-1} A_k$, then $B' = A_k' A_{k-1}' \cdots A_2' A_1'$.

18. The trace of a square matrix $A$, denoted $\mathrm{tr}(A)$, is the sum of the diagonal elements of $A$.

19. For two $k$-square matrices $A$ and $B$, $\mathrm{tr}(A \pm B) = \mathrm{tr}(A) \pm \mathrm{tr}(B)$.

20. Given an $m \times n$ matrix $A$ and an $n \times m$ matrix $B$, $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.

21. The rank of a matrix $A$, denoted $\mathrm{rank}(A)$, is the number of linearly independent rows (or columns) of $A$.

22. If the determinant is nonzero for at least one matrix formed from $r$ rows and $r$ columns of matrix $A$, but no matrix formed from $r + 1$ rows and $r + 1$ columns of $A$ has a nonzero determinant, then the rank of $A$ is $r$.

23. Consider a $k$-square matrix $A$ with $\mathrm{rank}(A) = k$. The $k$-square matrix $A^{-1}$ satisfying $AA^{-1} = A^{-1}A = I_k$ is called the inverse matrix of $A$.

24. A $k$-square matrix $A$ is singular if $A$ is not invertible. This is equivalent to saying $|A| = 0$ or $\mathrm{rank}(A) < k$.

25. Any nonsingular square matrix (i.e., one whose determinant $\neq 0$) has a unique inverse.

26. In the use of least squares as an estimation procedure, it is often required to invert matrices which are symmetric. The inverse matrix is also important as a means of solving sets of simultaneous independent linear equations. If the set of equations is not independent, there is no unique solution.

27. The set of $k$ linearly independent equations
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1k}x_k &= g_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2k}x_k &= g_2 \\
&\;\;\vdots \\
a_{k1}x_1 + a_{k2}x_2 + \cdots + a_{kk}x_k &= g_k
\end{aligned}$$
can be written in matrix form as $Ax = g$. Thus, the solution is $x = A^{-1}g$.

28. If $A = \mathrm{diag}(a_1, a_2, \ldots, a_k)$ is a diagonal matrix with nonzero diagonal elements $a_1, a_2, \ldots, a_k$, then $A^{-1} = \mathrm{diag}(1/a_1, 1/a_2, \ldots, 1/a_k)$ is a diagonal matrix with diagonal elements $1/a_1, 1/a_2, \ldots, 1/a_k$.

29. If $S$ is a nonsingular symmetric matrix, then $(S^{-1})' = S^{-1}$. Thus, the inverse of a nonsingular symmetric matrix is itself symmetric.

30. A square matrix $A$ is idempotent if $A^2 = A$.

31. A nonsingular $k$-square matrix $P$ is orthogonal if $P' = P^{-1}$, or equivalently, $PP' = I_k$.

32. Suppose $P$ is a $k$-square orthogonal matrix, $x$ is a $k \times 1$ vector, and $y = Px$ is a $k \times 1$ vector. The transformation $y = Px$ is called an orthogonal transformation.

33. If $y = Px$ is an orthogonal transformation, then $y'y = x'P'Px = x'x$. (Several of these matrix results are checked numerically in the sketch following item 38.)

1.3 Eigenvalues, Eigenvectors, and Quadratic Forms

34. If $A$ is a $k$-square matrix and $\lambda$ is a scalar variable, then $A - \lambda I_k$ is called the characteristic matrix of $A$.

35. The determinant $|A - \lambda I_k| = h(\lambda)$ is called the characteristic function of $A$.

36. The roots of the equation $h(\lambda) = 0$ are called the characteristic roots or eigenvalues of $A$.

37. Suppose $\lambda^*$ is an eigenvalue of a $k$-square matrix $A$. Then an eigenvector associated with $\lambda^*$ is a column vector $x$ which is a solution to $Ax = \lambda^* x$, or $(A - \lambda^* I_k)x = 0$.

38. An important use of eigenvalues and eigenvectors in response surface methodology is in the application to problems of finding optimum experimental conditions.
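As a quick numerical check (an added illustration, not part of the original notes), the following Python/NumPy sketch verifies items 15, 20, 27, 31-33, and 34-37 on small matrices. The matrices and vectors are arbitrary values chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Item 15: X'X is symmetric (its entries are sums of squares and cross-products).
X = rng.normal(size=(10, 3))
XtX = X.T @ X
print(np.allclose(XtX, XtX.T))                       # True

# Item 20: tr(AB) = tr(BA) for an m x n matrix A and an n x m matrix B.
A = rng.normal(size=(4, 6))
B = rng.normal(size=(6, 4))
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True

# Item 27: k independent equations Ax = g have the unique solution x = A^{-1} g.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g = np.array([3.0, 5.0])
x = np.linalg.solve(A, g)                            # numerically preferable to forming A^{-1}
print(np.allclose(A @ x, g))                         # True

# Items 31-33: an orthogonal P satisfies PP' = I, and y = Px preserves x'x.
P, _ = np.linalg.qr(rng.normal(size=(3, 3)))         # Q factor of a QR decomposition is orthogonal
v = rng.normal(size=3)
y = P @ v
print(np.allclose(P @ P.T, np.eye(3)))               # True
print(np.isclose(y @ y, v @ v))                      # True

# Items 34-37: eigenvalues and eigenvectors of a square matrix satisfy A x = lambda x.
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams, vecs = np.linalg.eig(M)
print(np.allclose(M @ vecs, vecs * lams))            # True (column j pairs with lams[j])
```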
39. The quadratic form in $k$ variables $x_1, x_2, \ldots, x_k$ is
$$Q = \sum_{i=1}^{k} b_{ii} x_i^2 + 2 \sum_{i<j} b_{ij} x_i x_j \qquad (1)$$
where we assume the elements $b_{ij}$ ($i = 1, \ldots, k$; $j = 1, \ldots, k$) are real-valued.

40. In matrix notation, $Q = x'Bx$, where
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{bmatrix}
\qquad
B = \begin{bmatrix}
b_{11} & b_{12} & \cdots & b_{1k} \\
 & b_{22} & \cdots & b_{2k} \\
\text{symmetric} &  & \ddots & \vdots \\
 &  &  & b_{kk}
\end{bmatrix}.$$

41. $B$ and $|B|$ are, respectively, called the matrix and the determinant of the quadratic form $Q$.
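To tie items 34-41 together (an added illustration, not part of the original notes), the Python/NumPy sketch below evaluates a quadratic form both as $x'Bx$ and via the scalar formula (1), and inspects the eigenvalues of the symmetric matrix $B$. The matrix $B$ and the point $x$ are arbitrary values chosen for the example.

```python
import numpy as np

# Items 39-41: a quadratic form Q = x'Bx with a symmetric matrix B.
B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
x = np.array([1.0, -2.0, 0.5])

# Matrix form of the quadratic form ...
Q_matrix = x @ B @ x

# ... and the equivalent scalar form (1): sum_i b_ii x_i^2 + 2 sum_{i<j} b_ij x_i x_j.
k = len(x)
Q_scalar = sum(B[i, i] * x[i] ** 2 for i in range(k)) + \
           2 * sum(B[i, j] * x[i] * x[j] for i in range(k) for j in range(i + 1, k))
print(np.isclose(Q_matrix, Q_scalar))    # True

# Items 34-36: the eigenvalues of B are the roots of |B - lambda I| = 0; eigvalsh
# handles the symmetric case.  If all eigenvalues are positive, Q > 0 for every
# nonzero x.
lams = np.linalg.eigvalsh(B)
print(lams, (lams > 0).all())
```

The signs of these eigenvalues are what indicate, in response surface work, whether a stationary point corresponds to a maximum, a minimum, or a saddle point, which is the use alluded to in item 38.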