Numerical Linear Algebra (Continued)

CS 740, Ajit Rajwade

Eigenvectors and Eigenvalues

• Given an n x n matrix A, a non-zero vector v is said to be an eigenvector of A with eigenvalue λ if Av = λv.
• In other words, when transformed by the matrix A, the vector v shrinks (|λ| < 1) or expands (|λ| > 1) in magnitude, but its direction either remains the same (λ > 0) or flips over (λ < 0). When |λ| = 1, the magnitude of v undergoes no change.
• The preservation of the direction of the vector v when transformed by A gives rise to the name "eigen", which is German for "self".
• Eigenvectors and eigenvalues come in useful in several applications in physics, image and signal processing, and machine learning.
• Here we stick to real matrices, though almost all of the theory is applicable to complex matrices as well.

Non-uniqueness

• Eigenvectors and eigenvalues are not unique. An n x n matrix has n eigenvectors (not necessarily all distinct) and n eigenvalues (not necessarily all distinct).
• An eigenvector, when scaled by an arbitrary non-zero constant, remains an eigenvector with the same eigenvalue. By convention, eigenvectors are normalized to unit magnitude.

Characteristic equation

• The eigenvalues of A can be obtained by finding the roots of the following equation, called the characteristic equation:

$$Av = \lambda v = \lambda I v \;\Rightarrow\; (A - \lambda I)v = 0$$

This is a homogeneous equation, and it has a non-zero solution if and only if $(A - \lambda I)$ is a singular matrix, i.e.

$$\det(A - \lambda I) = 0$$

• This equation has n (not necessarily distinct or real) roots: the eigenvalues of A. The equation can be written as follows:

$$p(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n) = 0$$

• The characteristic equation of an n x n matrix A has degree n.
• It always has n roots; this is a consequence of the fundamental theorem of algebra.
• The roots may not be distinct.
• The roots may not be real even if A is real.

Matrix of eigenvectors and eigenvalues

• Let $A \in \mathbb{R}^{n \times n}$ with $Av_1 = \lambda_1 v_1,\; Av_2 = \lambda_2 v_2,\; \ldots,\; Av_n = \lambda_n v_n$. Stacking the eigenvectors as the columns of a matrix V gives

$$A(v_1 \,|\, v_2 \,|\, \cdots \,|\, v_n) = (\lambda_1 v_1 \,|\, \lambda_2 v_2 \,|\, \cdots \,|\, \lambda_n v_n) = (v_1 \,|\, v_2 \,|\, \cdots \,|\, v_n)\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$

$$AV = V\Lambda$$

Physical example 1

http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors

In this transformation, the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping, and since its length is unchanged, its eigenvalue is 1. The red direction in the second figure is also an eigenvector: it is the direction in which the cloth is stretched, and it has an eigenvalue greater than 1.

• This example helps us design transformation matrices that accomplish a certain task.
• Let's say you wanted to stretch by a factor of 2 in the direction [1,1] and apply no stretch or shrinkage in the direction [1,0].
• We treat these directions as eigenvectors, with eigenvalues 2 and 1 respectively.
• This gives us

$$V = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix},\quad \Lambda = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix},\quad AV = V\Lambda \;\Rightarrow\; A = V\Lambda V^{-1} = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}$$

Physical example 2

• Consider the following matrix, which you will often see in computer graphics:

$$\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

• It is a matrix that rotates a 3D point about the Z axis: the Z coordinate remains unchanged, but the X and Y coordinates change by a rotation in the XY plane.
• In this case, the Z axis, i.e. the vector $(0\ 0\ 1)^t$, is an eigenvector with eigenvalue 1.
• If the axis of rotation changes, the rotation matrix changes. But the axis of rotation will always be an eigenvector of the rotation matrix, with eigenvalue 1.
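Both physical examples are easy to verify numerically. Below is a minimal MATLAB sketch; the matrices come from the slides above, while the variable names and the test angle are our own choices.

```matlab
% Physical example 1: build A with prescribed eigenvectors and eigenvalues.
% Stretch by 2 along [1;1], leave [1;0] unchanged.
V = [1 1; 1 0];         % columns are the desired eigenvectors
L = diag([2 1]);        % corresponding eigenvalues
A = V * L / V;          % A = V*L*inv(V), so that A*V = V*L
disp(A * [1; 1])        % prints 2*[1;1]
disp(A * [1; 0])        % prints [1;0]

% Physical example 2: rotation about the Z axis by an angle t.
t = pi / 3;             % any angle works; the axis is always fixed
R = [cos(t) -sin(t) 0; sin(t) cos(t) 0; 0 0 1];
disp(R * [0; 0; 1])     % prints [0;0;1]: eigenvector with eigenvalue 1
```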
Properties of Eigenvalues and Eigenvectors

• Trace of a matrix = sum of its eigenvalues.
• Determinant of a matrix = product of its eigenvalues (hence a singular matrix has at least one zero eigenvalue).
• A matrix is invertible if and only if all its eigenvalues are non-zero.
• The eigenvalues of $A^{-1}$ are the reciprocals of the eigenvalues of A (can you prove this?).
• The eigenvectors of A and $A^{-1}$ are the same (can you reason why?).
• Every eigenvalue of an orthonormal/unitary matrix has absolute value 1 (why?).
• Complex eigenvalues always occur in conjugate pairs, i.e. if $a + ib$ is an eigenvalue of a real matrix A for eigenvector v, then $a - ib$ will also be an eigenvalue of A, for eigenvector $v^*$ (can you prove this?).

• Given the eigenvalues and eigenvectors of A, what can you say about the eigenvalues and eigenvectors of $A - \sigma I$ for some scalar σ?
• Given the eigenvalues and eigenvectors of A, what can you say about the eigenvalues and eigenvectors of $A^2$ or $A^n$ for some n > 0?
• Note that $A^n = A \cdot A \cdots A$ (A multiplied with itself n-1 times). This is different from MATLAB's element-wise power A.^n, whose (i,j)-th entry is A(i,j)^n.

• The trace and determinant properties follow from the characteristic polynomial:

$$p(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n) = \det(\lambda I - A)$$

$$p(0) = (-1)^n \lambda_1 \lambda_2 \cdots \lambda_n = \det(-A) = (-1)^n \det(A) \;\Rightarrow\; \det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$$

The coefficient of $\lambda^{n-1}$ in $p(\lambda)$ is $-(\lambda_1 + \lambda_2 + \cdots + \lambda_n)$. The coefficient of the term containing $\lambda^{n-1}$ in $\det(\lambda I - A)$ can be proved to be the negative sum of the diagonal entries of A, i.e. $-\mathrm{trace}(A)$. Hence $\mathrm{trace}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.

Eigenvectors and eigenvalues of special matrices

• Diagonal matrix: the eigenvalues are the elements along the diagonal, and the eigenvectors are the column vectors normalized to unit magnitude (i.e. the standard basis vectors).
• Symmetric matrix: the eigenvalues are real (proof), and eigenvectors corresponding to distinct eigenvalues are orthogonal (proof). Hence

$$AV = V\Lambda,\quad A = V\Lambda V^T$$

• The converse is also true, i.e. a matrix with orthonormal eigenvectors and real eigenvalues is symmetric:

$$A = V\Lambda V^T \;\Rightarrow\; A^T = (V\Lambda V^T)^T = V\Lambda V^T = A$$

Computation of Eigenvectors and Eigenvalues

• The following MATLAB commands compute eigenvectors and eigenvalues:
  [V,D] = eig(A) returns two outputs: D is a diagonal matrix containing the eigenvalues, and V is a matrix whose columns are the corresponding right eigenvectors.
  [V,D] = eigs(A) returns a diagonal matrix D of A's six largest-magnitude eigenvalues and a matrix V whose columns are the corresponding eigenvectors.
  [V,D] = eigs(A,k) returns a diagonal matrix D of A's k largest-magnitude eigenvalues and a matrix V whose columns are the corresponding eigenvectors.

• Eigenvalues can be computed by finding the roots of the characteristic equation.
• This is easy for 2 x 2 matrices, but not for larger ones: by the Abel-Ruffini theorem, polynomials of degree five or higher have no closed-form solutions for their roots in general.
• So how do you compute eigenvectors and eigenvalues in practice?

Computing eigenvectors

• Assume the n x n matrix A has a unique eigenvalue λ₁ of largest magnitude. Let the corresponding eigenvector be v₁.
• Consider a vector x₀ which is expressible as a linear combination of the eigenvectors of A, i.e.

$$x_0 = \sum_{i=1}^{n} \alpha_i v_i$$

• Repeated multiplication of A with x₀ converges to a multiple of v₁. Why?

$$x_k = A^k x_0 = \sum_{i=1}^{n} \alpha_i A^k v_i = \sum_{i=1}^{n} \alpha_i \lambda_i^k v_i = \lambda_1^k \left( \alpha_1 v_1 + \sum_{i=2}^{n} \alpha_i \left( \frac{\lambda_i}{\lambda_1} \right)^k v_i \right)$$

Each term $(\lambda_i / \lambda_1)^k$, $i \geq 2$, tends to 0 as k tends to ∞, because λ₁ is the largest-magnitude eigenvalue of A. This method of eigenvector computation is called power iteration.
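In code, power iteration is just a loop of matrix-vector products with renormalization. A minimal MATLAB sketch follows; the test matrix and the fixed iteration count are our own choices, and in practice one would iterate until the change in x is small.

```matlab
% Power iteration: converges to the eigenvector of the largest-magnitude
% eigenvalue, assuming that eigenvalue is unique.
A = [2 1; 1 3];
x = randn(2, 1);        % random start, so alpha_1 = 0 is very unlikely
for k = 1:100
    x = A * x;
    x = x / norm(x);    % renormalize so the iterates do not over/underflow
end
lambda = (x' * A * x) / (x' * x);   % eigenvalue estimate (derived below)
[V, D] = eig(A);        % compare against MATLAB's built-in solver
```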
• Power iteration converges to the eigenvector with the largest-magnitude eigenvalue.
• Caution: The possibility that the starting vector x₀ has no component in the direction v₁ (i.e. that α₁ = 0) is very small, and can be eliminated by slightly and randomly perturbing the starting vector.
• Caution: If there are two or more largest eigenvalues with the same magnitude, the iterates will converge to some linear combination of the corresponding eigenvectors.
• Caution: If the initial vector is real and A is real, the iteration will not converge to a complex eigenvector!

Computation of Eigenvectors and Eigenvalues

• How do you compute the largest eigenvalue given the eigenvector?

$$Av_1 = \lambda_1 v_1 \;\Rightarrow\; v_1^t A v_1 = \lambda_1 v_1^t v_1 \;\Rightarrow\; \lambda_1 = \frac{v_1^t A v_1}{v_1^t v_1}$$

• What if you wanted to compute the eigenvector corresponding to the smallest eigenvalue? Work with $A^{-1}$ instead of A.
• What about the other eigenvectors? There exist more advanced algorithms for those, which we skip in this course. But we handle one special case below.

• Let A be a symmetric matrix, so its eigenvalues are real and its eigenvectors are orthonormal:

$$AV = V\Lambda,\quad A = V\Lambda V^T = \sum_{i=1}^{n} \lambda_i v_i v_i^t$$

• To compute the second eigenvector, work with the following matrix instead of A:

$$A^{(-v_1)} = A - \lambda_1 v_1 v_1^t$$

where v₁ and λ₁ are the estimated first eigenvector and first eigenvalue.
• The eigenvector of $A^{(-v_1)}$ corresponding to its largest eigenvalue is the eigenvector of A corresponding to its second-largest eigenvalue.

• Several more advanced algorithms exist; we skip them in this course.
• These algorithms have ready implementations in a famous library called LAPACK, written in Fortran 90 with C/C++/MATLAB interfaces.

Singular Value Decomposition (SVD)

• For any m x n matrix A, the following decomposition always exists:

$$A = USV^T,\quad A \in \mathbb{R}^{m \times n}$$
$$U^T U = U U^T = I_m,\quad U \in \mathbb{R}^{m \times m}$$
$$V^T V = V V^T = I_n,\quad V \in \mathbb{R}^{n \times n}$$

where $S \in \mathbb{R}^{m \times n}$ is a diagonal matrix with non-negative entries on the diagonal, called the singular values.
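The deflation step and the SVD are both short to try out. A minimal MATLAB sketch follows; the symmetric test matrix is our own, and here eig supplies v₁ and λ₁ for convenience, though power iteration as above would serve.

```matlab
% Deflation for a symmetric matrix, followed by MATLAB's SVD.
A = [4 1 0; 1 3 1; 0 1 2];        % symmetric, so A = V*L*V'
[V, D] = eig(A);                  % reference eigendecomposition
[~, i] = max(abs(diag(D)));       % index of the largest-magnitude eigenvalue
l1 = D(i, i);
v1 = V(:, i);
B = A - l1 * (v1 * v1');          % the deflated matrix A^(-v1)
% Power iteration on B now yields the eigenvector of A corresponding
% to its second-largest-magnitude eigenvalue.

[U, S, W] = svd(A);               % A = U*S*W'; singular values on diag(S)
```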