6 Inner Product Spaces

You should be familiar with the scalar or dot product in $\mathbb{R}^2$ and with its basic properties. For any vectors $x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ and $y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$, define
$$x \cdot y := x_1 y_1 + x_2 y_2$$

• The norm or length of a vector is $\|x\| = \sqrt{x \cdot x}$.
• The angle $\theta$ between vectors satisfies $x \cdot y = \|x\| \, \|y\| \cos\theta$.
• Vectors are orthogonal or perpendicular precisely when $x \cdot y = 0$.

The dot product is precisely what allows us to compute lengths of and angles between vectors in $\mathbb{R}^2$. An inner product is an algebraic structure that generalizes this idea to other vector spaces.

6.1 Inner Products and Norms

Definition 6.1. An inner product on a vector space $V$ over $\mathbb{R}$ is a positive definite, symmetric, bilinear form; that is, a function $\langle\,,\,\rangle : V \times V \to \mathbb{R}$ satisfying the following properties: $\forall \lambda \in \mathbb{R},\ x, y, z \in V$,

(a) Positive definiteness: $x \neq 0 \implies \langle x, x \rangle > 0$
(b) Symmetry: $\langle y, x \rangle = \langle x, y \rangle$
(c) Bilinearity: $\langle \lambda x + y, z \rangle = \lambda \langle x, z \rangle + \langle y, z \rangle$ and $\langle z, \lambda x + y \rangle = \lambda \langle z, x \rangle + \langle z, y \rangle$

A real vector space equipped with an inner product is a (real) inner product space.

The norm or length of $x \in V$ is $\|x\| := \sqrt{\langle x, x \rangle}$. A unit vector has $\|x\| = 1$. Vectors $x, y$ are perpendicular or orthogonal if $\langle x, y \rangle = 0$. Vectors are orthonormal if they are orthogonal unit vectors.

The bilinearity condition is worth unpacking slightly:

• An inner product is linear in both slots: if $z \in V$ is constant, then we have two linear maps $T_z, U_z : V \to \mathbb{R}$ defined by $T_z(x) = \langle x, z \rangle$ and $U_z(x) = \langle z, x \rangle$.
• We need only assume the first linearity condition: that the inner product is linear in the second slot ($U_z$ linear) follows from the symmetry condition.

Euclidean Space

The standard or Euclidean inner product on $\mathbb{R}^n$ is the usual dot product,
$$\langle x, y \rangle = y^T x = \sum_{j=1}^n x_j y_j = x_1 y_1 + \cdots + x_n y_n, \qquad \|x\| = \sqrt{\sum_{j=1}^n x_j^2}$$
where the norm is the Euclidean/Pythagorean length function. The resulting inner product space is often called Euclidean space. Unless indicated otherwise, if we refer to $\mathbb{R}^n$ as an inner product space, we always mean with respect to the standard inner product. There are, however, many other ways to make $\mathbb{R}^n$ into an inner product space.

Example 6.2. (Weighted inner products) Define an alternative inner product on $\mathbb{R}^2$ via
$$\langle x, y \rangle = x_1 y_1 + 3 x_2 y_2$$
It is easy to check that this satisfies the required properties:

(a) Positive definiteness: if $x \neq 0$, then $\langle x, x \rangle = x_1^2 + 3 x_2^2 > 0$;
(b) Symmetry follows from the commutativity of multiplication in $\mathbb{R}$: $\langle y, x \rangle = y_1 x_1 + 3 y_2 x_2 = x_1 y_1 + 3 x_2 y_2 = \langle x, y \rangle$;
(c) Bilinearity follows from symmetry and the associative/distributive laws in $\mathbb{R}$:
$$\langle \lambda x + y, z \rangle = (\lambda x_1 + y_1) z_1 + 3(\lambda x_2 + y_2) z_2 = \lambda (x_1 z_1 + 3 x_2 z_2) + (y_1 z_1 + 3 y_2 z_2) = \lambda \langle x, z \rangle + \langle y, z \rangle$$

With respect to this new inner product, the concept of orthogonality in $\mathbb{R}^2$ has changed: for example, if $x = \frac{1}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $y = \frac{1}{2\sqrt{3}}\begin{pmatrix} 3 \\ -1 \end{pmatrix}$, we see that $\{x, y\}$ is an orthonormal set with respect to $\langle\,,\,\rangle$:
$$\|x\|^2 = \tfrac{1}{4}(1^2 + 3 \cdot 1^2) = 1, \qquad \|y\|^2 = \tfrac{1}{12}(3^2 + 3 \cdot 1^2) = 1, \qquad \langle x, y \rangle = \tfrac{1}{4\sqrt{3}}(3 - 3) = 0$$
With respect to the standard dot product, however, these vectors are not at all special:
$$x \cdot x = \tfrac{1}{2}, \qquad y \cdot y = \tfrac{5}{6}, \qquad x \cdot y = \tfrac{1}{2\sqrt{3}}$$
We have the same vector space $\mathbb{R}^2$, but different inner product spaces: $(\mathbb{R}^2, \cdot) \neq (\mathbb{R}^2, \langle\,,\,\rangle)$.

This example is easily generalized: let $a_1, \dots, a_n$ be positive real numbers (called weights) and define
$$\langle x, y \rangle = \sum_{j=1}^n a_j x_j y_j = a_1 x_1 y_1 + \cdots + a_n x_n y_n$$
It is a simple exercise to check that this defines an inner product on $\mathbb{R}^n$. In particular, this means that $\mathbb{R}^n$ may be viewed as an inner product space in infinitely many distinct ways.
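As a quick numerical check of Example 6.2 (a sketch added here, not part of the original notes; the helper name `weighted_ip` is an illustrative assumption), the following Python snippet confirms that $\{x, y\}$ is orthonormal for the weighted inner product but unremarkable for the dot product:

```python
import numpy as np

# Weighted inner product <x, y> = x1*y1 + 3*x2*y2 from Example 6.2.
# The helper name weighted_ip is illustrative, not from the notes.
def weighted_ip(x, y, weights=(1.0, 3.0)):
    return sum(a * xi * yi for a, xi, yi in zip(weights, x, y))

x = np.array([1.0, 1.0]) / 2
y = np.array([3.0, -1.0]) / (2 * np.sqrt(3))

# Orthonormal with respect to the weighted inner product:
print(weighted_ip(x, x), weighted_ip(y, y), weighted_ip(x, y))  # 1.0 1.0 0.0 (up to rounding)

# ...but not with respect to the standard dot product:
print(x @ x, y @ y, x @ y)  # 0.5 0.8333... 0.2887...  (= 1/2, 5/6, 1/(2*sqrt(3)))
```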
There are even more inner products on $\mathbb{R}^n$ which aren't described in the above example. For instance, if $A \in M_n(\mathbb{R})$ is a symmetric matrix, then $\langle x, y \rangle := y^T A x$ certainly satisfies the symmetry and bilinearity conditions. The matrix $A$ is called positive definite if $\langle x, y \rangle$ is an inner product. Here is an example of an inner product:
$$A = \begin{pmatrix} 3 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \langle x, y \rangle = 3 x_1 y_1 + x_1 y_2 + x_2 y_1 + x_2 y_2$$

Basic properties of real inner products

Lemma 6.3. Let $V$ be a real inner product space, let $x, y, z \in V$ and $\lambda \in \mathbb{R}$.

1. $\langle x, 0 \rangle = 0$
2. $\|x\| = 0 \iff x = 0$
3. $\|\lambda x\| = |\lambda| \, \|x\|$
4. $\langle x, y \rangle = \langle x, z \rangle$ for all $x$ $\implies$ $y = z$
5. (Cauchy–Schwarz inequality) $|\langle x, y \rangle| \leq \|x\| \, \|y\|$, with equality $\iff$ $x$ and $y$ are parallel
6. (Triangle inequality) $\|x + y\| \leq \|x\| + \|y\|$, with equality $\iff$ $x$ and $y$ are parallel and point in the same direction

Be careful with notation: $|\lambda|$ means absolute value (applies to scalars), while $\|x\|$ means norm (applies to vectors).

Proof. We leave parts 1–3 as exercises.

4. Let $x = y - z$ and apply the bilinearity condition and part 2 to see that
$$\langle x, y \rangle = \langle x, z \rangle \implies 0 = \langle y - z, y - z \rangle = \|y - z\|^2 \implies y = z$$

5. If $y = 0$, the result is trivial. WLOG we may assume $\|y\| = 1$: if the inequality holds for this, then it holds for all non-zero $y$ by parts 1 and 4. Now expand:
$$0 \leq \|x - \langle x, y \rangle y\|^2 \qquad \text{(positive definiteness)}$$
$$= \|x\|^2 + |\langle x, y \rangle|^2 \|y\|^2 - \langle x, y \rangle \langle x, y \rangle - \langle x, y \rangle \langle y, x \rangle \qquad \text{(bilinearity)}$$
$$= \|x\|^2 - |\langle x, y \rangle|^2 \qquad \text{(symmetry)}$$
Taking square roots establishes the inequality. By part 2, equality holds if and only if $x = \langle x, y \rangle y$, which is precisely when $x, y$ are linearly dependent.

6. We establish the squared result:
$$\bigl(\|x\| + \|y\|\bigr)^2 - \|x + y\|^2 = \|x\|^2 + 2\|x\|\,\|y\| + \|y\|^2 - \|x\|^2 - \langle x, y \rangle - \langle y, x \rangle - \|y\|^2$$
$$= 2\bigl(\|x\|\,\|y\| - \langle x, y \rangle\bigr) \geq 2\bigl(\|x\|\,\|y\| - |\langle x, y \rangle|\bigr) \geq 0 \qquad \text{(Cauchy–Schwarz)}$$
Equality requires both equality in Cauchy–Schwarz ($x, y$ parallel) and that $\langle x, y \rangle \geq 0$: since $x, y$ are already parallel, this means that one is a non-negative multiple of the other.

Complex Inner Product Spaces

We can also define an inner product on a complex vector space. Since we will be primarily interested in the real case, we will often simply state the differences, if there are any.

Definition 6.4. An inner product on a vector space $V$ over $\mathbb{C}$ is a positive definite, conjugate-symmetric, sesquilinear form; a function $\langle\,,\,\rangle : V \times V \to \mathbb{C}$ satisfying the following properties: $\forall \lambda \in \mathbb{C},\ x, y, z \in V$,

(a) Positive definiteness: $x \neq 0 \implies \langle x, x \rangle > 0$
(b) Conjugate-symmetry: $\langle y, x \rangle = \overline{\langle x, y \rangle}$
(c) Sesquilinearity: $\langle \lambda x + y, z \rangle = \lambda \langle x, z \rangle + \langle y, z \rangle$ and $\langle z, \lambda x + y \rangle = \overline{\lambda} \langle z, x \rangle + \langle z, y \rangle$

A complex vector space equipped with an inner product is a (complex) inner product space. The concepts of norm and orthogonality are identical to the real case.

• The prefix sesqui- means one-and-a-half, reflecting the property of being linear in the first slot and conjugate-linear in the second.
• A separate definition for the real case is not required. Simply replace $\mathbb{C}$ with $\mathbb{F}$ above: if $\mathbb{F} = \mathbb{R}$, we recover Definition 6.1. Note that an inner product space can only have base field $\mathbb{R}$ or $\mathbb{C}$. For the remainder of this chapter, when we say an inner product space over $\mathbb{F}$, we implicitly assume that $\mathbb{F}$ is $\mathbb{R}$ or $\mathbb{C}$.
• We need only assume that a complex inner product is linear in its first slot; conjugate-symmetry then demonstrates the conjugate-linearity property.
• The entirety of Lemma 6.3 holds in a complex inner product space. A little extra care is needed for the proofs of the Cauchy–Schwarz and triangle inequalities. Note that $|\lambda|$ now means the complex modulus of $\lambda \in \mathbb{C}$.
• Warning! In case you dabble in the dark arts, be aware that the convention in Physics is for an inner product to be conjugate-linear in the first entry rather than the second!
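Returning to the real case for a moment, here is a minimal Python sketch (an addition, not part of the notes; the helper names `ip` and `norm` are illustrative assumptions) that spot-checks the Cauchy–Schwarz and triangle inequalities of Lemma 6.3 for the inner product $\langle x, y \rangle = y^T A x$ determined by the positive definite matrix $A$ introduced above:

```python
import numpy as np

# Minimal sketch: the real inner product <x, y> = y^T A x determined by the
# symmetric positive definite matrix A = [[3, 1], [1, 1]], used to spot-check
# the Cauchy-Schwarz and triangle inequalities of Lemma 6.3 on random vectors.
A = np.array([[3.0, 1.0], [1.0, 1.0]])

def ip(x, y):
    return y @ A @ x

def norm(x):
    return np.sqrt(ip(x, x))

rng = np.random.default_rng(0)
for _ in range(10_000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert abs(ip(x, y)) <= norm(x) * norm(y) + 1e-12   # Cauchy-Schwarz
    assert norm(x + y) <= norm(x) + norm(y) + 1e-12     # triangle inequality
print("no counterexamples found")
```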
The standard (Hermitian) inner product and norm on $\mathbb{C}^n$ are
$$\langle x, y \rangle = y^* x = \sum_{j=1}^n x_j \overline{y_j} = x_1 \overline{y_1} + \cdots + x_n \overline{y_n}, \qquad \|x\| = \sqrt{\sum_{j=1}^n |x_j|^2}$$
where $y^* = \overline{y}^T$ denotes the conjugate transpose of $y$. We can also define weighted inner products exactly as in Example 6.2:
$$\langle x, y \rangle = \sum_{j=1}^n a_j x_j \overline{y_j} = a_1 x_1 \overline{y_1} + \cdots + a_n x_n \overline{y_n}$$
note that the weights $a_j$ must still be positive real numbers.

Further Examples of Inner Products

The Frobenius inner product

If $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$, and $A, B \in M_{m \times n}(\mathbb{F})$, define
$$\langle A, B \rangle = \operatorname{tr}(B^* A)$$
where $\operatorname{tr}$ is the trace of a square matrix and $B^* = \overline{B}^T$ is the conjugate transpose. This makes $M_{m \times n}(\mathbb{F})$ into an inner product space.

Example 6.5. In $M_{3 \times 2}(\mathbb{C})$,
$$\left\langle \begin{pmatrix} 1 & i \\ 2-i & 0 \\ 0 & -1 \end{pmatrix}, \begin{pmatrix} 0 & 7 \\ 1 & 2i \\ 3-2i & 4 \end{pmatrix} \right\rangle = \operatorname{tr}\left[ \begin{pmatrix} 0 & 1 & 3+2i \\ 7 & -2i & 4 \end{pmatrix} \begin{pmatrix} 1 & i \\ 2-i & 0 \\ 0 & -1 \end{pmatrix} \right] = \operatorname{tr} \begin{pmatrix} 2-i & -3-2i \\ 5-4i & -4+7i \end{pmatrix} = 2 - i - 4 + 7i = -2 + 6i$$
$$\left\| \begin{pmatrix} 1 & i \\ 2-i & 0 \\ 0 & -1 \end{pmatrix} \right\|^2 = \operatorname{tr}\left[ \begin{pmatrix} 1 & 2+i & 0 \\ -i & 0 & -1 \end{pmatrix} \begin{pmatrix} 1 & i \\ 2-i & 0 \\ 0 & -1 \end{pmatrix} \right] = \operatorname{tr} \begin{pmatrix} 6 & i \\ -i & 2 \end{pmatrix} = 8$$

This isn't really a new example: if we identify $A \in M_{m \times n}(\mathbb{F})$ with a vector in $\mathbb{F}^{mn}$ by stacking the columns of $A$, then the Frobenius inner product becomes the standard inner product.
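To make this last remark concrete, here is a short Python sketch (an addition to the notes; the helper name `frobenius_ip` is an assumption) that evaluates the Frobenius inner product from Example 6.5 and compares it with the standard Hermitian inner product of the column-stacked vectors:

```python
import numpy as np

# Illustrative sketch: the Frobenius inner product <A, B> = tr(B* A) on
# M_{3x2}(C), checked against Example 6.5 and compared with the standard
# Hermitian inner product after stacking columns.
def frobenius_ip(A, B):
    return np.trace(B.conj().T @ A)

A = np.array([[1, 1j], [2 - 1j, 0], [0, -1]])
B = np.array([[0, 7], [1, 2j], [3 - 2j, 4]])

print(frobenius_ip(A, B))       # (-2+6j), which should agree with Example 6.5
print(frobenius_ip(A, A).real)  # 8.0, so ||A||^2 = 8

# Stacking columns identifies M_{3x2}(C) with C^6; the Frobenius inner product
# then agrees with the standard Hermitian inner product <a, b> = b* a.
a, b = A.flatten(order="F"), B.flatten(order="F")
print(b.conj() @ a)             # (-2+6j) again
```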