Math 217: Multilinearity of Determinants. Professor Karen Smith. (c) 2015 UM Math Dept, licensed under a Creative Commons BY-NC-SA 4.0 International License.


A. Let $T\colon V \to V$ be a linear transformation, where $V$ has dimension $n$.

1. What is meant by the determinant of $T$? Why is this well-defined?

Solution note: The determinant of $T$ is the determinant of the $\mathcal{B}$-matrix of $T$, for any basis $\mathcal{B}$ of $V$. Since all $\mathcal{B}$-matrices of $T$ are similar, and similar matrices have the same determinant, this is well-defined: it does not depend on which basis we pick.

2. Define the rank of $T$.

Solution note: The rank of $T$ is the dimension of the image of $T$.

3. Explain why $T$ is an isomorphism if and only if $\det T$ is not zero.

Solution note: $T$ is an isomorphism if and only if $[T]_{\mathcal{B}}$ is invertible (for any choice of basis $\mathcal{B}$), which happens if and only if $\det T \neq 0$.

4. Now let $V = \mathbb{R}^3$ and let $T$ be rotation around the axis $L$ (a line through the origin) by an angle $\theta$. Find a basis for $\mathbb{R}^3$ in which the matrix of $T$ is
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}.$$
Use this to compute the determinant of $T$. Is $T$ orthogonal?

Solution note: Let $\vec{v}$ be a unit vector spanning $L$ and let $(\vec{u}_1, \vec{u}_2)$ be an orthonormal basis for the plane $L^{\perp}$. Rotation fixes $\vec{v}$, which means the matrix of $T$ in the basis $\mathcal{B} = (\vec{v}, \vec{u}_1, \vec{u}_2)$ has first column $\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$. In the plane $L^{\perp}$, the rotation is just an ordinary rotation like in $\mathbb{R}^2$. The determinant is $1 \cdot (\cos^2\theta + \sin^2\theta) = 1$. The matrix is orthogonal because its columns are orthonormal, or alternatively, because the rotation map preserves the length of every vector.

B. Theorem: The determinant is multilinear in the columns. The determinant is multilinear in the rows. This means that if we fix all but one column of an $n \times n$ matrix, the determinant function is linear in the remaining column. Ditto for rows.

1. Let $\vec{v}_2 = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$, $\vec{v}_3 = \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}$. Discuss how the map $\mathbb{R}^3 \to \mathbb{R}$ sending $\begin{bmatrix} x \\ y \\ z \end{bmatrix} \mapsto \det \begin{bmatrix} x & 1 & 2 \\ y & -1 & 0 \\ z & 0 & 1 \end{bmatrix}$ illustrates the Theorem above. Is this map linear? If so, find its matrix.

Solution note: Yes, the map is a linear transformation from $\mathbb{R}^3$ to $\mathbb{R}$. This is the content of the theorem! You can (and perhaps should!) also verify this directly by checking that it preserves addition and scalar multiplication. Its matrix is $1 \times 3$, and we find it by seeing where each of the $\vec{e}_i$ go. To find this, we compute the determinant three times, replacing the first column by each of $\vec{e}_1, \vec{e}_2, \vec{e}_3$. We get $A = \begin{bmatrix} -1 & -1 & 2 \end{bmatrix}$.

2. Using the row vectors $\vec{v}_2^{\,T}$ and $\vec{v}_3^{\,T}$, illustrate what it means that the determinant function on a $3 \times 3$ matrix is linear in the second row.

Solution note: Linearity in row 2 means
$$\det \begin{bmatrix} 1 & -1 & 0 \\ x & y & z \\ 2 & 0 & 1 \end{bmatrix} + \det \begin{bmatrix} 1 & -1 & 0 \\ x' & y' & z' \\ 2 & 0 & 1 \end{bmatrix} = \det \begin{bmatrix} 1 & -1 & 0 \\ x + x' & y + y' & z + z' \\ 2 & 0 & 1 \end{bmatrix}$$
for all vectors $\begin{bmatrix} x \\ y \\ z \end{bmatrix}, \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} \in \mathbb{R}^3$, and
$$\det \begin{bmatrix} 1 & -1 & 0 \\ kx & ky & kz \\ 2 & 0 & 1 \end{bmatrix} = k \det \begin{bmatrix} 1 & -1 & 0 \\ x & y & z \\ 2 & 0 & 1 \end{bmatrix}$$
for all scalars $k$.

3. Prove the Theorem in the $2 \times 2$ case for the first column. That is: Show that the determinant of a $2 \times 2$ matrix is linear in the first column.

Solution note: We need to show
$$\det \begin{bmatrix} a + a' & b \\ c + c' & d \end{bmatrix} = \det \begin{bmatrix} a & b \\ c & d \end{bmatrix} + \det \begin{bmatrix} a' & b \\ c' & d \end{bmatrix}, \qquad \det \begin{bmatrix} ka & b \\ kc & d \end{bmatrix} = k \det \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$$
Compute: $(a + a')d - (c + c')b = ad + a'd - cb - c'b = (ad - cb) + (a'd - c'b)$, and $(ka)d - (kc)b = k(ad - cb)$. QED.
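As a quick numerical sanity check on B.1 (an illustration, not part of the original worksheet), the following numpy sketch builds the map $\vec{x} \mapsto \det[\vec{x}\ \vec{v}_2\ \vec{v}_3]$, recovers its matrix by plugging in the standard basis vectors, and spot-checks linearity on random inputs. The function name `f` and the random seed are arbitrary choices.

```python
import numpy as np

# Fixed second and third columns from problem B.1.
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([2.0, 0.0, 1.0])

def f(x):
    """The map R^3 -> R sending x to det[x  v2  v3]."""
    return np.linalg.det(np.column_stack([x, v2, v3]))

# The matrix of f is 1x3: the images of e1, e2, e3.
A = np.array([f(e) for e in np.eye(3)])
print(A)  # approximately [-1. -1.  2.], matching the solution note

# Spot-check additivity and scalar multiplication on random inputs.
rng = np.random.default_rng(0)
x, y, k = rng.random(3), rng.random(3), 5.0
assert np.isclose(f(x + y), f(x) + f(y))
assert np.isclose(f(k * x), k * f(x))
```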
4. Suppose $\vec{v}_1, \dots, \vec{v}_{n-1}$ are vectors in $\mathbb{R}^n$ and that
$$\det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_{n-1}\ \vec{a}] = 5, \quad \det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_{n-1}\ \vec{b}] = 7, \quad \det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_{n-1}\ \vec{c}] = -3.$$
Find $\det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_{n-1}\ (2\vec{a} + 4\vec{b} - 6\vec{c} + 17\vec{v}_1)]$.

Solution note: Using multilinearity, this determinant equals
$$\det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ 2\vec{a}] + \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ 4\vec{b}] + \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ (-6\vec{c})] + \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ 17\vec{v}_1]$$
$$= 2 \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ \vec{a}] + 4 \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ \vec{b}] - 6 \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ \vec{c}] + 17 \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ \vec{v}_1]$$
$$= 10 + 28 + 18 + 0 = 56.$$
The final zero is because a matrix with dependent columns always has determinant zero.

5. Prove that the determinant function of an $n \times n$ matrix is linear in the last column. [Hint: Use Laplace expansion along the last column.]

Solution note: Fix $\vec{v}_1, \dots, \vec{v}_{n-1}$ in $\mathbb{R}^n$. We need to show
$$\det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_{n-1}\ (\vec{x} + \vec{y})] = \det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_{n-1}\ \vec{x}] + \det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_{n-1}\ \vec{y}],$$
and similarly for scalar multiplication. Expanding along the last column, and writing $A_{in}$ for the $(i,n)$ minor (which does not involve the last column, so it is the same for all three matrices):
$$\det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ (\vec{x} + \vec{y})] = (-1)^{n+1}\left[ (x_1 + y_1)A_{1n} - (x_2 + y_2)A_{2n} + \dots \pm (x_n + y_n)A_{nn} \right]$$
$$= (-1)^{n+1}\left[ x_1 A_{1n} - x_2 A_{2n} + \dots \pm x_n A_{nn} \right] + (-1)^{n+1}\left[ y_1 A_{1n} - y_2 A_{2n} + \dots \pm y_n A_{nn} \right]$$
$$= \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ \vec{x}] + \det [\vec{v}_1\ \dots\ \vec{v}_{n-1}\ \vec{y}].$$
A similar calculation shows that scalar multiplication is preserved too.

6. Use Laplace expansion to explain why the determinant is linear in each column and row.

7. True or False: The determinant map is linear. Justify.

Solution note: FALSE! For example, $\det(2I_2) = 4$, not $2 \det I_2 = 2$.

C. Corollary: Let $A$ be an $n \times n$ matrix. Then $\det(kA) = k^n \det A$ for any scalar $k$. Prove this.

D. Compute $\det \begin{bmatrix} 2a + 5p & 5q + 2d \\ p & q \end{bmatrix}$ given that $\det \begin{bmatrix} a & -d \\ p & -q \end{bmatrix} = 17$.

Solution note: $\det \begin{bmatrix} 2a + 5p & 5q + 2d \\ p & q \end{bmatrix} = \det \begin{bmatrix} 2a & 2d \\ p & q \end{bmatrix} + \det \begin{bmatrix} 5p & 5q \\ p & q \end{bmatrix}$ by linearity in the first row and the fact that the determinant is zero if there is a relation on the rows. This further becomes $2 \det \begin{bmatrix} a & d \\ p & q \end{bmatrix}$, again by linearity in the first row. On the other hand, using linearity in the second column, $\det \begin{bmatrix} a & -d \\ p & -q \end{bmatrix} = -\det \begin{bmatrix} a & d \\ p & q \end{bmatrix}$, so $\det \begin{bmatrix} a & d \\ p & q \end{bmatrix} = -17$. So the answer is $2 \cdot (-17) = -34$.

E. Theorem: The determinant is alternating in the columns. The determinant is alternating in the rows. This means that if we interchange two columns, the determinant changes sign. Ditto for rows.

1. Verify this for swapping the second and third columns of $\begin{bmatrix} 0 & 1 & 2 \\ 0 & -1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$. Also verify it for swapping the first and third rows.

2. Prove the theorem in the case of $2 \times 2$ matrices.

3. Let $A$ be an $n \times n$ matrix. Let $B$ be obtained from $A$ by switching columns $i$ and $j$. Prove that $\det A = -\det B$. [Hint: Write $A = [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_n]$. Compute the determinant of $[\vec{v}_1\ \vec{v}_2\ \dots\ (\vec{v}_i + \vec{v}_j)\ \dots\ (\vec{v}_i + \vec{v}_j)\ \dots\ \vec{v}_n]$, where $\vec{v}_i + \vec{v}_j$ appears in BOTH the $i$-th spot and the $j$-th spot.]

Solution note: (1) Computing the determinant of the matrix in (1), we get $2$. We get $-2$ after doing either of the intended swaps. (2) We prove only that the determinant is alternating in the rows, the case of columns being similar. We need to check that $\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = -\det \begin{bmatrix} c & d \\ a & b \end{bmatrix}$. This is easy: $ad - cb = -(cb - ad)$. (3) Note that $[\vec{v}_1\ \vec{v}_2\ \dots\ (\vec{v}_i + \vec{v}_j)\ \dots\ (\vec{v}_i + \vec{v}_j)\ \dots\ \vec{v}_n]$ has determinant zero because two columns are the same.
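The identities in B.4 and Corollary C are easy to stress-test numerically. The sketch below (an illustration, not part of the worksheet; the dimension $n = 5$ and the seed are arbitrary choices) checks the B.4 multilinearity computation with random vectors and confirms $\det(kA) = k^n \det A$.

```python
import numpy as np

rng = np.random.default_rng(217)
n = 5

# Random fixed columns v1, ..., v_{n-1}, plus random a, b, c.
V = rng.random((n, n - 1))
a, b, c = rng.random(n), rng.random(n), rng.random(n)

def d(last):
    """det of the matrix with columns v1, ..., v_{n-1}, last."""
    return np.linalg.det(np.column_stack([V, last]))

# B.4: expand by multilinearity; the 17*v1 term contributes 0
# because that matrix has dependent (repeated) columns.
lhs = d(2*a + 4*b - 6*c + 17*V[:, 0])
rhs = 2*d(a) + 4*d(b) - 6*d(c)
assert np.isclose(lhs, rhs)

# Corollary C: det(kA) = k^n det(A).
A, k = rng.random((n, n)), 3.0
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))
```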
Now expand out using multilinearity: this is the same as
$$\det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_i\ \dots\ (\vec{v}_i + \vec{v}_j)\ \dots\ \vec{v}_n] + \det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_j\ \dots\ (\vec{v}_i + \vec{v}_j)\ \dots\ \vec{v}_n].$$
Expanding in the other column and using the fact that the determinant is zero if two columns are the same, we get
$$0 = \det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_i\ \dots\ \vec{v}_j\ \dots\ \vec{v}_n] + \det [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_j\ \dots\ \vec{v}_i\ \dots\ \vec{v}_n].$$

It is an interesting Theorem that the determinant is the ONLY alternating multilinear function of the columns of an $n \times n$ matrix which takes the value 1 on the identity matrix. More theoretical linear algebra courses (for example, Math 420, which maybe you'll take someday) usually take this to be the definition of the determinant. We won't do this in Math 217, however.

F. Suppose that we have a linear transformation $T\colon \mathbb{R}^{2 \times 3} \to \mathbb{R}^{2 \times 3}$ whose matrix in the basis $(A_1, A_2, A_3, A_4, A_5, A_6)$ is
$$\begin{bmatrix} 0 & 1 & 2 & 3 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

1. What is the determinant of $T$?
2. What is the rank of $T$?
3. Find a basis for the kernel and image of $T$.
4. Suppose that $(B_1, B_2, B_3, B_4, B_5, B_6)$ is another basis, where $B_1 = A_1 + A_2$, $B_2 = A_1 - A_2$, $B_3 = 3A_3$, $B_4 = 4A_4$, $B_5 = A_5 - A_4 - A_3 - A_2$, $B_6 = A_6$. Find the change of basis matrix $S_{B \to A}$. Without computing: explain why the change of basis matrix has non-zero determinant.
5. Find the $B$-matrix of $T$ (do not simplify!). Without computing: what is its determinant?

Solution note: The rank is 3, which is less than 6, so the determinant is 0. (3) A basis for the image is represented by a maximal set of linearly independent columns.
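For part F, the claimed rank and determinant can be checked numerically. A minimal sketch (again an illustration, not part of the worksheet), using the $6 \times 6$ matrix reconstructed above:

```python
import numpy as np

# The matrix of T from part F.
M = np.array([
    [0,  1, 2, 3, 0, 0],
    [0, -1, 0, 0, 0, 0],
    [1,  0, 1, 0, 0, 0],
    [0, -1, 0, 0, 0, 0],
    [0, -1, 0, 0, 0, 0],
    [0, -1, 0, 0, 0, 0],
], dtype=float)

print(np.linalg.matrix_rank(M))  # 3, so T is not an isomorphism
print(np.linalg.det(M))          # ~0.0: rank < 6 forces det = 0
```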