The Relationship Between Rank and Nullity: A Fundamental Theorem for Linear Algebra


A. Havens
Department of Mathematics, University of Massachusetts, Amherst
March 28, 2018

Outline
1. Rank, Nullity, and The Row Space: Review: Column Space and Null Space; Rank and Nullity; Row Space
2. The Rank-Nullity Theorem: The Theorem for Matrices; A Fundamental Theorem of Linear Algebra
3. Interpretation and Applications: The Rigidity of Linear Maps; Left Null Spaces; Rank One Matrices and Outer Products; Duality and the Row Space

Review: Column Space and Null Space

Definition. Let A ∈ R^{m×n} be a real matrix. Recall:

- The column space of A is the subspace Col A of R^m spanned by the columns of A: Col A = Span{a_1, ..., a_n} ⊆ R^m, where A = [a_1 ... a_n]. Equivalently, Col A is the image T(R^n) ⊆ R^m of the linear map T(x) = Ax.

- The null space of A is the subspace Nul A of R^n consisting of all vectors x which are solutions to the homogeneous equation with matrix A: Nul A := {x ∈ R^n | Ax = 0} ⊆ R^n. Equivalently, it is the kernel of the map T(x) = Ax.

Rank: The Dimension of the Column Space

Definition. The rank of a linear map T : V → W between finite-dimensional vector spaces V and W is the dimension of the image: rank T = dim T(V). Given an m × n matrix A, the rank of A is the dimension of the column space of A: rank A = dim Col A.

Remark. Observe that rank T ≤ dim V and rank T ≤ dim W. We know rank T ≤ dim V because the image subspace is spanned by the images of basis vectors, and so in particular T(V) is spanned by a set of dim V vectors, which is an upper bound on the size of a linearly independent spanning set. That rank T ≤ dim W follows from the fact that T(V) is a subspace of W, and so its dimension is at most dim W.

Nullity: The Dimension of the Null Space

Definition. The nullity of a linear map T : V → W between finite-dimensional vector spaces V and W is the dimension of the kernel: nullity T = dim ker T. Given an m × n matrix A, the nullity of A is the dimension of the null space of A: nullity A = dim Nul A.

Remark. Observe that nullity T ≤ dim V, but it need not be bounded by the dimension of W.

Exercise. Explain the above remark about the bound on the nullity of a linear map. What is the relationship between the dimension of the codomain and the nullity?
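As a concrete illustration of these definitions and bounds, here is a minimal computational sketch, assuming Python with SymPy is available; the matrix entries are a hypothetical example, not taken from the slides.

    # Compute the rank and nullity of a 3 x 4 matrix and check the bounds
    # rank A <= min(m, n) and nullity A <= n discussed above.
    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 0],
                [1, 3, 1, 1]])      # a hypothetical 3 x 4 example

    m, n = A.shape
    rank = A.rank()                 # dim Col A (here, 2)
    nullity = len(A.nullspace())    # dim Nul A (here, 2)

    assert rank <= min(m, n)        # rank is bounded by both m and n
    assert nullity <= n             # nullity is bounded by n, not by m
    print(rank, nullity)            # prints: 2 2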
Finding a Basis of the Column Space

To find a basis of the column space of A, find a row equivalent matrix B in echelon form and locate its pivot columns; the corresponding columns of A are the pivot columns of A. As each non-pivot column corresponds to a free variable, any non-pivot column may be realized as a linear combination of the pivot columns. Thus the pivot columns of A are a maximal linearly independent subset of the columns of A, and span Col A. That is, the pivot columns of A are a basis of the column space of A.

Finding a Basis of the Null Space

To find a basis of the null space of A, solve the homogeneous system Ax = 0. The solution vector x can be written as a linear combination of some vectors weighted by free variables. Since each such vector corresponds to a unique free variable, it will have a one in a coordinate position where the other vectors have zeros. Thus, the matrix whose columns are these vectors spanning the null space has as many pivots as the system has free variables, and so this collection is linearly independent and forms a basis of the null space.

Summarizing: Computing Rank and Nullity

A basis of Col A is given by the pivot columns of A. Be careful to use the columns of the original matrix A and not of RREF(A)! The rank is the number of pivot columns of A, or equivalently the number of pivot positions of RREF(A).

A basis of Nul A is found by solving the homogeneous equation and decomposing a parameterized solution vector as a linear combination of vectors weighted by free variables. The vectors in this sum form a basis of Nul A. The nullity of A is thus the number of free variables for the homogeneous system, which is the same as the number of non-pivot columns of A.

Row Space

Definition. Let A be an m × n matrix. The set of all linear combinations of the rows of A is called the row space of A. Since each row of A is a column of A^t, we often regard the row space as Col A^t ⊆ R^n, though strictly speaking these are isomorphic but not equal spaces, and Row A ⊄ R^n, as it consists of objects distinct from column vectors. Later, we'll define the correct space within which the row space is realized as a subspace.

Row Equivalence and Row Spaces

Theorem. Suppose A ∼ B. Then Row A = Row B. Moreover, if B is in echelon form, then the nonzero rows of B form a basis of both Row B and Row A.

Proof. Suppose B is obtained from A by row operations. Then the rows of B are linear combinations of the rows of A, and thus any linear combination of rows of B is a linear combination of rows of A, whence Row B ⊆ Row A. Since elementary row operations are invertible, we can argue symmetrically that Row A ⊆ Row B, whence Row A = Row B.

If B is in echelon form, then the nonzero rows form a linearly independent set, as no nonzero row can be written as a linear combination of the nonzero rows below it. Equivalently, by transposing, consider that each nonzero row has a pivot position, and gives a pivot column of B^t. Thus the nonzero rows of B form a basis of Row B = Row A. ∎
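The three recipes above translate directly into a short computation. A sketch, again assuming SymPy, on another hypothetical matrix; rref, col, nullspace, and rowspace are standard SymPy Matrix methods.

    # Bases of Col A, Nul A, and Row A for a small example of rank 2.
    from sympy import Matrix

    A = Matrix([[1, 2, 1, 0],
                [2, 4, 0, 2],
                [3, 6, 1, 2]])

    R, pivot_cols = A.rref()        # RREF and the indices of the pivot columns

    col_basis = [A.col(j) for j in pivot_cols]  # columns of A itself, not of R!
    nul_basis = A.nullspace()       # one spanning vector per free variable
    row_basis = A.rowspace()        # the nonzero rows of an echelon form

    print(pivot_cols)                        # (0, 2): columns 1 and 3 are pivots
    print(len(col_basis), len(nul_basis))    # rank = 2, nullity = 4 - 2 = 2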
An Example

Example. Find bases of the row space, column space, and null space of the matrix

    A = [ 1  4  3  2   5
          4  8 12  9   0
          3  4  9  7  −5
          2  8  6  5   6 ]

Using the row operations R2 − 4R1 ↦ R2, R3 − 3R1 ↦ R3, R4 − 2R1 ↦ R4, R3 − R2 ↦ R3, and R3 ↔ R4, we obtain a row equivalence

    A ∼ B = [ 1  4  3  2   5
              0 −8  0  1 −20
              0  0  0  1  −4
              0  0  0  0   0 ]

Since B is in echelon form, we deduce that

    Row A = Span{ [1 4 3 2 5], [0 −8 0 1 −20], [0 0 0 1 −4] }.
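The same computation, run on the example matrix, confirms the hand reduction; a sketch assuming SymPy, whose rref reduces further than the echelon form B above, though the pivot positions and the spans agree.

    from sympy import Matrix

    A = Matrix([[1, 4,  3, 2,  5],
                [4, 8, 12, 9,  0],
                [3, 4,  9, 7, -5],
                [2, 8,  6, 5,  6]])

    R, pivot_cols = A.rref()
    print(pivot_cols)            # (0, 1, 3): columns 1, 2, and 4 are pivot columns
    print(A.rank())              # 3, so columns 1, 2, and 4 of A form a basis of Col A
    print(len(A.nullspace()))    # 2 free variables (x3 and x5), so nullity A = 2

From here, the pivot data gives a basis of the column space and the parameterized solution of Ax = 0 gives a basis of the null space, exactly as in the summary above.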