3 Totally Unimodular Matrices and Their Applications


3.1 Totally Unimodular Matrices and Integer Polyhedra

The integer hull of a rational polyhedron P = {x ∈ R^n : Ax ≤ b} is the following set,

    P_I := conv(P ∩ Z^n).                                                       (12)

It can be shown that the integer hull is a polyhedron as well. A polyhedron is called integer if P_I = P. Access to P_I, say an H-presentation, allows us to solve integer programming problems over P, since

    max{wx : x ∈ Z^n, Ax ≤ b} = max{wx : x ∈ P ∩ Z^n} = max{wx : x ∈ P_I}       (13)

and then, by linear programming over P_I on the right-hand side of (13), we can solve the integer programming problem on the left-hand side. We now study a class of matrices for which P = P_I and describe several important applications.

Definition 3.1 A matrix A is totally unimodular (TUM) if each subdeterminant of A is −1, 0 or 1.

By the definition, all entries of a totally unimodular matrix are −1, 0 or 1 and hence, in particular, TUM matrices are integer. We have the following theorem of Hoffman and Kruskal.

Theorem 3.2 Let A be any totally unimodular matrix. Then for every integer vector b, the polyhedron P = {x : Ax ≤ b} is integer, that is, P_I = P.

Proof. Suppose A is an m × n matrix. We assume that A has rank n. (Equivalently, P is pointed, namely it has at least one vertex and so contains no lines; this is the situation in practically any application.) Consider any integer right-hand side vector b ∈ Z^m. Then P is integer if and only if every vertex of P is an integer vector. Consider any vertex v of P. By Proposition 2.18 there is a nonsingular n × n submatrix A_v of A for which the subsystem A_v v = b_v is satisfied with equality by v, where b_v is the corresponding subvector of b. But then

    v = A_v^{-1} b_v = (1/det(A_v)) A_v^* b_v,

where A_v^* is the adjoint matrix of A_v, whose (i, j)-th entry is (−1)^{i+j} times the minor of A_v obtained by deleting its i-th column and j-th row. Since A is TUM and A_v is nonsingular, we have det(A_v) = ±1, and since A_v^* and b_v are integer, v is integer.

Corollary 3.3 There is a polynomial time algorithm that, given any m, n, a totally unimodular matrix A ∈ Z^{m×n}, and vectors b ∈ Z^m and w ∈ Z^n, solves the following integer programming problem:

    max{wx : x ∈ Z^n, Ax ≤ b}.                                                  (14)

The property of being TUM is preserved under various operations. For instance, if A is TUM then so are A^T, [A, ±I] and [A, ±A]. This implies that other variants of (14) over TUM matrices, such as max{wx : x ∈ N^n, Ax ≤ b} or min{wx : x ∈ N^n, Ax = b}, are polynomial time solvable as well. Another nice and useful property of totally unimodular matrices will be discussed in Section 4, where we study test sets and Graver bases.

3.2 Deciding Total Unimodularity

Clearly, deciding if a given matrix is totally unimodular is in co-NP. There are many characterizations of totally unimodular matrices that do not lead to an efficient algorithm for testing total unimodularity. We mention, without proof, the following characterization.

Theorem 3.4 A matrix is totally unimodular if and only if every subset of its columns can be colored red-blue so that the sum of the red columns minus the sum of the blue columns is a (−1, 0, 1)-vector.

However, it turns out that it is possible to test total unimodularity in polynomial time: this is a consequence of an involved decomposition theorem of Seymour which is beyond our scope here. As we shall see in the next subsection, a very useful subclass of totally unimodular matrices consists of totally unimodular matrices with exactly two nonzero entries per column.
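Before turning to that subclass, Definition 3.1 can be illustrated by a brute-force check that enumerates all square submatrices and tests their determinants. This is only a toy sketch for small matrices (the function name and the NumPy dependency are our own choices, not part of the notes); it is exponential in the matrix size, in contrast with the polynomial-time test that follows from Seymour's theorem.

```python
from itertools import combinations

import numpy as np


def is_totally_unimodular(A, tol=1e-9):
    """Brute-force check of Definition 3.1: every square subdeterminant is -1, 0 or 1.

    Exponential in the matrix size -- intended only for small examples.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if abs(d) > tol and abs(abs(d) - 1.0) > tol:
                    return False
    return True


# Incidence matrix of a directed triangle (a digraph, cf. Theorem 3.7 below):
A = np.array([[-1,  0,  1],
              [ 1, -1,  0],
              [ 0,  1, -1]])
print(is_totally_unimodular(A))  # True
```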
The following characterization of matrices with two nonzero entries per column leads to a simple efficient algorithm for their recognition.

Theorem 3.5 Let A be a (−1, 0, 1)-matrix with two nonzero entries per column. Then A is totally unimodular if and only if its rows can be colored red-blue so that in each column the two nonzeros are either of opposite sign and the same color, or of opposite color and the same sign.

Exercise 3.6
1. Prove Theorem 3.5 using Theorem 3.4.
2. Describe an efficient algorithm based on Theorem 3.5 for deciding if a given (−1, 0, 1)-matrix with two nonzeros per column is TUM. Prove that your algorithm works.
3. Is the following matrix TUM? Prove your answer.

    0 0 0 1 1 1 1 0 0 0 0 0 0
    0 0 1 0 0 1 0 0 0 1 0 0 1
    0 0 0 0 0 0 0 0 0 0 1 1 1
    1 0 0 1 0 0 0 1 0 0 1 0 0
    0 0 0 0 0 0 1 1 1 1 0 0 0
    0 1 0 0 1 0 0 0 1 0 0 1 0
    1 1 1 0 0 0 0 0 0 0 0 0 0

3.3 Directed Graphs and Bipartite Graphs

We now describe two very important classes of totally unimodular matrices, which have very important consequences for several fundamental discrete optimization problems, including transportation problems and transshipment problems (which include network flows as a special case).

Recall that a digraph is a pair G = (V, E) where V is a finite set of vertices and E ⊆ V × V is a finite set of arcs, which are ordered pairs of elements of V. We write either e = (u, v) or e = uv for an arc e with tail u and head v, and may think of e as an arrow pointing from u to v. The incidence matrix A of a digraph G is the V × E vertex-arc matrix with A_{v,e} := 1 if v is the head of e, A_{v,e} := −1 if v is the tail of e, and A_{v,e} := 0 otherwise. We have the following very useful theorem.

Theorem 3.7 The incidence matrix A of any digraph G = (V, E) is totally unimodular.

Proof. Let A be the incidence matrix of a digraph G. We prove by induction on m that the determinant of any m × m submatrix B of A is 0 or ±1. For m = 1 this trivially holds. Suppose then m ≥ 2. If every column of B has two nonzero entries, then each column contains one 1 and one −1, so the rows of B sum to zero, B is singular, and det(B) = 0. Otherwise, there is a column of B containing at most one nonzero entry: evaluating det(B) by Laplace expansion along that column, we see that det(B) is obtained by multiplying the determinant of an (m−1) × (m−1) submatrix of B, and hence of A, by 0 or ±1, and the claim follows by induction.

Next recall that a graph is a pair G = (V, E) where V is a finite set of vertices and E is a finite set of edges, which are unordered pairs of elements of V. We write either e = {u, v} or e = uv or e = vu for an edge e with vertices u, v. The incidence matrix A of a graph G is the V × E vertex-edge matrix with A_{v,e} := 1 if v ∈ e (that is, v is a vertex of e), and A_{v,e} := 0 otherwise. A graph G = (V, E) is bipartite if its vertex set can be partitioned V = U ⊎ W so that every edge e ∈ E has one vertex in U and one in W. A bipartite graph is often denoted by G = (U, W, E) with the partition U ⊎ W explicitly given. We have the following second useful theorem.

Theorem 3.8 The incidence matrix A of any bipartite graph G = (U, W, E) is totally unimodular.

Proof. Let A be the incidence matrix of G = (U, W, E). Let A′ be obtained from A by multiplying all rows corresponding to vertices in U by −1. Note that A is totally unimodular if and only if A′ is. Let G′ be the digraph obtained from G by orienting every edge e = uw from u ∈ U to w ∈ W. Then A′ is the incidence matrix of G′. It follows from Theorem 3.7 that A′ is totally unimodular, and hence so is A.
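As a small sanity check of Theorems 3.7 and 3.8, one can build such incidence matrices explicitly and run them through the brute-force test is_totally_unimodular sketched in Section 3.2. Again this is only an illustrative sketch; the helper functions and the zero-based vertex numbering are our own conventions.

```python
import numpy as np


def digraph_incidence(n_vertices, arcs):
    """Vertex-arc incidence matrix: +1 at the head and -1 at the tail of each arc."""
    A = np.zeros((n_vertices, len(arcs)), dtype=int)
    for e, (tail, head) in enumerate(arcs):
        A[tail, e] = -1
        A[head, e] = 1
    return A


def bipartite_incidence(n_u, n_w, edges):
    """Vertex-edge incidence matrix of a bipartite graph G = (U, W, E)."""
    A = np.zeros((n_u + n_w, len(edges)), dtype=int)
    for e, (u, w) in enumerate(edges):
        A[u, e] = 1         # vertex u in U, indexed 0 .. n_u - 1
        A[n_u + w, e] = 1   # vertex w in W, indexed 0 .. n_w - 1
    return A


# A directed 4-cycle and the complete bipartite graph K_{2,3}:
D = digraph_incidence(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
B = bipartite_incidence(2, 3, [(i, j) for i in range(2) for j in range(3)])
print(is_totally_unimodular(D), is_totally_unimodular(B))  # True True
```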
3.4 Applications: Transportation Problems and Transshipment Problems

3.4.1 Transportation Problems

Consider the following generalized transportation problem with both lower and upper bounds,

    min { Σ_{i,j} w_{i,j} x_{i,j} : x ∈ Z^{m×n}, l_{i,j} ≤ x_{i,j} ≤ u_{i,j}, Σ_j x_{i,j} = r_i, Σ_i x_{i,j} = c_j }.   (15)

Let A_{m,n} denote the matrix defining the demand and supply equations. Then A_{m,n} is the incidence matrix of the complete bipartite graph K_{m,n}. For instance, with m = 2 and n = 4 it is given by

    A_{2,4} =  1 0 1 0 1 0 1 0
               0 1 0 1 0 1 0 1
               1 1 0 0 0 0 0 0
               0 0 1 1 0 0 0 0
               0 0 0 0 1 1 0 0
               0 0 0 0 0 0 1 1

Let I_{nm} be the nm × nm identity matrix. Arrange the coordinates of m × n matrices as vectors by x := (x_{1,1}, ..., x_{m,1}, ..., x_{1,n}, ..., x_{m,n}) and likewise for w = (w_{i,j}), l = (l_{i,j}) and u = (u_{i,j}). Put r = (r_1, ..., r_m) and c = (c_1, ..., c_n). Define

    A := [ A_{m,n} ; −A_{m,n} ; I_{nm} ; −I_{nm} ],   b := ( r, c, −r, −c, u, −l ),

where the blocks of A are stacked vertically and b is partitioned accordingly. Then the generalized transportation problem (15) is equivalent to min{wx : x ∈ P ∩ Z^{m×n}} with

    P := {x ∈ R^{m×n} : l_{i,j} ≤ x_{i,j} ≤ u_{i,j}, Σ_j x_{i,j} = r_i, Σ_i x_{i,j} = c_j} = {x ∈ R^{m×n} : Ax ≤ b}.

By Theorem 3.8, the matrix A_{m,n} is totally unimodular, and since total unimodularity is preserved under appending ±A_{m,n} and ±I_{nm}, the matrix A is totally unimodular as well. Therefore, by Theorem 3.2, the transportation polyhedron P is integer, P_I = P.
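To see this integrality at work numerically, one can solve a small transportation instance as a plain linear program and observe that a basic (vertex) optimal solution already has integer entries, so the integrality constraints in (15) come for free. The sketch below is our own illustration, assuming SciPy is available; the data are made up, and for simplicity we take l = 0 with no upper bounds. A simplex method is requested so that the reported optimum is a vertex of P.

```python
import numpy as np
from scipy.optimize import linprog

m, n = 2, 4
w = np.array([[4., 1., 3., 2.],
              [2., 5., 1., 3.]])          # transportation costs w_{i,j}
r = np.array([7., 5.])                    # supplies (row sums r_i)
c = np.array([3., 4., 2., 3.])            # demands (column sums c_j), sum(r) == sum(c)

# Equality constraints Σ_j x_{i,j} = r_i and Σ_i x_{i,j} = c_j;
# their coefficient matrix is the incidence matrix of K_{m,n}.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    for j in range(n):
        k = i * n + j                     # x flattened row-major as x_{i,j}
        A_eq[i, k] = 1                    # supply constraint of row i
        A_eq[m + j, k] = 1                # demand constraint of column j
b_eq = np.concatenate([r, c])

# Dual simplex ("highs-ds") returns a basic optimal solution, i.e. a vertex of P,
# which is integer by Theorems 3.8 and 3.2.
res = linprog(w.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs-ds")
print(res.x.reshape(m, n))
```

With finite bounds l and u one would instead pass bounds=list(zip(l.ravel(), u.ravel())); the optimal vertices remain integer, since appending ±I_{nm} to A_{m,n} preserves total unimodularity.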