A Proofs for Model Equivalence


In this section, we prove a generalization of Theorem 4.1 for both LDSs with observed inputs and LDSs with hidden inputs.

A.1 Preliminaries

Sum of ARMA processes. It is known that the sum of ARMA processes is still an ARMA process.

Lemma A.1 (Main Theorem in [Granger and Morris, 1976]). The sum of two independent stationary series generated by ARMA$(p, m)$ and ARMA$(q, n)$ is generated by ARMA$(x, y)$, where $x \le p + q$ and $y \le \max(p + n, q + m)$. In shorthand notation, ARMA$(p, m)$ + ARMA$(q, n)$ = ARMA$(p + q, \max(p + n, q + m))$.

When two ARMAX processes share the same exogenous input series, the dependency on the exogenous input is additive, and the above can be extended to ARMAX$(p, m, r)$ + ARMAX$(q, n, s)$ = ARMAX$(p + q, \max(p + n, q + m), \max(r, s))$.

Jordan canonical form and canonical basis. Every square real matrix is similar to a complex block diagonal matrix known as its Jordan canonical form (JCF). In the special case of diagonalizable matrices, the JCF coincides with the diagonal form. Based on the JCF, there exists a canonical basis $\{e_i\}$ consisting only of eigenvectors and generalized eigenvectors of $A$. A vector $v$ is a generalized eigenvector of rank $\mu$ with corresponding eigenvalue $\lambda$ if $(\lambda I - A)^{\mu} v = 0$ and $(\lambda I - A)^{\mu - 1} v \neq 0$.

Relating the canonical basis to the characteristic polynomial, the characteristic polynomial can be completely factored into linear factors $\chi_A(\lambda) = (\lambda - \lambda_1)^{\mu_1} (\lambda - \lambda_2)^{\mu_2} \cdots (\lambda - \lambda_r)^{\mu_r}$ over $\mathbb{C}$. The complex roots $\lambda_1, \ldots, \lambda_r$ are the eigenvalues of $A$. For each eigenvalue $\lambda_i$, there exist $\mu_i$ linearly independent generalized eigenvectors $v$ such that $(\lambda_i I - A)^{\mu_i} v = 0$.

A.2 General model equivalence theorem

Now we state Theorem A.1, a more detailed version of Theorem 4.1.

Theorem A.1. For any linear dynamical system with parameters $\Theta = (A, B, C, D)$, hidden dimension $n$, inputs $x_t \in \mathbb{R}^k$, and outputs $y_t \in \mathbb{R}^m$, the outputs $y_t$ satisfy

$$\chi_A^{\dagger}(L)\, y_t = \chi_A^{\dagger}(L)\, \xi_t + \Gamma(L)\, x_t, \qquad (5)$$

where $L$ is the lag operator, $\chi_A^{\dagger}(L) = L^n \chi_A(L^{-1})$ is the reciprocal polynomial of the characteristic polynomial of $A$, and $\Gamma(L)$ is an $m$-by-$k$ matrix of polynomials of degree $n - 1$.

This implies that each dimension of $y_t$ can be generated by an ARMAX$(n, n, n-1)$ model, where the autoregressive parameters are the characteristic polynomial coefficients in reverse order and with negated signs.

To prove the theorem, we introduce a lemma to analyze the autoregressive behavior of the hidden state projected onto a generalized eigenvector direction.

Lemma A.2. Consider a linear dynamical system with parameters $\Theta = (A, B, C, D)$, hidden states $h_t \in \mathbb{R}^n$, inputs $x_t \in \mathbb{R}^k$, and outputs $y_t \in \mathbb{R}^m$ as defined in (1). For any generalized eigenvector $e_i$ of $A^*$ with eigenvalue $\lambda$ and rank $\mu$, the lag operator polynomial $(1 - \lambda L)^{\mu}$ applied to the time series $h_t^{(i)} := \langle h_t, e_i \rangle$ results in

$$(1 - \lambda L)^{\mu} h_t^{(i)} = \text{a linear transformation of } x_t, \ldots, x_{t - \mu + 1}.$$

Proof. To expand the LHS, first observe that

$$\begin{aligned} (1 - \lambda L) h_t^{(i)} &= (1 - \lambda L) \langle h_t, e_i \rangle = \langle h_t, e_i \rangle - \lambda \langle h_{t-1}, e_i \rangle \\ &= \langle A h_{t-1} + B x_t, e_i \rangle - \langle h_{t-1}, \lambda e_i \rangle \\ &= \langle h_{t-1}, (A^* - \lambda I) e_i \rangle + \langle B x_t, e_i \rangle. \end{aligned}$$

We can apply $(1 - \lambda L)$ again similarly to obtain

$$(1 - \lambda L)^2 h_t^{(i)} = \langle h_{t-2}, (A^* - \lambda I)^2 e_i \rangle + \langle B x_{t-1}, (A^* - \lambda I) e_i \rangle + (1 - \lambda L) \langle B x_t, e_i \rangle,$$

and in general we can show inductively that

$$(1 - \lambda L)^k h_t^{(i)} - \langle h_{t-k}, (A^* - \lambda I)^k e_i \rangle = \sum_{j=0}^{k-1} (1 - \lambda L)^{k-1-j} L^j \langle B x_t, (A^* - \lambda I)^j e_i \rangle,$$

where the RHS is a linear transformation of $x_t, \ldots, x_{t-k+1}$.

Since $(\lambda I - A^*)^{\mu} e_i = 0$ by the definition of generalized eigenvectors, $\langle h_{t-\mu}, (A^* - \lambda I)^{\mu} e_i \rangle = 0$, and hence $(1 - \lambda L)^{\mu} h_t^{(i)}$ itself is a linear transformation of $x_t, \ldots, x_{t-\mu+1}$.
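To see the autoregressive relation concretely, here is a minimal numerical check (an illustration under stated assumptions, not code from the paper; it assumes NumPy and the noise-free, input-free special case $B = 0$, $D = 0$, $\xi_t = 0$, where equation (5) reduces to $\chi_A^{\dagger}(L)\, y_t = 0$):

```python
import numpy as np

# Minimal check of chi_A^dagger(L) y_t = 0 for a noise-free, input-free LDS
# (h_t = A h_{t-1}, y_t = C h_t). By Cayley-Hamilton, chi_A(A) = 0, so y_t
# obeys y_t = phi_1 y_{t-1} + ... + phi_n y_{t-n}, with phi_i the negated,
# reverse-order characteristic polynomial coefficients.
rng = np.random.default_rng(0)
n, m, T = 4, 2, 60
A = 0.5 * rng.standard_normal((n, n))  # hidden-state transition
C = rng.standard_normal((m, n))        # observation matrix

c = np.poly(A)   # chi_A coefficients [1, c_1, ..., c_n]
phi = -c[1:]     # AR parameters: phi_i = -c_i

h = rng.standard_normal(n)
ys = []
for _ in range(T):
    h = A @ h
    ys.append(C @ h)
ys = np.array(ys)  # shape (T, m)

# Residual of the order-n autoregression for all t >= n: should vanish.
resid = ys[n:] - sum(phi[i] * ys[n - 1 - i : T - 1 - i] for i in range(n))
print(np.max(np.abs(resid)))  # numerically zero, ~1e-15
```

With inputs and noise present the left-hand side no longer vanishes, but by the theorem it depends only on the most recent inputs and noise terms.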
Proof for Theorem A.1. Using Lemma A.2 and the canonical basis, we can prove Theorem A.1.

Proof. Let $\lambda_1, \ldots, \lambda_r$ be the eigenvalues of $A$ with multiplicities $\mu_1, \ldots, \mu_r$. Since $A$ is a real-valued matrix, its adjoint $A^*$ has the same characteristic polynomial and eigenvalues as $A$. There exists a canonical basis $\{e_i\}_{i=1}^{n}$ for $A^*$, where $e_1, \ldots, e_{\mu_1}$ are generalized eigenvectors with eigenvalue $\lambda_1$, $e_{\mu_1 + 1}, \ldots, e_{\mu_1 + \mu_2}$ are generalized eigenvectors with eigenvalue $\lambda_2$, and so forth, and $e_{\mu_1 + \cdots + \mu_{r-1} + 1}, \ldots, e_{\mu_1 + \cdots + \mu_r}$ are generalized eigenvectors with eigenvalue $\lambda_r$.

By Lemma A.2, $(1 - \lambda_1 L)^{\mu_1} h_t^{(i)}$ is a linear transformation of $x_t, \ldots, x_{t - \mu_1 + 1}$ for $i = 1, \ldots, \mu_1$; $(1 - \lambda_2 L)^{\mu_2} h_t^{(i)}$ is a linear transformation of $x_t, \ldots, x_{t - \mu_2 + 1}$ for $i = \mu_1 + 1, \ldots, \mu_1 + \mu_2$; and so forth; $(1 - \lambda_r L)^{\mu_r} h_t^{(i)}$ is a linear transformation of $x_t, \ldots, x_{t - \mu_r + 1}$ for $i = \mu_1 + \cdots + \mu_{r-1} + 1, \ldots, n$.

We then apply the lag operator polynomial $\prod_{j \neq i} (1 - \lambda_j L)^{\mu_j}$ to both sides of each equation. The lag polynomial on the LHS becomes $(1 - \lambda_1 L)^{\mu_1} \cdots (1 - \lambda_r L)^{\mu_r} = \chi_A^{\dagger}(L)$. For the RHS, since $\prod_{j \neq i} (1 - \lambda_j L)^{\mu_j}$ is of degree $n - \mu_i$, it lags the RHS by at most $n - \mu_i$ additional steps, and the RHS becomes a linear transformation of $x_t, \ldots, x_{t-n+1}$.

Thus, for each $i$, $\chi_A^{\dagger}(L) h_t^{(i)}$ is a linear transformation of $x_t, \ldots, x_{t-n+1}$.

The outputs of the LDS are defined as $y_t = C h_t + D x_t + \xi_t = \sum_{i=1}^{n} h_t^{(i)} C e_i + D x_t + \xi_t$. By linearity, and since $\chi_A^{\dagger}(L)$ is of degree $n$, both $\chi_A^{\dagger}(L) \sum_{i=1}^{n} h_t^{(i)} C e_i$ and $\chi_A^{\dagger}(L) D x_t$ are linear transformations of $x_t, \ldots, x_{t-n}$. We can write any such linear transformation as $\Gamma(L) x_t$ for some $m$-by-$k$ matrix $\Gamma(L)$ of polynomials of degree $n - 1$. Thus, as desired,

$$\chi_A^{\dagger}(L)\, y_t = \chi_A^{\dagger}(L)\, \xi_t + \Gamma(L)\, x_t.$$

Assuming that there are no common factors in $\chi_A^{\dagger}$ and $\Gamma$, $\chi_A^{\dagger}$ is then the lag operator polynomial that represents the autoregressive part of $y_t$. This assumption is the same as saying that $y_t$ cannot be expressed as a lower-order ARMA process.

The reciprocal polynomial has the same coefficients in reverse order as the original polynomial. Matching the lag operator polynomial on the LHS, $1 - \varphi_1 L - \varphi_2 L^2 - \cdots - \varphi_n L^n = \chi_A^{\dagger}(L)$ and $L^n - \varphi_1 L^{n-1} - \cdots - \varphi_n = \chi_A(L)$, so the $i$-th order autoregressive parameter $\varphi_i$ is the negative value of the $(n-i)$-th order coefficient of the characteristic polynomial $\chi_A$.
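This coefficient correspondence is easy to exercise numerically; the sketch below (assuming NumPy; an illustration, not the paper's code) rebuilds $\chi_A(z) = z^n - \varphi_1 z^{n-1} - \cdots - \varphi_n$ from the autoregressive parameters of a random $A$ and recovers the spectrum by root finding, the direction used in Appendix B:

```python
import numpy as np

# From AR parameters back to eigenvalues: rebuild
# chi_A(z) = z^n - phi_1 z^{n-1} - ... - phi_n and take its roots.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))

phi = -np.poly(A)[1:]                   # exact AR parameters for this A
chi = np.concatenate(([1.0], -phi))     # [1, -phi_1, ..., -phi_n]
roots = np.sort_complex(np.roots(chi))  # eigenvalue estimates
eigs = np.sort_complex(np.linalg.eigvals(A))
print(np.max(np.abs(roots - eigs)))     # numerically zero
```

In practice the parameters are estimated from observed outputs, so the recovered roots carry the estimation error $\epsilon$; how that error propagates to the eigenvalues is the subject of Appendix B below.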
A.3 The hidden input case as a corollary

The statement about LDSs without external inputs in Theorem 4.1 comes as a corollary to Theorem A.1, with a short proof here.

Proof. Define $y'_t = C h_t + D x_t$ to be the output without noise, i.e. $y_t = y'_t + \xi_t$. Since the hidden inputs $x_t$ are i.i.d. Gaussians, $y'_t$ is then generated by an ARMA$(n, n-1)$ process with autoregressive polynomial $\chi_A^{\dagger}(L)$.

The output noise $\xi_t$ itself can be seen as an ARMA$(0, 0)$ process. By Lemma A.1, ARMA$(n, n-1)$ + ARMA$(0, 0)$ = ARMA$(n + 0, \max(n + 0, n - 1 + 0))$ = ARMA$(n, n)$. Hence the outputs $y_t$ are generated by an ARMA$(n, n)$ process, as claimed in Theorem 4.1. It is easy to see in the proof of Lemma A.1 that the autoregressive parameters do not change when adding white noise [Granger and Morris, 1976].

B Proofs for eigenvalue approximation theorems

Here we restate Theorem 4.2 and Theorem 4.3 together, and prove them in three steps: 1) the general case, 2) the simple eigenvalue case, and 3) the explicit condition number bounds for the simple eigenvalue case.

Theorem B.1. Suppose $y_t$ are the outputs from an $n$-dimensional latent linear dynamical system with parameters $\Theta = (A, B, C, D)$ and eigenvalues $\lambda_1, \ldots, \lambda_n$. Let $\hat{\Phi} = (\hat{\varphi}_1, \ldots, \hat{\varphi}_n)$ be the estimated autoregressive parameters with error $\|\hat{\Phi} - \Phi\| = \epsilon$, and let $r_1, \ldots, r_n$ be the roots of the polynomial $1 - \hat{\varphi}_1 z - \cdots - \hat{\varphi}_n z^n$. Assuming the LDS is observable, the roots converge to the true eigenvalues with convergence rate $O(\epsilon^{1/n})$. If all eigenvalues of $A$ are simple (i.e. of multiplicity 1), then the convergence rate is $O(\epsilon)$. If $A$ is symmetric, Lyapunov stable (spectral radius at most 1), and only has simple eigenvalues, then

$$|r_j - \lambda_j| \le \frac{n^2 \sqrt{n-1}}{\prod_{k \neq j} |\lambda_j - \lambda_k|}\, \epsilon + O(\epsilon^2).$$

B.1 General $(1/n)$-exponent bound

This is a known perturbation bound on polynomial root finding due to Ostrowski [Beauzamy, 1999].

Lemma B.1. Let $\Phi(z) = z^n + \varphi_1 z^{n-1} + \cdots + \varphi_{n-1} z + \varphi_n$ and $\Psi(z) = z^n + \psi_1 z^{n-1} + \cdots + \psi_{n-1} z + \psi_n$ be two polynomials of degree $n$. If $\|\Phi - \Psi\|_2 < \epsilon$, then the roots $(r_k)$ of $\Phi$ and the roots $(\tilde{r}_k)$ of $\Psi$, under suitable ordering, satisfy

$$|r_k - \tilde{r}_k| \le 4 C \epsilon^{1/n},$$

where $C = \max\{1,\, \max_{1 \le k \le n} |\varphi_k|^{1/k},\, \max_{1 \le k \le n} |\psi_k|^{1/k}\}$.

The general $O(\epsilon^{1/n})$ convergence rate in Theorem 4.2 follows directly from Lemma B.1 and Theorem 4.1.

B.2 Bound for simple eigenvalues
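As a numerical illustration of the two regimes (a sketch assuming NumPy, not the paper's experiments): under an $\epsilon$-sized coefficient perturbation, a root of multiplicity $n$ splits at rate $\epsilon^{1/n}$, matching Lemma B.1, while simple, well-separated roots move at rate $O(\epsilon)$.

```python
import numpy as np

# Two perturbation regimes for polynomial roots under an eps-sized
# coefficient perturbation, mirroring Theorem B.1 via Lemma B.1.
n, eps = 4, 1e-8

# Root 0 with multiplicity n: chi(z) = z^n. Perturbing the constant
# term by eps splits it into n roots of modulus eps**(1/n).
split = np.roots([1.0] + [0.0] * (n - 1) + [eps])
print(np.max(np.abs(split)))  # ~ eps**(1/n) = 1e-2

# Simple, well-separated roots 1, 2, 3, 4: the same-sized perturbation
# of every coefficient moves the roots by only O(eps).
chi = np.poly([1.0, 2.0, 3.0, 4.0])
moved = np.sort(np.roots(chi + eps).real)
print(np.max(np.abs(moved - np.array([1.0, 2.0, 3.0, 4.0]))))  # ~ O(eps)
```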