Simple Matrices


Appendix B: Simple matrices

    Mathematicians also attempted to develop an algebra of vectors, but there was no natural definition of the product of two vectors that held in arbitrary dimensions. The first vector algebra that involved a noncommutative vector product (that is, v×w need not equal w×v) was proposed by Hermann Grassmann in his book Ausdehnungslehre (1844). Grassmann's text also introduced the product of a column matrix and a row matrix, which resulted in what is now called a simple or a rank-one matrix. In the late 19th century the American mathematical physicist Willard Gibbs published his famous treatise on vector analysis. In that treatise Gibbs represented general matrices, which he called dyadics, as sums of simple matrices, which Gibbs called dyads. Later the physicist P. A. M. Dirac introduced the term "bra-ket" for what we now call the scalar product of a "bra" (row) vector times a "ket" (column) vector, and the term "ket-bra" for the product of a ket times a bra, resulting in what we now call a simple matrix, as above. Our convention of identifying column matrices and vectors was introduced by physicists in the 20th century.
        − Suddhendu Biswas [52, p.2]

B.1 Rank-1 matrix (dyad)

Any matrix formed from the unsigned outer product of two vectors,

    Ψ = uv^T ∈ R^{M×N}    (1783)

where u ∈ R^M and v ∈ R^N, is rank-1 and called a dyad. Conversely, any rank-1 matrix must have the form Ψ. [233, prob.1.4.1] Product −uv^T is a negative dyad. For matrix products AB^T, in general, we have

    R(AB^T) ⊆ R(A) ,   N(AB^T) ⊇ N(B^T)    (1784)

with equality when B = A [374, §3.3, 3.6] or respectively when B is invertible and N(A) = 0.^B.1 Yet for all nonzero dyads we have

    R(uv^T) = R(u) ,   N(uv^T) = N(v^T) ≡ v^⊥    (1785)

where dim v^⊥ = N − 1.

B.1 Proof. R(AA^T) ⊆ R(A) is obvious.
    R(AA^T) = { AA^T y | y ∈ R^m }
            ⊇ { AA^T y | A^T y ∈ R(A^T) } = R(A)   by (146)   ∎

[Figure 183: The four fundamental subspaces [376, §3.6] for any dyad Ψ = uv^T ∈ R^{M×N}: R(v) ⊥ N(Ψ) and N(u^T) ⊥ R(Ψ). Ψ(x) ≜ uv^T x is a linear mapping from R^N to R^M. The map from R(v) to R(u) is bijective. [374, §3.1]]

It is obvious that a dyad can be 0 only when u or v is 0;

    Ψ = uv^T = 0  ⇔  u = 0 or v = 0    (1786)

The matrix 2-norm for Ψ is equivalent to the Frobenius norm;

    ‖Ψ‖_2 = σ_1 = ‖uv^T‖_F = ‖uv^T‖_2 = ‖u‖ ‖v‖    (1787)

When u and v are normalized, the pseudoinverse is the transposed dyad. Otherwise,

    Ψ† = (uv^T)† = vu^T / (‖u‖² ‖v‖²)    (1788)

When dyad uv^T ∈ R^{N×N} is square, uv^T has at least N−1 0-eigenvalues and corresponding eigenvectors spanning v^⊥. The remaining eigenvector u spans the range of uv^T with corresponding eigenvalue

    λ = v^T u = tr(uv^T) ∈ R    (1789)

Determinant is a product of the eigenvalues; so, it is always true that

    det Ψ = det(uv^T) = 0    (1790)

When λ = 1, the square dyad is a nonorthogonal projector projecting on its range (Ψ² = Ψ, §E.6); a projector dyad. It is quite possible that u ∈ v^⊥, making the remaining eigenvalue instead 0;^B.2 λ = 0 together with the first N−1 0-eigenvalues; id est, it is possible that uv^T is nonzero while all its eigenvalues are 0. The matrix

    [ 1] [1 1]  =  [ 1  1]
    [−1]           [−1 −1]    (1791)

for example, has two 0-eigenvalues. In other words, eigenvector u may simultaneously be a member of the nullspace and range of the dyad.

B.2 A dyad is not always diagonalizable (§A.5) because its eigenvectors are not necessarily independent.
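These facts are easy to check numerically. The sketch below is illustrative only (it is not part of the text) and assumes NumPy; the particular vectors are arbitrary. It verifies the rank, the norm identity (1787), the pseudoinverse (1788), the eigenvalue (1789), and the all-zero-eigenvalue dyad (1791).

```python
import numpy as np

# Build a dyad Psi = u v^T  (1783) and verify the stated properties numerically.
u = np.array([3.0, -1.0, 2.0])            # u in R^M, M = 3
v = np.array([1.0, 4.0, 0.5, -2.0])       # v in R^N, N = 4
Psi = np.outer(u, v)                      # Psi = u v^T in R^{M x N}
assert np.linalg.matrix_rank(Psi) == 1    # a nonzero dyad is rank-1

# (1787): 2-norm = Frobenius norm = ||u|| ||v||
assert np.isclose(np.linalg.norm(Psi, 2), np.linalg.norm(Psi, 'fro'))
assert np.isclose(np.linalg.norm(Psi, 2), np.linalg.norm(u) * np.linalg.norm(v))

# (1788): the pseudoinverse of a dyad is the scaled transposed dyad
Psi_dagger = np.outer(v, u) / (np.linalg.norm(u)**2 * np.linalg.norm(v)**2)
assert np.allclose(Psi_dagger, np.linalg.pinv(Psi))

# (1789): square dyad u w^T has lone (possibly) nonzero eigenvalue lambda = w^T u = tr(u w^T)
w = np.array([1.0, 0.0, -1.0])
D = np.outer(u, w)
eigvals = np.linalg.eigvals(D)
assert np.isclose(np.max(np.abs(eigvals)), abs(w @ u))
assert np.isclose(np.trace(D), w @ u)

# (1791): a nonzero dyad whose eigenvalues are all 0, because u lies in v-perp
Z = np.outer(np.array([1.0, -1.0]), np.array([1.0, 1.0]))
assert np.max(np.abs(np.linalg.eigvals(Z))) < 1e-6
```

The last check reproduces example (1791): u = [1, −1]^T spans the range of uv^T yet also lies in its nullspace.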
The explanation is, simply, because u and v share the same dimension, dim u = M = dim v = N:

Proof. Figure 183 shows the four fundamental subspaces for the dyad. Linear operator Ψ : R^N → R^M provides a map between vector spaces that remain distinct when M ≠ N;

    u ∈ R(uv^T)
    u ∈ N(uv^T)  ⇔  v^T u = 0    (1792)
    R(uv^T) ∩ N(uv^T) = ∅    ∎

B.1.0.1 rank-1 modification

For A ∈ R^{M×N}, x ∈ R^N, y ∈ R^M, and y^T A x ≠ 0 [238, §2.1]^B.3

    rank( A − (A x y^T A)/(y^T A x) ) = rank(A) − 1    (1794)

Given nonsingular matrix A ∈ R^{N×N} such that 1 + v^T A^{−1} u ≠ 0, [178, §4.11.2] [253, App.6] [463, §2.3 prob.16] (Sherman-Morrison)

    (A ± uv^T)^{−1} = A^{−1} ∓ (A^{−1} u v^T A^{−1})/(1 ± v^T A^{−1} u)    (1795)

B.3 This rank-1 modification formula has a Schur progenitor, in the symmetric case:

    minimize_c   c
    subject to   [ A      Ax ] ⪰ 0    (1793)
                 [ y^T A  c  ]

which has analytical solution by (1679b): c ≥ y^T A A† A x = y^T A x. Difference A − (A x y^T A)/(y^T A x) comes from (1679c). Rank modification is provable via Theorem A.4.0.1.3.

B.1.0.2 dyad symmetry

In the specific circumstance that v = u, then uu^T ∈ R^{N×N} is symmetric, rank-1, and positive semidefinite having exactly N−1 0-eigenvalues. In fact, (Theorem A.3.1.0.7)

    uv^T ⪰ 0  ⇔  v = u    (1796)

and the remaining eigenvalue is almost always positive;

    λ = u^T u = tr(uu^T) > 0  unless u = 0    (1797)

When λ = 1, the dyad becomes an orthogonal projector. Matrix

    [ Ψ    u ]
    [ u^T  1 ]    (1798)

for example, is rank-1 positive semidefinite if and only if Ψ = uu^T.

B.1.1 Dyad independence

Now we consider a sum of dyads like (1783) as encountered in diagonalization and singular value decomposition:

    R( Σ_{i=1}^k s_i w_i^T ) = Σ_{i=1}^k R(s_i w_i^T) = Σ_{i=1}^k R(s_i)   ⇐   w_i ∀ i are l.i.    (1799)

range of summation is the vector sum of ranges.^B.4 (Theorem B.1.1.1.1) Under the assumption the dyads are linearly independent (l.i.), then vector sums are unique (p.631): for {w_i} l.i. and {s_i} l.i.

    R( Σ_{i=1}^k s_i w_i^T ) = R(s_1 w_1^T) ⊕ ··· ⊕ R(s_k w_k^T) = R(s_1) ⊕ ··· ⊕ R(s_k)    (1800)

B.4 Move of range R to inside summation admitted by linear independence of {w_i}.

B.1.1.0.1 Definition. Linearly independent dyads. [243, p.29 thm.11] [382, p.2]
The set of k dyads

    { s_i w_i^T | i = 1 ... k }    (1801)

where s_i ∈ C^M and w_i ∈ C^N, is said to be linearly independent iff

    rank( SW^T ≜ Σ_{i=1}^k s_i w_i^T ) = k    (1802)

where S ≜ [s_1 ··· s_k] ∈ C^{M×k} and W ≜ [w_1 ··· w_k] ∈ C^{N×k}.    △

Dyad independence does not preclude existence of a nullspace N(SW^T), as defined, nor does it imply SW^T were full-rank. In absence of assumption of independence, generally, rank SW^T ≤ k. Conversely, any rank-k matrix can be written in the form SW^T by singular value decomposition. (§A.6)

B.1.1.0.2 Theorem. Linearly independent (l.i.) dyads.
Vectors {s_i ∈ C^M, i = 1 ... k} are l.i. and vectors {w_i ∈ C^N, i = 1 ... k} are l.i. if and only if dyads {s_i w_i^T ∈ C^{M×N}, i = 1 ... k} are l.i.    ⋄

Proof. Linear independence of k dyads is identical to definition (1802).
(⇒) Suppose {s_i} and {w_i} are each linearly independent sets. Invoking Sylvester's rank inequality, [233, §0.4] [463, §2.4]

    rank S + rank W − k  ≤  rank(SW^T)  ≤  min{rank S, rank W}  (≤ k)    (1803)

Then k ≤ rank(SW^T) ≤ k, which implies the dyads are independent.
(⇐) Conversely, suppose rank(SW^T) = k. Then

    k ≤ min{rank S, rank W} ≤ k    (1804)

implying the vector sets are each independent.    ∎
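The rank-1 modification (1794), the Sherman-Morrison identity (1795), the semidefinite dyad (1796)-(1797), and the independence criterion (1802) all lend themselves to quick numerical checks. The sketch below is illustrative only (not from the text) and assumes NumPy with arbitrary random data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
A = rng.standard_normal((N, N)) + N * np.eye(N)    # nonsingular with overwhelming probability
u = rng.standard_normal(N)
v = rng.standard_normal(N)

# (1795) Sherman-Morrison: (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u)
Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + np.outer(u, v))
rhs = Ainv - np.outer(Ainv @ u, v @ Ainv) / (1 + v @ Ainv @ u)
assert np.allclose(lhs, rhs)

# (1794) rank-1 modification: rank(A - A x y^T A / (y^T A x)) = rank(A) - 1
x = rng.standard_normal(N)
y = rng.standard_normal(N)
assert not np.isclose(y @ A @ x, 0)                # hypothesis y^T A x != 0
B = A - np.outer(A @ x, y @ A) / (y @ A @ x)
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A) - 1

# (1796)-(1797): u u^T is symmetric PSD with single positive eigenvalue u^T u
evals = np.linalg.eigvalsh(np.outer(u, u))         # ascending order
assert np.all(evals >= -1e-12) and np.isclose(evals[-1], u @ u)

# (1802): summing k independent dyads gives a rank-k matrix S W^T
k = 3
S = rng.standard_normal((N, k))                    # independent columns (with probability 1)
W = rng.standard_normal((N, k))
SWt = sum(np.outer(S[:, i], W[:, i]) for i in range(k))
assert np.allclose(SWt, S @ W.T)
assert np.linalg.matrix_rank(SWt) == k
```

The Sherman-Morrison identity is what makes cheap rank-1 updates of an already-computed inverse or factorization practical.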
B.1.1.1 Biorthogonality condition, Range and Nullspace of Sum

Dyads characterized by biorthogonality condition W^T S = I are independent; id est, for S ∈ C^{M×k} and W ∈ C^{N×k}, if W^T S = I then rank(SW^T) = k by the linearly independent dyads theorem because (confer §E.1.1)

    W^T S = I  ⇒  rank S = rank W = k ≤ M = N    (1805)

To see that, we need only show: N(S) = 0 ⇔ ∃ B such that BS = I.^B.5
(⇐) Assume BS = I. Then N(BS) = 0 = {x | BSx = 0} ⊇ N(S). (1784)
(⇒) If N(S) = 0 then S must be full-rank thin-or-square.
∴ ∃ A, B, C such that

    [ B ]
    [ C ] [ S  A ] = I   (id est, [ S A ] is invertible)   ⇒   BS = I    ∎

B.5 Left inverse is not unique, in general.

B.2 Doublet

[Figure 184: The four fundamental subspaces [376, §3.6] for doublet Π = uv^T + vu^T ∈ S^N: R(Π) = R([u v]) and N(Π) = v^⊥ ∩ u^⊥. Π(x) = (uv^T + vu^T)x is a linear bijective mapping from R([u v]) to R([u v]).]
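The text of §B.2 is not part of this excerpt, but the facts recorded in the Figure 184 caption can still be verified numerically. The following sketch is illustrative only (not from the text) and assumes NumPy; the vectors u and v are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
u = rng.standard_normal(N)
v = rng.standard_normal(N)                     # u, v linearly independent (with probability 1)
Pi = np.outer(u, v) + np.outer(v, u)           # doublet  Pi = u v^T + v u^T

assert np.allclose(Pi, Pi.T)                   # Pi is symmetric: Pi in S^N
assert np.linalg.matrix_rank(Pi) == 2          # same rank as [u v]

# R(Pi) = R([u v]): adjoining u and v to the columns of Pi does not increase the rank.
UV = np.column_stack([u, v])
assert np.linalg.matrix_rank(np.hstack([Pi, UV])) == 2

# N(Pi) = v-perp intersect u-perp: anything orthogonal to both u and v is annihilated.
Q, _ = np.linalg.qr(UV, mode='complete')
x = Q[:, 2]                                    # a vector orthogonal to span{u, v}
assert np.allclose(Pi @ x, 0)

# Restricted to R([u v]), Pi is bijective: its 2x2 representation in the basis {u, v},
# [[v.u, v.v], [u.u, u.v]], has determinant (u.v)^2 - |u|^2 |v|^2 < 0 by Cauchy-Schwarz.
B2 = np.array([[v @ u, v @ v],
               [u @ u, u @ v]])
assert np.linalg.det(B2) < 0
```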