Appendix: Basics and Useful Relations from Linear Algebra


A.1 Inner Product

\[ \langle x, y \rangle_A = x^\mathsf{T} A y . \tag{A.1} \]

The index indicates the matrix used in the bilinear form. In the case of homogeneous vectors (set in an upright font in the book's typesetting) we have $\langle \mathsf{x}, \mathsf{y} \rangle_{\mathsf{A}} = \mathsf{x}^\mathsf{T} \mathsf{A}\, \mathsf{y}$. We omit the index when it is clear from the context.

A.2 Determinant

A.2.1 Definition of the Determinant

The determinant of an $N \times N$ matrix is a scalar function $D = \det(A) : \mathbb{R}^{N \times N} \to \mathbb{R}$ with the following properties:

1. The determinant is linear in the columns (or rows) of the matrix. That is, if the $n$th column is $a_n = \alpha x + \beta y$ for any vectors $x, y \in \mathbb{R}^N$ and some constants $\alpha, \beta$, then
\[ |(a_1, \dots, \alpha x + \beta y, \dots, a_N)| = \alpha\, |(a_1, \dots, x, \dots, a_N)| + \beta\, |(a_1, \dots, y, \dots, a_N)| . \tag{A.2} \]
2. When exchanging two rows or two columns, the sign of the determinant changes.
3. If $N = 1$, $\det([1]) = 1$.

We also write
\[ \det A = |A| . \tag{A.3} \]
For $N = 2$, we have
\[ \det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = a_{11} a_{22} - a_{12} a_{21} . \tag{A.4} \]

A.2.2 Laplacian Development of a Determinant

The following theorem allows us to write the determinant of a matrix $A$ as a sum of products of sub-determinants of the matrix. Let $r = \{r_1, \dots, r_K\}$ with $r_1 < \dots < r_K$ be a set of row indices $r_k \in \mathbb{N}$, and $c = \{c_1, \dots, c_K\}$ with $c_1 < \dots < c_K$ be a set of column indices $c_k \in \mathbb{N}$. The $K \times K$ submatrix keeping only the rows $r$ and columns $c$ is written as
\[ S(A, r, c) . \tag{A.5} \]
The complementary $(N-K) \times (N-K)$ submatrix removing these rows and columns is
\[ S'(A, r, c) . \tag{A.6} \]

Theorem A.2.9 (Laplacian development theorem). Given the $N \times N$ matrix $A$ and two lists $r = \{r_k\}$ and $c = \{c_k\}$ of $K$ row and column indices with $1 \le r_1 < \dots < r_K \le N$ and $1 \le c_1 < \dots < c_K \le N$, the determinant can be expressed as
\[ |A| = (-1)^{|c|} \sum_r (-1)^{|r|}\, |S(A, r, c)|\; |S'(A, r, c)| , \tag{A.7} \]
where $|r| = r_1 + \dots + r_K$ and $|c| = c_1 + \dots + c_K$, and the summation is taken over all $\binom{N}{K}$ combinations of $r$ with $1 \le r_1 < \dots < r_k < \dots < r_K \le N$, the column set $c$ being fixed.

Clearly, if the properties of the determinant hold for the submatrices $S(A, r, c)$ and $S'(A, r, c)$, they also hold for the determinant of the matrix $A$, which allows the theorem to be proven by induction, as it holds for $N = 2$. The determinant of a square submatrix is also called a minor. Thus the Laplacian development theorem expresses the determinant of the matrix as a sum of products of minors.
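As a numeric sanity check of (A.7) — our sketch, not part of the original appendix — one can compare the development against a direct determinant computation. The function name and 0-based index handling below are ours; the 1-based index sums $|r|$ and $|c|$ of the theorem differ from the 0-based sums only by the even offset $2K$, so the sign is unchanged.

```python
# Minimal check of the Laplacian development theorem (A.7), assuming NumPy.
import itertools
import numpy as np

def laplace_development(A, c):
    """Develop det(A) by the fixed column set c (0-based indices)."""
    N = A.shape[0]
    K = len(c)
    c_comp = [j for j in range(N) if j not in c]
    total = 0.0
    for r in itertools.combinations(range(N), K):
        r_comp = [i for i in range(N) if i not in r]
        S = A[np.ix_(r, c)]                  # K x K submatrix S(A, r, c)
        S_comp = A[np.ix_(r_comp, c_comp)]   # complement S'(A, r, c)
        sign = (-1) ** (sum(r) + sum(c))     # same parity as (-1)^{|r|+|c|}
        total += sign * np.linalg.det(S) * np.linalg.det(S_comp)
    return total

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
print(np.isclose(laplace_development(A, (0, 1)), np.linalg.det(A)))  # expect True
```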
Two cases are of special interest. An important example is the development of a $4 \times 4$ matrix by the first two columns. Thus we fix $c = (1, 2)$ and obtain
\[ \det A = (-1)^{1+2} \sum_r (-1)^{r_1 + r_2}\, |S(A, r, c)|\; |S'(A, r, c)| \tag{A.8} \]
\begin{align}
={} & +|S(A, (1,2), (1,2))|\; |S'(A, (1,2), (1,2))| \notag\\
& -|S(A, (1,3), (1,2))|\; |S'(A, (1,3), (1,2))| \notag\\
& +|S(A, (1,4), (1,2))|\; |S'(A, (1,4), (1,2))| \notag\\
& +|S(A, (2,3), (1,2))|\; |S'(A, (2,3), (1,2))| \notag\\
& -|S(A, (2,4), (1,2))|\; |S'(A, (2,4), (1,2))| \notag\\
& +|S(A, (3,4), (1,2))|\; |S'(A, (3,4), (1,2))| \tag{A.9}
\end{align}
\begin{align}
={} & +\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}
\begin{vmatrix} a_{33} & a_{34} \\ a_{43} & a_{44} \end{vmatrix}
- \begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix}
\begin{vmatrix} a_{23} & a_{24} \\ a_{43} & a_{44} \end{vmatrix}
+ \begin{vmatrix} a_{11} & a_{12} \\ a_{41} & a_{42} \end{vmatrix}
\begin{vmatrix} a_{23} & a_{24} \\ a_{33} & a_{34} \end{vmatrix} \notag\\
& + \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
\begin{vmatrix} a_{13} & a_{14} \\ a_{43} & a_{44} \end{vmatrix}
- \begin{vmatrix} a_{21} & a_{22} \\ a_{41} & a_{42} \end{vmatrix}
\begin{vmatrix} a_{13} & a_{14} \\ a_{33} & a_{34} \end{vmatrix}
+ \begin{vmatrix} a_{31} & a_{32} \\ a_{41} & a_{42} \end{vmatrix}
\begin{vmatrix} a_{13} & a_{14} \\ a_{23} & a_{24} \end{vmatrix} . \tag{A.10}
\end{align}

As the minors referring to a set $c$ of columns of a square matrix can be interpreted as the Plücker coordinates of the join of the points $\mathsf{X}_c \in \mathbb{P}^{N-1}$ given by these columns, the determinant of a matrix is the sum of the products of the Plücker coordinates of the columns $c$ and of the columns not in $c$, taking the correct signs into account.

The second application of (A.7) is the following lemma.

Lemma A.2.1 (Development of a determinant by row). The determinant of an $N \times N$ matrix can be expressed as
\[ |A| = \sum_{n=1}^{N} (-1)^{1+n}\, a_{1n}\, |S'(A, (1), (n))| . \tag{A.11} \]
This results from (A.7), exchanging the roles of rows and columns, with $K = 1$, fixing $r = (1)$ and summing over the columns $c = (n)$. For example, take the determinant of a $3 \times 3$ matrix:
\[ \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}
= a \begin{vmatrix} e & f \\ h & i \end{vmatrix}
- b \begin{vmatrix} d & f \\ g & i \end{vmatrix}
+ c \begin{vmatrix} d & e \\ g & h \end{vmatrix} . \tag{A.12} \]

A.2.3 Determinant of a Block Matrix

The determinant of a block matrix is given by
\[ \begin{vmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{vmatrix}
= |A_{11}|\; |A_{22} - A_{21} A_{11}^{-1} A_{12}|
= |A_{22}|\; |A_{11} - A_{12} A_{22}^{-1} A_{21}| . \tag{A.13} \]

A.3 Inverse, Adjugate, and Cofactor Matrix

The inverse $A^{-1}$ of a regular square matrix $A$ fulfils $A^{-1} A = A A^{-1} = I$. We have the Woodbury identity, with appropriately related matrices $A$, $B$, $C$,
\[ (A + C B C^\mathsf{T})^{-1} = A^{-1} - A^{-1} C\, (C^\mathsf{T} A^{-1} C + B^{-1})^{-1} C^\mathsf{T} A^{-1} \tag{A.14} \]
(see Petersen and Pedersen, 2012). We also have
\[ A^{-1} + B^{-1} = A^{-1} (A + B) B^{-1} \tag{A.15} \]
(see Petersen and Pedersen, 2012, (144)).

The inverse of a symmetric $2 \times 2$ block matrix is given by
\begin{align}
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1}
&= \begin{bmatrix}
A_{11}^{-1} + A_{11}^{-1} A_{12}\, C_2^{-1} A_{21} A_{11}^{-1} & -A_{11}^{-1} A_{12}\, C_2^{-1} \\
-C_2^{-1} A_{21} A_{11}^{-1} & C_2^{-1}
\end{bmatrix} \tag{A.16}\\
&= \begin{bmatrix}
C_1^{-1} & -C_1^{-1} A_{12} A_{22}^{-1} \\
-A_{22}^{-1} A_{21}\, C_1^{-1} & A_{22}^{-1} + A_{22}^{-1} A_{21}\, C_1^{-1} A_{12} A_{22}^{-1}
\end{bmatrix} , \tag{A.17}
\end{align}
with
\[ C_1 = A_{11} - A_{12} A_{22}^{-1} A_{21} , \qquad C_2 = A_{22} - A_{21} A_{11}^{-1} A_{12} , \tag{A.18} \]
assuming at least one of the two submatrices $A_{ii}$ to be regular.

The cofactor matrix $A^{\mathrm{O}}$ of a square, not necessarily regular, matrix is the matrix of the signed determinants of its submatrices,
\[ A^{\mathrm{O}} = [(-1)^{i+j} |A^{(ij)}|] , \tag{A.19} \]
where $A^{(ij)}$ is the matrix with row $i$ and column $j$ deleted. For a $2 \times 2$ matrix we have
\[ A^{\mathrm{O}} = \begin{bmatrix} a_{22} & -a_{21} \\ -a_{12} & a_{11} \end{bmatrix} . \tag{A.20} \]
For a general $3 \times 3$ matrix $A = [a_1, a_2, a_3]$ with column vectors $a_i$, it can be shown that
\[ A^{\mathrm{O}} = [a_2 \times a_3,\; a_3 \times a_1,\; a_1 \times a_2] . \tag{A.21} \]

The adjugate matrix $A^{*}$ of a square matrix, which is not necessarily regular, is the transpose of the cofactor matrix,
\[ A^{*} = (A^{\mathrm{O}})^\mathsf{T} = [(-1)^{i+j} |A^{(ji)}|] . \tag{A.22} \]
It is closely related to the inverse by
\[ A^{*} = |A|\, A^{-1} , \tag{A.23} \]
and thus is proportional to the inverse if $A$ is regular. The determinant therefore can be written as
\[ |A| = \frac{1}{N}\, \mathrm{tr}(A^{*} A) = \frac{1}{N}\, \mathrm{tr}((A^{\mathrm{O}})^\mathsf{T} A) , \tag{A.24} \]
where $\mathrm{tr}\, A$ is the trace of the matrix $A$. Finally, we observe for regular $n \times n$ matrices,
\[ (A^{*})^{*} = |A|^{n-2} A \qquad \text{and} \qquad (A^{\mathrm{O}})^{\mathrm{O}} = |A|^{n-2} A , \tag{A.25} \]
due to $(A^{*})^{*} = (|A|\, A^{-1})^{*} = |A|^{n-1}\, |A|^{-1} A = |A|^{n-2} A$.
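The cofactor and adjugate relations (A.21)–(A.25) are easy to validate numerically. The following sketch is ours, not part of the original appendix; the helper name `cofactor_matrix` is an assumption for illustration.

```python
# Sketch: numeric checks of the cofactor/adjugate relations (A.21)-(A.25).
import numpy as np

def cofactor_matrix(A):
    """Cofactor matrix A^O per (A.19): signed minors with row i, column j deleted."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
C = cofactor_matrix(A)
adj = C.T                        # A* = (A^O)^T, (A.22)
detA = np.linalg.det(A)

print(np.allclose(adj, detA * np.linalg.inv(A)))    # (A.23)
print(np.isclose(detA, np.trace(adj @ A) / 3))      # (A.24), N = 3
print(np.allclose(C, np.column_stack([np.cross(A[:, 1], A[:, 2]),
                                      np.cross(A[:, 2], A[:, 0]),
                                      np.cross(A[:, 0], A[:, 1])])))  # (A.21)
print(np.allclose(cofactor_matrix(C), detA ** (3 - 2) * A))           # (A.25)
```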
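Likewise, the Woodbury identity (A.14), the block inverse (A.16)/(A.18), and the block determinant (A.13) can be checked in a few lines. In this sketch (again ours) the matrices are built symmetric positive definite so that all required inverses exist.

```python
# Sketch: numeric checks of (A.13), (A.14), (A.16), and (A.18), assuming NumPy.
import numpy as np

rng = np.random.default_rng(2)

# Woodbury identity (A.14), with A and B symmetric positive definite.
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4)); A = A @ A.T
B = np.eye(2) + 0.1 * rng.standard_normal((2, 2)); B = B @ B.T
C = rng.standard_normal((4, 2))
Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + C @ B @ C.T)
rhs = Ai - Ai @ C @ np.linalg.inv(C.T @ Ai @ C + np.linalg.inv(B)) @ C.T @ Ai
print(np.allclose(lhs, rhs))  # expect True

# Block inverse via the Schur complement C2 = A22 - A21 A11^{-1} A12, (A.16)/(A.18).
M = rng.standard_normal((6, 6)); M = M @ M.T + 6 * np.eye(6)  # s.p.d. block matrix
A11, A12 = M[:3, :3], M[:3, 3:]
A21, A22 = M[3:, :3], M[3:, 3:]
A11i = np.linalg.inv(A11)
C2i = np.linalg.inv(A22 - A21 @ A11i @ A12)
Minv = np.block([[A11i + A11i @ A12 @ C2i @ A21 @ A11i, -A11i @ A12 @ C2i],
                 [-C2i @ A21 @ A11i,                     C2i]])
print(np.allclose(Minv, np.linalg.inv(M)))  # expect True

# Determinant of a block matrix, (A.13).
print(np.isclose(np.linalg.det(M),
                 np.linalg.det(A11) * np.linalg.det(A22 - A21 @ A11i @ A12)))
```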
A.4 Skew-Symmetric Matrices

Skew-symmetric matrices play a central role when representing rotations. An $N \times N$ skew-symmetric matrix $S$ has the properties
\[ S = -S^\mathsf{T} , \tag{A.26} \]
\[ \mathrm{tr}\, S = 0 . \tag{A.27} \]

A.4.1 2 × 2 Skew-Symmetric Matrix

For a scalar $x$, we obtain the $2 \times 2$ skew-symmetric matrix
\[ S_x = S(x) = \begin{bmatrix} 0 & -x \\ x & 0 \end{bmatrix} \tag{A.28} \]
with the following properties:

• It is regular (for $x \neq 0$) with determinant
\[ \det(S(x)) = x^2 \tag{A.29} \]
and eigenvalues
\[ \lambda_1 = ix , \quad \lambda_2 = -ix , \quad \text{with } i = \sqrt{-1} . \tag{A.30} \]
• Its square, its cube, and its fourth power are
\[ S^2(x) = -x^2 I_2 , \quad S^3(x) = -x^2 S(x) , \quad S^4(x) = x^4 I_2 . \tag{A.31} \]
• If $x = 1$, then $S(1)$ rotates a 2-vector,
\[ \begin{bmatrix} -b \\ a \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = R_{90^\circ} \begin{bmatrix} a \\ b \end{bmatrix} , \tag{A.32} \]
by $90^\circ$ anti-clockwise.
• We have the rotation matrix
\[ R(x) = \exp(S_x) = \cos(x)\, I_2 + \sin(x)\, S(1) = \begin{bmatrix} \cos x & -\sin x \\ \sin x & \cos x \end{bmatrix} \tag{A.33} \]
using the matrix exponential (see Sect. A.13), which can be proven by using the definition of the matrix exponential and collecting the odd and even terms.

A.4.2 3 × 3 Skew-Symmetric Matrix

For a 3-vector $x = [x, y, z]^\mathsf{T}$, the $3 \times 3$ skew-symmetric matrix is defined as
\[ S_x = S(x) = \begin{bmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{bmatrix} . \tag{A.34} \]
The matrix $S(x)$ has the following properties:

• The product with a 3-vector is identical to the anti-symmetric cross product of two vectors:
\[ S(x)\, y = x \times y = -y \times x = -S(y)\, x . \tag{A.35} \]
Therefore, $S(x)$ is often denoted by $[x]_\times$, leading to the intuitive relation $x \times y = [x]_\times y$. We do not follow this notation since the vector product does not immediately generalize to higher dimensions.
• Its right null space is $x$, as $x \times x = 0$.
• If $x \neq 0$, the matrix has rank 2. Its eigenvalues are
\[ \lambda_1 = i|x| , \quad \lambda_2 = -i|x| , \quad \lambda_3 = 0 . \tag{A.36} \]
• The matrix $S(x)$ and its square $S^2(x)$ are related to the dyad
\[ D_x = x x^\mathsf{T} \quad \text{with} \quad \mathrm{tr}\, D_x = |x|^2 \tag{A.37} \]
by
\[ S_x D_x = 0 \tag{A.38} \]
and
\[ S_x^2 = x x^\mathsf{T} - |x|^2 I_3 \quad \text{with} \quad \mathrm{tr}(S_x^2) = -2|x|^2 . \tag{A.39} \]
• The third and the fourth powers are
\[ S_x^3 = -|x|^2 S_x \quad \text{and} \quad S(x)^4 = |x|^4 \left( I_3 - \frac{x x^\mathsf{T}}{|x|^2} \right) . \tag{A.40} \]
• Therefore we have the relation, for any $3 \times 3$ skew-symmetric matrix,
\[ S_x S_x^\mathsf{T} S_x = \frac{1}{2}\, \mathrm{tr}(S_x S_x^\mathsf{T})\, S_x . \tag{A.41} \]
• The following relations hold for unit vectors $r$ with $|r| = 1$:
\begin{align}
D_r^2 &= D_r \tag{A.42}\\
S_r^2 &= -(I_3 - D_r) \tag{A.43}\\
S_r^3 &= -S_r \tag{A.44}\\
S_r^4 &= I_3 - D_r . \tag{A.45}
\end{align}

The following relations between a skew-symmetric matrix and a regular matrix are useful.
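A short numeric check of the identities above — our sketch, not part of the original appendix; the helper `skew` is ours — verifies the cross-product identity (A.35), the power identities (A.39)–(A.41), and that the closed form (A.33) yields a proper rotation.

```python
# Sketch: numeric checks of (A.33), (A.35), and (A.39)-(A.41), assuming NumPy.
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix S(x) of (A.34)."""
    x, y, z = v
    return np.array([[0.0, -z,   y],
                     [z,    0.0, -x],
                     [-y,   x,   0.0]])

rng = np.random.default_rng(3)
x = rng.standard_normal(3)
y = rng.standard_normal(3)
S = skew(x)
n2 = x @ x  # |x|^2

print(np.allclose(S @ y, np.cross(x, y)))                     # (A.35)
print(np.allclose(S @ S, np.outer(x, x) - n2 * np.eye(3)))    # (A.39)
print(np.allclose(S @ S @ S, -n2 * S))                        # (A.40)
print(np.allclose(S @ S.T @ S, 0.5 * np.trace(S @ S.T) * S))  # (A.41)

# 2x2 case, (A.33): R(x) = cos(x) I_2 + sin(x) S(1) is a proper rotation.
t = 0.7
R = np.cos(t) * np.eye(2) + np.sin(t) * np.array([[0.0, -1.0], [1.0, 0.0]])
print(np.allclose(R.T @ R, np.eye(2)), np.isclose(np.linalg.det(R), 1.0))
```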