Why Jacket Matrices?


Moon Ho Lee (E-mail: [email protected])
wcu.chonbuk.ac.kr, mdmc.chonbuk.ac.kr
Institute of Information & Communication, Chonbuk National University, Jeonju, 561-756, Korea
Tel: +82-63-270-2463, Fax: +82-63-270-4166
http://en.wikipedia.org/wiki/Category:Matrices
http://en.wikipedia.org/wiki/Jacket_matrix
http://en.wikipedia.org/wiki/user:leejacket

[Slides 1-2 figure: two 16x16 example matrices, one in the real domain (entries +, -) and one in the complex domain (entries 1, -1, j, -j), showing the sign patterns of Jacket matrices.]

The basic idea was motivated by the cloth of a jacket: as a two-sided jacket is compatible inside and outside, at least two entries of a Jacket matrix are replaced by their inverses; these elements change position, moving for example from inside the middle circle to outside, or from outside to inside, without loss of sign.

In mathematics, a Jacket matrix is a square matrix A = (a_ij) of order n whose entries come from a field (the real field, the complex field, a finite field, ...) and which satisfies

    A A* = A* A = n I_n,

where A* is the transpose of the matrix of entrywise inverses of A, i.e. (A*)_ij = 1/a_ji. Written entrywise, the condition is

    for all u, v in {1, 2, ..., n}, u != v:  sum_i a_{u,i} / a_{v,i} = 0.

The inverse therefore comes from the entrywise inverse and a transpose alone:

    J = [ j_{k,l} ],    J^{-1} = (1/C) [ 1/j_{k,l} ]^T,

with C a normalization constant (C = n for an n x n Jacket matrix). This generalizes orthogonality, where for u != v: sum_i a_{u,i} a_{v,i} = 0 and sum_i a_{u,i}^2 is constant.
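The defining identity A A* = n I_n is easy to check numerically. Below is a minimal pure-Python sketch (the function names are ours, not from the slides) that forms A* as the transpose of the entrywise inverse and tests the property on the 4x4 complex Jacket matrix that appears later in the deck:

```python
def jacket_star(A):
    """Transpose of the entrywise inverse: (A*)[i][j] = 1 / A[j][i]."""
    n = len(A)
    return [[1 / A[j][i] for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Plain O(n^3) product of two square matrices (lists of lists)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_jacket(A, tol=1e-12):
    """Check the defining Jacket property: A A* = n I."""
    n = len(A)
    P = matmul(A, jacket_star(A))
    return all(abs(P[i][j] - (n if i == j else 0)) < tol
               for i in range(n) for j in range(n))

# 4x4 complex Jacket matrix (entries 1, -1, j, -j), as on the slides
j = 1j
J4 = [[1,  1,  1,  1],
      [1, -j,  j, -1],
      [1,  j, -j, -1],
      [1, -1, -1,  1]]
print(is_jacket(J4))   # True
```

Note that a generic invertible matrix fails the check: for [[1, 1], [1, 2]] the row products sum_i a_{u,i}/a_{v,i} do not vanish, so only the special sign/weight patterns qualify.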
Category: Matrices (from http://en.wikipedia.org/wiki/Category:Matrices)

- H: Hadamard matrix, Hamiltonian matrix, Hankel matrix, Hasse-Witt matrix, Hat matrix, Hermitian matrix, Hessenberg matrix
- I: Identity matrix, Incidence matrix, Integer matrix, Invertible matrix, Involutory matrix, Irregular matrix
- J: Jacket matrix, Jacobian matrix and determinant, Jones calculus
- K: Kernel (matrix), Krawtchouk matrices
- L: Laplacian matrix, Lehmer matrix, Leslie matrix, Levinson recursion, List of matrices

Jacket Basic Concept from the Center Weighted Hadamard

    WH_4 = [ 1  1  1  1 ]      WH_4^{-1} = (1/4) [ 1   1    1   1 ]
           [ 1  2 -2 -1 ]                        [ 1  1/2 -1/2 -1 ]
           [ 1 -2  2 -1 ]                        [ 1 -1/2  1/2 -1 ]
           [ 1 -1 -1  1 ]                        [ 1  -1   -1   1 ]

    WH_N = WH_{N/2} ⊗ H_2,  where H_2 = [ 1  1 ]
                                        [ 1 -1 ]

[Slide 8 also relates WH_N to the ordinary Hadamard matrix H_N through a sparse weighting matrix WC_N (WH_N = WC_N H_N), so the center-weighted transform can be computed via a fast Hadamard transform.]

* Moon Ho Lee, "The Center Weighted Hadamard Transform," IEEE Trans. on Circuits and Systems, vol. 36, no. 9, Sept. 1989.
* Moon Ho Lee and Xiao-Dong Zhang, "Fast Block Center Weighted Hadamard Transform," IEEE Trans. on Circuits and Systems I, vol. 54, no. 12, Dec. 2007.
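The Kronecker recursion can be verified directly: WH_8 = WH_4 ⊗ H_2 is again a Jacket matrix, of order 8. A self-contained sketch (using the WH_4 entries as reconstructed above; helper names are ours):

```python
def kron(A, B):
    """Kronecker product of two square matrices (lists of lists)."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def is_jacket(A, tol=1e-12):
    """A is Jacket iff A times the transpose of its entrywise inverse is n*I."""
    n = len(A)
    star = [[1 / A[j][i] for j in range(n)] for i in range(n)]
    prod = [[sum(A[i][k] * star[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
    return all(abs(prod[i][j] - (n if i == j else 0)) < tol
               for i in range(n) for j in range(n))

H2  = [[1, 1], [1, -1]]
WH4 = [[1,  1,  1,  1],
       [1,  2, -2, -1],
       [1, -2,  2, -1],
       [1, -1, -1,  1]]

WH8 = kron(WH4, H2)   # slide recursion: WH_N = WH_{N/2} (x) H_2
print(is_jacket(WH4), is_jacket(WH8))   # True True
```

The check succeeds because the Jacket property is preserved under Kronecker products: (A ⊗ B)* = A* ⊗ B*, so (A ⊗ B)(A ⊗ B)* = (A A*) ⊗ (B B*) = nm I.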
[Slides 9-14 compare the classical transforms with the Jacket construction:

    DFT:  X(n) = sum_{k=0}^{N-1} x(k) w^{nk},  w = e^{-j 2π/N},  n = 0, 1, ..., N-1
    DCT:  C_{m,n} = k_m cos( (2n+1) m π / 2N ),  m, n = 0, 1, ..., N-1

together with the recursions [H]_n = [H]_{n/2} ⊗ [H]_2 and [J]_n = [J]_{n/2} ⊗ [H]_2 for n ≥ 4. In the weighted 4x4 kernel [J]_4, the weight w = 1 gives the Hadamard matrix and w = 2 gives the center-weighted Hadamard. The DFT, DCT, Hadamard, and Jacket families all share the same doubling structure (DFT_N → DFT_2N, DCT_N → DCT_2N, H_N → H_2N, J_N → J_2N), with admissible orders ranging from powers of two up to arbitrary order for the Jacket case.]

Jacket Definition: element inverse and transpose.

Simple inverse:  J_{mn} = [ L_{ij} ]_{mn},  J^{-1}_{mn} = (1/C) [ 1/L_{ij} ]^T_{mn}

Example:

    J_4 = [ 1  1  1  1 ]      J_4^{-1} = (1/4) [ 1  1  1  1 ]
          [ 1 -i  i -1 ]                       [ 1  i -i -1 ]
          [ 1  i -i -1 ]                       [ 1 -i  i -1 ]
          [ 1 -1 -1  1 ]                       [ 1 -1 -1  1 ]

[Slide 15 also gives examples built from a primitive cube root of unity ω, where 1 + ω + ω² = 0 and ω³ = 1.]

[Slides 16-25: worked 4x4 examples of the Jacket case, [J]^{-1}[J] = 4I, for parameterized matrices with entries a, b, c under real weights (e.g. a = b = 1, c = w = 2) and imaginary weights (w = j); each product reduces to a diagonal matrix.]

* Moon Ho Lee, "A New Reverse Jacket Transform and Its Fast Algorithm," IEEE Trans. on Circuits and Systems II, vol. 47, no. 1, Jan. 2000, pp. 39-47.
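The DFT matrix above is itself a Jacket matrix: its entries w^{nk} have unit modulus, so the entrywise inverse equals the complex conjugate and A* is just the conjugate transpose, giving F F* = N I. A short sketch of this connection (helper names are ours):

```python
import cmath

def dft_matrix(N):
    """N x N DFT matrix with entries w^(n*k), w = exp(-j*2*pi/N)."""
    w = cmath.exp(-2j * cmath.pi / N)
    return [[w ** (n * k) for k in range(N)] for n in range(N)]

def jacket_defect(A):
    """Max deviation of A A* from n*I, with A* the transpose of the
    entrywise inverse of A; zero (up to rounding) iff A is a Jacket matrix."""
    n = len(A)
    star = [[1 / A[j][i] for j in range(n)] for i in range(n)]
    return max(abs(sum(A[i][k] * star[k][j] for k in range(n))
                   - (n if i == j else 0))
               for i in range(n) for j in range(n))

F8 = dft_matrix(8)
print(jacket_defect(F8) < 1e-9)   # True: the DFT matrix is a Jacket matrix
```

Because the condition only needs entrywise inverses, this works for any order N, not just powers of two, consistent with the slides' claim that Jacket matrices admit arbitrary orders.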
[Slide 26: further worked 4x4 examples of J^{-1} J = I with the 1/16 normalization, again for real (w = 2) and imaginary (w = j) weights and various parameter choices (a, b, c).]

Slide 27 surveys the classical transforms: the DFT (Fourier, 1822), the DCT-II (K. R. Rao, 1974), and wavelets (G. Strang, 1996):

    DFT:  X(n) = sum_{m=0}^{N-1} x(m) W^{nm},  0 ≤ n ≤ N-1
    DCT:  C_{m,n} = k_m cos( (2n+1) m π / 2N ),  m, n = 0, 1, ..., N-1

[The remainder of the slide factors the 2N-point DFT and DCT into two N-point blocks plus sparse factors (identity, permutation, and rotation matrices), the same recursive form used for the Hadamard and Jacket constructions.]
[Slide 28: Jacket matrices based on the common form — X_{2N} decomposes into two copies of X_N together with sparse factors, a single decomposition covering both the DCT and the DFT.]
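This doubling decomposition is the radix-2 idea: a 2N-point DFT splits into two N-point DFTs combined by a sparse butterfly factor. A minimal pure-Python sketch (our own illustration, not the slides' notation) comparing the recursive split against the direct O(N²) DFT:

```python
import cmath

def dft_direct(x):
    """Direct O(N^2) DFT: X(n) = sum_k x(k) w^(nk), w = exp(-j*2*pi/N)."""
    N = len(x)
    w = cmath.exp(-2j * cmath.pi / N)
    return [sum(x[k] * w ** (n * k) for k in range(N)) for n in range(N)]

def fft_radix2(x):
    """2N-point DFT from two N-point DFTs plus a sparse butterfly factor.
    Requires len(x) to be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_radix2(x[0::2])   # N/2-point DFT of even-indexed samples
    odd  = fft_radix2(x[1::2])   # N/2-point DFT of odd-indexed samples
    w = cmath.exp(-2j * cmath.pi / N)
    X = [0j] * N
    for k in range(N // 2):      # butterfly: the sparse combining factor
        t = w ** k * odd[k]
        X[k] = even[k] + t
        X[k + N // 2] = even[k] - t
    return X

x = [1.0, 2.0, 0.0, -1.0, 0.5, 3.0, -2.0, 1.5]
err = max(abs(a - b) for a, b in zip(dft_direct(x), fft_radix2(x)))
print(err < 1e-9)   # True: the decomposition reproduces the direct DFT
```

The same two-block-plus-sparse-factors shape is what the slides exhibit for the DCT and for Jacket matrices.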