Efficient Algorithms for Computation of Equilibrium/Transient Probability Distribution of Finite Markov Chains: Potential Lower Bound on Computational Complexity


EasyChair Preprint № 490. EasyChair preprints are intended for rapid dissemination of research results and are integrated with the rest of EasyChair. March 8, 2020.

EFFICIENT ALGORITHMS FOR COMPUTATION OF EQUILIBRIUM/TRANSIENT PROBABILITY DISTRIBUTION OF ARBITRARY FINITE CONTINUOUS TIME MARKOV CHAINS: POTENTIAL LOWER BOUND ON COMPUTATIONAL COMPLEXITY

Garimella Rama Murthy, Professor, Department of Computer Science & Engineering, Mahindra Ecole Centrale, Hyderabad, India

ABSTRACT

In this research paper, efficient algorithms for the computation of the equilibrium as well as the transient probability distribution of arbitrary finite state space Continuous/Discrete Time Markov Chains are proposed. The essential idea is to solve a structured system of linear equations efficiently. The approach proposed in this research paper generalizes to infinite state space Continuous Time Markov Chains. The ideas of this research paper could also be utilized in solving the structured systems of linear equations that arise in other applications.

1. Introduction:

Markov chains provide interesting stochastic models of natural and artificial phenomena arising in science and engineering. The existence of equilibrium behavior enables the computation of equilibrium performance measures. Researchers have therefore invested considerable effort in EFFICIENTLY computing the equilibrium probability distribution of Markov chains, and thus the equilibrium performance measures. Traditionally, the computation of the transient probability distribution of Continuous Time Markov Chains (CTMCs) was considered a difficult open problem: it requires computation of the MATRIX EXPONENTIAL associated with the generator matrix of the CTMC.
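For concreteness, the matrix exponential just mentioned can be approximated by the standard uniformization method. The sketch below is a generic baseline, not the algorithm proposed in this paper; the function name, the choice of uniformization rate, and the tolerance are ours.

```python
import numpy as np

def transient_pmf(p0, Q, t, tol=1e-12):
    """Transient distribution p(t) = p(0) exp(Q t) via uniformization:
    a Poisson-weighted sum of powers of the uniformized DTMC kernel."""
    lam = 1.05 * np.max(-np.diag(Q)) + 1e-12   # rate >= max exit rate (choice is ours)
    P = np.eye(Q.shape[0]) + Q / lam           # transition kernel of the uniformized DTMC
    term = np.array(p0, dtype=float)           # p0 P^k, starting at k = 0
    weight = np.exp(-lam * t)                  # Poisson(lam * t) probability of k = 0
    acc = weight * term
    cum, k = weight, 0
    while 1.0 - cum > tol and k < 100_000:     # truncate once the Poisson mass is exhausted
        k += 1
        term = term @ P
        weight *= lam * t / k
        acc += weight * term
        cum += weight
    return acc
```

For a two-state CTMC with rates a (state 1 to 2) and b (state 2 to 1), this matches the closed form p11(t) = b/(a+b) + a/(a+b) e^{-(a+b)t}.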
Even in the case of finite state space CTMCs, efficient computation of the transient probability distribution was considered a difficult open problem. In [Rama1], an interesting approach for recursive computation of the transient probability distribution of arbitrary finite state space CTMCs was proposed. In the case of infinite state space Quasi-Birth-and-Death (QBD) processes, a matrix geometric recursion for the transient probability distribution was found in [ZhC]. It is well known that the computation of the equilibrium distribution of a CTMC reduces to the solution of a linear system of equations. The approach proposed in [Rama2] reduces the computation of the transient probability distribution of finite state space CTMCs to solving a linear system of equations in the transform (Laplace transform) domain. Thus, an interesting question that remained deals with the efficient solution of such STRUCTURED linear systems of equations in the Laplace transform domain. In fact, a more interesting problem is to design an algorithm which meets a LOWER BOUND on the computation of the solution of the structured systems of linear equations arising in the transient/equilibrium analysis of CTMCs.

This research paper is organized as follows. In Section 2, the related research literature is reviewed. In Section 3, an algorithm for efficient computation of the equilibrium probability mass function of Continuous Time Markov Chains is discussed. Later sections discuss efficient computation of the transient probability mass function, provide numerical results, and conclude.

2. Review of Research Literature:

It is well known that the computation of the equilibrium probability vector (PMF) π of a homogeneous Continuous Time Markov Chain (CTMC) with generator Q reduces to the solution of the following linear system of equations (i.e. the computation of a row vector π in the left null space of the generator matrix Q):

π Q = 0.
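As a point of reference before the structured algorithm is developed, π Q = 0 together with the normalization can be solved by dense linear algebra in O(N^3) operations, ignoring the block structure of Q. The following sketch (function name is ours) exploits the fact that the rows of Q sum to zero, so one balance equation is redundant and can be overwritten by the normalizing equation.

```python
import numpy as np

def equilibrium_pmf(Q):
    """Dense baseline: solve pi Q = 0 with sum(pi) = 1.

    The rows of a generator sum to zero, so one column of Q is redundant;
    we overwrite the last column with ones to encode the normalization."""
    n = Q.shape[0]
    A = np.array(Q, dtype=float)
    A[:, -1] = 1.0                      # last equation becomes sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A.T, b)      # pi A = b  <=>  A^T pi^T = b^T
```

This baseline is useful for checking any structured algorithm against small examples.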
It should be noted that in the case of a homogeneous Discrete Time Markov Chain (DTMC) with state transition matrix P, the computation of the equilibrium probability vector (PMF) π reduces to the solution of the following linear system of equations:

π P = π, or equivalently π (P − I) = 0.

It readily follows that (P − I) has the properties of a generator matrix. Thus, in the following discussion, we consider only the computation of the equilibrium probability vector of a CTMC.

In computational linear algebra, there are efficient algorithms to solve linear systems of equations [Gall], [Sto], [Wil]. Volker Strassen made fundamental contributions to this problem [Str1], [Str2]. He showed that the complexity of matrix inversion is equivalent to that of matrix multiplication. It readily follows that the matrix −Q is a Laplacian matrix, and there are efficient algorithms to solve Laplacian systems of equations [Coh], [DGA].

3. Efficient Computation of Equilibrium Probability Mass Function of Continuous Time Markov Chains:

From the point of view of computing π, however, earlier researchers do not fully take into account the structure of the generator matrix Q. Thus, the goal of this research paper is to design an efficient algorithm for computing π that takes the structure of the generator matrix Q into account. Some related effort dealing with Laplacian systems of equations is discussed in [DGA]. In fact, we would like to design a COMPUTATIONALLY OPTIMAL ALGORITHM for the computation of π (in terms of computational complexity).

We illustrate the essential idea with a Continuous Time Markov Chain (CTMC) with 4 states. First consider the generator matrix of a CTMC with 4 states (i.e. a 4 x 4 matrix) partitioned into blocks of size 2 x 2 (i.e. there are four such 2 x 2 matrices in the generator) [RaR1], [RaR2]. Specifically, we have

Q = [ A11  A12 ]
    [ A21  A22 ],

where { Aij : i, j ∈ {1, 2} } are the 2 x 2 square matrices in the 4 x 4 block matrix Q.
Claim: For a positive recurrent (recurrent non-null) CTMC with generator matrix Q, it is well known that the matrices { A11, A22 } are non-singular. Hence we have

A21 = A22 X, with X = A22^{-1} A21.

Thus, by means of elementary column operations, the generator matrix Q can be converted into the following block upper triangular form:

Q~ = [ A~11  A~12 ]
     [ 0     A22  ].

In fact, let X = [ f1 f2 ]. The column operations that produce Q~ from Q are determined by the column vectors { f1, f2 }. Also, it follows from linear algebra that the equilibrium probability vector π is unaffected by such column operations: Q~ = Q C for an invertible matrix C, so π Q = 0 implies π Q~ = 0. Hence, with π = [ π1 π2 ], we have the following system of linear equations:

π1 A~11 = 0    (boundary system of two equations)
π1 A~12 + π2 A22 = 0, so that π2 = −π1 A~12 A22^{-1}.

The boundary system of linear equations reduces to a single linear equation in two variables: denoting π1 = [ π1^1  π1^2 ], we have a linear equation of the form

π1^1 α + π1^2 β = 0, so that π1^2 = −π1^1 (α/β).

Further, since π2 can be expressed in terms of π1, we utilize the normalizing equation

(π1 + π2) e = 1,

where e is a column vector of ones, to determine π1^1 and hence all the other equilibrium probabilities.

Remark 1: The inverse of a 2 x 2 matrix can be computed by inspection and essentially only requires computation of the determinant. For instance, consider a 2 x 2 matrix

B = [ b11  b12 ]
    [ b21  b22 ].

It is well known that

B^{-1} = (1/Δ) [  b22  −b12 ]
               [ −b21   b11 ],

where Δ = determinant(B) = b11 b22 − b12 b21.

COMPUTATIONAL COMPLEXITY: Now we determine the computational complexity of our algorithm. The number of arithmetic operations required to convert the generator matrix Q into the block upper triangular matrix Q~ is as follows: one 2 x 2 matrix inversion, one 2 x 2 matrix multiplication, and one addition of 2 x 2 matrices. The computation of π1^2 requires ONE division.
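The 4-state procedure above can be sketched in a few lines; this is an illustrative implementation under our own naming, using the 2 x 2 inverse-by-inspection of Remark 1. The sign of the unnormalized π1 is fixed by the normalization step.

```python
import numpy as np

def inv2(B):
    """2 x 2 inverse by inspection (Remark 1): one determinant, a swap, two sign changes."""
    det = B[0, 0] * B[1, 1] - B[0, 1] * B[1, 0]
    return np.array([[B[1, 1], -B[0, 1]], [-B[1, 0], B[0, 0]]]) / det

def equilibrium_4state(Q):
    """Equilibrium PMF of a 4-state CTMC by 2 x 2 block column elimination."""
    A11, A12 = Q[:2, :2], Q[:2, 2:]
    A21, A22 = Q[2:, :2], Q[2:, 2:]
    X = inv2(A22) @ A21                 # A21 = A22 X
    At11 = A11 - A12 @ X                # column operations zero out the (2,1) block
    c = At11[:, 0]                      # At11 is singular: one equation in two unknowns
    pi1 = np.array([c[1], -c[0]])       # orthogonal to c, hence to both columns of At11
    pi2 = -pi1 @ A12 @ inv2(A22)        # from pi1 A~12 + pi2 A22 = 0 (A~12 = A12 here)
    pi = np.concatenate([pi1, pi2])
    return pi / pi.sum()                # normalizing equation fixes scale and sign
```

On a 4-state birth-death chain with birth rate 1 and death rate 2, this reproduces the geometric equilibrium (8, 4, 2, 1)/15.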
The computation of π2 requires the inversion of the 2 x 2 matrix A22 and one multiplication of 2 x 2 matrices (of A~12 and A22^{-1}). Also, the normalizing equation used to determine π1^1 requires 3 additions and ONE division. Finally, to determine π1 and π2 in terms of π1^1, we require 3 multiplications.

Now we generalize the above algorithm to the case where the number of states is N = 2m with m > 1. In this case, the generator matrix is of the following form:

Q = [ A11  A12  A13  …  A1m ]
    [ A21  A22  A23  …  A2m ]
    [  ⋮    ⋮    ⋮   …   ⋮  ]
    [ Am1  Am2  Am3  …  Amm ],

where the Aij are 2 x 2 matrices.

Lemma 1: For a positive recurrent CTMC, the diagonal sub-matrices of the generator, { Aii : 1 ≤ i ≤ m }, are all non-singular.

Proof: Refer to [Rama1]. The proof utilizes the fact that a strictly diagonally dominant matrix is non-singular.

Hence, as in the 4-state case, by means of elementary column operations on the generator matrix Q, we arrive at the following block upper triangular matrix Q~:

Q~ = [ A~11  A~12  A~13  …  A~1,m−1    A~1m    ]
     [ 0     A~22  A~23  …  A~2,m−1    A~2m    ]
     [ ⋮     ⋮     ⋮        ⋮          ⋮      ]
     [ 0     0     0     …  A~m−1,m−1  A~m−1,m ]
     [ 0     0     0     …  0          A~mm    ].

Thus, the equilibrium probability vector π satisfies the following linear system of equations:

π1 A~11 = 0
π1 A~12 + π2 A~22 = 0
⋮
π1 A~1,m−1 + π2 A~2,m−1 + … + πm−1 A~m−1,m−1 = 0
π1 A~1m + π2 A~2m + … + πm A~mm = 0.

This system of linear equations is solved recursively to compute the equilibrium probability vector:

π1 A~11 = 0
π2 = −π1 A~12 A~22^{-1}
⋮
πj = −π1 A~1j A~jj^{-1} − π2 A~2j A~jj^{-1} − … − πj−1 A~j−1,j A~jj^{-1}, for 2 ≤ j ≤ m − 1
πm = −π1 A~1m A~mm^{-1} − π2 A~2m A~mm^{-1} − … − πm−1 A~m−1,m A~mm^{-1}.

Note: As in the 2 x 2 block case, using the non-singular matrices on the diagonal of the generator matrix Q, it can be converted to a block upper triangular matrix. Since the inverses of the 2 x 2 matrices on the diagonal of Q can be computed efficiently (by inspection, as in Remark 1), an efficient algorithm exists for this computational problem.
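The elimination and recursion just described can be sketched for general N = 2m as follows. This is a straightforward illustrative implementation of the stated steps under our own naming; it assumes the updated diagonal blocks remain non-singular (per Lemma 1 and the text's claim) and that the first column of A~11 is nonzero when reading off π1.

```python
import numpy as np

def inv2(B):
    """2 x 2 inverse by inspection: one determinant, a swap, two sign changes."""
    det = B[0, 0] * B[1, 1] - B[0, 1] * B[1, 0]
    return np.array([[B[1, 1], -B[0, 1]], [-B[1, 0], B[0, 0]]]) / det

def equilibrium_blocks(Q):
    """Equilibrium PMF for N = 2m states: 2 x 2 block column elimination to
    upper triangular form, then pi_j = -(sum_{i<j} pi_i A~_ij) A~_jj^{-1}."""
    n = Q.shape[0]
    m = n // 2
    Qt = np.array(Q, dtype=float)
    blk = lambda i, j: Qt[2 * i:2 * i + 2, 2 * j:2 * j + 2]
    # Eliminate below-diagonal blocks by column operations, bottom block row first.
    # Rows below the pivot row in the pivot block column are already zero from
    # earlier passes, so previously created zeros are preserved.
    for k in range(m - 1, 0, -1):
        piv = inv2(blk(k, k))
        for j in range(k):
            F = piv @ blk(k, j)                         # A_kj = A_kk F
            Qt[:, 2 * j:2 * j + 2] -= Qt[:, 2 * k:2 * k + 2] @ F
    # Solve pi Qt = 0 block by block (Qt is now block upper triangular).
    pi = np.zeros(n)
    c = blk(0, 0)[:, 0]                                 # A~_11 is singular
    pi[0:2] = [c[1], -c[0]]                             # boundary solution, up to scale
    for j in range(1, m):
        pi[2 * j:2 * j + 2] = -(pi[:2 * j] @ Qt[:2 * j, 2 * j:2 * j + 2]) @ inv2(blk(j, j))
    return pi / pi.sum()                                # normalizing equation
```

On a 6-state birth-death chain (birth rate 1, death rate 2), the result matches the geometric equilibrium (32, 16, 8, 4, 2, 1)/63.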
Definition: A Finite Memory Recursion of order L for the equilibrium probability vector is of the following form:

π(L + 1) = π(1) W1 + π(2) W2 + … + π(L) WL,

where the Wi are recursion matrices.