Solutions to Selected Problems Chapter 1


1.7. Solution

Clearly
$$[2]^{-1} = [\tfrac12], \qquad \begin{bmatrix} 3 & 1\\ 1 & 3 \end{bmatrix}^{-1} = \frac18 \begin{bmatrix} 3 & -1\\ -1 & 3 \end{bmatrix}, \qquad \begin{bmatrix} 4 & 1 & 1\\ 1 & 4 & 1\\ 1 & 1 & 4 \end{bmatrix}^{-1} = \frac{1}{18} \begin{bmatrix} 5 & -1 & -1\\ -1 & 5 & -1\\ -1 & -1 & 5 \end{bmatrix},$$
and the sum of the elements in the inverse matrix is $\tfrac12$ in each case. We show that this is true in general. The matrix is $A_n = nI + J$, where $J$ is the matrix every element of which is 1, and in the special cases above the inverses are linear combinations of $I$ and $J$. Let us see if this is true in general. Assume $A_n^{-1} = \alpha I + \beta J$. Then
$$(\alpha I + \beta J)(nI + J) = I,$$
which gives
$$n\alpha I + (\alpha + \beta n)J + \beta J^2 = I,$$
which can be satisfied, since $J^2 = nJ$, by taking
$$\alpha = \frac1n, \qquad \beta = -\frac{1}{2n^2}.$$
Since the inverse is unique we have
$$A_n^{-1} = \frac1n I - \frac{1}{2n^2} J,$$
and the sum of its elements is
$$n \times \frac1n + n^2 \times \left(-\frac{1}{2n^2}\right) = 1 - \frac12 = \frac12.$$
The answer in the case of the Hilbert matrix is $n^2$. See e.g. D. E. Knuth, The Art of Computer Programming, I (1968), pp. 36–37, 473–474.

1.9. Solution

Since $R(x) = R(rx)$ for any $r \neq 0$ we may replace the condition $x \neq 0$ by $x'x = 1$. We know that we can choose an orthonormal system of vectors $c_1, c_2, \ldots, c_n$ which span the whole space $\mathbf{R}^n$ and which are characteristic vectors of $A$, say $Ac_i = \alpha_i c_i$, $i = 1, 2, \ldots, n$. Hence we can express any $x$, with $x'x = 1$, as
$$x = \sum \xi_i c_i, \qquad \text{where } \sum \xi_i^2 = 1.$$
Since
$$R(x) = x'Ax = \sum_{i,j} \alpha_j \xi_i \xi_j\, c_i'c_j = \sum_i \alpha_i \xi_i^2 \quad \text{(by orthonormality)},$$
we have
$$\alpha_n = \alpha_n \sum \xi_i^2 \le R(x) = \sum \alpha_i \xi_i^2 \le \alpha_1 \sum \xi_i^2 = \alpha_1.$$
Also, clearly, $R(c_i) = \alpha_i$ for any $i$, and so the bounds are attained.

In view of the importance of the Rayleigh quotient in numerical mathematics we add three remarks, the first two dealing with the two-dimensional case.

(1) We show how the Rayleigh quotient varies when $A = \begin{bmatrix} a & h\\ h & b \end{bmatrix}$. By homogeneity we can restrict ourselves to vectors of unit length, say $x = \begin{bmatrix} \cos\theta\\ \sin\theta \end{bmatrix}$. Then
$$Q(\theta) = a\cos^2\theta + 2h\cos\theta\sin\theta + b\sin^2\theta = \tfrac12\left[(a-b)\cos 2\theta + 2h\sin 2\theta + (a+b)\right].$$
To study the variation of $Q(\theta)$ with $\theta$ observe that
$$q(\varphi) = \alpha\cos\varphi + \beta\sin\varphi + \gamma = \sqrt{\alpha^2+\beta^2}\left[\frac{\alpha}{\sqrt{\alpha^2+\beta^2}}\cos\varphi + \frac{\beta}{\sqrt{\alpha^2+\beta^2}}\sin\varphi\right] + \gamma = \sqrt{\alpha^2+\beta^2}\,\sin(\varphi+\psi) + \gamma,$$
where $\sin\psi = \alpha/\sqrt{\alpha^2+\beta^2}$, $\cos\psi = \beta/\sqrt{\alpha^2+\beta^2}$, and so $q(\varphi)$ oscillates between $\gamma \pm \sqrt{\alpha^2+\beta^2}$. Hence $Q(\theta)$ oscillates between (the two real numbers)
$$\tfrac12\left[(a+b) \pm \sqrt{(a-b)^2 + 4h^2}\right],$$
i.e., between the characteristic values of $A$.

(2) The fact that the characteristic vectors are involved can be seen by use of the Lagrange multipliers. To find the extrema of $ax^2 + 2hxy + by^2$ subject to $x^2 + y^2 = 1$, say, we compute $E_x$, $E_y$ where
$$E = ax^2 + 2hxy + by^2 - \lambda(x^2 + y^2).$$
Then
$$E_x = 2(a-\lambda)x + 2hy, \qquad E_y = 2hx + 2(b-\lambda)y,$$
and at an extremum
$$(a-\lambda)x + hy = 0, \qquad hx + (b-\lambda)y = 0.$$
For a non-trivial solution we must have
$$\det \begin{bmatrix} a-\lambda & h\\ h & b-\lambda \end{bmatrix} = 0,$$
i.e., $\lambda$ must be a characteristic value of $\begin{bmatrix} a & h\\ h & b \end{bmatrix}$.

(3) A very important general principle should be pointed out here. At an extremum $x_0$ of $y = f(x)$ at which $f(x)$ is smooth, it is true that $x$ "near" $x_0$ implies $f(x)$ "very near" $f(x_0)$. In the simplest case, $f(x) = x^2$ and $x_0 = 0$, we have $f(x) = x^2$ of the "second order" in $x$; this is not true if we do not insist on smoothness, as is shown by the case $g(x) = |x|$, $x_0 = 0$, in which $g(x)$ is of the same order as $x$. We are just using the Taylor expansion about $x_0$:
$$f(x) - f(x_0) = (x - x_0)^2\left[\tfrac12 f''(x_0) + \cdots\right]$$
in the case where $f'(x_0) = 0$. This idea can be generalized to the case where $y = f(x)$ is a scalar function of a vector variable $x$, in particular $y = R(x)$. It means that from a "good" guess at a characteristic vector of $A$, the Rayleigh quotient gives a "very good" estimate of the corresponding characteristic value.
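As a small numerical illustration of the bounds and of remark (3), the Python sketch below checks that $\alpha_n \le R(x) \le \alpha_1$ for random vectors and that an $O(\varepsilon)$ error in a characteristic vector produces only an $O(\varepsilon^2)$ error in the Rayleigh quotient. The particular matrix, the perturbation sizes and the helper name `rayleigh` are arbitrary choices made only for this illustration.

```python
import numpy as np

# A small symmetric matrix (an arbitrary choice for the illustration).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

def rayleigh(A, x):
    """Rayleigh quotient R(x) = x'Ax / x'x."""
    return (x @ A @ x) / (x @ x)

eigvals, eigvecs = np.linalg.eigh(A)   # characteristic values in ascending order

# The bounds alpha_n <= R(x) <= alpha_1 hold for every x != 0.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(3)
    r = rayleigh(A, x)
    assert eigvals[0] - 1e-12 <= r <= eigvals[-1] + 1e-12

# Remark (3): an O(eps) error in a characteristic vector gives an
# O(eps^2) error in the Rayleigh quotient.
c1 = eigvecs[:, -1]                    # vector for the largest characteristic value
for eps in (1e-1, 1e-2, 1e-3):
    x = c1 + eps * rng.standard_normal(3)
    print(eps, abs(rayleigh(A, x) - eigvals[-1]))
```

The printed errors should shrink roughly like $\varepsilon^2$, in line with the Taylor-expansion argument above.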
1.10. Solution

$$(I - 2ww')(I - 2ww')' = (I - 2ww')(I - 2ww') = I - 4ww' + 4ww'ww' = I - 4w(w'w)w' + 4ww' \cdot 0$$

More carefully,
$$(I - 2ww')(I - 2ww') = I - 4ww' + 4w(w'w)w' = I - 4ww' + 4ww' = I,$$
using $w'w = 1$. Matrices of the form $I - 2ww'$ were introduced by Householder and are of great use in numerical algebra. (See e.g. Chapter 8.)

Chapter 2

2.4. Solution

Assume $p > 1$, $\frac1p + \frac1q = 1$, $\alpha > 0$, $\beta > 0$. (In the sketch the curve $y = x^{p-1}$ is drawn, with $A' = (\alpha, 0)$ on the $x$-axis, $B' = (0, \beta)$ on the $y$-axis, $A$ and $B$ the corresponding points of the curve, and $C = (\alpha, \beta)$.) Then
$$\text{area } OA'A = \int_0^\alpha x^{p-1}\,dx = \frac{\alpha^p}{p} \qquad \text{and} \qquad \text{area } OBB' = \int_0^\beta y^{1/(p-1)}\,dy = \frac{\beta^q}{q}.$$
Clearly the area of the rectangle $OA'CB'$ is not greater than the sum of the areas of the curvilinear triangles $OA'A$ and $OBB'$, and equal to it only if $A$, $B$ and $C$ coalesce. Hence
$$\alpha\beta \le \frac{\alpha^p}{p} + \frac{\beta^q}{q},$$
with strict inequality unless $\beta^q = \alpha^p$. This inequality, when written in the form $A^{1/p}B^{1/q} \le (A/p) + (B/q)$, can be recognized as a generalization of the Arithmetic–Geometric Mean inequality, from which it can be deduced, first when the weights $p$, $q$ are rational and then by a limiting process for general $p$, $q$.

If we write $\alpha = \dfrac{|x_i|}{\|x\|_p}$, $\beta = \dfrac{|y_i|}{\|y\|_q}$ in this inequality we find
$$\frac{|x_i|\,|y_i|}{\|x\|_p\,\|y\|_q} \le \frac{|x_i|^p}{p\,\|x\|_p^p} + \frac{|y_i|^q}{q\,\|y\|_q^q}. \tag{1}$$
Adding the last inequalities for $i = 1, 2, \ldots, n$ we find
$$\frac{\sum |x_i y_i|}{\|x\|_p\,\|y\|_q} \le \frac1p + \frac1q = 1,$$
so that
$$\sum |x_i y_i| \le \|x\|_p\,\|y\|_q. \tag{H}$$
This is the Hölder inequality. There is equality in the last inequality if and only if there is equality in all the inequalities (1), which means that the $|x_i|^p$ are proportional to the $|y_i|^q$.

Observe that when $p = q = 2$ the inequality (H) reduces to the Schwarz inequality
$$\sum |x_i y_i| \le \|x\|_2\,\|y\|_2. \tag{S}$$
Observe also that the limiting case of (H), when $p = 1$, $q = \infty$, is also valid.

In order to establish the Minkowski inequality
$$\|x + y\|_p \le \|x\|_p + \|y\|_p \tag{M}$$
we write
$$(|x_i| + |y_i|)^p = |x_i|(|x_i| + |y_i|)^{p-1} + |y_i|(|x_i| + |y_i|)^{p-1}$$
and sum, applying (H) twice on the right, to get
$$\sum (|x_i| + |y_i|)^p \le \|x\|_p\left[\sum (|x_i| + |y_i|)^{(p-1)q}\right]^{1/q} + \|y\|_p\left[\sum (|x_i| + |y_i|)^{(p-1)q}\right]^{1/q}.$$
Observe that $(p-1)q = p$, so that the terms in $[\;]$ on the right are identical with that on the left. Hence, dividing through,
$$\left[\sum (|x_i| + |y_i|)^p\right]^{1-(1/q)} \le \|x\|_p + \|y\|_p,$$
i.e., since $1 - (1/q) = 1/p$,
$$\left[\sum (|x_i| + |y_i|)^p\right]^{1/p} \le \|x\|_p + \|y\|_p,$$
which gives (M) since $|x_i + y_i| \le |x_i| + |y_i|$. The equality cases can easily be distinguished.

We have therefore shown that the $p$-norm satisfies Axiom 3, the triangle inequality. The proofs that Axioms 1, 2 are satisfied are trivial.

To complete the solution we observe that
$$\max_i |x_i|^p \le \sum_i |x_i|^p \le n \max_i |x_i|^p,$$
which we can write as
$$\|x\|_\infty^p \le \|x\|_p^p \le n\,\|x\|_\infty^p.$$
Taking $p$-th roots we get
$$\|x\|_\infty \le \|x\|_p \le n^{1/p}\,\|x\|_\infty$$
and, since $n^{1/p} \to 1$ as $p \to \infty$, we have
$$\|x\|_\infty \le \lim_{p\to\infty} \|x\|_p \le \|x\|_\infty.$$
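As a numerical check of (H), (M) and the limiting behaviour of the $p$-norms, the following Python sketch evaluates the inequalities for a pair of random vectors. The vectors, the exponents tried and the helper name `pnorm` are arbitrary choices made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n)
y = rng.standard_normal(n)

def pnorm(v, p):
    """The p-norm (sum |v_i|^p)^(1/p); p = inf gives the Chebyshev norm."""
    if np.isinf(p):
        return np.max(np.abs(v))
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

for p in (1.5, 2.0, 3.0, 10.0):
    q = p / (p - 1.0)                       # conjugate exponent, 1/p + 1/q = 1
    holder = np.sum(np.abs(x * y)) <= pnorm(x, p) * pnorm(y, q) + 1e-12
    minkowski = pnorm(x + y, p) <= pnorm(x, p) + pnorm(y, p) + 1e-12
    sandwich = (pnorm(x, np.inf) <= pnorm(x, p) + 1e-12
                and pnorm(x, p) <= n ** (1.0 / p) * pnorm(x, np.inf) + 1e-12)
    print(p, holder, minkowski, sandwich)   # all True

# As p grows, the p-norm approaches the Chebyshev norm.
print([round(pnorm(x, p), 4) for p in (2, 8, 32, 128)], pnorm(x, np.inf))
```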
2.5. Solution

See sketch. For simplicity we have only drawn the part in the first quadrant. Each set is bounded, closed, convex and symmetrical about the origin ("equilibrated") and has a non-empty interior.

2.6. Solution

See sketch. For simplicity we have only drawn the part in the first quadrant. This set $\|x\| \le 1$ is not convex, but it has the other properties of those in Problem 2.5. The triangle inequality is not satisfied: e.g.,
$$x = [0, 1]', \quad y = [1, 0]', \quad x + y = [1, 1]', \quad \|x + y\| = 2^{3/2} > 2 = \|x\| + \|y\|.$$

2.7. Solution

See sketch. For simplicity we have only drawn the part in the first quadrant. The set $\|x\| \le 1$ has the properties listed in Problem 2.5 and the axioms are satisfied.

2.8. Solution

If $p_1$ and $p_2$ are equivalent and if $p_2$ and $p_3$ are equivalent, then $p_1$ and $p_3$ are equivalent, for we have
$$0 < m_{12}m_{23} \le \frac{p_1(x)}{p_2(x)} \cdot \frac{p_2(x)}{p_3(x)} = \frac{p_1(x)}{p_3(x)} \le M_{12}M_{23} < \infty,$$
where $m_{12}, M_{12}$ and $m_{23}, M_{23}$ are the constants in the equivalences of $p_1, p_2$ and of $p_2, p_3$. It will be enough, therefore, to prove that any norm $p(x)$ is equivalent, e.g., to the Chebyshev norm $\mu(x) = \|x\|_\infty$. The set $S = \{x : \mu(x) = 1\}$, the surface of the appropriate cube, is closed and bounded. Any norm $p(x)$ is continuous everywhere. Let $m$, $M$ be its lower and upper bounds on $S$, so that $m \le p(x) \le M$ for $x \in S$. Now, by continuity there are vectors $x_m$, $x_M$ in $S$ such that $p(x_m) = m$, $p(x_M) = M$, and, since $\|x_m\|_\infty = 1$, $m > 0$, so that $0 < m \le M < \infty$. For any vector $x \ne 0$ there is a $k$ such that $x = k\xi$, where $\mu(\xi) = 1$. We have, therefore,
$$\frac{p(x)}{\mu(x)} = \frac{p(k\xi)}{\mu(k\xi)} = \frac{|k|\,p(\xi)}{|k|\,\mu(\xi)} = p(\xi).$$
Hence, for $x \ne 0$,
$$0 < m \le \frac{p(x)}{\mu(x)} \le M < \infty.$$

It is instructive to deal with the two-dimensional case of the second part geometrically, drawing the contour lines of the norm surfaces $z = p(x_1, x_2)$. By homogeneity, the ratio $\|x\|_2/\|x\|_1$ is equal to its value at the vectors $x_1$, $x_2$, where these are chosen to make $\|x_1\|_1 = 1$, $\|x_2\|_1 = \sqrt{2}$. Since $x_1$ is inside the circle $\|x\|_2 = 1$ we have $\|x_1\|_2 \le 1$, so that $\|x_1\|_2 \le 1 = \|x_1\|_1$ and then $\|x\|_2 \le \|x\|_1$. Since $x_2$ is outside the circle $\|x\|_2 = 1$ we have $\|x_2\|_2/\|x_2\|_1 \ge 1/\sqrt{2}$, and so $\sqrt{2}\,\|x\|_2 \ge \|x\|_1$. We deal with the general case analytically.
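As a rough numerical illustration of the argument, the Python sketch below samples vectors on the cube surface $\mu(x) = 1$ to estimate the constants $m$, $M$ for $p(x) = \|x\|_1$ relative to $\mu(x) = \|x\|_\infty$, and then checks the two-dimensional bounds $\|x\|_2 \le \|x\|_1 \le \sqrt{2}\,\|x\|_2$. The choice of $p$, the sampling scheme and the sample sizes are arbitrary choices made only for this illustration.

```python
import numpy as np

# Estimate m, M for p(x) = ||x||_1 against mu(x) = ||x||_inf by sampling
# vectors on the cube surface mu(x) = 1 (here n = 3).
rng = np.random.default_rng(2)
n = 3
ratios = []
for _ in range(20000):
    x = rng.uniform(-1.0, 1.0, n)
    x[rng.integers(n)] = rng.choice([-1.0, 1.0])   # force mu(x) = 1
    ratios.append(np.sum(np.abs(x)))               # p(x)/mu(x) = ||x||_1 here
print(min(ratios), max(ratios))    # roughly approach m = 1 and M = n = 3

# Two-dimensional second part: ||x||_2 <= ||x||_1 <= sqrt(2) ||x||_2.
for _ in range(1000):
    x = rng.standard_normal(2)
    n1, n2 = np.sum(np.abs(x)), np.sqrt(np.sum(x * x))
    assert n2 <= n1 + 1e-12 and n1 <= np.sqrt(2) * n2 + 1e-12
```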