Generalized Inverses: Theory and Applications — Bibliography for the 2nd Edition (June 21, 2001)


Adi Ben-Israel and Thomas N. E. Greville†
RUTCOR–Rutgers Center for Operations Research, Rutgers University, 640 Bartholomew Rd, Piscataway, NJ 08854-8003, USA
E-mail address: [email protected]

Bibliography

1. K. Abdel-Malek and Harn-Jou Yeh, On the determination of starting points for parametric surface intersections, Computer-aided Design 29 (1997), no. 1, 21–35.
2. N. N. Abdelmalek, On the solutions of the linear least squares problems and pseudo-inverses, Computing 13 (1974), no. 3-4, 215–228.
3. V. M. Adukov, Generalized inversion of block Toeplitz matrices, Linear Algebra and its Applications 274 (1998), 85–124.
4. ——, Generalized inversion of finite rank Hankel and Toeplitz operators with rational matrix symbols, Linear Algebra and its Applications 290 (1999), no. 1-3, 119–134.
5. S. N. Afriat, On the latent vectors and characteristic values of products of pairs of symmetric idempotents, Quart. J. Math. Oxford Ser. (2) 7 (1956), 76–78.
6. ——, Orthogonal and oblique projectors and the characteristics of pairs of vector spaces, Proc. Cambridge Philos. Soc. 53 (1957), 800–816.
7. J. H. Ahlberg, E. N. Nilson, and J. L. Walsh, The theory of splines and their applications, Academic Press, New York, 1967.
8. A. C. Aitken, On least squares and linear combinations of observations, Proceedings of the Royal Society of Edinburgh, Sec. A 55 (1934), 42–47.
9. Y. Akatsuka and T. Matsuo, Optimal control of linear discrete systems using the generalized inverse of a matrix, Techn. Rept. 13, Institute of Automatic Control, Nagoya Univ., Nagoya, Japan, 1965.
10. I. S. Alalouf and G. P. H. Styan, Characterizations of estimability in the general linear model, Ann. Statist. 7 (1979), no. 1, 194–200.
11. ——, Estimability and testability in restricted linear models, Math. Operationsforsch. Statist. Ser. Statist. 10 (1979), no. 2, 189–201.
12. A. Albert, Conditions for positive and nonnegative definiteness in terms of pseudo-inverses, SIAM J. Appl. Math. 17 (1969), 434–440.
13. ——, Regression and the Moore–Penrose Pseudoinverse, Academic Press, New York, 1972.
14. ——, The Gauss–Markov theorem for regression models with possibly singular covariances, SIAM J. Appl. Math. 24 (1973), 182–187.
15. ——, Statistical applications of the pseudoinverse, In Nashed [1116], pp. 525–548.
16. A. Albert and R. W. Sittler, A method for computing least squares estimators that keep up with the data, SIAM J. Control 3 (1965), 384–417.
17. V. Aleksić and V. Rakočević, Approximate properties of the Moore-Penrose inverse, VIII Conference on Applied Mathematics (Tivat, 1993), Univ. Montenegro, Podgorica, 1994, pp. 1–14.
18. E. L. Allgower, K. Böhmer, A. Hoy, and V. Janovský, Direct methods for solving singular nonlinear equations, ZAMM Z. Angew. Math. Mech. 79 (1999), 219–231.
19. M. Altman, A generalization of Newton's method, Bull. Acad. Polon. Sci. Ser. Sci. Math. Astronom. Phys. 3 (1955), 189–193.
20. ——, On a generalization of Newton's method, Bull. Acad. Polon. Sci. Ser. Sci. Math. Astronom. Phys. 5 (1957), 789–795.
21. J. K. Amburgey, T. O. Lewis, and T. L. Boullion, On computing generalized characteristic vectors and values for a rectangular matrix, In Boullion and Odell [207], pp. 267–275.
22. A. R. Amir-Moéz, Geometry of generalized inverses, Math. Mag. 43 (1970), 33–36.
23. ——, Extreme properties of linear transformations, Polygonal Publ. House, Washington, NJ, 1990.
24. C. L. Anderson, A geometric theory of pseudoinverses and some applications in statistics, Master's thesis in statistics, Southern Methodist Univ., 1967.
25. W. N. Anderson, Jr., Shorted operators, SIAM J. Appl. Math. 20 (1971), 520–525.
26. W. N. Anderson, Jr. and R. J. Duffin, Series and parallel addition of matrices, SIAM J. Appl. Math. 26 (1969), 576–594, (see [878]).
27. W. N. Anderson, Jr. and M. Schreiber, The infimum of two projections, Acta Sci. Math. (Szeged) 33 (1972), 165–168.
28. W. N. Anderson, Jr. and G. E. Trapp, Inequalities for the parallel connection of resistive n-port networks, J. Franklin Inst. 209 (1975), no. 5, 305–313.
29. ——, Shorted operators. II, SIAM J. Appl. Math. 28 (1975), 60–71, (this concept first introduced by Krein [880]).
30. ——, Analytic operator functions and electrical networks, In Campbell [267], pp. 12–26.
31. ——, Inverse problems for means of matrices, SIAM J. Algebraic Discrete Methods 7 (1986), no. 2, 188–192.
32. T. Ando, Generalized Schur complements, Linear Algebra and its Applications 27 (1979), 173–186.
33. Mihai Anitescu, Dan I. Coroian, M. Zuhair Nashed, and Florian A. Potra, Outer inverses and multibody system simulation, Numer. Funct. Anal. Optim. 17 (1996), no. 7-8, 661–678.
34. P. M. Anselone and P. J. Laurent, A general method for the construction of interpolating or smoothing spline-functions, Numer. Math. 12 (1968), 66–82.
35. H. Anton and C. S. Duris, On minimum norm and best approximate solutions of Av = b in normed spaces, J. Approximation Theory 16 (1976), no. 3, 245–250.
36. E. Arghiriade, Sur les matrices qui sont permutables avec leur inverse généralisée, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. Ser. VIII 35 (1963), 244–251.
37. ——, On the generalized inverse of a product of matrices, An. Univ. Timişoara Ser. Şti. Mat.-Fiz. No. 5 (1967), 37–42.
38. ——, Remarques sur l'inverse généralisée d'un produit de matrices, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. Ser. VIII 42 (1967), 621–625.
39. ——, Sur quelques équations fonctionnelles de matrices, Rev. Roumaine Math. Pures Appl. 12 (1967), 1127–1133.
40. ——, Sur l'inverse généralisée d'un operateur lineaire dans les espaces de Hilbert, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. Ser. VIII 45 (1968), 471–477.
41. E. Arghiriade and A. Dragomir, Une nouvelle définition de l'inverse généralisée d'une matrice, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 35 (1963), 158–165.
42. ——, Remarques sur quelques théorèmes relatives a l'inverse généralisée d'un operateur lineaire dans les espaces de Hilbert, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. Ser. VIII 46 (1969), 333–338.
43. I. K. Argyros, Local convergence theorems of Newton's method for nonlinear equations using outer or generalized inverses, Czechoslovak Math. J. 50(125) (2000), no. 3, 603–614.
44. S. Aronowitz and B. E. Eichinger, Petrie matrices and generalized inverses, J. Math. Phys. 16 (1975), 1278–1283.
45. F. V. Atkinson, The normal solvability of linear equations in normed spaces (Russian), Mat. Sbornik N.S. 28(70) (1951), 3–14.
46. ——, On relatively regular operators, Acta Sci. Math. Szeged 15 (1953), 38–56.
47. K. E. Atkinson, The solution of non-unique linear integral equations, Numer. Math. 10 (1967), 117–124, (see also [1090]).
48. L. Autonne, Bull. Soc. Math. France 30 (1902), 121–133.
49. ——, Sur les matrices hypohermitiennes et sur les matrices unitaires, Ann. Univ. Lyon 38 (1917),
50. G. Backus, Inference from inadequate and inaccurate data. I, II, Proc. Nat. Acad. Sci. U.S.A. 65 (1970), 1–7; ibid. 65 (1970), 281–287.
51. G. Backus and F. Gilbert, Uniqueness in the inversion of inaccurate gross Earth data, Philos. Trans. Roy. Soc. London Ser. A 266 (1970), no. 1173, 123–192.
52. C. Badea and M. Mbekhta, Generalized inverses and the maximal radius of regularity of a Fredholm operator, Integral Equations Operator Theory 28 (1997), no. 2, 133–146.
53. ——, Compressions of resolvents and maximal radius of regularity, Trans. Amer. Math. Soc. 351 (1999), no. 7, 2949–2960.
54. J. K. Baksalary and R. Kala, The matrix equation AX − YB = C, Linear Algebra and its Applications 25 (1979), 41–43.
55. ——, The matrix equation AXB + CYD = E, Linear Algebra and its Applications 30 (1980), 141–147.
56. ——, Two properties of a nonnegative definite matrix, Bull. Acad. Polon. Sci. Sér. Sci. Math. 28 (1980), no. 5-6, 233–235 (1981).
57. ——, Range invariance of certain matrix products, Linear and Multilinear Algebra 14 (1983), no. 1, 89–96.
58. J. K. Baksalary and T. Mathew, Rank invariance criterion and its application to the unified theory of least squares, Linear Algebra and its Applications 127 (1990), 393–401.
59. J. K. Baksalary, P. R. Pordzik, and G. Trenkler, A note on generalized ridge estimators, Comm. Statist. Theory Methods 19 (1990), no. 8, 2871–2877.
60. J. K. Baksalary and F. Pukelsheim, On the Löwner, minus, and star partial orderings of nonnegative definite matrices and their squares, Linear Algebra and its Applications 151 (1991), 135–141.
61. J. K. Baksalary, S. Puntanen, and H. Yanai, Canonical correlations associated with symmetric reflexive generalized inverses of the dispersion matrix, Linear Algebra and its Applications 176 (1992), 61–74.
62. A. V. Balakrishnan, An operator theoretic formulation of a class of control problems and a steepest descent method of solution, J. Soc. Indust. Appl. Math. Ser. A: Control 1 (1963), 109–127.
63. K. F. Baldwin and A. E. Hoerl, Bounds of minimum mean squared error in ridge regression, Comm. Statist. A—Theory Methods 7 (1978), no. 13, 1209–1218.
64. J. A. Ball, M. Rakowski, and B. F. Wyman, Coupling operators, Wedderburn-Forney spaces, and generalized inverses, Linear Algebra and its Applications 203/204 (1994), 111–138.
65. K. S. Banerjee, Singularity in Hotelling's weighing designs and generalized inverses, Ann.