
arXiv:2102.02101v1 [math.RA] 3 Feb 2021

Block representations of the Moore–Penrose inverse and orthogonal projection matrices

Bernd Fritzsche, Conrad Mädler

February 4, 2021

In this paper, new block representations of Moore–Penrose inverses of arbitrary complex 2 × 2 block matrices are given. The approach is based on block representations of orthogonal projection matrices.

Keywords: Moore–Penrose inverse, generalized inverses of matrices, block representations, orthogonal projection matrices

Mathematics Subject Classification (2010): 15A09 (15A23)

1 Introduction

The aim of this paper is the following: Given an arbitrary complex (p + q) × (s + t) block matrix

E = [a, b; c, d],  (1.1)

we give new block representations E† = [α, β; γ, δ] of the Moore–Penrose inverse E† of E as well as new block representations P_{R(E)} = [e11, e12; e21, e22] of the orthogonal projection matrix P_{R(E)} onto the column space R(E) of E. The block entries should be given by expressions involving the blocks a, b, c, and d of E. We will see that this goal can be realized (without additional assumptions) by computation of {1}-inverses of four (non-negative Hermitian) matrices of the block sizes, i. e., with number of rows and columns from the set {p, q, s, t}. Thus, in view of the well-known formulas P_{R(E)} = EE† and P_{R(E∗)} = E†E, block representations of these orthogonal projection matrices can then easily be used to derive a block representation for the Moore–Penrose inverse E†.

To the best of our knowledge, apart from Hung/Markham [12], only Miao [13], Groß [9], and Yan [27] describe explicit block representations of the Moore–Penrose inverse of arbitrary complex 2 × 2 partitioned matrices without making additional assumptions. In [9], Groß gives a block representation of the Moore–Penrose inverse of non-negative Hermitian 2 × 2 block matrices. In Miao [13], a certain weighted Moore–Penrose inverse is used in the formulas for the block entries of E†. In Yan [27], a full rank factorization of E derived from full rank factorizations of the block entries a, b, c, d is utilized to obtain a block representation of E†. Assuming certain additional conditions, e. g., on column spaces or ranks, several authors derived block representations of the Moore–Penrose inverse of matrices, see, e. g., [4, 10, 14]. The existence of a Banachiewicz–Schur form for E† is studied, e. g., in [2, 21]. Furthermore, special classes of matrices were considered in this context, see, e. g., [18, 19] for so-called block k-circulant matrices. Block representations of E† involving regular transformations, e. g., permutations, have also been considered, see, e. g., [11, 15].
A representation of the Moore–Penrose inverse of a block column or block row in terms of the block entries is given in [1]. Under a certain rank additivity condition, a block representation of m × n partitioned matrices can be found in [20] as well. Several results on block representations of partitioned operators are obtained, e. g., in [6, 7, 24–26]. The list of references on this topic mentioned here is not exhaustive.

2 Notation

Throughout this paper let m, n, p, q, s, t be positive integers. We denote by C^{m×n} the set of all complex m × n matrices and by C^n := C^{n×1} the set of all column vectors with n complex entries. We write 0_{m×n} for the zero matrix in C^{m×n} and I_n for the identity matrix in C^{n×n}. Let U and V be linear subspaces of C^n. If U ∩ V = {0_{n×1}}, then we write U ⊕ V for the direct sum of U and V. Let U⊥ be the orthogonal complement of U. If U ⊆ V, we use the notation V ⊖ U := V ∩ (U⊥). We write R(M) and N(M) for the column space and the null space of a complex matrix M. Let M∗ be the conjugate transpose of a complex matrix M. If M is an arbitrary complex m × n matrix, then there exists a unique complex n × m matrix X such that the four equations

(1) MXM = M, (2) XMX = X, (3) (MX)∗ = MX, (4) (XM)∗ = XM (2.1)

are fulfilled (see [16]). This matrix X is called the Moore–Penrose inverse of M and is usually designated by M†. Following [3, Ch. 1, Sec. 1, Def. 1], for each M ∈ C^{m×n} we denote by M{j, k, ..., ℓ} the set of all X ∈ C^{n×m} which satisfy equations (j), (k), ..., (ℓ) from the equations (1)–(4) in (2.1). Each matrix belonging to M{j, k, ..., ℓ} is said to be a {j, k, ..., ℓ}-inverse of M.

Remark 2.1. Let U be a linear subspace of C^n. Then there exists a unique complex n × n matrix P_U such that P_U x ∈ U and x − P_U x ∈ U⊥ for all x ∈ C^n. This matrix P_U is called the orthogonal projection matrix onto U. If P ∈ C^{n×n}, then P = P_U if and only if the three conditions P^2 = P and P∗ = P as well as R(P) = U are fulfilled. Furthermore, the equation P_{U⊥} = I_n − P_U holds true.

Our strategy to give a block representation of the Moore–Penrose inverse E† of the matrix E given in (1.1) consists of three elementary steps:

Step (I) We consider the following factorization problem for orthogonal projection matrices: Find a complex (s + t) × (p + q) matrix R = [r11, r12; r21, r22] fulfilling P_{R(E)} = ER with block entries r11 ∈ C^{s×p}, r12 ∈ C^{s×q}, r21 ∈ C^{t×p}, and r22 ∈ C^{t×q} expressible explicitly only using the block entries a, b, c, and d of E.

Remark 2.2. Let M ∈ C^{m×n} and let X ∈ C^{n×m}. In view of Remark 2.1, then:

(a) P_{R(M)} = MX if and only if X ∈ M{1, 3}.

(b) P_{R(M∗)} = XM if and only if X ∈ M{1, 4}.

We will construct a suitable {1, 3}-inverse R of E using:

Theorem 2.3 (Urquhart [22], see also, e. g., [8, Ch. 1, Sec. 5, Thm. 3]). Let M ∈ C^{m×n}.

(a) Let G := MM∗ and let G(1) ∈ G{1}. Then M∗G(1) ∈ M{1, 2, 4}.

(b) Let H := M∗M and let H(1) ∈ H{1}. Then H(1)M∗ ∈ M{1, 2, 3}.

Applying Theorem 2.3, we will get an explicit block representation of P_{R(E)} = ER in terms of a, b, c, and d.
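For illustration, the following short numerical sketch (our addition, not part of the original development) checks Theorem 2.3 in Python with NumPy on a random rank-deficient matrix; np.linalg.pinv serves here as one admissible choice of {1}-inverse of the Gram matrices, since the Moore–Penrose inverse in particular satisfies equation (1) of (2.1):

```python
import numpy as np

rng = np.random.default_rng(0)
ct = lambda m: m.conj().T                      # conjugate transpose
M = (rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))) \
    @ (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4)))  # rank 2

G, H = M @ ct(M), ct(M) @ M                    # G = MM*, H = M*M
X124 = ct(M) @ np.linalg.pinv(G)               # M*G(1), a {1,2,4}-inverse of M
X123 = np.linalg.pinv(H) @ ct(M)               # H(1)M*, a {1,2,3}-inverse of M

ok = np.allclose
print(ok(M @ X124 @ M, M), ok(X124 @ M @ X124, X124),
      ok(ct(X124 @ M), X124 @ M))              # equations (1), (2), (4) of (2.1)
print(ok(M @ X123 @ M, M), ok(X123 @ M @ X123, X123),
      ok(ct(M @ X123), M @ X123))              # equations (1), (2), (3) of (2.1)
```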

Step (II) Analogous to Step (I), we construct a suitable complex (s + t) × (p + q) matrix L = [ℓ11, ℓ12; ℓ21, ℓ22] fulfilling L ∈ E{1, 4} and hence P_{R(E∗)} = LE.

Step (III) With the matrices L and R we apply:

Theorem 2.4 (Urquhart [22], see, e. g., [8, Ch. 1, Sec. 5, Thm. 4]). If M ∈ C^{m×n}, then M(1,4)MM(1,3) = M† for every choice of M(1,3) ∈ M{1, 3} and M(1,4) ∈ M{1, 4}.

Regarding Remark 2.2, Theorem 2.4 admits the following reformulation:

Remark 2.5. Let M ∈ C^{m×n} and let L ∈ C^{n×m} and R ∈ C^{n×m} be such that LM = P_{R(M∗)} and MR = P_{R(M)}. Then M† = LMR.

Consider an additive decomposition M = U + V of an m × n matrix M with two m × n matrices U and V fulfilling UV∗ = 0_{m×m}. In this situation, a result of Cline [5] is applicable to obtain a non-trivial representation of M† as a sum of U† and a further matrix. By Hung/Markham [12], a decomposition E = U + V with U = [a, 0; c, 0] and V = [0, b; 0, d] is used in this way to derive a block representation of E† involving only Moore–Penrose inverses of matrices of block sizes. Regarding Remarks 2.2 and 2.1 as well as (2.1), the orthogonal projection matrix Q := P_{R(U∗)} fulfills UQ = UU†U = U and VQ = (QV∗)∗ = (U†UV∗)∗ = 0_{m×n}. Consequently, MQ = U and M(I_n − Q) = V. Conversely, given M ∈ C^{m×n} and an orthogonal projection matrix Q ∈ C^{n×n}, it is readily checked that U := MQ and V := M(I_n − Q) fulfill M = U + V and UV∗ = 0_{m×m}. Thus, every decomposition M = U + V with UV∗ = 0_{m×m} can be written as M = MQ + M(I_n − Q) with some orthogonal projection matrix Q ∈ C^{n×n} occurring on the right-hand side, and vice versa. Although not explicitly using Cline's theorem, our investigations involve an analogous decomposition, namely E = P_S E + (I_{p+q} − P_S)E with the orthogonal projection matrix P_S onto the linear subspace S spanned by the first s columns of E occurring on the left-hand side (see Lemma 3.2).
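The following sketch (again an illustration of ours, assuming NumPy) checks Remark 2.5 as well as the correspondence just described between decompositions M = U + V with UV∗ = 0 and orthogonal projection matrices Q:

```python
import numpy as np

rng = np.random.default_rng(1)
ct = lambda m: m.conj().T
pinv = np.linalg.pinv
M = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))   # real, rank <= 4

# Remark 2.5: with L in M{1,4} and R in M{1,3} from Theorem 2.3, M† = L M R.
L = ct(M) @ pinv(M @ ct(M))
R = pinv(ct(M) @ M) @ ct(M)
print(np.allclose(L @ M @ R, pinv(M)))

# Decomposition via an orthogonal projection matrix Q: U = MQ and V = M(I - Q)
# fulfill M = U + V and U V* = 0.
Q = pinv(M) @ M                     # here Q = P_R(M*); any such Q works
U, V = M @ Q, M @ (np.eye(5) - Q)
print(np.allclose(U + V, M), np.allclose(U @ ct(V), 0))
```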

3 Main results

We consider a complex (p + q) × (s + t) matrix E. Let (1.1) be the block representation of E with p × s block a. Setting

Y := [a, b],  Z := [c, d],  S := [a; c],  T := [b; d],  (3.1)

then

E = [Y; Z],  E = [S, T].  (3.2)

Let

µ := aa∗ + bb∗, σ := a∗a + c∗c, (3.3) ζ := cc∗ + dd∗, τ := b∗b + d∗d, ρ := ca∗ + db∗, λ := a∗b + c∗d. (3.4)

In view of (3.1), then

µ = YY∗,  σ = S∗S,  (3.5)
ζ = ZZ∗,  τ = T∗T,  (3.6)
ρ = ZY∗,  λ = S∗T.  (3.7)
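As a quick plausibility check of the identities (3.5)–(3.7), consider the following sketch (an illustration of ours, assuming NumPy; the block sizes p, q, s, t are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, s, t = 3, 4, 2, 5
cplx = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
a, b, c, d = cplx(p, s), cplx(p, t), cplx(q, s), cplx(q, t)
ct = lambda m: m.conj().T

Y, Z = np.hstack([a, b]), np.hstack([c, d])    # block rows of E
S, T = np.vstack([a, c]), np.vstack([b, d])    # block columns of E

mu    = a @ ct(a) + b @ ct(b)                  # (3.3)
sigma = ct(a) @ a + ct(c) @ c
zeta  = c @ ct(c) + d @ ct(d)                  # (3.4)
tau   = ct(b) @ b + ct(d) @ d
rho   = c @ ct(a) + d @ ct(b)
lam   = ct(a) @ b + ct(c) @ d
print(np.allclose(mu, Y @ ct(Y)), np.allclose(sigma, ct(S) @ S),   # (3.5)
      np.allclose(zeta, Z @ ct(Z)), np.allclose(tau, ct(T) @ T),   # (3.6)
      np.allclose(rho, Z @ ct(Y)), np.allclose(lam, ct(S) @ T))    # (3.7)
```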

Choose µ(1) ∈ µ{1} and σ(1) ∈ σ{1}. Regarding (3.5), then Theorem 2.3 shows that

Y(1,2,4) := Y∗µ(1),  S(1,2,3) := σ(1)S∗  (3.8)

fulfill

Y(1,2,4) ∈ Y{1, 2, 4},  S(1,2,3) ∈ S{1, 2, 3}.  (3.9)

Let

φ := c − (ca∗ + db∗)µ(1)a, ψ := d − (ca∗ + db∗)µ(1)b, (3.10) η := b − aσ(1)(a∗b + c∗d), θ := d − cσ(1)(a∗b + c∗d). (3.11)

Because of (3.8), (3.7), (3.1), (3.4), (3.10), and (3.11), then

V := [φ, ψ]  and  W := [η; θ]  (3.12)

admit the representations

Z(I_{s+t} − Y(1,2,4)Y) = Z − ZY∗µ(1)Y = Z − ρµ(1)Y = [c, d] − ρµ(1)[a, b] = [φ, ψ] = V  (3.13)

and

(I_{p+q} − SS(1,2,3))T = T − Sσ(1)S∗T = T − Sσ(1)λ = [b; d] − [a; c]σ(1)λ = [η; θ] = W.  (3.14)

Using (3.13), (3.14), (3.9), Remarks 2.2 and 2.1, and [R(Y∗)]⊥ = N(Y), we can infer

V = Z P_{N(Y)},  W = P_{[R(S)]⊥} T.  (3.15)
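The projection identities (3.15) can be checked numerically as follows (a sketch of ours, assuming NumPy; µ(1) and σ(1) are realized by np.linalg.pinv, one particular choice of {1}-inverse):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, s, t = 3, 4, 2, 5
cplx = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
a, b, c, d = cplx(p, s), cplx(p, t), cplx(q, s), cplx(q, t)
ct, pinv = (lambda m: m.conj().T), np.linalg.pinv

Y, Z = np.hstack([a, b]), np.hstack([c, d])
S, T = np.vstack([a, c]), np.vstack([b, d])
mu1, sigma1 = pinv(Y @ ct(Y)), pinv(ct(S) @ S)
rho, lam = c @ ct(a) + d @ ct(b), ct(a) @ b + ct(c) @ d

V = np.hstack([c - rho @ mu1 @ a, d - rho @ mu1 @ b])        # [phi, psi], (3.10)
W = np.vstack([b - a @ sigma1 @ lam, d - c @ sigma1 @ lam])  # [eta; theta], (3.11)

P_N_Y = np.eye(s + t) - pinv(Y) @ Y          # P_N(Y)
P_RS_perp = np.eye(p + q) - S @ pinv(S)      # P_[R(S)]⊥
print(np.allclose(V, Z @ P_N_Y), np.allclose(W, P_RS_perp @ T))   # (3.15)
```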

Let

ν := φφ∗ + ψψ∗, ω := η∗η + θ∗θ. (3.16)

In view of (3.12), (3.15), and Remark 2.1, then

ν = VV∗ = Z P_{N(Y)} Z∗ = VZ∗,  ω = W∗W = T∗ P_{[R(S)]⊥} T = T∗W.  (3.17)

Choose ν(1) ∈ ν{1} and ω(1) ∈ ω{1}. Regarding (3.17), then Theorem 2.3 shows that

V(1,2,4) := V∗ν(1)  and  W(1,2,3) := ω(1)W∗  (3.18)

fulfill

V(1,2,4) ∈ V{1, 2, 4}  and  W(1,2,3) ∈ W{1, 2, 3}.  (3.19)

Obviously, we have µ ∈ C^{p×p}_≥, σ ∈ C^{s×s}_≥, ζ ∈ C^{q×q}_≥, τ ∈ C^{t×t}_≥, ν ∈ C^{q×q}_≥, ω ∈ C^{t×t}_≥, ρ ∈ C^{q×p}, and λ ∈ C^{s×t}, where C^{n×n}_≥ denotes the set of all non-negative Hermitian complex n × n matrices.

Remark 3.1. Let

L := [(I_{s+t} − V(1,2,4)Z)Y(1,2,4), V(1,2,4)],  R := [S(1,2,3)(I_{p+q} − TW(1,2,3)); W(1,2,3)].  (3.20)

Regarding (3.20), (3.18), (3.8), (3.7), (3.1), and (3.12), then

L = [(I_{s+t} − V∗ν(1)Z)Y∗µ(1), V∗ν(1)] = [(Y∗ − V∗ν(1)ZY∗)µ(1), V∗ν(1)]
  = [(Y∗ − V∗ν(1)ρ)µ(1), V∗ν(1)] = [Y∗µ(1), 0_{(s+t)×q}] + [−V∗ν(1)ρµ(1), V∗ν(1)]
  = Y∗µ(1)[I_p, 0_{p×q}] + V∗ν(1)[−ρµ(1), I_q]
  = [a∗; b∗]µ(1)[I_p, 0_{p×q}] + [φ∗; ψ∗]ν(1)[−ρµ(1), I_q] = [ℓ11, ℓ12; ℓ21, ℓ22],

where

ℓ12 := φ∗ν(1),  ℓ11 := (a∗ − ℓ12ρ)µ(1),  ℓ22 := ψ∗ν(1),  ℓ21 := (b∗ − ℓ22ρ)µ(1),  (3.21)

and

R = [σ(1)S∗(I_{p+q} − Tω(1)W∗); ω(1)W∗] = [σ(1)(S∗ − S∗Tω(1)W∗); ω(1)W∗]
  = [σ(1)(S∗ − λω(1)W∗); ω(1)W∗] = [σ(1)S∗; 0_{t×(p+q)}] + [−σ(1)λω(1)W∗; ω(1)W∗]
  = [I_s; 0_{t×s}]σ(1)S∗ + [−σ(1)λ; I_t]ω(1)W∗
  = [I_s; 0_{t×s}]σ(1)[a∗, c∗] + [−σ(1)λ; I_t]ω(1)[η∗, θ∗] = [r11, r12; r21, r22],

where

r21 := ω(1)η∗,  r11 := σ(1)(a∗ − λr21),  r22 := ω(1)θ∗,  r12 := σ(1)(c∗ − λr22).  (3.22)
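The following sketch (ours, assuming NumPy, with all {1}-inverses realized by np.linalg.pinv) assembles L and R from the block formulas (3.21) and (3.22) and anticipates Lemmas 3.2 and 3.3 below, i. e., ER = P_{R(E)} and LE = P_{R(E∗)}:

```python
import numpy as np

rng = np.random.default_rng(4)
p, q, s, t = 3, 4, 2, 5
cplx = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
a, b, c, d = cplx(p, s), cplx(p, t), cplx(q, s), cplx(q, t)
ct, pinv = (lambda m: m.conj().T), np.linalg.pinv

E = np.block([[a, b], [c, d]])
mu1, sigma1 = pinv(a @ ct(a) + b @ ct(b)), pinv(ct(a) @ a + ct(c) @ c)
rho, lam = c @ ct(a) + d @ ct(b), ct(a) @ b + ct(c) @ d
phi, psi = c - rho @ mu1 @ a, d - rho @ mu1 @ b
eta, theta = b - a @ sigma1 @ lam, d - c @ sigma1 @ lam
nu1 = pinv(phi @ ct(phi) + psi @ ct(psi))
om1 = pinv(ct(eta) @ eta + ct(theta) @ theta)

l12, l22 = ct(phi) @ nu1, ct(psi) @ nu1                         # (3.21)
l11, l21 = (ct(a) - l12 @ rho) @ mu1, (ct(b) - l22 @ rho) @ mu1
r21, r22 = om1 @ ct(eta), om1 @ ct(theta)                       # (3.22)
r11, r12 = sigma1 @ (ct(a) - lam @ r21), sigma1 @ (ct(c) - lam @ r22)
L = np.block([[l11, l12], [l21, l22]])
R = np.block([[r11, r12], [r21, r22]])

print(np.allclose(L @ E, pinv(E) @ E))   # L E = P_R(E*)  (Lemma 3.3)
print(np.allclose(E @ R, E @ pinv(E)))   # E R = P_R(E)   (Lemma 3.2)
```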

Lemma 3.2. Let E := R(E), let S := R(S), and let W := R(W). Then W = E ⊖ S and P_E = P_S + P_W = ER.

Proof. We first check W = E ∩ (S⊥). Because of (3.19), (3.9), and Remark 2.2(a), we have P_W = WW(1,2,3) and P_S = SS(1,2,3). According to Remark 2.1, hence I_{p+q} − SS(1,2,3) = P_{S⊥}. By virtue of (3.14), then W = P_{S⊥}T follows. From Remark 2.1 we know R(P_{S⊥}) = S⊥. Consequently, W ⊆ S⊥. Regarding (3.2) and (3.14), we have S ⊆ E and

W = (I_{p+q} − SS(1,2,3))T = T − SS(1,2,3)T = [S, T][−S(1,2,3)T; I_t] = E[−S(1,2,3)T; I_t],

implying W ⊆ E. Thus, W ⊆ E ∩ (S⊥) is proved. Now we consider an arbitrary w ∈ E ∩ (S⊥). Then w ∈ E; so there exists some v ∈ C^{s+t} with w = Ev. Let v = [x; y] be the block representation of v with x ∈ C^s. Regarding (3.2), then w = Sx + Ty. In view of w ∈ S⊥ and Sx ∈ S, furthermore P_{S⊥}w = w and P_{S⊥}Sx = 0_{(p+q)×1}. Taking additionally into account W = P_{S⊥}T, we obtain then

w = P_{S⊥}w = P_{S⊥}(Sx + Ty) = P_{S⊥}Ty = Wy,

implying w ∈ W. Thus, we have also shown E ∩ (S⊥) ⊆ W. Therefore, W = E ∩ (S⊥) holds true. Since S ⊆ E, hence W = E ⊖ S. Consequently, P_W = P_E − P_S follows (see, e. g., [23, Thm. 4.30(c)]). Thus, P_E = P_S + P_W. Taking additionally into account P_S = SS(1,2,3) and P_W = WW(1,2,3) as well as (3.14), (3.2), and (3.20), then we can conclude

P_E = SS(1,2,3) + WW(1,2,3) = SS(1,2,3) + (I_{p+q} − SS(1,2,3))TW(1,2,3)
    = SS(1,2,3)(I_{p+q} − TW(1,2,3)) + TW(1,2,3) = [S, T][S(1,2,3)(I_{p+q} − TW(1,2,3)); W(1,2,3)] = ER.

The following result can be proved analogously. We omit the details.

Lemma 3.3. Let Ẽ := R(E∗), let Ỹ := R(Y∗), and let Ṽ := R(V∗). Then Ṽ = Ẽ ⊖ Ỹ and P_Ẽ = P_Ỹ + P_Ṽ = LE.

Remark 3.4. From Lemma 3.2 and Remark 2.2(a) we can infer R ∈ E{1, 3}, whereas Lemma 3.3 and Remark 2.2(b) yield L ∈ E{1, 4}.

Now we obtain the announced block representations of orthogonal projection matrices.

Proposition 3.5. Let E be a complex (p + q) × (s + t) matrix and let (1.1) be the block representation of E with p × s block a. Let S be given by (3.1). Let σ and λ be given by (3.3) and (3.4). Let σ(1) ∈ σ{1}. Let η, θ and W be given by (3.11) and (3.12). Let ω be given by (3.16) and let ω(1) ∈ ω{1}. Then

P_{R(E)} = Sσ(1)S∗ + Wω(1)W∗ = [aσ(1)a∗ + ηω(1)η∗, aσ(1)c∗ + ηω(1)θ∗; cσ(1)a∗ + θω(1)η∗, cσ(1)c∗ + θω(1)θ∗].

Proof. In the proof of Lemma 3.2, we have already shown P_{R(E)} = SS(1,2,3) + WW(1,2,3). Taking additionally into account (3.8), (3.18), (3.1), and (3.12), the assertions follow.
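A numerical check of Proposition 3.5 (ours, assuming NumPy; σ(1) and ω(1) realized by np.linalg.pinv) may look as follows:

```python
import numpy as np

rng = np.random.default_rng(5)
p, q, s, t = 3, 4, 2, 5
cplx = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
a, b, c, d = cplx(p, s), cplx(p, t), cplx(q, s), cplx(q, t)
ct, pinv = (lambda m: m.conj().T), np.linalg.pinv

E = np.block([[a, b], [c, d]])
sigma1 = pinv(ct(a) @ a + ct(c) @ c)
lam = ct(a) @ b + ct(c) @ d
eta, theta = b - a @ sigma1 @ lam, d - c @ sigma1 @ lam
om1 = pinv(ct(eta) @ eta + ct(theta) @ theta)

P = np.block([[a @ sigma1 @ ct(a) + eta @ om1 @ ct(eta),
               a @ sigma1 @ ct(c) + eta @ om1 @ ct(theta)],
              [c @ sigma1 @ ct(a) + theta @ om1 @ ct(eta),
               c @ sigma1 @ ct(c) + theta @ om1 @ ct(theta)]])
print(np.allclose(P, E @ pinv(E)))   # P = P_R(E)
```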

Now we are able to prove a 2 × 2 block representation of the Moore–Penrose inverse.

Theorem 3.6. Let E be a complex (p + q) × (s + t) matrix and let (1.1) be the block representation of E with p × s block a. Let Y, S be given by (3.1). Let µ, σ and ρ, λ be given by (3.3) and (3.4). Let µ(1) ∈ µ{1} and let σ(1) ∈ σ{1}. Let φ, ψ and η, θ be given by (3.10) and (3.11). Let V and W be given by (3.12). Let ν and ω be given by (3.16). Let ν(1) ∈ ν{1} and let ω(1) ∈ ω{1}. Then

E† = LER = [α, β; γ, δ]
   = (Y∗ − V∗ν(1)ρ)µ(1)aσ(1)(S∗ − λω(1)W∗) + (Y∗ − V∗ν(1)ρ)µ(1)bω(1)W∗
   + V∗ν(1)cσ(1)(S∗ − λω(1)W∗) + V∗ν(1)dω(1)W∗

with

α := ℓ11 a r11 + ℓ11 b r21 + ℓ12 c r11 + ℓ12 d r21,  β := ℓ11 a r12 + ℓ11 b r22 + ℓ12 c r12 + ℓ12 d r22,

γ := ℓ21 a r11 + ℓ21 b r21 + ℓ22 c r11 + ℓ22 d r21,  δ := ℓ21 a r12 + ℓ21 b r22 + ℓ22 c r12 + ℓ22 d r22,

where, for each j, k ∈ {1, 2}, the matrices ℓjk and rjk are given by (3.21) and (3.22), respectively.

Proof. According to Lemmas 3.3 and 3.2, we have P_{R(E∗)} = LE and P_{R(E)} = ER. Thus, we can apply Remark 2.5 to obtain E† = LER. Using Remark 3.1 and (1.1), we furthermore obtain

LER = [(Y∗ − V∗ν(1)ρ)µ(1), V∗ν(1)] [a, b; c, d] [σ(1)(S∗ − λω(1)W∗); ω(1)W∗]
    = (Y∗ − V∗ν(1)ρ)µ(1)aσ(1)(S∗ − λω(1)W∗) + (Y∗ − V∗ν(1)ρ)µ(1)bω(1)W∗
    + V∗ν(1)cσ(1)(S∗ − λω(1)W∗) + V∗ν(1)dω(1)W∗

as well as

LER = [ℓ11, ℓ12; ℓ21, ℓ22][a, b; c, d][r11, r12; r21, r22] = [α, β; γ, δ].
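The block formula of Theorem 3.6 can be validated numerically against a library implementation (a sketch of ours, assuming NumPy; all {1}-inverses are realized by np.linalg.pinv, and the result is compared with numpy's pinv):

```python
import numpy as np

rng = np.random.default_rng(6)
p, q, s, t = 3, 4, 2, 5
cplx = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
a, b, c, d = cplx(p, s), cplx(p, t), cplx(q, s), cplx(q, t)
ct, pinv = (lambda m: m.conj().T), np.linalg.pinv

E = np.block([[a, b], [c, d]])
mu1, sigma1 = pinv(a @ ct(a) + b @ ct(b)), pinv(ct(a) @ a + ct(c) @ c)
rho, lam = c @ ct(a) + d @ ct(b), ct(a) @ b + ct(c) @ d
phi, psi = c - rho @ mu1 @ a, d - rho @ mu1 @ b
eta, theta = b - a @ sigma1 @ lam, d - c @ sigma1 @ lam
nu1 = pinv(phi @ ct(phi) + psi @ ct(psi))
om1 = pinv(ct(eta) @ eta + ct(theta) @ theta)

l12, l22 = ct(phi) @ nu1, ct(psi) @ nu1
l11, l21 = (ct(a) - l12 @ rho) @ mu1, (ct(b) - l22 @ rho) @ mu1
r21, r22 = om1 @ ct(eta), om1 @ ct(theta)
r11, r12 = sigma1 @ (ct(a) - lam @ r21), sigma1 @ (ct(c) - lam @ r22)

alpha = l11 @ a @ r11 + l11 @ b @ r21 + l12 @ c @ r11 + l12 @ d @ r21
beta  = l11 @ a @ r12 + l11 @ b @ r22 + l12 @ c @ r12 + l12 @ d @ r22
gamma = l21 @ a @ r11 + l21 @ b @ r21 + l22 @ c @ r11 + l22 @ d @ r21
delta = l21 @ a @ r12 + l21 @ b @ r22 + l22 @ c @ r12 + l22 @ d @ r22
print(np.allclose(np.block([[alpha, beta], [gamma, delta]]), pinv(E)))
```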

4 Examples for consequences

In this section, we give some examples of applications of the block representations of orthogonal projection matrices and Moore–Penrose inverses given in Proposition 3.5 and Theorem 3.6. In order to avoid lengthy formulas, we give only hints for computations.

Example 4.1. Let N be a positive integer and let U and V be two complementary linear subspaces of C^N, i. e., the subspaces U and V fulfill U ⊕ V = C^N. Then there exists a unique complex N × N matrix P_{U,V} such that P_{U,V}x = u for all x ∈ C^N, where x = u + v is the unique representation of x with u ∈ U and v ∈ V. This matrix P_{U,V} is called the oblique projection matrix onto U along V and admits the representations

P_{U,V} = (P_{V⊥}P_U)† = [(I_N − P_V)P_U]†,  (4.1)

see [8] or also [3, Ch. 2, Sec. 7, Ex. 60, formula (80)]. Assume that N = p + q and that U = R(E) and V = R(F) for two matrices E ∈ C^{(p+q)×(s+t)} and F ∈ C^{(p+q)×(m+n)} with block representations (1.1) and F = [e, f; g, h], where a is a p × s block and e is a p × m block. Then (4.1) together with Proposition 3.5 and Theorem 3.6 could be used to obtain a block representation of P_{U,V} in terms of a, b, c, d and e, f, g, h.
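For instance, the oblique projection matrix from (4.1) can be computed numerically as follows (a sketch of ours, assuming NumPy and working over the reals for simplicity; the columns of the random matrices E and F play the role of E and F in Example 4.1, without the block structure, and span generic, hence complementary, subspaces of the stated dimensions):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 6
E = rng.standard_normal((N, 2))   # U = R(E), dim 2 (generically)
F = rng.standard_normal((N, 4))   # V = R(F), dim 4; U ⊕ V = C^N generically
pinv = np.linalg.pinv
PE, PF = E @ pinv(E), F @ pinv(F)         # orthogonal projections P_U, P_V

P = pinv((np.eye(N) - PF) @ PE)           # (4.1): P_{U,V} = [(I - P_V) P_U]†
print(np.allclose(P @ E, E))                  # fixes U pointwise
print(np.allclose(P @ F, np.zeros_like(F)))   # annihilates V
print(np.allclose(P @ P, P))                  # idempotent
```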

Example 4.2. Because U ⊕ (U⊥) = C^n holds true for every linear subspace U of C^n, the Moore–Penrose inverse of matrices is a special case of the uniquely determined {1, 2}-inverse with simultaneously prescribed column space and null space. More precisely, if M ∈ C^{m×n} and linear subspaces U of C^m and V of C^n with R(M) ⊕ U = C^m and N(M) ⊕ V = C^n are given, then there exists a unique complex n × m matrix X such that the four conditions

MXM = M, XMX = X, R(X)= V , and N (X)= U

are fulfilled. This matrix X is denoted by M(1,2)_{V,U} and admits the representations

M(1,2)_{V,U} = P_{V,N(M)} M(1) P_{R(M),U} = (P_{[N(M)]⊥}P_V)† M(1) (P_{U⊥}P_{R(M)})†  (4.2)

with every M(1) ∈ M{1} (see, e. g., [3, Ch. 2, Sec. 6, Thm. 12]). In particular, (4.2) is valid for M(1) = M†. Furthermore, M† = M(1,2)_{[N(M)]⊥,[R(M)]⊥}. Example 4.1 could be used to obtain a block representation of M(1,2)_{V,U} if the subspaces U and V are given as column spaces of certain matrices, partitioned accordingly to a block representation of M, and if a matrix M(1) ∈ M{1} with known block representation is available. (In particular, for M(1) = M† one can use Theorem 3.6.)

5 An alternative approach

In this final section, we give alternative representations of the matrices L and R occurring in Theorem 3.6 and Lemmas 3.3 and 3.2. Utilizing these representations, further block representations of the Moore–Penrose inverse E† could possibly be obtained, in particular in the case of E satisfying additional conditions. We will not pursue this direction any further here. We continue to use the notation given above.

Lemma 5.1 (Rohde [17], see, e. g., [8, Ch. 5, Sec. 2, Ex. 10(a)]). Let M ∈ C^{(p+q)×(p+q)}_≥ and let M = [m11, m12; m21, m22] be the block representation of M with p × p block m11. Let m11(1) ∈ m11{1} and let ς := m22 − m21 m11(1) m12. Let ς(1) ∈ ς{1}. Then

M(1) := [m11(1) + m11(1) m12 ς(1) m21 m11(1), −m11(1) m12 ς(1); −ς(1) m21 m11(1), ς(1)]

fulfills M(1) ∈ M{1}.

Remark 5.2. Regarding (3.17), (3.13), (3.14), (3.8), (3.6), and (3.7), we can infer

ν = VZ∗ = Z(I_{s+t} − Y(1,2,4)Y)Z∗ = Z(I_{s+t} − Y∗µ(1)Y)Z∗ = ζ − ρµ(1)ρ∗  (5.1)

and

ω = T∗W = T∗(I_{p+q} − SS(1,2,3))T = T∗(I_{p+q} − Sσ(1)S∗)T = τ − λ∗σ(1)λ.  (5.2)
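The following sketch (ours, assuming NumPy) illustrates Lemma 5.1 for a rank-deficient non-negative Hermitian matrix; the inner {1}-inverses are realized by np.linalg.pinv:

```python
import numpy as np

rng = np.random.default_rng(8)
p, q = 3, 4
ct, pinv = (lambda m: m.conj().T), np.linalg.pinv
A = rng.standard_normal((p + q, p + q - 2)) \
    + 1j * rng.standard_normal((p + q, p + q - 2))
M = A @ ct(A)                                  # non-negative Hermitian, rank <= 5

m11, m12 = M[:p, :p], M[:p, p:]
m21, m22 = M[p:, :p], M[p:, p:]
m11_1 = pinv(m11)
s1 = pinv(m22 - m21 @ m11_1 @ m12)             # {1}-inverse of the Schur complement ς
M1 = np.block([[m11_1 + m11_1 @ m12 @ s1 @ m21 @ m11_1, -m11_1 @ m12 @ s1],
               [-s1 @ m21 @ m11_1, s1]])
print(np.allclose(M @ M1 @ M, M))              # M(1) ∈ M{1}
```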

Lemma 5.3. Let

G := EE∗ and H := E∗E. (5.3)

Let µ(1) ∈ µ{1} and ν(1) ∈ ν{1} and let σ(1) ∈ σ{1} and ω(1) ∈ ω{1}. Then the matrices

G(1) := [µ(1) + µ(1)ρ∗ν(1)ρµ(1), −µ(1)ρ∗ν(1); −ν(1)ρµ(1), ν(1)]  (5.4)

and

H(1) := [σ(1) + σ(1)λω(1)λ∗σ(1), −σ(1)λω(1); −ω(1)λ∗σ(1), ω(1)]  (5.5)

fulfill G(1) ∈ G{1} and H(1) ∈ H{1}.

Proof. Clearly, G ∈ C^{(p+q)×(p+q)}_≥ and H ∈ C^{(s+t)×(s+t)}_≥. Regarding (3.2), (3.5), and (3.6), we have

G = [Y; Z][Y∗, Z∗] = [YY∗, YZ∗; ZY∗, ZZ∗] = [µ, ρ∗; ρ, ζ]  and  H = [S∗; T∗][S, T] = [S∗S, S∗T; T∗S, T∗T] = [σ, λ; λ∗, τ].

Taking additionally into account Remark 5.2, the assertions thus follow immediately from Lemma 5.1.

Finally, in the following result, we not only get new representations for the matrices L and R occurring in Theorem 3.6 and Lemmas 3.3 and 3.2, but also obtain their belonging to the sets E{1, 2, 4} and E{1, 2, 3}, resp., thereby improving Remark 3.4.

Lemma 5.4. The matrices L and R admit the representations L = E∗G(1) and R = H(1)E∗ and fulfill L ∈ E{1, 2, 4} and R ∈ E{1, 2, 3}.

Proof. Using (3.9) and Remarks 2.2 and 2.1, we have (Y(1,2,4)Y)∗ = P∗_{R(Y∗)} = P_{R(Y∗)} = Y(1,2,4)Y and, analogously, (SS(1,2,3))∗ = SS(1,2,3). Taking additionally into account (3.13), (3.14), (3.8), and (3.7), we thus can conclude

V∗ = (I_{s+t} − Y(1,2,4)Y)Z∗ = Z∗ − Y∗µ(1)YZ∗ = Z∗ − Y∗µ(1)ρ∗  (5.6)

and

W∗ = T∗(I_{p+q} − SS(1,2,3)) = T∗ − T∗Sσ(1)S∗ = T∗ − λ∗σ(1)S∗.  (5.7)

Regarding (3.2), (5.4), (5.5), (5.1), (5.2), (5.6), (5.7), and Remark 3.1, we obtain

E∗G(1) = [Y∗, Z∗]([I_p; 0_{q×p}]µ(1)[I_p, 0_{p×q}] + [−µ(1)ρ∗; I_q]ν(1)[−ρµ(1), I_q])
       = Y∗µ(1)[I_p, 0_{p×q}] + (Z∗ − Y∗µ(1)ρ∗)ν(1)[−ρµ(1), I_q] = L  (5.8)

and

H(1)E∗ = ([I_s; 0_{t×s}]σ(1)[I_s, 0_{s×t}] + [−σ(1)λ; I_t]ω(1)[−λ∗σ(1), I_t])[S∗; T∗]
       = [I_s; 0_{t×s}]σ(1)S∗ + [−σ(1)λ; I_t]ω(1)(T∗ − λ∗σ(1)S∗) = R.  (5.9)

According to Lemma 5.3, we have G(1) ∈ G{1} and H(1) ∈ H{1}. Taking additionally into account (5.3), (5.8), and (5.9), then L ∈ E{1, 2, 4} and R ∈ E{1, 2, 3} follow from Theorem 2.3.
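Finally, the alternative representations of Lemma 5.4 can be checked numerically (a sketch of ours, assuming NumPy; all {1}-inverses are realized by np.linalg.pinv, and ν, ω are computed via the Schur-complement identities (5.1) and (5.2)):

```python
import numpy as np

rng = np.random.default_rng(9)
p, q, s, t = 3, 4, 2, 5
cplx = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
a, b, c, d = cplx(p, s), cplx(p, t), cplx(q, s), cplx(q, t)
ct, pinv = (lambda m: m.conj().T), np.linalg.pinv

E = np.block([[a, b], [c, d]])
mu, sigma = a @ ct(a) + b @ ct(b), ct(a) @ a + ct(c) @ c
zeta, tau = c @ ct(c) + d @ ct(d), ct(b) @ b + ct(d) @ d
rho, lam = c @ ct(a) + d @ ct(b), ct(a) @ b + ct(c) @ d
mu1, sigma1 = pinv(mu), pinv(sigma)
nu1 = pinv(zeta - rho @ mu1 @ ct(rho))         # ν via (5.1)
om1 = pinv(tau - ct(lam) @ sigma1 @ lam)       # ω via (5.2)

G1 = np.block([[mu1 + mu1 @ ct(rho) @ nu1 @ rho @ mu1, -mu1 @ ct(rho) @ nu1],
               [-nu1 @ rho @ mu1, nu1]])                        # (5.4)
H1 = np.block([[sigma1 + sigma1 @ lam @ om1 @ ct(lam) @ sigma1, -sigma1 @ lam @ om1],
               [-om1 @ ct(lam) @ sigma1, om1]])                 # (5.5)
L, R = ct(E) @ G1, H1 @ ct(E)                  # Lemma 5.4
print(np.allclose(L @ E @ R, pinv(E)))         # E† = L E R
```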

Observe that Lemma 5.4 in connection with Theorem 3.6 yields a factorization E† = LER with particular matrices L ∈ E{1, 2, 4} and R ∈ E{1, 2, 3}. This gives a special factorization of the kind mentioned in Urquhart’s result (Theorem 2.4), whereby all matrices can be expressed explicitly in terms of the block entries a, b, c, d of the given matrix E.

9 References

[1] Baksalary, J.K. and O.M. Baksalary: Particular formulae for the Moore-Penrose inverse of a columnwise partitioned matrix. Linear Algebra Appl., 421(1):16–23, 2007. https://doi.org/10.1016/j.laa.2006.03.031.

[2] Baksalary, J.K. and G.P.H. Styan: Generalized inverses of partitioned matrices in Banachiewicz-Schur form. Linear Algebra Appl., 354:41–47, 2002. https://doi.org/10.1016/S0024-3795(02)00334-8. Ninth special issue on linear algebra and statistics.

[3] Ben-Israel, A. and T.N.E. Greville: Generalized inverses, vol. 15 of CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer-Verlag, New York, second ed., 2003. Theory and applications.

[4] Castro-González, N., M.F. Martínez-Serrano, and J. Robles: Expressions for the Moore-Penrose inverse of block matrices involving the Schur complement. Linear Algebra Appl., 471:353–368, 2015. https://doi.org/10.1016/j.laa.2015.01.003.

[5] Cline, R.E.: Representations for the generalized inverse of sums of matrices. J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2:99–114, 1965.

[6] Deng, C.Y. and H.K. Du: Representations of the Moore-Penrose inverse of 2 × 2 block operator valued matrices. J. Korean Math. Soc., 46(6):1139–1150, 2009. https://doi.org/10.4134/JKMS.2009.46.6.1139.

[7] Deng, C.Y. and H.K. Du: Representations of the Moore-Penrose inverse for a class of 2-by-2 block operator valued partial matrices. Linear Multilinear Algebra, 58(1-2):15–26, 2010. https://doi.org/10.1080/03081080801980457.

[8] Greville, T.N.E.: Solutions of the matrix equation XAX = X, and relations be- tween oblique and orthogonal projectors. SIAM J. Appl. Math., 26:828–832, 1974. https://doi.org/10.1137/0126074.

[9] Groß, J.: The Moore-Penrose inverse of a partitioned nonnegative definite matrix. Linear Algebra Appl., 321:113–121, 2000. https://doi.org/10.1016/S0024-3795(99)00073-7. Linear algebra and statistics (Fort Lauderdale, FL, 1998).

[10] Hartwig, R.E.: Rank factorization and Moore-Penrose inversion. Indust. Math., 26(1):49–63, 1976.

[11] He, C.N.: General forms for Moore-Penrose inverses of matrices by block permutation. J. Nat. Sci. Hunan Norm. Univ., 29(4):1–5, 2006.

[12] Hung, C.H. and T.L. Markham: The Moore-Penrose inverse of a partitioned matrix M = (A, D; B, C). Linear Algebra Appl., 11:73–86, 1975. https://doi.org/10.1016/0024-3795(75)90118-4.

[13] Miao, J.M.: General expressions for the Moore-Penrose inverse of a 2×2 block matrix. Linear Algebra Appl., 151:1–15, 1991. https://doi.org/10.1016/0024-3795(91)90351-V.

[14] Mihailović, B., V. Miler Jerković, and B. Malešević: Solving fuzzy linear systems using a block representation of generalized inverses: the Moore-Penrose inverse. Fuzzy Sets and Systems, 353:44–65, 2018. https://doi.org/10.1016/j.fss.2017.11.007.

[15] Milovanović, G.V. and P.S. Stanimirović: On Moore-Penrose inverse of block matrices and full-rank factorization. Publ. Inst. Math. (Beograd) (N.S.), 62(76):26–40, 1997.

[16] Penrose, R.: A generalized inverse for matrices. Proc. Cambridge Philos. Soc., 51:406–413, 1955.

[17] Rohde, C.A.: Generalized inverses of partitioned matrices. J. Soc. Indust. Appl. Math., 13:1033–1035, 1965.

[18] Smith, R.L.: Moore-Penrose inverses of block circulant and block k-circulant matrices. Linear Algebra Appl., 16(3):237–245, 1977. https://doi.org/10.1016/0024-3795(77)90007-6.

[19] Tang, S. and H.Z. Wu: The Moore-Penrose inverse and the weighted Drazin inverse of block k-circulant matrices. J. Hefei Univ. Technol. Nat. Sci., 32(9):1442–1444, 1448, 2009.

[20] Tian, Y.: The Moore-Penrose inverses of m × n block matrices and their applications. Linear Algebra Appl., 283(1-3):35–60, 1998. https://doi.org/10.1016/S0024-3795(98)10049-6.

[21] Tian, Y. and Y. Takane: More on generalized inverses of partitioned matrices with Banachiewicz-Schur forms. Linear Algebra Appl., 430(5-6):1641–1655, 2009. https://doi.org/10.1016/j.laa.2008.06.007.

[22] Urquhart, N.S.: Computation of generalized inverse matrices which satisfy specified conditions. SIAM Rev., 10:216–218, 1968. https://doi.org/10.1137/1010035.

[23] Weidmann, J.: Linear operators in Hilbert spaces, vol. 68 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1980. Translated from the German by Joseph Szücs.

[24] Xu, Q.: Moore-Penrose inverses of partitioned adjointable operators on Hilbert C∗-modules. Linear Algebra Appl., 430(11-12):2929–2942, 2009. https://doi.org/10.1016/j.laa.2009.01.003.

[25] Xu, Q., Y. Chen, and C. Song: Representations for weighted Moore-Penrose in- verses of partitioned adjointable operators. Linear Algebra Appl., 438(1):10–30, 2013. https://doi.org/10.1016/j.laa.2012.08.002.

[26] Xu, Q. and X. Hu: Particular formulae for the Moore-Penrose inverses of the partitioned bounded linear operators. Linear Algebra Appl., 428(11-12):2941–2946, 2008. https://doi.org/10.1016/j.laa.2008.01.021.

[27] Yan, Z.Z.: New representations of the Moore-Penrose inverse of 2×2 block matrices. Linear Algebra Appl., 456:3–15, 2014. https://doi.org/10.1016/j.laa.2012.08.014.

Universität Leipzig, Fakultät für Mathematik und Informatik, PF 10 09 20, D-04009 Leipzig, Germany
[email protected]
[email protected]
