arXiv:2012.06533v1 [math.RA] 11 Dec 2020

SIMULTANEOUS ORTHOGONALIZATION OF INNER PRODUCTS OVER ARBITRARY FIELDS

YOLANDA CABRERA CASADO, CRISTÓBAL GIL CANTO, DOLORES MARTÍN BARQUERO, AND CÁNDIDO MARTÍN GONZÁLEZ

Abstract. We give necessary and sufficient conditions for a family of inner products in a finite-dimensional vector space $V$ over an arbitrary field $K$ to have an orthogonal basis relative to all the inner products. Some applications to evolution algebras are also considered.

2020 Mathematics Subject Classification. 11E04, 15A63, 17D92, 15A20.
Key words and phrases. Symmetric bilinear form, inner product, evolution algebra, simultaneous orthogonalization, simultaneous diagonalization via congruence.
The authors are supported by the Spanish Ministerio de Ciencia e Innovación through the project PID2019-104236GB-I00 and by the Junta de Andalucía through the projects FQM-336 and UMA18-FEDERJA-119, all of them with FEDER funds.

1. Introduction

The problem of determining when a finite collection of symmetric matrices is simultaneously diagonalizable via congruence is historically well-known. A known result is that two real symmetric matrices $A$ and $B$ are simultaneously diagonalizable via congruence if one of them, say $A$, is positive definite. A classical attempt to solve the problem when we have a family of two symmetric $n \times n$ matrices $A$ and $B$ is as follows: if $(x^t A x)^2 + (x^t B x)^2 \neq 0$ whenever $x \neq 0$, then there exist $(\alpha, \beta) \neq (0,0)$ in $\mathbb{R}^2$ such that $\alpha A + \beta B$ is definite, and hence $A$ and $B$ are simultaneously diagonalizable via congruence. The previous statement was proved to be an "if and only if" for the case $n \geq 3$ by Finsler (1937) [5] and Calabi (1964) [4]. Afterwards, Greub [7] finds a necessary condition for two real symmetric matrices to be simultaneously diagonalizable via congruence. Besides Wonenburger (1966) and Becker (1978) [2], among other authors, give necessary and sufficient conditions for a pair of symmetric matrices to be simultaneously diagonalizable via congruence in a more general setting. The first who studies the case of two nondegenerate symmetric bilinear forms over a more general ground field is Wonenburger [12], and her approach is the one that most resembles ours. In particular, our Corollary 1 generalises her work. When the ground field is real closed, she also proves a result on the simultaneous block diagonalization via congruence of two hermitian forms which do not vanish simultaneously [12, Theorem 1]. Going a bit further, Uhlig [11] works over the reals and, in a subsequent result, she attacks the problem with $\mathbb{C}$ as a base field. Part of the literature concerning the simultaneous diagonalization of pairs of quadratic forms is included in the survey [10]; for more information about this topic see Problem 12 in [8].

In the fourth author's talk "Two constructions related to evolution algebras", given in the 2018 event "Research School on Evolution Algebras and non associative algebraic structures" [9] at the University of Málaga (Spain), the idea of the simultaneous orthogonalization of inner products in the context of evolution algebras was exposed. The question posed there was how to classify, up to simultaneous congruence, couples of simultaneously orthogonalizable inner products. Recently, the work [3] also deals with the problem of simultaneous orthogonalization of inner products on finite-dimensional vector spaces over $K = \mathbb{R}$ or $\mathbb{C}$. One of the motivations in [3] is that of detecting when a given algebra is an evolution algebra. The issue is equivalent to the simultaneous orthogonalization of a collection of inner products (the projections of the product onto the lines generated by the elements of a given basis of the algebra). Modulo some minimal mistakes in [3, Theorem 2] and [3, Corollary 1] (where the "if and only if" is actually a one-direction implication), that paper provides answers to the question working over $\mathbb{C}$. However, we will see in our work that it is possible to handle the task satisfactorily over general ground fields. Though we solve the problem for arbitrary fields, the case of characteristic two has to be considered on its own in some of our results.

The paper is organized as follows. In Section 2 we introduce the preliminary definitions and notations. If we consider an algebra of dimension $n$, we introduce in Subsection 2.1 the evolution test ideal. This is an ideal $J$ in a certain polynomial algebra on $n^2 + 1$ indeterminates whose zero set $V(J)$ tells us whether or not $A$ is an evolution algebra. In Section 3 we consider the nondegenerate case: we have a family of inner products $F$ such that at least one of the inner products is nondegenerate. Theorem 1 is the main result in this section, and the example following it helps to understand how it works.
Our result includes the characteristic 2 case, and we have inserted Example 4 to illustrate how to deal with this eventuality. In Section 4 we deal with the general case (that in which no inner product in the family is nondegenerate). We introduce a notion of radical of a family of inner products, inspired by the general procedure of modding out those elements which obstruct the symmetry of a phenomenon. After examining the historical development of the topic of simultaneous diagonalization via congruence, we realized that this idea appears, of course, in other cited works dealing with the problem (see for example [2] and [12]). In this way we reduce the problem to families whose radical is zero (though possibly degenerate). We also define a notion of equivalence of families of inner products (roughly speaking, two such families are equivalent when the orthogonalizability of the space relative to one of them is equivalent to the orthogonalizability relative to the second family, and moreover each orthogonal decomposition relative to one of the families is also an orthogonal decomposition relative to the other). When the ground field is infinite, we prove that any $F$ with zero radical can be modified to an equivalent nondegenerate family $F'$ by adding only one inner product. Finally, Theorem 3 and Proposition 4 can be considered as structure theorems for families $F$ of inner products with zero radical. In order to better understand how to apply the theses given in the last results, we refer to Example 8, where we work with an algebra of dimension 6 over the rationals.

2. Preliminaries and basic results

Let $V$ be a vector space over a field $K$. A family $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I}$ of inner products (symmetric bilinear forms) on $V$ is said to be simultaneously orthogonalizable if there exists a basis $\{v_j\}_{j\in\Lambda}$ of $V$ such that for any $i \in I$ we have $\langle v_j, v_k \rangle_i = 0$ whenever $j \neq k$. We say that the family $F$ is nondegenerate if there exists $i \in I$ such that $\langle\cdot,\cdot\rangle_i$ is nondegenerate. Otherwise, we say that the family $F$ is degenerate.

Let $V$ be a vector space over a field $K$ with an inner product $\langle\cdot,\cdot\rangle \colon V \times V \to K$ and $T \colon V \to V$ a linear map. We will say that $T^\sharp \colon V \to V$ is the adjoint of $T \colon V \to V$ when $\langle T(x), y \rangle = \langle x, T^\sharp(y) \rangle$ for any $x, y \in V$. A linear map $T \colon V \to V$ is said to be self-adjoint when its adjoint exists and $T^\sharp = T$. The existence of the adjoint is guaranteed in the case of a nondegenerate inner product on a finite-dimensional vector space.

If $V$ is a finite-dimensional $K$-vector space and $\langle\cdot,\cdot\rangle$ an inner product on $V$, it is well-known that if the characteristic of $K$ ($\operatorname{char}(K)$) is other than 2, there is an orthogonal basis of $V$ relative to $\langle\cdot,\cdot\rangle$. In the characteristic two case there are inner products with no orthogonal basis, for instance, the inner product with matrix
\[
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\]
in the vector space $\mathbb{F}_2^2$ over the field of two elements $\mathbb{F}_2$. But even over fields of characteristic other than two, there are sets of inner products which are not simultaneously orthogonalizable. Such is the case of the inner products of the real vector space $\mathbb{R}^2$ which in the canonical basis have the matrices:
\[
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
\]

Remark 1. Concerning the above example, if $F$ is a family of inner products on a finite-dimensional vector space $V$ of dimension $n$ and $F$ is simultaneously orthogonalizable, then the maximum number of linearly independent elements of $F$ is $n$. Thus if we have a family of more than $n$ linearly independent inner products on an $n$-dimensional vector space, we can conclude that $F$ is not simultaneously orthogonalizable.
This is the reason why the above three inner products are not simultaneously orthogonalizable.
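The dimension count in Remark 1 is easy to check mechanically. The following sketch (plain Python, written for this note; the helper `rank` is ours) row-reduces the flattened Gram matrices of the three forms above and confirms that they are linearly independent, so their number exceeds $\dim V = 2$.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a list of rational vectors and return their rank."""
    rows = [[Fraction(x) for x in r] for r in rows]
    rk, col, n = 0, 0, len(rows[0])
    while rk < len(rows) and col < n:
        piv = next((r for r in range(rk, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(len(rows)):
            if r != rk and rows[r][col] != 0:
                f = rows[r][col] / rows[rk][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rk])]
        rk += 1
        col += 1
    return rk

# The three Gram matrices on R^2 above, flattened row by row.
grams = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 0, 0, -1]]
print(rank(grams))  # 3
```

Since the rank is 3 > 2, Remark 1 applies directly.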

If $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I}$ is a family of inner products on a space $V$ over a field $K$, it may happen that $F$ is not simultaneously orthogonalizable but that, under extension to a field $\mathbb{F} \supset K$, the family $F$ considered as a family of inner products in the scalar extension $V_{\mathbb{F}}$ is simultaneously orthogonalizable, as we show in the following example.

Example 1. Consider the inner products of the real vector space $\mathbb{R}^2$ whose Gram matrices relative to the canonical basis are

\[
\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \begin{pmatrix} -1 & 1 \\ 1 & 1 \end{pmatrix}.
\]
It is easy to check that they are not simultaneously orthogonalizable; however, the inner products of the complex vector space $\mathbb{C}^2$ with the same matrices are simultaneously orthogonalizable relative to the basis $B = \{(i,1), (-i,1)\}$.

One of the applications of the theory of simultaneous orthogonalization is to evolution algebras. For this reason, we briefly recall some basic notions related to these algebras. A $K$-algebra $A$ is an evolution algebra if there exists a basis $B = \{e_i\}_{i\in I}$, called a natural basis, such that $e_i e_j = 0$ for any $i \neq j$. Fixed a natural basis $B$ of $A$, the matrix $C = (c_{ij})$ with $c_{ij} \in K$ such that $e_i^2 = \sum_j c_{ji} e_j$ will be called the structure matrix of $A$ relative to $B$.
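The claim in Example 1 can be verified directly. The sketch below (plain Python; the helper `bilinear` is ours, and the Gram matrices are the ones displayed in the example) checks that $(i,1)$ and $(-i,1)$ are orthogonal for both bilinear forms over $\mathbb{C}$; note that these are bilinear forms, so no complex conjugation is involved.

```python
def bilinear(x, M, y):
    """Evaluate the bilinear form x M y^t (no complex conjugation)."""
    return sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))

M1 = [[1, 1], [1, -1]]
M2 = [[-1, 1], [1, 1]]
v1, v2 = (1j, 1), (-1j, 1)

print(bilinear(v1, M1, v2))  # 0j
print(bilinear(v1, M2, v2))  # 0j
```

Over $\mathbb{R}$ no such basis exists, in accordance with the example.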

The problem of simultaneous orthogonalization of families of inner products in a vector space is directly related to that of detecting whether or not a given algebra is an evolution algebra. If $A$ is a commutative algebra over a field $K$, the product in $A$ can be written in the form

\[
(1) \qquad xy = \sum_{i\in I} \langle x, y \rangle_i \, e_i
\]
where $\{e_i\}_{i\in I}$ is any fixed basis of $A$ and the inner products $\langle\cdot,\cdot\rangle_i \colon A \times A \to K$ provide the coordinates of $xy$ relative to the basis. So $A$ is an evolution algebra if and only if the set of inner products $\{\langle\cdot,\cdot\rangle_i\}$ is simultaneously orthogonalizable. The above result appears in [3, Theorem 1] under the additional conditions that the dimension of the algebra is finite and the ground field is $\mathbb{R}$ or $\mathbb{C}$. Note that the "if and only if" condition of [3, Theorem 2] is not true (see Example 1). The implication that is trivially true is that if $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I}$ is simultaneously orthogonalizable in the vector space $V$ over $K$, then the same family considered in the scalar extension $V_{\mathbb{F}}$ (with $\mathbb{F} \supset K$) is also simultaneously orthogonalizable. Note also that [3, Corollary 2] is wrong.

2.1. Evolution test ideal. We will show that, given a commutative finite-dimensional algebra $A$ over a field $K$, there is an ideal in a certain polynomial algebra which detects if $A$ is an evolution algebra. This will allow us to apply tools of algebraic geometry in this setting: consider the affine space $K^m$ and $H$ an ideal of the polynomial $K$-algebra $K[x_1,\ldots,x_m]$. We define the set of zeros of $H$ as the set of common zeros of the polynomials in $H$, that is, $V(H) := \{x \in K^m : q(x) = 0, \ \forall q \in H\}$ (here we use the duality polynomial-function).

Assume $\dim(A) = n$ and consider the polynomial algebra in the $n^2 + 1$ indeterminates of the set $\{z\} \sqcup \{x_{ij}\}_{i,j=1}^n$, so we have the polynomial algebra $R := K[\{z\} \sqcup \{x_{ij}\}_{i,j=1}^n]$. Fix now a basis $\{e_i\}_{i=1}^n$ of $A$ and write the product of $A$ in the form given in equation (1), which gives the family of inner products $\{\langle\cdot,\cdot\rangle_i\}_{i=1}^n$. Define $M_k$ as the Gram matrix of the inner product $\langle\cdot,\cdot\rangle_k$ in any basis (the same basis for all the inner products).

Definition 1.
In the above conditions, we consider the ideal $J$ of $R$ generated by the polynomials $p_0$ and $p_{ijk}$ defined as:
\[
p_0(x_{11},\ldots,x_{nn},z) := 1 - z \det[(x_{ij})_{i,j=1}^n]
\]
and
\[
p_{ijk}(x_{i1},\ldots,x_{in},x_{j1},\ldots,x_{jn}) := (x_{i1},\ldots,x_{in}) \, M_k \, (x_{j1},\ldots,x_{jn})^t,
\]
where $i, j, k = 1, \ldots, n$, $i \neq j$. The ideal $J$ will be called the evolution test ideal of $A$, though its form depends on the chosen basis. The name of this ideal $J$ is motivated by the following proposition.

Proposition 1. Let $A$ be an $n$-dimensional commutative algebra over a field $K$ and $J$ the evolution test ideal of $A$ (fixed a basis). Then $A$ is an evolution algebra if and only if $V(J) \neq \emptyset$.

Proof. Write the product of $A$ in the form $xy = \sum_{k=1}^n \langle x, y \rangle_k e_k$, where $B = \{e_k\}_{k=1}^n$ is a basis of $A$. Let $M_k$ be the matrix of each $\langle\cdot,\cdot\rangle_k$ relative to $B$ for $k = 1, \ldots, n$. If $A$ is an evolution algebra, fix a natural basis $\{u_q\}_{q=1}^n$ of $A$. For $q = 1, \ldots, n$, assume that the coordinates of $u_q$ relative to $B$ are $(u_{q1}, \ldots, u_{qn})$. Since $\langle u_p, u_q \rangle_k = 0$ if $p \neq q$, we have $p_{ijk}(u_{i1},\ldots,u_{in},u_{j1},\ldots,u_{jn}) = 0$ for $i \neq j$ and $p_0(u_{11},\ldots,u_{nn},\Delta) = 0$, where $\Delta = \det[(u_{ij})_{i,j=1}^n]^{-1}$. Consequently

\[
(u_{11},\ldots,u_{nn},\Delta) \in V(J).
\]
Conversely, if $(u_{11},\ldots,u_{nn},\Delta) \in V(J)$, then $p_0(u_{11},\ldots,u_{nn},\Delta) = 0$, which implies that the vectors $u_q := (u_{q1},\ldots,u_{qn})$ (for $q = 1,\ldots,n$) are a basis of the vector space (their determinant is nonzero). Furthermore, we also have $p_{ijk}(u_{i1},\ldots,u_{in},u_{j1},\ldots,u_{jn}) = 0$ for $i, j, k \in \{1,\ldots,n\}$ and $i \neq j$. This tells us that if $i \neq j$, the vectors $u_i$ and $u_j$ are orthogonal relative to $\langle\cdot,\cdot\rangle_k$ for any $k$. Whence $\{u_q\}_{q=1}^n$ is a natural basis of $A$. $\square$

If we have $1 \in J$, then $V(J) = \emptyset$, hence $A$ is not an evolution algebra. In the case of an algebraically closed field $K$, the condition $1 \in J$ is equivalent to $V(J) = \emptyset$ by Hilbert's Nullstellensatz.

To see how this works in low dimension, consider the following example:

Example 2. Take an evolution algebra over a field $K$ with product given by the inner products of matrices as in Example 1. The evolution test ideal is the ideal $J$ of $K[x_{11}, x_{12}, x_{21}, x_{22}, z]$ generated by the polynomials:
\[
-x_{11}x_{21} + x_{12}x_{21} + x_{11}x_{22} + x_{12}x_{22},
\]
\[
x_{11}x_{21} + x_{12}x_{21} + x_{11}x_{22} - x_{12}x_{22},
\]
\[
zx_{12}x_{21} - zx_{11}x_{22} + 1.
\]
In case $\operatorname{char}(K) = 2$ the ideal is just the one generated by

\[
x_{11}x_{21} + x_{12}x_{21} + x_{11}x_{22} + x_{12}x_{22}, \qquad zx_{12}x_{21} + zx_{11}x_{22} + 1.
\]

For $K = \mathbb{F}_2$ the zeros of the polynomials above must have $z = 1$ and $x_{12}x_{21} + x_{11}x_{22} = 1$, and we have (among others) the solution $x_{11} = x_{12} = 1$, $x_{21} = 0$, $x_{22} = 1$. Since any field of characteristic two contains $\mathbb{F}_2$, we find that $A$ is an evolution algebra when the ground field is of characteristic two. If $K$ has characteristic other than 2, a Gröbner basis of $J$ is the set
\[
\{\, x_{21}^2 + x_{22}^2, \quad 2zx_{12}x_{22}^2 - x_{21}, \quad 2zx_{12}x_{21} + 1, \quad 2zx_{22}x_{12}^2 + x_{11} \,\}.
\]
Observe that if $(x_{11}, x_{12}, x_{21}, x_{22}, z)$ is a zero of these polynomials, then $x_{21} \neq 0$, which implies $x_{22} \neq 0$. In this case, $x_{21}/x_{22}$ has to be a square root of $-1$. Consequently, if $\sqrt{-1} \notin K$ the algebra is not an evolution algebra. If $\sqrt{-1} \in K$, then we have a solution $x_{11} = x_{21} = 1$, $x_{12} = -x_{22} = i := \sqrt{-1}$ and $z = \frac{i}{2}$. So the zero set of the evolution test ideal says that $A$ is an evolution algebra only when $\sqrt{-1} \in K$.

The evolution test ideal of a three-dimensional algebra could involve a set of one quartic polynomial in 10 variables and 9 quadratic polynomials in 6 variables. In this case the computations are more involved and it seems reasonable to use other tools to elucidate whether a given algebra is an evolution algebra or not.
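The solutions exhibited in Example 2 can be substituted back into the generators. The following sketch (plain Python, with the generators transcribed from the displays above) confirms that they vanish, both in the characteristic-two case and over a field containing $i = \sqrt{-1}$.

```python
# Characteristic-two case, K = F_2: evaluate the two generators mod 2
# at the point x11 = x12 = 1, x21 = 0, x22 = 1, z = 1.
x11, x12, x21, x22, z = 1, 1, 0, 1, 1
g1 = (x11*x21 + x12*x21 + x11*x22 + x12*x22) % 2
g2 = (z*x12*x21 + z*x11*x22 + 1) % 2
print(g1, g2)  # 0 0

# Characteristic-zero case with sqrt(-1) in K (here K = C, i = 1j):
# the point x11 = x21 = 1, x12 = i, x22 = -i, z = i/2 kills all
# three generators of the evolution test ideal.
i = 1j
x11, x12, x21, x22, z = 1, i, 1, -i, i/2
p1 = -x11*x21 + x12*x21 + x11*x22 + x12*x22
p2 = x11*x21 + x12*x21 + x11*x22 - x12*x22
p0 = z*x12*x21 - z*x11*x22 + 1
print(p1, p2, p0)  # all three values are zero
```

This is only a point-wise check; computing the Gröbner basis itself would require a computer algebra system.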

3. The nondegenerate case

Let $K$ be a field and $V$ an $n$-dimensional vector space over $K$. Let $I$ be a nonempty set and, for $i \in I$, let $\langle\cdot,\cdot\rangle_i$ be an inner product on $V$, that is, a symmetric bilinear form $\langle\cdot,\cdot\rangle_i \colon V \times V \to K$. The question is: under what conditions is the family $\{\langle\cdot,\cdot\rangle_i\}_{i\in I}$ simultaneously orthogonalizable? In other words, under what conditions is there a basis $\{v_j\}_{j=1}^n$ of $V$ such that for any $i \in I$ we have $\langle v_j, v_k \rangle_i = 0$ whenever $j \neq k$? Of course each inner product must be orthogonalizable, hence this is a necessary condition. As a first approach we will assume that one of the inner products is nondegenerate, and we will denote it as $\langle\cdot,\cdot\rangle_0$. Under this assumption there is a canonical isomorphism $V \cong V^*$, where $V^*$ is the dual space of $V$. The isomorphism is defined by $x \mapsto \langle x, \cdot \rangle_0$. In the next lemma we find necessary and sufficient conditions to ensure that a family of inner products is simultaneously orthogonalizable. In this case the ground field $K$ is arbitrary.

Lemma 1. Assume that $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I\cup\{0\}}$ is a family of inner products in the finite-dimensional vector space $V$ over $K$, and that $\langle\cdot,\cdot\rangle_0$ is nondegenerate. Then:
(1) For each $i \in I$ there is a linear map $T_i \colon V \to V$ such that $\langle x, y \rangle_i = \langle T_i(x), y \rangle_0$ for any $x, y \in V$. Furthermore, each $T_i$ is a self-adjoint operator of $(V, \langle\cdot,\cdot\rangle_0)$.
(2) $F$ is simultaneously orthogonalizable if and only if there exists an orthogonal basis $B$ of $(V, \langle\cdot,\cdot\rangle_0)$ such that each $T_i$ is diagonalizable relative to $B$.

Proof. To prove item (1), fix $x \in V$ and $i \in I$ and consider the element $\langle x, \cdot \rangle_i \in V^*$. From the isomorphism $V \cong V^*$ above, we conclude that there is a unique $a_x^i \in V$ such that $\langle x, \cdot \rangle_i = \langle a_x^i, \cdot \rangle_0$. So for any $y \in V$ we have $\langle x, y \rangle_i = \langle a_x^i, y \rangle_0$ for every $i \in I$. Now, for any $i \in I$ define $T_i \colon V \to V$ by $T_i(x) = a_x^i$. For proving the linearity of $T_i$ take into account that
\[
\langle a_{x+y}^i - a_x^i - a_y^i, \cdot \rangle_0 = \langle x+y, \cdot \rangle_i - \langle x, \cdot \rangle_i - \langle y, \cdot \rangle_i = 0,
\]
\[
\langle a_{\lambda x}^i - \lambda a_x^i, \cdot \rangle_0 = \langle \lambda x, \cdot \rangle_i - \lambda \langle x, \cdot \rangle_i = 0,
\]
so nondegeneracy of $\langle\cdot,\cdot\rangle_0$ gives the linearity of each $T_i$. Now, let us prove that $\langle T_i(x), y \rangle_0 = \langle x, T_i(y) \rangle_0$ for any $x, y \in V$ and $i \in I$:
\[
\langle T_i(x), y \rangle_0 = \langle x, y \rangle_i = \langle y, x \rangle_i = \langle T_i(y), x \rangle_0 = \langle x, T_i(y) \rangle_0.
\]
Now we prove item (2). Assume that $F$ is simultaneously orthogonalizable. Let $B = \{v_j\}$ be a basis of $V$ with $\langle v_j, v_k \rangle_i = 0$ for $j \neq k$ and any $i \in I \cup \{0\}$. For $i \in I$ we write $T_i(v_j) = \sum_k a_{ij}^k v_k$, and we have
\[
\langle T_i(v_j) - a_{ij}^j v_j, v_k \rangle_0 = \langle T_i(v_j), v_k \rangle_0 - a_{ij}^j \langle v_j, v_k \rangle_0 = \langle v_j, v_k \rangle_i = 0 \quad \text{if } k \neq j,
\]
\[
\langle T_i(v_j) - a_{ij}^j v_j, v_j \rangle_0 = \sum_{q \neq j} a_{ij}^q \langle v_q, v_j \rangle_0 = 0.
\]
And then, by nondegeneracy of $\langle\cdot,\cdot\rangle_0$, we get $T_i(v_j) \in K v_j$ for arbitrary $i \in I$ and $j$. Thus each self-adjoint operator $T_i$ is diagonalizable in the basis $B$. So we have proved that there is an orthogonal basis of $V$ relative to $\langle\cdot,\cdot\rangle_0$ such that each $T_i$ diagonalizes relative to that basis.

Reciprocally, assume that for any $i \in I$ each $T_i$ is diagonalizable relative to a certain basis $B = \{v_j\}$ which is orthogonal with respect to $\langle\cdot,\cdot\rangle_0$. Thus $T_i(v_j) \in K v_j$ and we can write $T_i(v_j) = a_{ij} v_j$ for some $a_{ij} \in K$. So $F$ is simultaneously orthogonalizable in $B$ since for any $i, j, k$ with $j \neq k$ we have:
\[
\langle v_j, v_k \rangle_i = \langle T_i(v_j), v_k \rangle_0 = a_{ij} \langle v_j, v_k \rangle_0 = 0. \qquad \square
\]

Remark 2. In the conditions of Lemma 1, if $F$ is simultaneously orthogonalizable, note that $T_i T_j = T_j T_i$ for any $i, j \in I$ (all the $T_i$ are diagonal relative to a common basis). A well-known result that we will apply in the sequel is that, for a finite-dimensional vector space $V$, any commutative family $\{T_i\}_{i\in I}$ of diagonalizable linear maps is simultaneously diagonalizable, in the sense that there is a basis $B$ of $V$ such that each $T_i$ in the family is diagonal relative to $B$.
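The well-known fact recalled in Remark 2 can be seen on a toy pair of matrices (chosen here purely for illustration, not taken from the paper): two commuting diagonalizable matrices admit a common eigenbasis.

```python
# A and B are diagonalizable, and they commute; the vectors (1, 1) and
# (1, -1) form a common eigenbasis, as Remark 2 predicts.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

A = [[2, 1], [1, 2]]
B = [[3, 1], [1, 3]]
assert matmul(A, B) == matmul(B, A)  # the family {A, B} is commutative

# (1, 1) has eigenvalues 3 (for A) and 4 (for B); (1, -1) has 1 and 2.
for v, (lamA, lamB) in [((1, 1), (3, 4)), ((1, -1), (1, 2))]:
    assert apply(A, v) == tuple(lamA * x for x in v)
    assert apply(B, v) == tuple(lamB * x for x in v)
print("common eigenbasis verified")
```

For non-commuting diagonalizable matrices no such common basis exists in general, which is why commutativity appears as a hypothesis below.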

We use the symbol $\perp$ to denote orthogonal direct sum. If $F$ is a family of inner products on a vector space $V$ and we have subspaces $S, T \subset V$ such that $V = S \oplus T$ and $S$, $T$ are orthogonal relative to all the inner products in $F$, we will use the notation $V = S \perp_F T$. The symbol $\perp_F$ will be abbreviated to $\perp$ if no confusion is possible.

Lemma 2. Let $V$ be a finite-dimensional vector space over a field $K$ of characteristic other than 2, endowed with a nondegenerate inner product $\langle\cdot,\cdot\rangle \colon V \times V \to K$. Assume that $\mathcal{T}$ is a commutative family of self-adjoint diagonalizable linear maps of $V \to V$. Then there is an orthogonal basis $B$ of $(V, \langle\cdot,\cdot\rangle)$ such that all the elements of $\mathcal{T}$ are diagonal relative to $B$.

Proof. We know that there is a basis $C = \{v_1, \ldots, v_n\}$ of $V$ such that each $T \in \mathcal{T}$ diagonalizes with respect to $C$. If $C$ is an orthogonal basis we are done. Otherwise we consider the following set of families

\[
D = \big\{ \{V_i\}_{i\in I} \text{ a family of vector subspaces of } V \ \big|\ V = \oplus_{i\in I} V_i \text{ and } T|_{V_i} = \lambda_i(T) 1_{V_i} \ \forall\, T \in \mathcal{T} \big\}.
\]
Observe that $D \neq \emptyset$ because $\{K v_i\}_{i=1}^n \in D$. Let $\{V_i\}_{i\in I}$ be a family in $D$ such that the cardinal of $I$ is minimum. Take $i, j \in I$ different. We will prove that there exists $T \in \mathcal{T}$ such that $\lambda_i(T) \neq \lambda_j(T)$. Indeed, if for any $T \in \mathcal{T}$ we have $\lambda_i(T) = \lambda_j(T)$, then we redefine the decomposition of $V$ as a direct sum of vector subspaces in the following way: let $J := (I \setminus \{i, j\}) \sqcup \{q\}$. Then the cardinal of $J$ is lower than the cardinal of $I$. Next define

\[
\begin{cases} W_q := V_i \oplus V_j, \\ W_k := V_k \quad (k \neq i, j). \end{cases}
\]
Note that $T|_{W_j} = \mu_j(T) 1_{W_j}$ for some scalars $\mu_j(T) \in K$. Then $\{W_j\}_{j\in J} \in D$ and the cardinal of $J$ is lower than the cardinal of $I$, a contradiction. Hence there is some $T \in \mathcal{T}$ such that $\lambda_i(T) \neq \lambda_j(T)$. Let us check that $V_i \perp V_j$: take $0 \neq x \in V_i$ and $0 \neq y \in V_j$; then
\[
\lambda_i(T) \langle x, y \rangle = \langle T(x), y \rangle = \langle x, T(y) \rangle = \lambda_j(T) \langle x, y \rangle,
\]
whence $\langle x, y \rangle = 0$. Thus we have $V = \perp_i V_i$ and, since the characteristic of the ground field is not 2, each $V_i$ has an orthogonal basis. So we are done. $\square$

Now we can summarize Lemma 1 and Lemma 2 as follows:

Theorem 1. Assume that $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I\cup\{0\}}$ is a family of inner products in the finite-dimensional vector space $V$ over a field $K$ and that $\langle\cdot,\cdot\rangle_0$ is nondegenerate. Then for each $i \in I$ there is a linear map $T_i \colon V \to V$ such that $\langle x, y \rangle_i = \langle T_i(x), y \rangle_0$ for any $x, y \in V$. Furthermore, each $T_i$ is a self-adjoint operator of $(V, \langle\cdot,\cdot\rangle_0)$. We also have:

(1) The family $F$ is simultaneously orthogonalizable if and only if each $T_i$ is diagonalizable relative to an orthogonal basis of $V$ relative to $\langle\cdot,\cdot\rangle_0$.
(2) If $\operatorname{char}(K) \neq 2$, the family $F$ is simultaneously orthogonalizable if and only if $\{T_i\}_{i\in I}$ is a commutative family of diagonalizable endomorphisms of $V$.

Remark 3. Observe that the basis $B$ relative to which $F$ is simultaneously orthogonalizable coincides with the basis diagonalizing each $T_i$ with $i \in I$.
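The criterion of Theorem 1 can be sketched computationally. The data below are hypothetical (constructed for this note): $M_0$ is a nondegenerate Gram matrix on $K^2$, the operator $T_1$ has matrix $M_1 M_0^{-1}$ in the row-vector convention used later in the text, and a row basis of eigenvectors of $T_1$ which is orthogonal for $\langle\cdot,\cdot\rangle_0$ orthogonalizes both forms at once.

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def form(x, M, y):
    """The inner product x M y^t on row vectors."""
    return sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))

M0 = [[F(1), F(-1)], [F(-1), F(2)]]   # det = 1, nondegenerate
M1 = [[F(2), F(-2)], [F(-2), F(5)]]
M0inv = [[F(2), F(1)], [F(1), F(1)]]  # inverse of M0

T1 = mul(M1, M0inv)                   # matrix of T_1, here [[2, 0], [1, 3]]
w1, w2 = (F(1), F(0)), (F(1), F(1))   # row eigenvectors of T1 (eigenvalues 2, 3)

print(form(w1, M0, w2), form(w1, M1, w2))  # 0 0
```

So $\{w_1, w_2\}$ is an orthogonal basis for the whole family, as items (1) and (2) predict.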

In the context of the above Lemma 2, if we fix a basis $B = \{v_j\}$ of $V$, we will denote by $M_B(T_i)$ the matrix of $T_i$ relative to $B$. This means that $M_B(T_i) = (a_{ij}^k)_{j,k}$ where $T_i(v_j) = \sum_k a_{ij}^k v_k$ for any $i$ and $j$. On the other hand, the matrices $M_{i,B} := (\langle v_j, v_k \rangle_i)_{j,k}$ of the inner products $\langle\cdot,\cdot\rangle_i$ in $B$ are related by the equations $\langle v_j, v_t \rangle_i = \langle T_i(v_j), v_t \rangle_0 = \sum_k a_{ij}^k \langle v_k, v_t \rangle_0$, that is, $M_{i,B} = M_B(T_i) M_{0,B}$. Equivalently, $M_B(T_i) = M_{i,B} M_{0,B}^{-1}$. Summarizing, we have the following.

Corollary 1. Fix a basis $B$ of a vector space $V$ of finite dimension over a field $K$ with $\operatorname{char}(K) \neq 2$ and assume that $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I\cup\{0\}}$ is a family of inner products on $V$ whose matrices in $B$ are $M_{i,B}$. Further assume that $M_{0,B}$ is nonsingular. Then $F$ is simultaneously orthogonalizable if and only if the collection of matrices $\{M_{i,B} M_{0,B}^{-1}\}_{i\in I}$ is commutative and each one of them is diagonalizable.

Next, we illustrate Theorem 1 and Corollary 1 with two examples over a field $K$, one of them with $\operatorname{char}(K) \neq 2$ and another one with $\operatorname{char}(K) = 2$.

Example 3. Consider the inner products given by the matrices
\[
M_0 = \begin{pmatrix} 4 & 2 & 3 \\ 2 & 2 & 1 \\ 3 & 1 & 3 \end{pmatrix}, \quad M_1 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad M_2 = \begin{pmatrix} -3 & -2 & -2 \\ -2 & -2 & -1 \\ -2 & -1 & -2 \end{pmatrix}
\]
relative to a certain basis $B$ of $K^3$. Assume that the characteristic of $K$ is other than 2, so that $M_0$ is nonsingular (we shall investigate the singular case later on). Are these inner products simultaneously orthogonalizable? We have:
\[
M_1 M_0^{-1} = \begin{pmatrix} -2 & 1 & 2 \\ -2 & 1 & 2 \\ -1 & 1 & 1 \end{pmatrix}, \quad M_2 M_0^{-1} = \begin{pmatrix} -\tfrac12 & -\tfrac12 & 0 \\ 0 & -1 & 0 \\ \tfrac12 & -\tfrac12 & -1 \end{pmatrix},
\]
which can be seen to commute. Moreover $M_1 M_0^{-1}$ is diagonalizable since its minimal polynomial is $x(x+1)(x-1)$. Also $M_2 M_0^{-1}$ is diagonalizable, its minimal polynomial being $(x+1)(x+\tfrac12)$. Thus, there is a basis which is orthogonal relative to the three inner products. If we want to find a basis orthogonalizing all the inner products, it suffices to simultaneously diagonalize the matrices $M_1 M_0^{-1}$ and $M_2 M_0^{-1}$.
For the first one, the eigenspace of eigenvalue 0 is generated by $v_1 = (1, -1, 0)$, the one of eigenvalue 1 is generated by $v_2 = (-1, 1, 1)$, and that of eigenvalue $-1$ by $v_3 = (1, 0, -1)$. But $v_1 M_2 M_0^{-1} = -\tfrac12 v_1$, while $v_2 M_2 M_0^{-1} = -v_2$ and $v_3 M_2 M_0^{-1} = -v_3$. Then the matrices of the inner products in the basis $\{v_1, v_2, v_3\}$ are
\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}.
\]
Since our methods also include the characteristic two case, we can handle an example like the following.

Example 4. Let $K$ be a field of characteristic two and $V = K^3$. Let $F$ be the family of inner products whose Gram matrices relative to the canonical basis are:
\[
M_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad M_1 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad M_2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{pmatrix}.
\]

Consider the inner product on $V$ given by $\langle x, y \rangle_0 := x M_0 y^t$. Define next the linear maps $T_1, T_2 \colon V \to V$ given by $T_i(x) = x M_i M_0^{-1}$ for $i = 1, 2$. We have
\[
M_1 M_0^{-1} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}, \qquad M_2 M_0^{-1} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix},
\]
and both matrices are diagonalizable, a basis of simultaneous eigenvectors for both matrices being $v_1 = (1, 0, 1)$, $v_2 = (1, 1, 0)$, $v_3 = (1, 1, 1)$. Since these vectors are pairwise orthogonal relative to $\langle\cdot,\cdot\rangle_0$, Theorem 1(1) implies that $F$ is simultaneously orthogonalizable. An orthogonal basis for $F$ is $B = \{v_1, v_2, v_3\}$ and the Gram matrices of the $M_i$'s relative to $B$ are:
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]

4. The degenerate case

Assume as before that $K$ is a field and $V$ a vector space over $K$. Recall that for an inner product $\langle\cdot,\cdot\rangle \colon V \times V \to K$, we denote by $V^\perp$ the subspace $V^\perp := \{x \in V : \langle x, V \rangle = 0\}$, which we will call the radical of the inner product. We will also use the notation $\operatorname{rad}(V, \langle\cdot,\cdot\rangle)$ for $V^\perp$ if we want to be more accurate. If there is no possible confusion we will use $\operatorname{rad}(\langle\cdot,\cdot\rangle)$. If there is an orthogonal basis $\{v_i\}$ of $V$ relative to an inner product $\langle\cdot,\cdot\rangle \colon V \times V \to K$, then the radical of the inner product is the linear span of all the $v_i$'s such that $\langle v_i, v_i \rangle = 0$.

Let $I$ be a nonempty set and, for $i \in I$, let $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I}$ be a family of inner products on $V$ and consider $\operatorname{rad}(V, \langle\cdot,\cdot\rangle_i)$. We define the radical of the family $F$ by $\operatorname{rad}(F) := \cap_{i\in I} \operatorname{rad}(V, \langle\cdot,\cdot\rangle_i)$. There is a subspace $W$ of $V$ such that $V = \operatorname{rad}(F) \oplus W$ and $\langle \operatorname{rad}(F), W \rangle_i = 0$ for any $i \in I$. Indeed, any subspace $W$ complementing $\operatorname{rad}(F)$ satisfies $\langle \operatorname{rad}(F), W \rangle_i = 0$ for any $i$. So we can write $V = \operatorname{rad}(F) \perp_F W$. Then we have the following result.

Proposition 2. Let $V$ be a vector space of arbitrary dimension over a field $K$. Let $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I}$ be a family of inner products on $V$. There is a subspace $W$ of $V$ (in fact any complement of $\operatorname{rad}(F)$) such that $V = \operatorname{rad}(F) \perp_F W$.
Moreover, the family of inner products $F|_W = \{\langle\cdot,\cdot\rangle_i|_W\}_{i\in I}$ on $W$ satisfies $\operatorname{rad}(F|_W) = 0$, and the collection $F$ is simultaneously orthogonalizable if and only if $F|_W$ is simultaneously orthogonalizable.

Proof. Observe that by the definition of $\operatorname{rad}(F)$ we have

\[
\operatorname{rad}(F|_W) = 0.
\]
Indeed, if $x \in W$ satisfies $\langle x, W \rangle_i = 0$ for any $i \in I$, then $\langle x, V \rangle_i = \langle x, \operatorname{rad}(F) \rangle_i + \langle x, W \rangle_i = 0$, implying $x \in \operatorname{rad}(F)$. So $x = 0$. Clearly, if the family $F|_W$ is simultaneously orthogonalizable, then $F$ is simultaneously orthogonalizable. Conversely, take a basis $\{e_j\}_{j\in J}$ of $V$ such that for any $j, k \in J$ with $j \neq k$ and $i \in I$ we have $\langle e_j, e_k \rangle_i = 0$. For any $j \in J$ write $e_j = r_j + w_j$ with $r_j \in \operatorname{rad}(F)$ and $w_j \in W$. We have
\[
0 = \langle e_j, e_k \rangle_i = \langle w_j, w_k \rangle_i,
\]
which proves that the collection of vectors $\{w_j\}_{j\in J}$ is orthogonal relative to any inner product $\langle\cdot,\cdot\rangle_i$. Now define the set $J_1 := \{j \in J : w_j \neq 0\}$; we see that

$\{w_j\}_{j\in J_1}$ is a basis of $W$. First, we show that it is a system of generators of $W$: take $w \in W$; then $w = \sum_j \lambda_j e_j = \sum_j \lambda_j r_j + \sum_j \lambda_j w_j$ $(\lambda_j \in K)$. Thus $W \ni w - \sum_j \lambda_j w_j = \sum_j \lambda_j r_j \in \operatorname{rad}(F)$, hence $w = \sum_j \lambda_j w_j$. In order to prove the linear independence, consider scalars $\lambda_j$ and assume $\sum_{j\in J_1} \lambda_j w_j = 0$. Then for any $i \in I$ and $k \in J_1$ we have $0 = \langle \sum_{j\in J_1} \lambda_j w_j, w_k \rangle_i = \lambda_k \langle w_k, w_k \rangle_i$. So if $\lambda_k \neq 0$, then $\langle w_k, w_k \rangle_i = 0$ for any $i$. Therefore $w_k \in \operatorname{rad}(F|_W) = 0$, whence $w_k = 0$, a contradiction (because $k \in J_1$). Thus $\{w_j\}_{j\in J_1}$ is an orthogonal basis of $W$ relative to any $\langle\cdot,\cdot\rangle_i$. $\square$

So we can reduce the problem of orthogonalizing a collection of inner products $F = \{\langle\cdot,\cdot\rangle_i\}_{i\in I}$ to the case in which $\operatorname{rad}(F) = 0$.

Example 5. Consider the vector space $K^4$ and the family $F$ of inner products given by the matrices below:
\[
N_1 = \begin{pmatrix} 3 & -2 & -2 & 0 \\ -2 & 2 & 1 & -1 \\ -2 & 1 & 2 & 1 \\ 0 & -1 & 1 & 2 \end{pmatrix}, \qquad N_2 = \begin{pmatrix} 0 & 0 & 1 & 1 \\ 0 & 0 & -1 & -1 \\ 1 & -1 & -1 & 0 \\ 1 & -1 & 0 & 1 \end{pmatrix},
\]
\[
N_3 = \begin{pmatrix} 3 & -1 & -3 & -2 \\ -1 & 1 & 1 & 0 \\ -3 & 1 & 3 & 2 \\ -2 & 0 & 2 & 2 \end{pmatrix}, \qquad N_4 = \begin{pmatrix} 2 & -1 & -1 & 0 \\ -1 & 1 & 0 & -1 \\ -1 & 0 & 1 & 1 \\ 0 & -1 & 1 & 2 \end{pmatrix}.
\]
Now, we analyze if the family $F$ is simultaneously orthogonalizable. For a generic $v \in K^4$, solving the equations $v N_i = 0$ $(i = 1, 2, 3, 4)$ we find that $\operatorname{rad}(F) = K(0, 1, -1, 1)$, so $K^4 = \operatorname{rad}(F) \oplus W$, where $W$ can be taken to be the linear span of $e_1 = (1, 0, 1, 1)$, $e_2 = (1, 1, 0, 1)$, $e_3 = (1, 1, 1, 0)$. The restriction of the inner products to $W$ is given by the linearly independent matrices
\[
M_0 = \begin{pmatrix} 5 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad M_1 = \begin{pmatrix} 4 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \quad M_2 = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]
relative to the basis $\{e_1, e_2, e_3\}$ of $W$. By Remark 1, observe that the maximum number of linearly independent inner products on $W$ has to be three if we want $W$ to have an orthogonal basis relative to them. Since $\det(M_0) = 1$, we can apply the procedure explained in Corollary 1. Then
\[
M_0^{-1} = \begin{pmatrix} 1 & -2 & 0 \\ -2 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad M_1 M_0^{-1} = \begin{pmatrix} 0 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \quad M_2 M_0^{-1} = \begin{pmatrix} 2 & -4 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]

and it can be checked that $M_1 M_0^{-1}$ commutes with $M_2 M_0^{-1}$ and that both matrices are diagonalizable, since their characteristic polynomials are $x(x+1)(x-1)$ and $x(x-1)(x-2)$ respectively. We conclude that $F$ is simultaneously orthogonalizable. If we want to find a basis which is orthogonal relative to $F$, we first find a basis of $W$ which orthogonalizes the inner products of matrices $M_0$, $M_1$ and $M_2$. For this, it suffices to simultaneously diagonalize the matrices $A_i = M_i M_0^{-1}$, with $i = 1, 2$.

A basis of common eigenvectors for both matrices is $\{e_1 - 2e_2, e_2, e_3\}$. Moreover, defining
\[
v_0 := (0, 1, -1, 1), \qquad v_1 := e_1 - 2e_2 = (1, 0, 1, 1) - 2(1, 1, 0, 1) = (-1, -2, 1, -1),
\]
\[
v_2 := e_2 = (1, 1, 0, 1), \qquad v_3 := e_3 = (1, 1, 1, 0),
\]
we get a basis $\{v_i\}_{i=0}^3$ of $K^4$ such that the matrices of the inner products relative to this basis are $\operatorname{diag}(0, 1, 1, 1)$, $\operatorname{diag}(0, 0, 1, -1)$, $\operatorname{diag}(0, 2, 0, 1)$ and $\operatorname{diag}(0, 1, 1, 0)$, respectively. In this example we have had the good luck that, after modding out $\operatorname{rad}(F)$, the restrictions of the inner products to $W$ have been in the conditions of the nondegenerate case. However, as we will see, it is not necessary to be lucky in order to solve the problem successfully.

Let $V$ be a $K$-vector space. For a subset $X \subseteq V$ we denote by $\operatorname{span}(X)$ the $K$-linear span of $X$.

Definition 2. Let $F$ and $F'$ be two families of inner products on the same $K$-vector space $V$, that is, $F, F' \subset L^2(V \times V; K)$. We say that $F$ and $F'$ are equivalent, and we write $F \sim F'$, if and only if $\operatorname{span}(F) = \operatorname{span}(F')$.

Observe that if $F \sim F'$, then a basis $B$ of $V$ orthogonalizes $F$ if and only if $B$ orthogonalizes $F'$.

Next we observe that, under a mild hypothesis on the nature of the ground field, the fact that $\operatorname{rad}(F) = 0$ implies the existence of a nondegenerate inner product in a family $F'$ simultaneously orthogonalizable with $F' \sim F$:

Theorem 2. Assume that $K$ is an infinite field and $F$ a family of simultaneously orthogonalizable inner products in a finite-dimensional $K$-vector space $V$ such that $\operatorname{rad}(F) = 0$. Then there is a family $F'$ with $F \sim F'$ such that $F'$ has a nondegenerate inner product.

Proof. Since $V$ is finite-dimensional, without loss of generality we may assume that $F$ is finite, because it is equivalent to a finite family $F'$. So, to fix ideas, write $F = \{\langle\cdot,\cdot\rangle_i\}_{i=1}^n$. If we take any collection of scalars $\lambda_1, \ldots, \lambda_n \in K$, we can construct the inner product $\langle\langle\cdot,\cdot\rangle\rangle := \sum_{i=1}^n \lambda_i \langle\cdot,\cdot\rangle_i$. Then $F \cup \{\langle\langle\cdot,\cdot\rangle\rangle\}$ is simultaneously orthogonalizable if and only if $F$ is.
We will replace F with F ∪ {⟨⟨·,·⟩⟩}, and prove that ⟨⟨·,·⟩⟩ is nondegenerate for some values of λ_1, …, λ_n ∈ K. Assume on the contrary that ⟨⟨·,·⟩⟩ is degenerate for every choice of λ_1, …, λ_n ∈ K. Then the determinant of the Gram matrix of ⟨⟨·,·⟩⟩ is zero. Since F is simultaneously orthogonalizable, the matrices of the ⟨·,·⟩_i are diagonal relative to some basis B = {v_1, …, v_m} of V. So the matrix of ⟨·,·⟩_i is diag(a_{i1}, …, a_{im}). Consequently the matrix of ⟨⟨·,·⟩⟩ is diag(Σ_i λ_i a_{i1}, …, Σ_i λ_i a_{im}). Since the determinant of the Gram matrix of ⟨⟨·,·⟩⟩ is zero, we get

∏_j (Σ_i λ_i a_{ij}) = 0

for any (λ_1, …, λ_n) ∈ K^n. Consider the polynomial algebra K[x_1, …, x_n] in the n indeterminates x_1, …, x_n. The polynomial p ∈ K[x_1, …, x_n] given by p = ∏_j (Σ_i x_i a_{ij}) vanishes everywhere (for any values of the variables in K). Since K is an infinite field, following [1, Section 8.1.3, item (8), Chapter 8] or [6, Section 1.3, item (7), Chapter 1], we have p ∈ I(A^n) = 0 (here A^n is the n-dimensional affine space K^n). We get p = 0 and, since p is the product of the homogeneous polynomials q_j := Σ_i x_i a_{ij}, some of these factors must be 0. But if some q_j = 0, then a_{ij} = 0 for every i. Denoting by ξ_B(x) the coordinates of any x ∈ V relative to B, we have

⟨v_j, x⟩_i = ξ_B(v_j)^t diag(a_{i1}, …, a_{im}) ξ_B(x) = (0, …, 1, …, 0) diag(a_{i1}, …, a_{im}) ξ_B(x) = (0, …, a_{ij}, …, 0) ξ_B(x) = 0,

where the 1 and the a_{ij} sit in the j-th position,

for any i and any x ∈ V. Thus v_j ∈ rad(F) = 0, a contradiction. So in the family F ∪ {⟨⟨·,·⟩⟩} there is a nondegenerate inner product. Note that any basis which is orthogonal for all of F ∪ {⟨⟨·,·⟩⟩} is also orthogonal for F, and conversely. □

As a consequence of the proof given in Theorem 2 we have this result.

Corollary 2. If K is an infinite field and F is a simultaneously orthogonalizable family of inner products in the finite-dimensional vector space V over K with rad(F) = 0, then F can be enlarged, by adding at most one more inner product which is a linear combination of those in F, so that the new family has a nondegenerate inner product.

The hypothesis that K must be infinite in Corollary 2 is essential, as the following example shows.

Example 6. Consider K = F_2, the field of two elements, and the F_2-vector space V = F_2^3. Let F be the family of inner products whose Gram matrices relative to the canonical basis of V are

( 0 1 0 )      ( 1 0 1 )
( 1 1 1 ),     ( 0 1 1 ).
( 0 1 0 )      ( 1 1 0 )

It can be checked that rad(F) = 0, but any linear combination of these two inner products (with coefficients x and y) has matrix

( y   x       y     )
( x   x + y   x + y )
( y   x + y   0     )

whose determinant is xy(x + y), which vanishes for all x, y ∈ F_2. So, every inner product in the span of F is degenerate. Note that F is simultaneously orthogonalizable for the basis B = {(1, 1, 1), (1, 0, 1), (0, 1, 1)}.

Corollary 3. Let K be an infinite field and F = {⟨·,·⟩_i}_{i=1}^{n} a family of inner products in the finite-dimensional vector space V over K. Assume that rad(F) = 0 and that for any (λ_1, …, λ_n) ∈ K^n the inner product ⟨⟨·,·⟩⟩ := Σ_{i=1}^{n} λ_i ⟨·,·⟩_i is degenerate. Then F is not simultaneously orthogonalizable.

Definition 3. Let V be a vector space over a field K. If F = {⟨·,·⟩_i}_{i∈I} is a family of inner products on V labelled by I, a subspace S ⊂ V is said to be F-supplemented if there exists a subspace S′ ⊂ V such that V = S ⊥_F S′. The subspace S′ is said to be an F-supplement of S.

For instance, let V be a K-vector space of arbitrary dimension and let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products on V which is simultaneously orthogonalizable. Then for any fixed i the radical rad(⟨·,·⟩_i) is F-supplemented. To see this, consider a basis {e_j}_{j∈J} of V which is orthogonal relative to F. Then rad(⟨·,·⟩_i) is the linear span of all the e_j's such that ⟨e_j, e_j⟩_i = 0. So define S′ as the linear span of the remaining e_j's. We have V = rad(⟨·,·⟩_i) ⊥_F S′. Thus:

Proposition 3. Let V be an arbitrary-dimensional K-vector space and let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products on V. A necessary condition for F to be simultaneously orthogonalizable is that the radical of each inner product of F be F-supplemented. Furthermore, for any basis {e_j}_{j∈J} simultaneously orthogonalizing F, and any i ∈ I, there is a subset J_i ⊂ J such that {e_j}_{j∈J_i} is a basis of rad(⟨·,·⟩_i).

The following lemma shows the relationship between F and F|_S.

Lemma 3. Let V be a K-vector space of arbitrary dimension. Let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products. Let S be a subspace of V and suppose that there exists an F-supplement S′ of S. Then:

(1) If F|_S and F|_{S′} are simultaneously orthogonalizable, then F is simultaneously orthogonalizable.
(2) If rad(F) = 0, then rad(F|_{S′}) = rad(F|_S) = 0.

We observe that the converse of item (1) of Lemma 3 is not true in general, as the following example shows:

Example 7. We consider a field K of characteristic 2 and V = K^3 with F = {⟨·,·⟩}, where ⟨·,·⟩ is the inner product whose matrix in the canonical basis {e_i}_{i=1}^{3} is:

( 1 0 0 )
( 0 0 1 ).
( 0 1 0 )

Now decompose V = Ke_1 ⊥_F (Ke_2 ⊕ Ke_3). The subspace Ke_2 ⊕ Ke_3 is not orthogonalizable (since any of its vectors is isotropic); however, V has the orthogonal basis {e_1 + e_2 + e_3, e_1 + e_2, e_1 + e_3}.

In view of Proposition 3 and Lemma 3, we may expect to construct a basis orthogonalizing F from a basis of rad(⟨·,·⟩_i) which orthogonalizes the remaining radicals rad(⟨·,·⟩_j) (with j ≠ i). In particular, this gives one of the cases in which the simultaneous orthogonalization is inherited by orthogonal summands.
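Example 7 can also be checked mechanically. The brute-force verification below (not part of the original text) works over F_2 as a representative field of characteristic 2:

```python
import numpy as np

# Gram matrix of <.,.> from Example 7 in the canonical basis.
M = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])

# Every vector of Ke2 + Ke3 is isotropic: <v, v> = 2*v2*v3 = 0 in characteristic 2.
for v in ([0, 0, 1], [0, 1, 0], [0, 1, 1]):
    assert (np.array(v) @ M @ np.array(v)) % 2 == 0

# Yet the whole space admits the orthogonal basis given in the text.
basis = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1]])
G = (basis @ M @ basis.T) % 2
print(G)   # the identity matrix: the basis is orthogonal
```

The same computation fails for any basis of span{e_2, e_3} alone, which is the point of the example.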

Theorem 3. Let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products in an arbitrary-dimensional vector space V over a field K and assume that rad(F) = 0. Then the following assertions are equivalent:
(1) F is simultaneously orthogonalizable.
(2) There is an i ∈ I such that rad(⟨·,·⟩_i) ≠ V and rad(⟨·,·⟩_i) is F-supplemented for some supplement S′ such that: (i) the family of inner products F|_{S′} is simultaneously orthogonalizable and nondegenerate; (ii) the family of inner products F|_{rad(⟨·,·⟩_i)} is simultaneously orthogonalizable.

Proof. Let {e_j}_{j∈J} be an orthogonal basis of V relative to the family F. Since rad(F) = 0 there is some i ∈ I such that rad(⟨·,·⟩_i) ≠ V. Applying Proposition 3 we know that rad(⟨·,·⟩_i) is F-supplemented. Then rad(⟨·,·⟩_i) is the linear span of a certain subset of the previous basis:

rad(⟨·,·⟩_i) = span{e_j}_{j∈J_i}

for some J_i ⊂ J. Then S′ = span{e_j}_{j∈J∖J_i} is an F-supplement of rad(⟨·,·⟩_i), and the family of inner products F|_{S′} is simultaneously orthogonalizable (relative to the basis {e_j}_{j∈J∖J_i}) and nondegenerate. Observe that F|_{rad(⟨·,·⟩_i)} is simultaneously orthogonalizable too. For the converse apply Lemma 3. □
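At a computational level, the radical rad(⟨·,·⟩_i) that enters Theorem 3 is simply the null space of the corresponding Gram matrix, and rad(F) is the common null space of all of them. A minimal numerical sketch over the reals (the Gram matrices below are stand-ins, not taken from the text):

```python
import numpy as np

def radical(grams, tol=1e-10):
    """Return a basis (as rows) of the common null space of the Gram
    matrices, i.e. of rad(F) for the family F they represent."""
    stacked = np.vstack(grams)          # x is in rad(F) iff stacked @ x = 0
    _, s, vt = np.linalg.svd(stacked)
    rank = int(np.sum(s > tol))
    return vt[rank:]                    # rows orthogonal to the row space

G1 = np.array([[1.0, 0, 0], [0, 0, 0], [0, 0, 0]])
G2 = np.array([[0.0, 0, 0], [0, 1, 0], [0, 0, 0]])
rad = radical([G1, G2])
print(rad.shape[0])   # 1: here rad(F) is the line spanned by e3
```

Over Q or a finite field one would use exact kernel computations instead of the floating-point SVD used in this sketch.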

Remark 4. Without loss of generality, in the family F|_{rad(⟨·,·⟩_i)} we can eliminate ⟨·,·⟩_i|_{rad(⟨·,·⟩_i)}, since this inner product is null. At a computational level this reduces the complexity of the problem.

Theorem 4. Let V be a finite-dimensional vector space over a field K and let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products. Then F is simultaneously orthogonalizable if and only if V = rad(F) ⊥ (⊥_{j∈J} V_j) with |J| < ∞, where each F|_{V_j} is nondegenerate and simultaneously orthogonalizable.

Proof. The nontrivial implication is as follows. First, we apply Proposition 2, which gives a decomposition V = rad(F) ⊕ W where F|_W is simultaneously orthogonalizable and has zero radical. Next, we apply Theorem 3 to the subspace W and the family of inner products F|_W, and we repeat this process. Since the dimension of V is finite, the process finishes in a finite number of steps. □

Example 8. As a final example we consider K = Q and V = Q^6, with F = {⟨·,·⟩_i}_{i=1}^{6} the family of inner products whose Gram matrices relative to the canonical basis are {B_i}_{i=1}^{6} (respectively), given as follows:

B_1 =
( 1  0  1  1  0  1 )
( 0  2 −3  1  1  1 )
( 1 −3  2  2 −2 −1 )
( 1  1  2  0  0  1 )
( 0  1 −2  0  3  3 )
( 1  1 −1  1  3  4 )

B_2 =
( 2 −1 −1  0  2  2 )
(−1  3 −1  0 −1 −1 )
(−1 −1  2  0 −2 −2 )
( 0  0  0  0  0  0 )
( 2 −1 −2  0  4  4 )
( 2 −1 −2  0  4  4 )

B_3 =
( 3 −1  0  1  2  3 )
(−1  1 −1  1 −2 −2 )
( 0 −1  1  2 −1  0 )
( 1  1  2  0  0  1 )
( 2 −2 −1  0  3  3 )
( 3 −2  0  1  3  4 )

B_4 =
( 1 −1 −2 −1  2  1 )
(−1  1 −2  1  0  0 )
(−2 −2  0  0 −2 −3 )
(−1  1  0 −2  0 −1 )
( 2  0 −2  0  3  3 )
( 1  0 −3 −1  3  2 )

B_5 =
( 0  1 −1  0  0  0 )
( 1  2 −2  0  2  2 )
(−1 −2  3  0 −3 −3 )
( 0  0  0  0  0  0 )
( 0  2 −3  0  3  3 )
( 0  2 −3  0  3  3 )

B_6 =
( 2  0  0  1  1  2 )
( 0 −1  0  1 −2 −2 )
( 0  0  0  2  0  1 )
( 1  1  2  0  0  1 )
( 1 −2  0  0  1  1 )
( 2 −2  1  1  1  2 )

We can see that rad(F) = 0 and rad(⟨·,·⟩_1) = Ke_1 ⊕ Ke_2, where e_1 = (−5, 1, 2, 3, 1, 0) and e_2 = (−6, 1, 2, 3, 0, 1). It can be checked that F|_{rad(⟨·,·⟩_1)} is nondegenerate, so applying Theorem 1 it has a simultaneously orthogonalizable basis. Thus we find a basis of rad(⟨·,·⟩_1) orthogonal relative to F|_{rad(⟨·,·⟩_1)}: concretely f_1 = −6e_1 + 5e_2, f_2 = 7e_1 − 6e_2. Next we check that rad(⟨·,·⟩_1) is F-supplemented and we give such a supplement S′ = ⊕_{i=3}^{6} Kf_i, where f_3 = (1, 1, 1, 0, 0, 0), f_4 = (0, 0, 0, 1, 0, 0), f_5 = (−1, 0, 0, 0, 1, 0) and f_6 = (−1, 0, 0, 0, 0, 1). Now, again by Theorem 1, F|_{S′} is nondegenerate and we find a simultaneously orthogonalizable basis of S′ given by {q_3, q_4, q_5, q_6}, where q_3 = (0, 0, 0, 0, 1, −1), q_4 = (0, 0, 0, 1, 1, −1), q_5 = (1, 1, 1, 2, 4, −4) and q_6 = (0, 1, 1, 2, 4, −3). So, definitively, there is a basis of V given by {f_1, f_2, q_3, q_4, q_5, q_6}, and the inner products in this basis are given by the matrices diag(0, 0, 1, −1, 1, 2), diag(1, 1, 0, 0, 1, 1), diag(1, 1, 1, −1, 1, 0), diag(1, 1, −1, −1, 0, 1), diag(1, −1, 0, 0, 1, 2) and diag(1, 0, 1, −1, 1, −1), respectively.
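The computations of Example 8 can be double-checked numerically. The sketch below (using the entries of B_1 and the vectors as printed above) verifies that e_1, e_2 lie in rad(⟨·,·⟩_1) and that the final basis diagonalizes B_1 with the stated diagonal:

```python
import numpy as np

B1 = np.array([[1, 0, 1, 1, 0, 1], [0, 2, -3, 1, 1, 1], [1, -3, 2, 2, -2, -1],
               [1, 1, 2, 0, 0, 1], [0, 1, -2, 0, 3, 3], [1, 1, -1, 1, 3, 4]])
e1 = np.array([-5, 1, 2, 3, 1, 0])
e2 = np.array([-6, 1, 2, 3, 0, 1])
assert not B1.dot(e1).any() and not B1.dot(e2).any()   # e1, e2 in rad(<.,.>_1)

# rows: f1 = -6 e1 + 5 e2, f2 = 7 e1 - 6 e2, then q3, q4, q5, q6
V = np.array([-6 * e1 + 5 * e2, 7 * e1 - 6 * e2,
              [0, 0, 0, 0, 1, -1], [0, 0, 0, 1, 1, -1],
              [1, 1, 1, 2, 4, -4], [0, 1, 1, 2, 4, -3]])
print((V @ B1 @ V.T == np.diag([0, 0, 1, -1, 1, 2])).all())   # True
```

The analogous check V @ B_k @ V.T for k = 2, …, 6 recovers the remaining diagonal matrices listed above.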

Let us mention that if we have an algebra structure on Q^6 with structure constants c_{ijk} being the (i, j) entry of B_k, then this algebra is an evolution algebra; a natural basis is precisely {f_1, f_2, q_3, q_4, q_5, q_6}, its structure matrix being

( 0  1  1  1  1  1 )
( 0  1  1  1 −1  0 )
( 1  0  1 −1  0  1 )
(−1  0 −1 −1  0 −1 )
( 1  1  1  0  1  1 )
( 2  1  0  1  2 −1 )

References

[1] Robert B. Ash, Abstract Algebra: The Basic Graduate Year (revised 11/02). Dover Books on Mathematics. https://faculty.math.illinois.edu/~r-ash/Algebra.html
[2] Ronald I. Becker, Necessary and sufficient conditions for the simultaneous diagonability of two quadratic forms. Linear Algebra Appl. 30 (1980), pp. 129–139.
[3] Miguel D. Bustamante, Pauline Mellon, and M. Victoria Velasco, Determining when an algebra is an evolution algebra. Mathematics 8, 1349 (2020).
[4] Eugenio Calabi, Linear systems of real quadratic forms. Proc. Amer. Math. Soc. 15 (1964), pp. 844–846.
[5] Paul Finsler, Über das Vorkommen definiter und semidefiniter Formen in Scharen quadratischer Formen. Comment. Math. Helv. 9 (1937), pp. 188–192.
[6] William Fulton, Algebraic Curves: An Introduction to Algebraic Geometry. Addison-Wesley, Reading MA, 3rd ed. (2008).
[7] Werner Greub, Linear Algebra. Springer-Verlag, Heidelberg, 4th ed. (1975).
[8] Jean B. Hiriart-Urruty, Potpourri of conjectures and open questions in nonlinear analysis and optimization. SIAM Rev. 49 (2007), pp. 255–273.
[9] Research School on Evolution Algebras and non associative algebraic structures. https://algebraygrafos.sciencesconf.org/
[10] Frank Uhlig, A recurring theorem about pairs of quadratic forms and extensions: a survey. Linear Algebra Appl. 25 (1979), pp. 219–237.
[11] Frank Uhlig, Simultaneous block diagonalization of two real symmetric matrices. Linear Algebra Appl. 7 (1973), pp. 281–289.
[12] Maria J. Wonenburger, Simultaneous diagonalization of symmetric bilinear forms. Journal of Mathematics and Mechanics 15 (4) (1966), pp. 617–622.

Departamento de Matemática Aplicada, E.T.S. Ingeniería Informática, Universidad de Málaga, Campus de Teatinos s/n. 29071 Málaga. Spain.
Email address: [email protected]

Departamento de Matemática Aplicada, E.T.S. Ingeniería Informática, Universidad de Málaga, Campus de Teatinos s/n. 29071 Málaga. Spain.
Email address: [email protected]

Departamento de Matemática Aplicada, Escuela de Ingenierías Industriales, Universidad de Málaga, Campus de Teatinos s/n. 29071 Málaga. Spain.
Email address: [email protected]

Departamento de Álgebra, Geometría y Topología, Facultad de Ciencias, Universidad de Málaga, Campus de Teatinos s/n. 29071 Málaga. Spain.
Email address: candido [email protected]