arXiv:2001.01273v2 [math.RA] 17 Sep 2020

Theory and applications of linearized multivariate skew polynomials

Umberto Martínez-Peñas*

Institute of Computer Science and Mathematics, University of Neuchâtel, Switzerland

* [email protected]

Abstract

In this work, linearized multivariate skew polynomials with coefficients over division rings are introduced, which generalize univariate linearized polynomials, group algebras of finite groups of ring automorphisms, and algebras of derivations, among others. It is shown that they are right linear over division subrings called centralizers, and that their natural evaluation is connected to the remainder-based evaluation of free multivariate skew polynomials. It is shown that P-independence of evaluation points corresponds to right linear independence over the corresponding centralizers. Hence it is deduced that finitely generated P-closed sets correspond to lists of finite-dimensional right vector spaces according to the partition of the set of evaluation points into pair-wise disjoint conjugacy classes, and skew Lagrange interpolation works as expected over finitely generated P-closed sets. It is also shown that products of free multivariate skew polynomials translate into coordinate-wise compositions (one per conjugacy class) of linearized multivariate skew polynomials, and compositions over a single conjugacy class translate into matrix products over the corresponding centralizers. Several applications of these results are given. First, multivariate Moore and Wronskian matrices are introduced, which generalize the multivariate Vandermonde matrices introduced in previous work, and an explicit method to determine their ranks in general is given. Such results also give rise to a novel version of linearized Reed-Muller codes, introduced in this work. Finally, we introduce P-Galois extensions of division rings, defined by considering centralizers and free multivariate skew polynomial rings. Such extensions generalize (finite) Galois extensions of fields. Three Galois-theoretic results are generalized to such extensions: Artin's theorem on the dimension of the larger division ring over its subring, the Galois correspondence, and Hilbert's Theorem 90.

Keywords: Galois theory, Hilbert's Theorem 90, Lagrange interpolation, linearized polynomials, Moore matrices, Reed-Muller codes, skew polynomials, Vandermonde matrices, Wronskian matrices.

MSC: 12E10, 12E15, 12F10, 16S36, 94B60.

1 Introduction

The concept of univariate skew polynomial was introduced by Ore in [38], in its most general form, that is, where the coefficients lie in a (commutative or non-commutative) division ring. Skew polynomials are defined as the elements of a left algebra over a division ring with a left basis of monomials 1, x, x^2, ... whose product satisfies that x^i · x^j = x^{i+j}, for all non-negative integers i, j, and such that the degree of the product of two skew polynomials is the sum of their degrees. Here we are slightly bending the usual notion of algebra [8]. By left algebra, we mean a left vector space with a ring structure whose product is linear on the first component (rather than bilinear as in the commutative case).
A natural definition of evaluation of univariate skew polynomials, via Euclidean division, was introduced by Lam and Leroy in the works [22, 25]. Thanks to this concept of evaluation, Lam and Leroy introduced the concept of P-independence of evaluation points in [22, 24], which in turn gives rise to the concept of P-closed set (Definition 20), and P-basis (Definition 23) of a P-closed set. Intuitively, a finite set of evaluation points is P-independent if we may perform Lagrange interpolation over them, i.e., any set of values (of the right size) can be attained by evaluating some skew polynomial over such evaluation points (see Theorem 2). In [22, Theorem 23], it was shown that a set of evaluation points is P-independent if, and only if, the subsets in its partition into conjugacy classes (Definition 18) are each P-independent. Later, in [25, Theorem 4.5], it was shown that a set of evaluation points, all from the same conjugacy class, is P-independent if, and only if, the exponents in the conjugacy relation are right linearly independent over the corresponding centralizer (Definition 10). With these two results, Lam and Leroy gave a simple explicit method to find the rank of matrices obtained by evaluating (univariate) skew polynomials, which generalize Vandermonde matrices [43] and are related to Moore matrices [33] and Wronskian matrices [15]. Later on, this method for finding the rank of such general Vandermonde matrices that use evaluations of skew polynomials was used in [29] to show that linearized Reed-Solomon codes (introduced in [29, Definition 31]) have the maximum possible minimum sum-rank distance. Linearized Reed-Solomon codes are defined by evaluating certain operator polynomials that generalize classical univariate linearized polynomials over finite fields [28, Chapter 3]. Evaluations of such polynomials are tightly connected to Lam and Leroy's concept of evaluation for skew polynomials via a particular case of a result by Leroy [27, Theorem 2.8].
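In the univariate case, this connection between remainder-based evaluation and norms can be made concrete. The following Python sketch (ours, not from the paper) works over F = F_9 = F_3[i]/(i^2 + 1) with σ the Frobenius β ↦ β^3 and δ = 0, and checks that the evaluation F(a) = Σ_k F_k N_k(a) given by the norm recursion N_{k+1}(a) = σ(N_k(a)) a coincides with the remainder of the right Euclidean division of F by x − a.

```python
# Sketch (not from the paper): univariate skew evaluation over F_9 = F_3[i]/(i^2+1),
# with sigma the Frobenius b -> b^3 and delta = 0. Elements are pairs (a, b) = a + b*i.
def g_add(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def g_mul(x, y):
    a, b = x; c, d = y
    return ((a * c + 2 * b * d) % 3, (a * d + b * c) % 3)  # i^2 = -1 = 2
def frob(x): return (x[0], (2 * x[1]) % 3)                 # b -> b^3 in characteristic 3
ZERO, ONE = (0, 0), (1, 0)

def frob_pow(x, k):
    for _ in range(k):
        x = frob(x)
    return x

def eval_by_norms(F, a):
    """F(a) = sum_k F_k * N_k(a), where N_0(a) = 1 and N_{k+1}(a) = frob(N_k(a)) * a."""
    val, nrm = ZERO, ONE
    for c in F:                      # F = [F_0, F_1, ...], coefficient of x^k at index k
        val = g_add(val, g_mul(c, nrm))
        nrm = g_mul(frob(nrm), a)
    return val

def remainder_mod_x_minus(F, a):
    """Remainder of the right division F = Q * (x - a) + r in F_9[x; frob]."""
    F = list(F)
    while len(F) > 1:
        c = F.pop()                  # kill the leading term c * x^D, using
        d = len(F) - 1               # (c x^(D-1)) * (x - a) = c x^D - c frob^(D-1)(a) x^(D-1)
        F[d] = g_add(F[d], g_mul(c, frob_pow(a, d)))
    return F[0]
```

The agreement of the two routines for every evaluation point is exactly Lam and Leroy's definition of evaluation via Euclidean division.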
Since such operator polynomials are right linear over the corresponding centralizer, they can be seen as a linearization of skew polynomials. The generator matrices of linearized Reed-Solomon codes [29, page 604] are a linearized version of the skew Vandermonde matrices defined in [22, 25] and simultaneously recover as particular cases Vandermonde, Moore and Wronskian matrices. For this reason, linearized Reed-Solomon codes also recover as particular cases Reed-Solomon codes [40], which are MDS (maximum distance separable), and Gabidulin codes [10], which are MRD (maximum rank distance). These codes have numerous applications in error correction in telecommunications, repair in data storage and information-theoretic security, among others. Most notably, Reed-Solomon codes have been extensively used in practice, including CDs, DVDs, QR codes, satellite communications and the storage system RAID 6. In [31], free multivariate skew polynomials were introduced, following Ore's definition: They are the most general polynomial ring in several free variables (variables are not allowed to commute with each other) such that the product of two monomials consists of appending them and the degree of a product of two skew polynomials is the sum of their degrees. Thanks to the lack of relations between the variables, the concept of evaluation was extended in [31, Definition 9] due to the uniqueness of remainders in the Euclidean division [31, Lemma 5], which cannot be guaranteed for iterated skew polynomial rings (see [31, Remark 8] and [12, Example 3.7]) or if the variables are allowed to commute (see [31, Remark 7]). The concepts of conjugacy, P-independence and skew Vandermonde matrices were then extended to such a multivariate case in [31], leading to a skew Lagrange interpolation theorem [31, Theorem 4] and equating the rank of a skew Vandermonde matrix to the rank of the P-closed set generated by the corresponding evaluation points [31, Proposition 41].
In this work, we introduce a concept of multivariate polynomials on certain operators, as done in [29], which we will call linearized multivariate skew polynomials (Subsection 2.2), and we show that their natural evaluation is also tightly connected to the arithmetic evaluation (that is, based on Euclidean divisions) of free multivariate skew polynomials (Subsection 2.3). We will use this connection and skew Lagrange interpolation [31, Theorem 4] to extend the important results [22, Theorem 23] and [25, Theorem 4.5] to the multivariate case, finding an explicit representation of P-closed sets as a disjoint union or a list of right vector spaces over the corresponding centralizers (Theorems 4 and 5 in Section 3), and similarly for their P-bases. We will then show that compositions of linearized multivariate skew polynomials, seen as right linear maps, coincide with matrix products, and products of free multivariate skew polynomials can be mapped onto coordinate-wise compositions of linearized multivariate skew polynomials over pair-wise disjoint conjugacy classes (Section 4), which is hence equivalent to products of block-diagonal matrices. As a consequence, we deduce in Corollary 45 that quotients of free multivariate skew polynomial rings over the ideal of skew polynomials vanishing on a finite union of finitely generated conjugacy classes are semisimple rings [23, Definition (2.5)]. Moreover, they are simple rings [23, Definition (2.1)] in the case of a single conjugacy class (Corollary 44). We note that all of these results particularize to non-trivial results on the free conventional multivariate polynomial ring (where variables do not commute with each other but commute with constants) over an arbitrary division ring. The case where such division rings are fields (i.e., commutative) was extensively studied in [8], and the case of arbitrary division rings but where variables commute with each other was studied in [2].
However, our results in the general case are new to the best of our knowledge. The final two sections of this work constitute applications of the theory developed up to this point. In Subsection 5.2, we will define linearized multivariate Vandermonde matrices, connect them to skew multivariate Vandermonde matrices [31], and provide a simple explicit criterion to determine their ranks (Theorem 11 in Subsection 5.2) similar to that obtained by combining [22, Theorem 23] and [25, Theorem 4.5] in the univariate case. In Subsection 5.3, we introduce skew and linearized Reed-Muller codes, calculate their dimension and show a connection between their minimum skew and sum-rank distances (respectively) and their minimum Hamming distance. As we will show, skew Reed-Muller codes are similar but not exactly the same as those introduced in [12], and linearized Reed-Muller codes recover the version of Reed-Muller codes in [4] as the particular case of a single conjugacy class. Finally, in Section 6 we introduce the concept of P-Galois extensions of division rings, which generalize Galois extensions of fields. We then generalize to these P-Galois extensions of division rings three classical results in Galois theory [3]. In Subsection 6.2, we generalize Artin's Theorem [3, Theorem 14], which calculates the dimension of the larger field over its subfield. In Subsection 6.3, we generalize the Galois correspondence [3, Theorem 16]. Finally, in Subsection 6.4, we generalize Hilbert's Theorem 90 [3, Theorem 21].
Notation
For a set A and positive integers m and n, A^{m×n} will denote the set of m × n matrices over A, and A^n will denote the set of column vectors of length n over A. That is, A^n = A^{n×1}. Given another set B, we denote by B^A the set of all maps A −→ B. Unless otherwise stated, F will denote a division ring, that is, a commutative or non-commutative ring with identity such that every non-zero element has another non-zero element that is both its left and right inverse. A field is a commutative division ring, and Fq
denotes the finite field of size q, where q is a prime power. On a ring R, we will denote by (A) ⊆ R the left ideal generated by a set A ⊆ R, and on a left vector space V over F, we will denote by ⟨B⟩^L ⊆ V the left F-linear vector space generated by a set B ⊆ V. We use the simplified notations (F1, F2, ..., Fn) = ({F1, F2, ..., Fn}), for F1, F2, ..., Fn ∈ R, and ⟨G1, G2, ..., Gn⟩^L = ⟨{G1, G2, ..., Gn}⟩^L, for G1, G2, ..., Gn ∈ V. Similarly for right vector spaces, where we denote ⟨B⟩^R. We denote by dim^L_F and dim^R_F left and right dimensions over F. Rings are not assumed to be commutative, but all of them will be assumed to have multiplicative identity, and all ring morphisms map multiplicative identities to multiplicative identities.
2 Main definitions and the natural evaluation maps
In this section, we define linearized multivariate skew polynomials. We extend the notion of cen- tralizers, defined in [25, Equation (3.1)] for the univariate case (which was in turn an extension of the classical notion of centralizers of non-commutative division rings), and we show that lin- earized multivariate skew polynomials are right linear over the corresponding centralizer. Finally, we show that the natural evaluation on linearized multivariate skew polynomials corresponds to evaluation as proposed in [31, Definition 9], based on remainders of Euclidean divisions.
2.1 Skew polynomials and skew evaluation

We start by revisiting the concepts of free multivariate skew polynomials from [31, Section 2]. These are the building blocks for defining skew polynomial rings with relations [31, Section 6], which are simply quotient rings of the free ring. For brevity, we will usually drop the term multivariate.

Definition 1 (Free multivariate skew polynomial rings [31]). Given a ring morphism σ : F −→ F^{n×n}, we say that δ : F −→ F^n is a σ-derivation if it is additive and
δ(ab)= σ(a)δ(b)+ δ(a)b, for all a,b ∈ F. Let x1, x2,...,xn be n pair-wise distinct letters, which we will call variables, and denote by M the free (non-commutative) monoid on such letters, whose elements are called monomials and where 1 denotes the empty string. The free (multivariate) skew polynomial ring over F in the variables x1, x2,...,xn with morphism σ and derivation δ is the left vector space F[x; σ, δ] with left basis M and product given by appending monomials and the rule
xa = σ(a)x + δ(a), (1) for a ∈ F, which are called constants. Here, we denote
x = (x1, x2, ..., xn)^T ∈ M^n.

Therefore, (1) is a short form of the equations
x_i a = Σ_{j=1}^n σ_{i,j}(a) x_j + δ_i(a),    (2)
for i = 1, 2, ..., n, where σ_{i,j} and δ_i denote the component functions of σ and δ, respectively. Each element F ∈ F[x; σ, δ] is called a free multivariate skew polynomial, or simply skew polynomial, and can be uniquely written as

F = Σ_{m∈M} F_m m,    (3)

where F_m ∈ F, for m ∈ M, are called the coefficients of F, which are all zero except for a finite number of them. Define the degree of a monomial m ∈ M as its length as a string, and define the degree of a non-zero skew polynomial F ∈ F[x; σ, δ], denoted by deg(F), as the maximum degree of a monomial m ∈ M such that F_m ≠ 0. We also define deg(0) = −∞. Following Ore's line of thought [38], it was shown in [31, Theorem 1] that pairs (σ, δ) as in the previous definition correspond bijectively, via the rule (1), with products in the left vector space R with basis M that turn R into a ring with unit 1, where products of monomials consist of appending them and where

deg(F G) = deg(F) + deg(G),    (4)

for all F, G ∈ R. The notation R = F[x; σ, δ] emphasizes the ring structure on R given by the pair (σ, δ) via (1). If we denote by Id : F −→ F^{n×n} the ring morphism given by Id(a) = aI, for a ∈ F, where I ∈ F^{n×n} is the n × n identity matrix, then F[x; Id, 0] is the free conventional polynomial ring in the variables x1, x2, ..., xn (which do not commute with each other but commute with constants), as in [8, Section 0.11] and [23, Example (1.2)]. Note that this is the only skew polynomial ring where constants and variables commute. In other words, free multivariate skew polynomial rings are nothing but free multivariate polynomial rings where the commutativity axiom is dropped. Finally, an interesting point to make is that all of the results in this paper yield non-trivial results for free conventional polynomial rings over a division ring F by setting σ = Id and δ = 0. The free conventional polynomial ring with F being a field (i.e., commutative) was extensively studied in [8].
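To make Definition 1 concrete, here is a minimal Python sketch (an illustration of ours, not from the paper) of the free bivariate ring F_9[x1, x2; σ, 0] with F_9 = F_3[i]/(i^2 + 1) and the diagonal morphism σ = diag(σ1, σ2), where σ1 is the Frobenius β ↦ β^3 and σ2 = Id: products append monomials, and constants are pushed to the left through the rule (2).

```python
# Sketch: the free ring F_9[x1, x2; diag(frob, id), 0], with F_9 = F_3[i]/(i^2+1)
def g_add(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def g_mul(x, y):
    a, b = x; c, d = y
    return ((a * c + 2 * b * d) % 3, (a * d + b * c) % 3)
def frob(x): return (x[0], (2 * x[1]) % 3)
ZERO, ONE = (0, 0), (1, 0)
SIGMA = {'1': frob, '2': lambda x: x}          # sigma_1 = Frobenius, sigma_2 = Id

def push_left(m, c):
    """m * c = sigma_m(c) * m: move a constant left through the monomial m."""
    for ch in reversed(m):                     # the innermost variable acts first
        c = SIGMA[ch](c)
    return c

def skew_mul(F, G):
    """Product of skew polynomials given as dicts {monomial string: coefficient}."""
    out = {}
    for m1, c1 in F.items():
        for m2, c2 in G.items():
            m = m1 + m2                        # products of monomials: append them
            out[m] = g_add(out.get(m, ZERO), g_mul(c1, push_left(m1, c2)))
    return {m: c for m, c in out.items() if c != ZERO}
```

Here x1 and x2 do not commute with each other (the monomials "12" and "21" are distinct basis elements), while x1 a = σ1(a) x1 reproduces (2); degrees add under products since F_9 has no zero divisors.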
Multivariate polynomial rings over general division rings but where variables commute with each other were studied in [2]. As mentioned in [31, Section 2], the free multivariate skew polynomial ring F[x; σ, δ] can also be characterized by a universal property using the rule (1). We now state this property, leaving its proof to the reader.
Lemma 2. Let R be a left F-algebra such that there exist elements y1, y2, ..., yn ∈ R satisfying

y_i a = Σ_{j=1}^n σ_{i,j}(a) y_j + δ_i(a),

for i = 1, 2, ..., n, for all a ∈ F. Then there exists a unique left F-algebra morphism ϕ : F[x; σ, δ] −→ R such that ϕ(x_i) = y_i, for i = 1, 2, ..., n.

We conclude the subsection by giving a few examples of ring morphisms and derivations to show how we recover classical objects but also less classical ones.

Example 3 (Diagonal and triangular morphisms). A ring morphism σ : F −→ F^{n×n} satisfies that σ_{i,j}(a) = 0, for all a ∈ F and all i ≠ j if, and only if, there exist ring endomorphisms σ_i : F −→ F, for i = 1, 2, ..., n, such that
σ(a) = diag(σ_1(a), σ_2(a), ..., σ_n(a)),

for all a ∈ F. It is trivial to check that the σ-derivations in this case are precisely those such that δ_i is a σ_i-derivation, for i = 1, 2, ..., n. In this case, we say that σ is a diagonal morphism and we denote it by σ = diag(σ1, σ2, ..., σn). Similarly for triangular morphisms.

Example 4 (Similar morphisms and derivations). It is easy to see that, given an invertible matrix A ∈ F^{n×n} and a ring morphism σ : F −→ F^{n×n}, the similar or conjugate map τ = AσA^{−1} : F −→ F^{n×n}, given by τ(a) = Aσ(a)A^{−1} for a ∈ F, is also a ring morphism. Furthermore, its derivations are of the form Aδ : F −→ F^n, for a σ-derivation δ : F −→ F^n. We say that a ring morphism σ : F −→ F^{n×n} is diagonalizable (resp. triangulable) if it is similar to a diagonal (resp. triangular) ring morphism. It was shown in [30, Theorem 2] that all ring morphisms σ : F −→ F^{n×n} are diagonalizable if F is a finite field. However, this is far from the case in general, as we will next show.
Example 5 (Wild example I). Let p be a prime number and let F = Fp(z) be the field of rational functions over Fp. The ring morphism σ : Fp(z) −→ Fp(z)^{2×2} given by
σ(f(z)) = [[f(z), δ(f(z))], [0, f(z)]],

for f(z) ∈ Fp(z), where δ = d/dz is the usual standard derivation in Fp(z), is upper triangular but is not diagonalizable: Simply note that the subfield of Fp(z) fixed by σ is Fp(z^p), but there is no field endomorphism of Fp(z), other than the identity, leaving the elements in Fp(z^p) fixed.
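This morphism can be checked mechanically. In the Python sketch below (ours; it works with polynomial representatives and p = 5, while the example works over the full rational function field), multiplicativity of σ is exactly the Leibniz rule (fg)' = fg' + f'g, and σ(f) = f·I precisely when f' = 0, as happens for f = z^5 in characteristic 5.

```python
P = 5  # polynomials over F_5 as coefficient lists, index = degree

def padd(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(a + b) % P for a, b in zip(f, g)]

def pmul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def pderiv(f):
    """Standard derivation d/dz on F_5[z]."""
    return [(k * c) % P for k, c in enumerate(f)][1:] or [0]

def sigma(f):
    """sigma(f) = [[f, f'], [0, f]], stored as the pair (diagonal, top-right)."""
    return (f, pderiv(f))

def sigma_mul(s, t):
    """Product of two matrices of the shape [[f, u], [0, f]]."""
    (f, u), (g, v) = s, t
    return (pmul(f, g), padd(pmul(f, v), pmul(u, g)))
```

The check sigma_mul(sigma(f), sigma(g)) == sigma(pmul(f, g)) is precisely the Leibniz rule for δ.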
Example 6 (Wild example II). Let F = F4(z) and let γ ∈ F4^* be a primitive element (F4^* = {1, γ, γ^2} and γ^3 = 1). Define the matrix

Z = [[0, z], [γz, 0]] ∈ F4(z)^{2×2}.
Then Z is transcendental over F4, that is, there is no non-zero F ∈ F4[x] such that F(Z) = 0. Therefore F4[Z] ⊆ F4(z)^{2×2} is an integral domain. Moreover, any non-zero matrix in F4[Z] has a matrix inverse (that is, its inverse lies inside the ring F4(z)^{2×2}). Hence there exists a unique ring morphism σZ : F4(z) −→ F4(z)^{2×2} such that σZ(z) = Z. In general, it is given by
σZ(f(z)) = [[τ(f(z)), γ∂(f(z))], [γ^2 ∂(f(z)), τ(f(z))]] ∈ F4[z]^{2×2},

where τ(f(z)) ∈ F4[z] and ∂(f(z)) ∈ F4[z] are formed by the even and odd terms in f(γ^2 z) ∈ F4[z], respectively, for f(z) ∈ F4[z], and extended uniquely to F4(z). The subfield of elements in F4(z) fixed by σZ is F4(z^6). Therefore, σZ is neither diagonalizable nor triangulable, since there is no field endomorphism or derivation of F4(z), or a combination of both, whose subfield of fixed and/or annihilated elements is F4(z^6). To see this, note that any derivation δ of F4(z) is of the form δ = δ(z) d/dz, where δ(z) ∈ F4(z) and d/dz is the usual standard derivation. For δ(z) ≠ 0, the subfield of F4(z) of elements annihilated by δ is F4(z^2), but again, no field endomorphism of F4(z^2), other than the identity, leaves the elements in F4(z^6) fixed. Finally, it is worth showing how multiplication in F[x1, x2; σZ, 0] works. Letting k be a positive integer and setting a = z^{2k−1} and a = z^{2k} in (2) we have, respectively, that
x1 z^{2k−1} = γ^{k−1} z^{2k−1} x2   and   x2 z^{2k−1} = γ^k z^{2k−1} x1,

x1 z^{2k} = γ^k z^{2k} x1   and   x2 z^{2k} = γ^k z^{2k} x2.
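These relations can be verified mechanically: since Z^2 = γ z^2 I, we get σZ(z^{2k}) = Z^{2k} = γ^k z^{2k} I. A Python sketch (ours, not from the paper) representing F_4 = F_2[γ]/(γ^2 + γ + 1) and matrix entries as polynomials in z:

```python
# F_4 = F_2[g]/(g^2+g+1), elements stored as pairs (a, b) = a + b*g
def f4_add(x, y): return (x[0] ^ y[0], x[1] ^ y[1])
def f4_mul(x, y):
    a, b = x; c, d = y
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)  # g^2 = g + 1
F0, F1, G = (0, 0), (1, 0), (0, 1)                             # 0, 1, gamma

# matrix entries over F_4[z]: dicts {power of z: F_4 coefficient}
def e_add(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = f4_add(out.get(k, F0), c)
    return {k: c for k, c in out.items() if c != F0}

def e_mul(p, q):
    out = {}
    for k1, c1 in p.items():
        for k2, c2 in q.items():
            out[k1 + k2] = f4_add(out.get(k1 + k2, F0), f4_mul(c1, c2))
    return {k: c for k, c in out.items() if c != F0}

def m_mul(A, B):
    """Product of 2x2 matrices with entries in F_4[z]."""
    return [[e_add(e_mul(A[i][0], B[0][j]), e_mul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

Z = [[{}, {1: F1}], [{1: G}, {}]]        # Z = [[0, z], [g*z, 0]]
```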
Although Examples 5 and 6 may seem pathological, the field Fq(z), where q is a power of a prime number (typically a power of 2), appears naturally in engineering applications, such as convolutional error-correcting codes [9]. Similarly, algebraic extensions of Fq(z) form the basis of algebraic-geometry codes [13]. The idea behind free skew polynomials as in Definition 1 is that they admit a natural arithmetic evaluation map, which we will call skew evaluation and which is guaranteed by the uniqueness of remainders in Euclidean division. The following definition is [31, Definition 9] and is consistent due to [31, Lemma 5].
Definition 7 (Skew evaluation [31]). For a = (a1, a2, ..., an) ∈ F^n and a skew polynomial F ∈ F[x; σ, δ], we define its evaluation, denoted by F(a) = E^S_a(F), as the unique element F(a) ∈ F such that F − F(a) ∈ (x1 − a1, x2 − a2, ..., xn − an). Given a set Ω ⊆ F^n, we define the skew evaluation map over Ω as the left linear map

E^S_Ω : F[x; σ, δ] −→ F^Ω,

where f = E^S_Ω(F) ∈ F^Ω is given by f(a) = F(a), for all a ∈ Ω and all F ∈ F[x; σ, δ].

Note that the skew evaluation map depends on the pair (σ, δ). This will be the case with most of the objects defined from now on. However, we will not write such a dependency for brevity, unless it is necessary to avoid confusion. The main motivation behind free multivariate skew polynomials is that Definition 7 is consistent. In contrast, if variables are allowed to commute (as considered, for instance, in [2]), then such a definition is not consistent unless F is a field, σ = Id and δ = 0 (see [31, Remark 7]). Definition 7 would not always be consistent either for iterated skew polynomials (see [31, Remark 8] and [12, Example 3.7]).
2.2 Linearized polynomials and linearized evaluation

We now turn to linearized (multivariate) skew polynomials. The idea is to turn skew polynomials into linear maps by giving an alternative evaluation map. Remarkably, both evaluation maps are related by a simple formula involving the conjugacy relation (Theorem 1), which was proven in a more general form for the univariate case in [27, Theorem 2.8] (see also [29, Lemma 24]). However, one major difference arises in the multivariate case. On the one hand, skew polynomials are evaluated on an n-dimensional affine point over F. On the other hand, for each representative of a conjugacy class (see Definition 18), linearized skew polynomials are evaluated on an element of the division ring F.

Definition 8 (Linearized multivariate skew polynomials). Given a ring morphism σ : F −→ F^{n×n}, a σ-derivation δ : F −→ F^n, a point a ∈ F^n and a monomial m ∈ M, we define the operator

D^m_a : F −→ F

recursively on m ∈ M as follows. We start by defining D^1_a = Id. Next, if D^m_a is defined for m ∈ M, then we define
D^{xm}_a(β) = (D^{x1 m}_a(β), D^{x2 m}_a(β), ..., D^{xn m}_a(β))^T = σ(D^m_a(β)) a + δ(D^m_a(β)) ∈ F^n,

for all β ∈ F. For convenience, we denote by D_a : F −→ F^n the operator given by
D_a(β) = (D^{x1}_a(β), D^{x2}_a(β), ..., D^{xn}_a(β))^T = σ(β) a + δ(β) ∈ F^n,

for all β ∈ F. Hence, by definition, we have that

D^{xm}_a = D_a ∘ D^m_a,    (5)

for all m ∈ M and all a ∈ F^n. We then define the left vector space of linearized (multivariate skew) polynomials F[D_a] over F, with variables x1, x2, ..., xn, morphism σ, derivation δ and conjugacy representative a, as the left vector space generated by the set of operators D^M_a = {D^m_a | m ∈ M}, which need not be a basis nor be in a one-to-one correspondence with M. We define the left F-linear (surjective) map
φ_a : F[x; σ, δ] −→ F[D_a] : Σ_{m∈M} F_m m ↦ Σ_{m∈M} F_m D^m_a,    (6)

and we denote F^D = φ_a(F), for all F ∈ F[x; σ, δ], omitting the dependency on a for brevity.

Observe that classical (univariate) linearized polynomials over finite fields [28, Chapter 3] as considered originally by Moore [33] and Ore [37] are the elements in the ring F[D_a] from Definition 8 whenever F is a finite field, n = 1, δ = 0 and a = 1. See Example 14 below. From the definition itself, linearized polynomials admit a natural evaluation, which we will call linearized evaluation.
Definition 9 (Linearized evaluation). Given a ∈ F^n, for a linearized polynomial F^D = Σ_{m∈M} F_m D^m_a ∈ F[D_a], we define its evaluation over β ∈ F as

F^D(β) = Σ_{m∈M} F_m D^m_a(β) ∈ F.

Given a set Ω ⊆ F, we define the linearized evaluation map over Ω as the left linear map

E^L_Ω : F[D_a] −→ F^Ω,

where f = E^L_Ω(F^D) ∈ F^Ω is given by f(β) = F^D(β), for all β ∈ Ω and all F^D ∈ F[D_a].

As announced earlier, linearized polynomials are right linear over certain division subrings of F, called centralizers, which were defined in [25, Equation (3.1)] in the univariate case.

Definition 10 (Centralizers). Given a ∈ F^n, we define its centralizer as
Ka = {β ∈ F | Da(β)= aβ}⊆ F. The proof of the following lemma is straightforward.
Lemma 11. For all a ∈ F^n, it holds that K_a ⊆ F is a division subring of F. Moreover, for F^D ∈ F[D_a], the map β ↦ F^D(β), for β ∈ F, is right linear over K_a. That is, for all β, γ ∈ F and all λ ∈ K_a, it holds that

F^D(β + γ) = F^D(β) + F^D(γ)   and   F^D(βλ) = F^D(β)λ.
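For a concrete instance of Lemma 11, the following Python sketch (ours, not from the paper) takes F = F_9 = F_3[i]/(i^2 + 1), n = 1, σ the Frobenius, δ = 0 and a = 1; brute force then recovers the centralizer K_1 = F_3, and right linearity over K_1 can be checked exhaustively.

```python
# F_9 = F_3[i]/(i^2+1); K_a = {beta : frob(beta) * a = a * beta} for n = 1, delta = 0
def g_mul(x, y):
    a, b = x; c, d = y
    return ((a * c + 2 * b * d) % 3, (a * d + b * c) % 3)
def frob(x): return (x[0], (2 * x[1]) % 3)
F9 = [(a, b) for a in range(3) for b in range(3)]
ONE = (1, 0)

def D(beta, a=ONE):
    """D_a(beta) = sigma(beta) * a, since delta = 0."""
    return g_mul(frob(beta), a)

# centralizer of a = 1, found by exhaustive search
K = [b for b in F9 if D(b) == g_mul(ONE, b)]
```

Since frob fixes exactly the prime field, K equals F_3 = {0, 1, 2}, matching K_1 = F_q in the finite-field case.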
Thus, we have provided a type of evaluation that turns skew polynomials into linear maps over certain division subrings. Interestingly, centralizers are the largest division subrings over which linearized polynomials are right linear. We will use this result later, for instance to establish P-Galois correspondences (see Subsection 6.3), since it allows us to recover K_a from F[D_a].

Proposition 12. For all a ∈ F^n, it holds that
K_a = {λ ∈ F | F^D(βλ) = F^D(β)λ, ∀β ∈ F, ∀F^D ∈ F[D_a]}.
In other words, Ka is the largest division subring K of F such that every linearized polynomial in F[Da] is right linear over K.
Proof. The inclusion ⊆ is Lemma 11. For the reversed inclusion, choose β = 1 and F^D = D^{x_i}_a, for some i = 1, 2, ..., n. Then

D^{x_i}_a(λ) = D^{x_i}_a(1) λ = a_i λ.
Thus D_a(λ) = aλ and hence λ ∈ K_a by definition.

In the next subsection, we connect both types of evaluation. We conclude this subsection with a few examples.

Example 13 (Group algebras). Assume that G is a finite group (commutative or not) of ring automorphisms of F generated by σ1, σ2, ..., σn. Consider σ = diag(σ1, σ2, ..., σn) (see Example 3), δ = 0 and a = 1. Note that all elements in G are of the form D^m_1 = m(σ), where m ∈ M and m(σ) denotes the symbolic evaluation of m in (σ1, σ2, ..., σn) (for instance, m(σ) = σ1 σ2 if m = x1 x2). This is due to the fact that, since G is finite, there exists a positive integer m_i such that σ_i^{m_i} = Id (as the set {σ_i^m | m ∈ N} ⊆ G is finite), thus σ_i^{−1} = σ_i^{m_i − 1} = (x_i^{m_i − 1})(σ), for i = 1, 2, ..., n. Therefore, we have that F[D_1] = F[G] is the group algebra of G over F. Finally, observe that K_1 = F^G = {β ∈ F | τ(β) = β, ∀τ ∈ G} (see also Example 64).

Example 14 (Linearized polynomials over finite fields). Let assumptions and notation be as in Example 13 above, and further let F = F_{q^m}, for a positive integer m and a prime power q. Set n = 1 and define σ = σ1 by σ(β) = β^{q^r}, for all β ∈ F_{q^m}, for some integer r ≥ 1 coprime with m. In such a case, F[D_1] recovers the classical ring of linearized polynomials over a finite field [33, 37] [28, Chapter 3]:
F[D_1] ≅ L_{q^r} F_{q^m}[x] = {F_0 x + F_1 x^{q^r} + · · · + F_d x^{q^{rd}} ∈ F_{q^m}[x] | d ∈ N, F_0, F_1, ..., F_d ∈ F_{q^m}}.
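For instance, over F_9 = F_3[i]/(i^2 + 1) with q = 3 and r = 1, the linearized polynomial L(β) = F_0 β + F_1 β^3 is additive and right F_3-linear, but not F_9-linear once F_1 ≠ 0. A quick exhaustive check in Python (ours, not from the paper):

```python
# F_9 = F_3[i]/(i^2+1); L(beta) = F0*beta + F1*beta^3 with F0 = 1, F1 = i
def g_add(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def g_mul(x, y):
    a, b = x; c, d = y
    return ((a * c + 2 * b * d) % 3, (a * d + b * c) % 3)
def frob(x): return (x[0], (2 * x[1]) % 3)    # beta -> beta^3
F9 = [(a, b) for a in range(3) for b in range(3)]
F3 = [(0, 0), (1, 0), (2, 0)]                 # the centralizer K_1
F0c, F1c = (1, 0), (0, 1)                     # coefficients F_0 = 1, F_1 = i

def L(beta):
    """Classical linearized polynomial L(beta) = F0 * beta + F1 * beta^3."""
    return g_add(g_mul(F0c, beta), g_mul(F1c, frob(beta)))
```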
Example 15 (Algebra of derivations). Assume that δ1, δ2, ..., δn : F −→ F are standard derivations (i.e., Id-derivations) and let ∇ = {m(δ) | m ∈ M}, where m(δ) is a symbolic evaluation as in the previous example. Consider also σ = Id and a = 0. Similarly to the previous example, it holds that D^m_0 = m(δ), for all m ∈ M, thus F[D_0] = F[∇] is the algebra of derivatives ∇ over F. Finally, observe that K_0 = F^δ = {β ∈ F | ∂(β) = 0, ∀∂ ∈ ∇} (see also Example 65).
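As a quick sanity check of this example (ours, assuming p = 5 and polynomial representatives), the standard derivation d/dz annihilates exactly the powers z^k with 5 | k, which is why the constants of d/dz on F_5(z) form F_5(z^5):

```python
P = 5  # polynomials over F_5 as coefficient lists, index = degree

def pderiv(f):
    """Standard derivation d/dz on F_5[z]."""
    return [(k * c) % P for k, c in enumerate(f)][1:] or [0]

def z_pow(k):
    """The monomial z^k as a coefficient list."""
    return [0] * k + [1]
```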
Example 16 (The case of σi-derivations). The previous example could be trivially extended to the case where δi is a σi-derivation, for i =1, 2,...,n.
2.3 Connecting both evaluations

We now give the main connection between skew evaluation and linearized evaluation. This result extends the last part of [27, Theorem 2.8] from the univariate to the multivariate case. See also
[29, Lemma 24]. It is worth giving a meaningful proof of the connection between both evaluations, which requires the concepts of norm and conjugacy, which we will use again later in the paper. The following formula for the skew evaluation of monomials was given in [31, Theorem 2] and motivates the definition of multivariate norms.

Lemma 17 (Multivariate norms [31]). Given a monomial m ∈ M and a point a ∈ F^n, denote by N_m(a) = E^S_a(m) ∈ F the evaluation of the skew monomial m at a. It holds that
N_{xm}(a) = (N_{x1 m}(a), N_{x2 m}(a), ..., N_{xn m}(a))^T = σ(N_m(a)) a + δ(N_m(a)) ∈ F^n,    (7)

or in other words, it holds that

N_{xm}(a) = D_a(N_m(a)).    (8)
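A minimal Python sketch of the recursion (7) (ours, not from the paper), with F = F_9 = F_3[i]/(i^2 + 1), n = 2, σ = diag(frob, Id) and δ = 0: the norm of a monomial is built letter by letter via N_{x_i m}(a) = σ_i(N_m(a)) a_i, and distinct orderings of the same letters give distinct norms, so skew evaluation separates non-commuting monomials.

```python
# F_9 = F_3[i]/(i^2+1); norms N_m(a) for monomials m over {x1, x2},
# with sigma = diag(frob, id) and delta = 0
def g_mul(x, y):
    a, b = x; c, d = y
    return ((a * c + 2 * b * d) % 3, (a * d + b * c) % 3)
def frob(x): return (x[0], (2 * x[1]) % 3)
ONE = (1, 0)
SIGMA = [frob, lambda x: x]

def norm(m, a):
    """N_m(a) via N_{x_i m}(a) = sigma_i(N_m(a)) * a_i; m is a string over '12'."""
    v = ONE                         # N_1(a) = 1 for the empty monomial
    for ch in reversed(m):          # the recursion prepends letters
        j = int(ch) - 1
        v = g_mul(SIGMA[j](v), a[j])
    return v
```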
By choosing n = 1 and δ = 0, the previous maps N_m recover the concept of norm (or "truncated norm"). For this reason, we will call N_m(a) the mth (multivariate) norm of a. We next revisit the concept of conjugacy. The following definition is [31, Definition 11].

Definition 18 (Conjugacy [31]). Given a ∈ F^n and β ∈ F^*, we define the conjugate of a with respect to β, which is called the exponent, as

a^β = D_a(β) β^{−1} ∈ F^n.    (9)

We give the exponential notation a^β for simplicity and for consistency with previous notation (see [22, 24, 25, 26, 31]). Recall from [31, Lemma 12] that conjugacy defines an equivalence relation in F^n, thus a partition of F^n into conjugacy classes, which will be denoted by

C(a) = {D_a(β) β^{−1} | β ∈ F^*} ⊆ F^n,    (10)

for a ∈ F^n.

Linearized polynomials and centralizers over conjugate points can be connected easily as follows. The proof is straightforward.

Lemma 19. Let a, b ∈ F^n and γ ∈ F^* be such that b = a^γ. Then it holds that

D_b(β) γ = D_a(βγ)   and   K_b = γ K_a γ^{−1},

for all β ∈ F. In particular, if F is commutative, then K_b = K_a.

We may now prove the connection between linearized and skew evaluations. This result can be seen as an explicit linearization of the map β ↦ N_m(a^β) that maps an exponent β to the mth norm of the conjugate a^β of a.

Theorem 1. Given a ∈ F^n, β ∈ F^* and F ∈ F[x; σ, δ], and denoting D = D_a, it holds that

F(D(β) β^{−1}) = F^D(β) β^{−1}.

In particular, for all monomials m ∈ M, we have that

N_m(D(β) β^{−1}) = D^m(β) β^{−1}.    (11)

Proof. By linearity, we only need to prove (11), for all m ∈ M. The case m = 1 is trivial. Assume now that it is true for a given m ∈ M. Combining Equations (5) and (8) with Lemma 19, and denoting b = a^β = D_a(β) β^{−1}, we have that

N_{xm}(b) = D_b(N_m(b)) = D_a(N_m(b) β) β^{−1} = D_a(D^m_a(β)) β^{−1} = D^{xm}_a(β) β^{−1},

and we are done.
3 Linearizing sets of roots and Lagrange interpolation
The structure of sets of roots plays a central role in the study of conventional polynomials. In particular, Lagrange interpolation behaves well when the evaluation points form a "basis" of some set of roots, meaning that they can be differentiated by taking "independent" values on different polynomials. This is also true for skew polynomials [31, Theorem 4] and leads to the concepts of P-closed sets, P-independence and P-bases, where P stands for polynomial. Such concepts were introduced by Lam and Leroy in [22, 24, 25] for n = 1, and in [31] for the multivariate case. By looking at Theorem 1, we see that after fixing a conjugacy representative, the set of roots of a skew polynomial in that conjugacy class corresponds to a certain right vector subspace of F. Furthermore, by Lemma 19, there is a simple way of changing the conjugacy representative. This suggests a linear structure of sets of roots on each conjugacy class separately. In this section, we will give such a linearized structure of the sets of roots of skew polynomials and linearized polynomials. In Subsection 3.1, we revisit the concepts of P-closed sets, P-independence, P-bases and skew Lagrange interpolation from [31]. In Subsection 3.2, we show that P-independence in one conjugacy class corresponds to right linear independence over the corresponding centralizer. In Subsection 3.3, we show that, in general, P-independent sets correspond simply to disjoint unions of right linearly independent elements over the different centralizers. We will also give descriptions in terms of lattices. The results in Subsections 3.2 and 3.3 extend the important results [25, Theorem 4.5] and [22, Theorem 23], respectively.
3.1 P-closed sets and skew Lagrange interpolation

We revisit the concepts of P-closedness and skew Lagrange interpolation from [31], all of which were previously introduced in [22, 24, 25] for n = 1. As in classical algebraic geometry, given a set A ⊆ F[x; σ, δ], we define its set of roots, or zero set for brevity, as

Z(A) = {a ∈ F^n | F(a) = 0, ∀F ∈ A}.

Conversely, given a set Ω ⊆ F^n, we define its associated ideal as

I(Ω) = {F ∈ F[x; σ, δ] | F(a) = 0, ∀a ∈ Ω},

which is a left ideal of F[x; σ, δ] by Definition 7. The following definition is [31, Definition 16].

Definition 20 (P-closed sets [31]). Given a subset Ω ⊆ F^n, we define its P-closure as Ω̄ = Z(I(Ω)), and we say that Ω is P-closed if Ω̄ = Ω.

P-closed sets form all sets of roots of skew polynomials [31, Proposition 15, Item 8]. Furthermore, the P-closure of a set Ω ⊆ F^n is the smallest P-closed set in F^n containing Ω [31, Lemma 17]. This naturally leads to the following concepts, given in [31, Definitions 22, 23 & 24], respectively.

Definition 21 (P-generators [31]). Given a P-closed set Ω ⊆ F^n, we say that G ⊆ Ω generates Ω if Ḡ = Ω, and G is then called a set of P-generators for Ω. We say that Ω is finitely generated if it has a finite set of P-generators.

Definition 22 (P-independence [31]). We say that a ∈ F^n is P-independent from Ω ⊆ F^n if it does not belong to Ω̄. A set Ω ⊆ F^n is called P-independent if every a ∈ Ω is P-independent from Ω \ {a}. P-dependent means not P-independent.
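Definition 20 can be brute-forced in the classical setting n = 1, δ = 0, F = GF(8), σ = Frobenius, where the skew evaluation of F = Σ_i f_i x^i at b is F(b) = Σ_i f_i N_i(b) with N_i(b) = b^(2^i − 1). The sketch below (our own model and helper names; the expected closure is the one predicted later by Theorem 4) computes the P-closure of Ω = {1, z}, encoded as the integers {1, 2}:

```python
# Brute-force P-closure (Definition 20) in the classical case over GF(8).
from itertools import product

MOD = 0b1011  # GF(8) = GF(2)[z]/(z^3 + z + 1); ints 0..7, addition = XOR

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def gpow(a, k):
    r = 1
    for _ in range(k):
        r = gmul(r, a)
    return r

def skew_eval(fs, b):
    """Skew evaluation F(b) = sum_i f_i * N_i(b), with N_i(b) = b^(2^i - 1)."""
    r = 0
    for i, f in enumerate(fs):
        r ^= gmul(f, gpow(b, 2 ** i - 1))
    return r

omega = {1, 2}
# All skew polynomials of degree < 4 vanishing on omega (degree 4 > any rank
# occurring here, so this truncation already determines the closure).
ideal = [fs for fs in product(range(8), repeat=4)
         if all(skew_eval(fs, b) == 0 for b in omega)]
closure = {b for b in range(8) if all(skew_eval(fs, b) == 0 for fs in ideal)}
assert closure == {1, 2, 3}  # nonzero part of the GF(2)-span of {1, z}
```

With q = 2 one has 1^β = β, so all nonzero points are conjugate to 1, and the computed closure {1, 2, 3} is exactly the nonzero part of the GF(2)-span of the exponents, as the linearized description of Section 3.2 predicts.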
Definition 23 (P-bases [31]). Given a P-closed set Ω ⊆ F^n, we say that a subset B ⊆ Ω is a P-basis of Ω if it is P-independent and a set of P-generators of Ω.

P-bases are minimal sets of P-generators of a P-closed set [31, Proposition 25] and, for a finitely generated P-closed set, they also correspond to its maximal P-independent subsets [31, Lemma 36]. Another important result is the following, which combines [31, Corollary 26] and [31, Corollary 32].

Lemma 24 ([31]). If a P-closed set is finitely generated, then it admits a finite P-basis, and any two of its P-bases are finite and have the same number of elements.

Thus the following definition [31, Definition 33] is consistent.
Definition 25 (Ranks [31]). Given a finitely generated P-closed set Ω ⊆ Fn, we define its rank, denoted by Rk(Ω), as the size of any of its P-bases. Moreover, P-closed subsets of finitely generated P-closed sets are in turn finitely generated [31, Corollary 37].
Lemma 26 ([31]). Let Ψ ⊆ Ω ⊆ Fn be P-closed sets. If Ω is finitely generated, then so is Ψ. The main feature of P-bases of finitely generated P-closed sets is the following result on the existence and uniqueness of skew Lagrange interpolating polynomials, which is [31, Theorem 4].
Theorem 2 (Skew Lagrange interpolation [31]). Let Ω ⊆ Fn be a finitely generated P-closed set with finite P-basis B = {b1, b2,..., bM }, where M = Rk(Ω). The following hold:
1. If E_B^S(F) = E_B^S(G), then E_Ω^S(F) = E_Ω^S(G), for all F, G ∈ F[x; σ, δ].
2. For every a_1, a_2, ..., a_M ∈ F, there exists F ∈ F[x; σ, δ] such that deg(F) < M and F(b_i) = a_i, for i = 1, 2, ..., M.

Definition 27 (Dual P-bases [31]). Given a finite P-basis B = {b_1, b_2, ..., b_M} of a P-closed set Ω ⊆ F^n, we say that a set of skew polynomials

B^* = {F_1, F_2, ..., F_M} ⊆ F[x; σ, δ]

is a dual P-basis of B if F_i(b_j) = δ_{i,j}, where δ_{i,j} denotes the Kronecker delta, for all i, j = 1, 2, ..., M.

Thus the following is an immediate consequence of skew Lagrange interpolation, and was given in [31, Corollary 31].

Corollary 28 ([31]). Any finite P-basis, with M = Rk(Ω) elements, of a P-closed set Ω admits a dual P-basis consisting of M skew polynomials of degree less than M. Moreover, any two dual P-bases of the same P-basis define the same skew polynomial functions over Ω.

Another important consequence that we will use throughout the paper is the following left vector space isomorphism.

Corollary 29 ([31]). If {F_1, F_2, ..., F_M} is a dual P-basis of a finitely generated P-closed set Ω ⊆ F^n, then the natural projection map restricts to a left F-linear vector space isomorphism

⟨F_1, F_2, ..., F_M⟩^L_F ≅ F[x; σ, δ]/I(Ω).

Moreover, F_1, F_2, ..., F_M are left linearly independent over F, hence

dim^L_F(F[x; σ, δ]/I(Ω)) = Rk(Ω).

Finally, a powerful tool to relate conjugate points will be the so-called product rule, given first in [25, Theorem 2.7] in the univariate case, and in [31, Theorem 3] in general.

Theorem 3 (Product rule [31]). Given skew polynomials F, G ∈ F[x; σ, δ] and a ∈ F^n, if G(a) = 0, then (FG)(a) = 0, and if β = G(a) ≠ 0, then

(FG)(a) = F(a^β)G(a).

3.2 Linearizing P-closed sets in one conjugacy class

In this subsection, we will give a linearized description of finitely generated P-closed sets that are generated by a subset of a single conjugacy class. As we will see, such finitely generated P-closed sets correspond to right linear subspaces of F over the corresponding centralizer. The results in this section extend [25, Theorem 4.5] from the univariate to the multivariate case.
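In the classical model this centralizer is easy to compute explicitly: for F = GF(8), σ = Frobenius, δ = 0 and n = 1, Definition 10's centralizer K_a = {β ∈ F : D_a(β) = aβ} equals GF(2) for every nonzero a, so "right linear independence over K_a" in what follows is plain GF(2)-linear independence. A small check (our own model and helper names):

```python
# Centralizers in the classical case: F = GF(8), sigma = Frobenius, delta = 0,
# so D_a(beta) = sigma(beta) * a and K_a = {beta : sigma(beta)*a == a*beta}.
MOD = 0b1011  # GF(8) = GF(2)[z]/(z^3 + z + 1); ints 0..7, addition = XOR

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def frob(a):
    return gmul(a, a)  # sigma(a) = a^2

def centralizer(a):
    return {b for b in range(8) if gmul(frob(b), a) == gmul(a, b)}

for a in range(1, 8):
    assert centralizer(a) == {0, 1}       # K_a = GF(2) for every a != 0
assert centralizer(0) == set(range(8))     # D_0 = 0, so K_0 is all of F here
```

This is the standard picture behind Lemma 30 below: over a finite field, P-independence inside one conjugacy class becomes linear independence over the fixed field of σ.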
The first important ingredient is the following equivalence between P-independence and right linear independence over a single conjugacy class.

Lemma 30. Let a, b_1, b_2, ..., b_M ∈ F^n and β_1, β_2, ..., β_M ∈ F^* be such that

b_i = a^{β_i} = D_a(β_i)β_i^{-1},

for i = 1, 2, ..., M. It holds that B = {b_1, b_2, ..., b_M} is P-independent if, and only if, B^D = {β_1, β_2, ..., β_M} is right linearly independent over K_a.

Proof. We first prove the direct implication, which is significantly easier. Assume that B is P-independent, but B^D is not right linearly independent over K_a. Let B^* = {F_1, F_2, ..., F_M} ⊆ F[x; σ, δ] be a dual P-basis of B (Definition 27), which exists by Corollary 28. We may assume without loss of generality that there exist λ_1, λ_2, ..., λ_{M-1} ∈ K_a such that

β_M = Σ_{i=1}^{M-1} β_i λ_i.

Therefore, by Lemma 11 and Theorem 1, denoting D = D_a, it holds that

β_M = F_M^D(β_M) = Σ_{i=1}^{M-1} F_M^D(β_i) λ_i = 0,

which is absurd since β_M ∈ F^* by hypothesis.

Conversely, assume that B^D is right linearly independent over K_a. We will prove by induction on M that B is P-independent. The case M = 1 is obvious since singleton sets are always P-independent. Assume then that B′ = {b_1, b_2, ..., b_{M-1}} is P-independent but B is not P-independent. Then b_M ∈ B̄′, since otherwise B would be P-independent by [31, Lemma 36]. Let B′^* = {F_1, F_2, ..., F_{M-1}} be a dual P-basis of B′ (Corollary 28). Fix i = 1, 2, ..., M − 1 and define G_i = (x − b_i^{β_M})(β_M F_i) ∈ F[x; σ, δ]^n. It holds that G_i(b_j) = 0, for j = 1, 2, ..., M − 1, by Theorem 3. We deduce from b_M ∈ B̄′ and Theorem 2 that G_i(b_M) = 0. If F_i(b_M) ≠ 0, then

0 = G_i(b_M) = (b_M^{β_M F_i(b_M)} − b_i^{β_M}) β_M F_i(b_M) = D_{b_M}(β_M F_i(b_M)) − D_{b_i}(β_M) F_i(b_M),

by Theorem 3. By Lemma 19, we deduce that

0 = D_a(β_M F_i(b_M) β_M) β_M^{-1} F_i(b_M)^{-1} β_M^{-1} − D_a(β_M β_i) β_i^{-1} β_M^{-1}.
(12)

Using the notation in Definition 18, a straightforward calculation (see [31, Lemma 12, Item 1]) shows that (12) is equivalent to

a^{β_M F_i(b_M) β_M} = a^{β_M β_i} ⟺ a^{β_i^{-1} F_i(b_M) β_M} = a,

which means that λ_i = β_i^{-1} F_i(b_M) β_M ∈ K_a, by Definition 10. Hence in all cases (F_i(b_M) = 0 or F_i(b_M) ≠ 0) we have that

F_i(b_M) = β_i λ_i β_M^{-1},

for some λ_i ∈ K_a. Next, if F = F_1 + F_2 + ··· + F_{M-1} ∈ F[x; σ, δ], we have by Definition 27 that F(b_j) = 1, for j = 1, 2, ..., M − 1. Since b_M ∈ B̄′, we deduce from Theorem 2 that F(b_M) = 1. Hence

1 = F(b_M) = Σ_{i=1}^{M-1} F_i(b_M) = Σ_{i=1}^{M-1} β_i λ_i β_M^{-1} ⟺ β_M = Σ_{i=1}^{M-1} β_i λ_i.

That is, β_M is right linearly dependent on β_1, β_2, ..., β_{M-1} over K_a, which is a contradiction.

The second important ingredient is to ensure that P-closed sets generated by a finite set inside a single conjugacy class remain contained in such a conjugacy class.

Lemma 31. Let G ⊆ F^n be a finite set. If b ∈ Ḡ, then b is conjugate to an element in G.

Proof. Let B = {b_1, b_2, ..., b_M} ⊆ G be a P-basis of Ḡ, which exists by [31, Proposition 25], and let {F_1, F_2, ..., F_M} be a dual P-basis of B (Corollary 28). There exists i = 1, 2, ..., M such that F_i(b) ≠ 0, since otherwise we would deduce from Corollary 29 that F(b) = 0, for all F ∈ F[x; σ, δ], which is absurd. However, if G_i = (x − b_i)F_i ∈ F[x; σ, δ]^n, then G_i(b_j) = 0, for all j = 1, 2, ..., M, by Theorem 3. Since b ∈ B̄, we deduce from Theorem 2 and Theorem 3 that

0 = G_i(b) = (b^{F_i(b)} − b_i) F_i(b).

Therefore b is conjugate to b_i ∈ G and we are done.

We may now give the main result of this section, which gives a linearized description of P-closed sets generated by a finite subset of a single conjugacy class.

Theorem 4. Let a ∈ F^n. The following hold:

1. If G ⊆ C(a) is finite and Ω = Ḡ ⊆ F^n, then

Ω = {a^β | β ∈ Ω^D \ {0}} ⊆ C(a), (13)

for a finite-dimensional right vector space Ω^D ⊆ F over K_a.

2. Conversely, if Ω^D ⊆ F is a finite-dimensional right vector space over K_a, then Ω ⊆ C(a) given as in (13) is a finitely generated P-closed set.
Moreover, if Item 1 or 2 holds, then B^D is a right basis of Ω^D over K_a if, and only if, B = {a^β | β ∈ B^D} is a P-basis of Ω. In particular, we have that

Rk(Ω) = dim^R_{K_a}(Ω^D). (14)

Thus we deduce that the map Ω ↦ Ω^D is a bijection between finitely generated P-closed subsets of C(a) and finite-dimensional right vector subspaces of F over K_a.

Proof. Assume first the hypotheses in Item 1, and let B = {b_1, b_2, ..., b_M} ⊆ G be a minimal set of P-generators of Ω, hence a P-basis of Ω by [31, Proposition 25]. Let β_i ∈ F^* be such that b_i = a^{β_i}, for i = 1, 2, ..., M, which exist since G ⊆ C(a), and define

Ω^D = ⟨β_1, β_2, ..., β_M⟩^R_{K_a} ⊆ F.

First, we have that Ω ⊆ C(a) by Lemma 31. Now, the equality in (13) follows directly from the equivalence between P-independence and right linear independence inside the conjugacy class C(a) by Lemma 30 (analogously to the paragraph below), and Item 1 is proven.

Assume now the hypotheses in Item 2. Let B^D = {β_1, β_2, ..., β_M} be a right basis of Ω^D over K_a, and define b_i = a^{β_i} ∈ Ω, for i = 1, 2, ..., M. We will prove that Ω = B̄, for B = {b_1, b_2, ..., b_M}. Let b = a^β ∈ Ω, for some β ∈ Ω^D. By Lemma 30, we have that {b} ∪ B is P-dependent. Thus by [31, Lemma 36] we have that b ∈ B̄, and we conclude that Ω ⊆ B̄. Conversely, by Lemma 31, if b ∈ B̄, then b = a^β, for some β ∈ F^*. Again by Lemma 30, we have that β ∈ Ω^D, and we conclude that B̄ ⊆ Ω, and Item 2 is proven.

Finally, the claim on P-bases and right bases follows from Lemma 30 (analogously to the rest of this proof, see also [29, Corollary 27]), and we are done.

We may deduce the following important consequence. As we will show in Subsection 6.4, this consequence is a generalization of Hilbert's Theorem 90.

Corollary 32. Let a ∈ F^n. The conjugacy class C(a) ⊆ F^n is P-closed and finitely generated if, and only if, F is a finite-dimensional right vector space over K_a. In such a case, we have that

Rk(C(a)) = dim^R_{K_a}(F).

Proof.
Assume that C(a) ⊆ F^n is a finitely generated P-closed set, and let Ω^D be as in Item 1 in Theorem 4, for Ω = C(a). Let B^D be a finite right basis of Ω^D. If β ∈ F^*, we have that a^β ∈ Ω, thus β is right linearly dependent on B^D by Lemma 30. Hence F = Ω^D, and F has finite right dimension over K_a. The converse is trivial from Item 2 in Theorem 4. The last equality in the corollary follows directly from (14).

We may also deduce the following important consequence. As we will show in Subsection 6.2, this consequence is a generalization of Artin's Theorem used to prove Galois' Theorem. We will extend it to arbitrary finitely generated P-closed sets in Theorems 7 and 9, where we also consider the ring arithmetic of linearized polynomials.

Corollary 33. For all a ∈ F^n, it holds that the map in (6) restricts to a left F-linear vector space isomorphism

φ_a : F[x; σ, δ]/I(C(a)) −→ F[D_a]. (15)

In particular, F[D_a] is a finite-dimensional left vector space over F if, and only if, F is a finite-dimensional right vector space over K_a, and in such a case, we conclude that

dim^L_F(F[D_a]) = dim^L_F(F[x; σ, δ]/I(C(a))) = Rk(C(a)) = dim^R_{K_a}(F).

Proof. The fact that φ_a restricts to a left vector space isomorphism as in (15) follows from Theorem 1 and the definitions. In particular, if F has finite right dimension over K_a, then F[D_a] has finite left dimension over F by Theorem 4 and Corollary 29. The equalities at the end of the corollary then follow from Corollary 29, Theorem 4 and (15).

Conversely, assume that F has infinite right dimension over K_a. Assume also that the left dimension of F[x; σ, δ]/I(C(a)) over F is finite. We will now reach a contradiction. By Theorem 4, there exists a finitely generated P-closed set Ω ⊆ C(a) such that Rk(Ω) is strictly larger than the left dimension of F[x; σ, δ]/I(C(a)) over F.
However, this contradicts the fact that the natural left linear projection map

F[x; σ, δ]/I(C(a)) −→ F[x; σ, δ]/I(Ω)

is surjective and Rk(Ω) = dim^L_F(F[x; σ, δ]/I(Ω)) by Corollary 29, and we are done.

We may also deduce that the set of finitely generated P-closed subsets of a conjugacy class forms a lattice that is naturally isomorphic to the lattice of right projective subspaces of P^R_{K_a}(F) over K_a.

Corollary 34. Let a ∈ F^n, and define the sum of two finitely generated P-closed sets Ω_1, Ω_2 ⊆ C(a) as the P-closure Ω_1 + Ω_2 of Ω_1 ∪ Ω_2 in C(a). The collection of finitely generated P-closed subsets of C(a) forms a lattice with sums and intersections isomorphic to the lattice of right projective subspaces of P^R_{K_a}(F) over K_a, via the bijection

π_a : P^R_{K_a}(F) −→ C(a), [β] ↦ a^β,

where [β] = {βλ ∈ F^* | λ ∈ K_a^*} ∈ P^R_{K_a}(F). For any finitely generated P-closed subset Ω ⊆ C(a) and the finite-dimensional right vector space Ω^D ⊆ F over K_a as in Theorem 4, the bijection π_a restricts to a bijection

π_Ω : P^R_{K_a}(Ω^D) −→ Ω, [β] ↦ a^β,

that induces a lattice isomorphism with respect to the same operations.

3.3 Linearizing P-closed sets over several conjugacy classes

From the previous subsection (Theorem 4) and Lemma 26, we know that if a ∈ F^n and Ω ⊆ F^n is a finitely generated P-closed set, then Ω ∩ C(a) is a finitely generated P-closed set corresponding to a finite-dimensional right vector space over K_a. In this section, we show that every P-basis of Ω has the same partition into conjugacy classes, thus Rk(Ω) is the sum of Rk(Ω ∩ C(a)), running over disjoint conjugacy classes. This extends the result [22, Theorem 23] from the univariate to the multivariate case. In particular, we will show in Corollary 36 that the lattice of finitely generated P-closed subsets of a finite union of conjugacy classes is isomorphic to the Cartesian product of the lattices of projective spaces over the corresponding centralizers. We start with the following lemma.

Lemma 35.
Let B_1, B_2 ⊆ F^n be non-empty finite P-independent sets such that no element in B_1 is conjugate to an element in B_2. Then B = B_1 ∪ B_2 is P-independent.

Proof. Let B_1 = {b_1, b_2, ..., b_M} and B_2 = {c_1, c_2, ..., c_N}, where M, N > 0. We will prove the result by induction on k = M + N. The case k = 2 (M = N = 1) is trivial, since any set of two elements is P-independent. Assume that the lemma holds for a certain k = M + N, and we prove it for k + 1 = M + N + 1, where we may assume without loss of generality that N + 1 = #B_2. If the result does not hold for k + 1, we may assume that c_{N+1} ∈ B̄, where B = {b_1, b_2, ..., b_M, c_1, c_2, ..., c_N}. Since B is P-independent by the induction hypothesis, we may take one of its dual P-bases {F_1, F_2, ..., F_M, G_1, G_2, ..., G_N}. Also by hypothesis, we may take a dual P-basis {H_1, H_2, ..., H_{N+1}} of B_2.

First we prove that F_i(c_{N+1}) = 0, for all i = 1, 2, ..., M. Assume that it does not hold for a certain i. It holds that G_i = (x − b_i)F_i ∈ I(B)^n, and since c_{N+1} ∈ B̄, then

0 = G_i(c_{N+1}) = (c_{N+1}^{F_i(c_{N+1})} − b_i) F_i(c_{N+1}),

hence c_{N+1} and b_i are conjugate, which is a contradiction. Next define

F = H_{N+1} − Σ_{i=1}^{M} H_{N+1}(b_i) F_i.

It holds that F(b_i) = F(c_j) = 0, for all i = 1, 2, ..., M and all j = 1, 2, ..., N. That is, F ∈ I(B), and since c_{N+1} ∈ B̄, we have that F(c_{N+1}) = 0. In other words,

0 = F(c_{N+1}) = H_{N+1}(c_{N+1}) − Σ_{i=1}^{M} H_{N+1}(b_i) F_i(c_{N+1}) = 1 − 0,

which is absurd, and we are done.

We may now state and prove the main result of this section:

Theorem 5. If Ω ⊆ F^n is P-closed and finitely generated, then so is Ω ∩ C(a), for all a ∈ F^n. Conversely, if the sets Ω_i ⊆ C(a_i) are P-closed and finitely generated, for i = 1, 2, ..., ℓ, where a_1, a_2, ..., a_ℓ ∈ F^n are pair-wise non-conjugate, then Ω = Ω_1 ∪ Ω_2 ∪ ... ∪ Ω_ℓ is P-closed and finitely generated. In addition, if B_i is a P-basis of Ω_i, for i = 1, 2, ..., ℓ, then B = B_1 ∪ B_2 ∪ ... ∪ B_ℓ is a P-basis of Ω, and in particular we have that

Rk(Ω) = Rk(Ω_1) + Rk(Ω_2) + ··· + Rk(Ω_ℓ).

Proof. The first sentence follows from Theorem 4 and Lemma 26.
Now let Ω_i ⊆ C(a_i) be P-closed sets with finite P-bases B_i, for i = 1, 2, ..., ℓ, as in the theorem, and define Ω = Ω_1 ∪ Ω_2 ∪ ... ∪ Ω_ℓ. First, B = B_1 ∪ B_2 ∪ ... ∪ B_ℓ is P-independent by Lemma 35, hence we are done if we prove that Ω = B̄. Since the inclusion Ω ⊆ B̄ is obvious, we only need to prove the reversed one.

Let b ∈ B̄. By Lemma 31, there exists j = 1, 2, ..., ℓ such that b is conjugate to an element in B_j. Define B′ = ∪_{i≠j} B_i = {b_1, b_2, ..., b_M}, and let F_1, F_2, ..., F_M be the skew polynomials dual to b_1, b_2, ..., b_M in a dual P-basis of B, so that F_i(c) = 0 for all c ∈ B_j. Let F ∈ I(B_j) and define

G = F − Σ_{i=1}^{M} F(b_i) F_i.

It holds that F_i(c) = 0, for all c ∈ B_j, and therefore G ∈ I(B). Thus G(b) = 0. However, since b is not conjugate to any element in B′, it must hold that F_i(b) = 0, for all i = 1, 2, ..., M, by the proof of Lemma 31. Hence F(b) = 0 and b ∈ B̄_j = Ω_j ⊆ Ω.

We conclude by giving a lattice representation of finitely generated P-closed sets over several conjugacy classes, which follows directly from Theorem 5.

Corollary 36. Let a_1, a_2, ..., a_ℓ ∈ F^n be pair-wise non-conjugate. Then the lattice of finitely generated P-closed subsets of C(a_1) ∪ C(a_2) ∪ ... ∪ C(a_ℓ) is isomorphic to the Cartesian-product lattice

P^R_{K_{a_1}}(F) × P^R_{K_{a_2}}(F) × ··· × P^R_{K_{a_ℓ}}(F),

via the map

Ω = Ω_1 ∪ Ω_2 ∪ ... ∪ Ω_ℓ ↦ (Ω_1^D, Ω_2^D, ..., Ω_ℓ^D),

where Ω_i = Ω ∩ C(a_i), for i = 1, 2, ..., ℓ.

3.4 Linearizing Lagrange interpolation

In this short subsection, we rewrite Theorem 2 using linearized polynomials.

Theorem 6 (Linearized Lagrange interpolation). Let Ω ⊆ F^n be a finitely generated P-closed set, define Ω_i = Ω ∩ C(a_i) and let B_i^D = {β_1^{(i)}, β_2^{(i)}, ..., β_{M_i}^{(i)}} be a right basis of Ω_i^D (with notation as in Theorem 4), for i = 1, 2, ..., ℓ. Then, for all a_j^{(i)} ∈ F, for j = 1, 2, ..., M_i and for i = 1, 2, ..., ℓ, there exists F ∈ F[x; σ, δ] such that deg(F) < Rk(Ω) and F^{D_{a_i}}(β_j^{(i)}) = a_j^{(i)}, for all j = 1, 2, ..., M_i and all i = 1, 2, ..., ℓ.

Remark 37.
It was proven by Amitsur in [1, Theorem 2] that, if δ : F −→ F is a standard derivation over the division ring F, and K_0 is the corresponding subring of constants, then for any right vector subspace Ω^D of F over K_0 of right dimension M, there exists a differential equation of order at most M whose space of solutions is precisely Ω^D. This result is recovered from Theorem 6 by setting n = 1, σ = Id, ℓ = 1 and a_1 = 0.

4 Skew and linearized polynomial arithmetic

In this section, we show that the natural product of skew polynomials, given either by the rule (1) or by the universal property in Lemma 2, corresponds to composition of linearized polynomials and to conventional products of matrices, when considered over a single conjugacy class. These results are obtained in Subsection 4.1 and extend the well-known particular cases obtained when n = 1. In Subsection 4.2, we show that, when considering several conjugacy classes, the natural product of skew polynomials decomposes into coordinate-wise products over each conjugacy class. We conclude (Corollary 45) that the quotient of the free multivariate skew polynomial ring over the ideal of skew polynomials vanishing on a finite union of finitely generated conjugacy classes is a semisimple ring. Moreover, such quotients are simple rings in the case of a single conjugacy class (Corollary 44). Apart from its own interest, we will use these tools to give a Galois correspondence in Subsection 6.3.

4.1 A single conjugacy class: Map composition and matrix multiplication

We start by showing that skew polynomial multiplication over a single conjugacy class corresponds to composition of right K_a-linear maps in F[D_a], which we will denote from now on by ◦. We will implicitly consider F[D_a] as a left F-algebra with product ◦.

Theorem 7. Given F, G ∈ F[x; σ, δ] and a ∈ F^n, it holds that

(FG)^{D_a} = F^{D_a} ◦ G^{D_a}. (16)

In particular, the map given in (15), φ_a : F[x; σ, δ]/I(C(a)) −→ F[D_a], is a left F-algebra isomorphism.

Proof.
For any β ∈ F, the reader may check the rule

D_a ◦ (β Id) = σ(β) D_a + δ(β) Id, (17)

where the map δ(β) Id : F −→ F^n is defined by γ ↦ δ(β)γ. After untangling the definitions, (17) is only the short form of the equations

D_a^{x_i} ◦ (β Id) = Σ_{j=1}^{n} σ_{i,j}(β) D_a^{x_j} + δ_i(β) Id ∈ F[D_a],

for i = 1, 2, ..., n. Since these equations are the defining property of the product of the free skew polynomial ring F[x; σ, δ], the theorem follows from its universal property (Lemma 2).

This map restricts to left F[x; σ, δ]-linear module isomorphisms between F[x; σ, δ]/I(Ω) and certain quotients of F[D_a], for finitely generated P-closed subsets Ω ⊆ C(a). To this end, we introduce left ideals of linearized polynomials vanishing on right K_a-linear subspaces of F.

Definition 38. Let a ∈ F^n, and let Ω^D ⊆ F be a finite-dimensional right K_a-linear vector space. We define the ideal associated to Ω^D as

I(Ω^D) = {F^D ∈ F[D_a] | F^D(β) = 0, ∀β ∈ Ω^D}.

The following result is straightforward.

Proposition 39. With notation as in Definition 38, the set I(Ω^D) is a left ideal of F[D_a].

More interestingly, we have the following anticipated isomorphism. The proof is straightforward from Theorems 1, 4 and 7.

Corollary 40. Let a ∈ F^n. Let Ω ⊆ C(a) be a finitely generated P-closed set and let Ω^D ⊆ F be the corresponding right K_a-linear subspace, as in Theorem 4. The map φ_a in Theorem 7 satisfies that φ_a(I(Ω)) = I(Ω^D) and restricts to a natural left F[x; σ, δ]/I(C(a))-linear module isomorphism

φ_Ω : F[x; σ, δ]/I(Ω) −→ F[D_a]/I(Ω^D). (18)

In particular, we conclude that

dim^L_F(F[D_a]/I(Ω^D)) = dim^L_F(F[x; σ, δ]/I(Ω)) = Rk(Ω) = dim^R_{K_a}(Ω^D).

Remark 41. As shown in [31, Proposition 18], for a P-closed set Ω ⊆ F^n, it holds that I(Ω) is a two-sided ideal of F[x; σ, δ] if, and only if, Ω is closed under conjugacy. Hence, if Ω ⊆ C(a), then I(Ω) is a two-sided ideal if, and only if, Ω = C(a). Therefore, we deduce from Theorem 4 that I(Ω^D) is a two-sided ideal of F[D_a] if, and only if, Ω^D = F.
In all other cases, the left modules in (18) are not rings.

Remark 42. Just as we did in Subsection 3.1, corresponding to [31, Section V], we could define I(B) = {F^D ∈ F[D_a] | F^D(β) = 0, ∀β ∈ B} and Z(A) = {β ∈ F | F^D(β) = 0, ∀F^D ∈ A}, for arbitrary sets B ⊆ F and A ⊆ F[D_a]. Basic rules as in [31, Prop. 15] still hold. However, the interest of considering closures as in Definition 20 is lost, since it is easy to see, from the results obtained so far, that the sets Z(A) correspond to right K_a-linear vector subspaces of F, and that

Z(I(B)) = ⟨B⟩^R_{K_a} ⊆ F.

Now we turn to matrix multiplication. For the rest of this subsection, fix a ∈ F^n. Let V ⊆ F be a right vector space over K_a and fix one of its ordered right bases β = (β_1, β_2, ..., β_M) ∈ F^M. Denote by μ_β : V^M −→ K_a^{M×M} the corresponding matrix-representation map, given by

μ_β(x) =
( x_1^1  x_2^1  ...  x_M^1 )
( x_1^2  x_2^2  ...  x_M^2 )
(  ...    ...   ...   ...  )
( x_1^M  x_2^M  ...  x_M^M ),   (19)

for x = (x_1, x_2, ..., x_M) ∈ V^M, where x_j^1, x_j^2, ..., x_j^M ∈ K_a are the unique scalars such that x_j = Σ_{i=1}^{M} β_i x_j^i ∈ F, for j = 1, 2, ..., M. Observe that μ_β is a right K_a-linear vector space isomorphism, and it is the identity map if M = 1 and β_1 = 1.

Definition 43. Given x, y ∈ V^M, we define their matrix product with respect to the basis β as

x ⋆ y = μ_β^{-1}(μ_β(x) μ_β(y)) ∈ V^M. (20)

The product ⋆ depends on the centralizer K_a ⊆ F (i.e., it depends on a) and on the ordered basis β, but we will not denote this dependence, for simplicity. From the definitions, we note also that, if x = (x_1, x_2, ..., x_M) ∈ V^M and y = Σ_{i=1}^{M} β_i y^i ∈ V^M, with x_i ∈ V and y^i ∈ K_a^M, for i = 1, 2, ..., M, then

μ_β^{-1}(μ_β(x) μ_β(y)) = Σ_{i=1}^{M} x_i y^i ∈ V^M. (21)

We may now prove the following result.

Theorem 8. Let M be a positive integer with M ≤ dim^R_{K_a}(F), where dim^R_{K_a}(F) need not be finite. Let β_1, β_2, ..., β_M ∈ F be right linearly independent over K_a and let V = ⟨β_1, β_2, ..., β_M⟩^R_{K_a} ⊆ F.
If the matrix product ⋆ is defined from the ordered basis β = (β_1, β_2, ..., β_M) ∈ F^M as in (20), then it holds that

(F^{D_a} ◦ G^{D_a})(β) = F^{D_a}(β) ⋆ G^{D_a}(β), (22)

for all F, G ∈ F[x; σ, δ] such that F^{D_a}(β), G^{D_a}(β) ∈ V^M (i.e., F^{D_a}(V) ⊆ V and G^{D_a}(V) ⊆ V). In particular, if F has finite right dimension over K_a and β_1, β_2, ..., β_M form one of its right bases, then (22) holds for any F, G ∈ F[x; σ, δ].

Proof. Let y = G^{D_a}(β) ∈ V^M and let y^i ∈ K_a^M, for i = 1, 2, ..., M, be the unique vectors such that y = Σ_{i=1}^{M} β_i y^i. It holds that

F^{D_a}(y) = Σ_{i=1}^{M} F^{D_a}(β_i) y^i = F^{D_a}(β) ⋆ y,

where the first equality follows from Lemma 11, and the second equality is (21), and we are done.

We conclude with the following consequence.

Corollary 44. If M = dim^R_{K_a}(F) < ∞ and β = (β_1, β_2, ..., β_M) ∈ F^M is an ordered right basis of F over K_a, then the map

E_β : F[D_a] −→ F^M

is a left F-algebra isomorphism, where the product in F^M is ⋆_β, defined from β (Definition 43). In particular, the map

μ_β ◦ E_β : F[D_a] −→ K_a^{M×M}

is a ring isomorphism. In conclusion, we have the following chain of natural left F-algebra and ring isomorphisms, where we indicate the considered products in case of ambiguity,

F[x; σ, δ]/I(C(a)) ≅ (F[D_a], ◦) ≅ (F^M, ⋆_β) ≅ K_a^{M×M}.

In particular, the ring F[x; σ, δ]/I(C(a)) is a simple ring [23, Definition (2.1)] by [23, Theorem (3.1)]. That is, F[x; σ, δ]/I(C(a)) has no non-trivial two-sided ideals.

4.2 Product decompositions over several conjugacy classes

In this short subsection, we observe that, when a P-closed set contains elements of more than one conjugacy class, the skew polynomial product over the corresponding ideal decomposes as a coordinate-wise product over each conjugacy class.

Theorem 9. Let Ω ⊆ F^n be a finitely generated P-closed set and let Ω_i = Ω ∩ C(a_i) ≠ ∅, for i = 1, 2, ..., ℓ, where a_1, a_2, ..., a_ℓ ∈ F^n are pair-wise non-conjugate and Ω = Ω_1 ∪ Ω_2 ∪ ··· ∪ Ω_ℓ (see Theorem 5).
With notation as in Definition 38, the maps

F[x; σ, δ]/I(Ω) −→ ⊕_{i=1}^{ℓ} F[x; σ, δ]/I(Ω_i) −→ ⊕_{i=1}^{ℓ} F[D_{a_i}]/I(Ω_i^D),
F + I(Ω) ↦ (F + I(Ω_i))_{i=1}^{ℓ} ↦ (F^{D_{a_i}} + I(Ω_i^D))_{i=1}^{ℓ},   (23)

are left F-algebra isomorphisms. In particular, we have that

Σ_{i=1}^{ℓ} dim^L_F(F[D_{a_i}]/I(Ω_i^D)) = Σ_{i=1}^{ℓ} dim^R_{K_{a_i}}(Ω_i^D) = Σ_{i=1}^{ℓ} Rk(Ω_i) = dim^L_F(F[x; σ, δ]/I(Ω)) = Rk(Ω).

Proof. It follows from Corollary 40 and the fact that

F[x; σ, δ]/I(Ω) ≅ ⊕_{i=1}^{ℓ} F[x; σ, δ]/I(Ω_i),

which follows from I(Ω) = I(Ω_1) ∩ I(Ω_2) ∩ ... ∩ I(Ω_ℓ) (see [31, Proposition 15]) and Theorem 5.

Similarly, we deduce the following result on coordinate-wise matrix multiplication by combining Corollary 44 and Theorem 9.

Corollary 45. With assumptions and notation as in Corollary 44 and Theorem 9, for Ω = C(a_1) ∪ C(a_2) ∪ ... ∪ C(a_ℓ) (recall that a_1, a_2, ..., a_ℓ are pair-wise non-conjugate), we have the following chain of natural left F-algebra and ring isomorphisms,

F[x; σ, δ]/I(Ω) ≅ ⊕_{i=1}^{ℓ} F[x; σ, δ]/I(C(a_i)) ≅ ⊕_{i=1}^{ℓ} (F[D_{a_i}], ◦) ≅ ⊕_{i=1}^{ℓ} (F^{M_i}, ⋆_{β_i}) ≅ ⊕_{i=1}^{ℓ} K_{a_i}^{M_i×M_i},

where M_i = dim^R_{K_{a_i}}(F) < ∞ and β_i ∈ F^{M_i} is an ordered right basis of F over K_{a_i}, for i = 1, 2, ..., ℓ. In particular, the ring F[x; σ, δ]/I(Ω) is semisimple [23, Definition (2.5)] by [23, (3.3)] and [23, (3.4)]. That is, every left submodule of a left module over F[x; σ, δ]/I(Ω) is a direct summand.

5 Generalizations of Vandermonde, Moore and Wronskian matrices

One of the main objectives behind the results on evaluations of univariate skew polynomials in [22, 25] was to generalize the concept of, and results on, classical Vandermonde [43], Moore [33, 37] and Wronskian [15, 34] matrices. A general method for calculating the ranks of such matrices was obtained by combining [25, Theorem 4.5] and [22, Theorem 23], which amount to linearizing the concept of P-independence in the case n = 1, as done in Section 3 for the general case.
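The classical object being generalized here is the Moore matrix: over F = GF(8) with Frobenius σ(y) = y^2, the Moore matrix of points (b_1, ..., b_M) has entries σ^i(b_j) = b_j^(2^i), and its rank over F equals the GF(2)-dimension of the span of b_1, ..., b_M. A small check under this model (the helper names are ours):

```python
# Classical Moore matrices over GF(8) = GF(2)[z]/(z^3 + z + 1); ints 0..7.
MOD = 0b1011

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def gpow(a, k):
    r = 1
    for _ in range(k):
        r = gmul(r, a)
    return r

def ginv(a):
    return gpow(a, 6)  # a^(-1) = a^6 for a != 0

def moore(points):
    """Moore matrix: row i is (sigma^i(b_1), ..., sigma^i(b_M))."""
    M = len(points)
    return [[gpow(b, 2 ** i) for b in points] for i in range(M)]

def rank(mat):
    """Rank via Gaussian elimination over GF(8)."""
    mat = [row[:] for row in mat]
    r = 0
    for c in range(len(mat[0])):
        piv = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        inv = ginv(mat[r][c])
        mat[r] = [gmul(inv, v) for v in mat[r]]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                f = mat[i][c]
                mat[i] = [mat[i][j] ^ gmul(f, mat[r][j])
                          for j in range(len(mat[0]))]
        r += 1
    return r

assert rank(moore([1, 2, 3])) == 2  # 3 = 1 + 2 is GF(2)-dependent
assert rank(moore([1, 2, 4])) == 3  # 1, z, z^2 are GF(2)-independent
```

The skew Vandermonde matrices defined next play the analogous role in the general multivariate setting, with rank governed by P-independence instead of GF(2)-independence.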
Multivariate skew Vandermonde matrices were defined in [31] using the skew evaluation of multivariate skew polynomials as in Definition 7. In this section, we give an analogous definition using linearized evaluations as in Definition 9. In Subsection 5.1, we revisit the results on multivariate skew Vandermonde matrices from [31], and in Subsection 5.2, we provide a linearization of such matrices and calculate their ranks as done in [25]. As we will show in the examples, the matrices defined in Subsection 5.2 simultaneously generalize multivariate versions of Vandermonde, Moore and Wronskian matrices.

5.1 Skew Vandermonde matrices

In this subsection, we revisit the concept of skew Vandermonde matrix, which was introduced in [22] for n = 1 and δ = 0, and in [25, Eq. (4.1)] in general for n = 1. The multivariate case was introduced in full generality in [31, Definition 40].

Definition 46 (Skew Vandermonde matrices [22, 25, 31]). Let N ⊆ M be a finite set of skew monomials and let B = {b_1, b_2, ..., b_M} ⊆ F^n. We define the corresponding skew Vandermonde matrix, denoted by V_N(B), as the |N| × M matrix over F whose rows are given by

(N_m(b_1), N_m(b_2), ..., N_m(b_M)) ∈ F^M,

for all m ∈ N (given a certain ordering in N or, more generally, in M). If d is a positive integer, we define M_d as the set of monomials of degree less than d, and we denote

V_d(B) = V_{M_d}(B) ∈ F^{|M_d|×M}. (24)

The following result is [31, Prop. 41], and connects the rank of a skew Vandermonde matrix with the underlying P-closed set.

Proposition 47 ([31]). Given a finite set G ⊆ F^n with M elements, and Ω = Ḡ, it holds that

Rk(V_M(G)) = Rk(Ω).

Moreover, a subset B ⊆ G is a P-basis of Ω if, and only if, |B| = Rk(Ω) = Rk(V_{|B|}(B)).

Remark 48. The last statement implies that, by applying Gaussian elimination to the matrix V_M(G), we may find the rank of Ω and at least one of its P-bases.
This is an alternative method to partitioning G into conjugacy classes and finding a right basis on each conjugacy class, as implied by Theorems 4 and 5.

Skew Lagrange interpolation (Theorem 2) can be reinterpreted as the left invertibility of a skew Vandermonde matrix defined over a P-basis. The following result is [31, Corollary 42].

Corollary 49 ([31]). Let Ω ⊆ F^n be a finitely generated P-closed set with P-basis B = {b_1, b_2, ..., b_M}. There exists a solution to the linear system

(F_m)_{m ∈ M_M} V_M(B) = (a_1, a_2, ..., a_M), (25)

for any a_1, a_2, ..., a_M ∈ F (that is, V_M(B) is left invertible). For any solution, the skew polynomial F = Σ_{m ∈ M_M} F_m m satisfies that F(b_i) = a_i, for i = 1, 2, ..., M, and deg(F) < M.
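Proposition 47's rank formula can be checked directly in the classical case n = 1, δ = 0, F = GF(8), σ = Frobenius, a = 1 (with q = 2 one gets 1^β = β, so all nonzero points are conjugate and K_1 = GF(2)): the skew Vandermonde matrix has entries N_i(b_j) = b_j^(2^i − 1), and its rank equals the GF(2)-dimension of the span of the points. A sketch with our own helper names:

```python
# Skew Vandermonde rank check (Proposition 47, classical case) over GF(8).
MOD = 0b1011  # GF(8) = GF(2)[z]/(z^3 + z + 1); ints 0..7, addition = XOR

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def gpow(a, k):
    r = 1
    for _ in range(k):
        r = gmul(r, a)
    return r

def ginv(a):
    return gpow(a, 6)  # a^(-1) = a^6 for a != 0

def vandermonde(points):
    """Skew Vandermonde V_M(B): row i is (N_i(b_1), ..., N_i(b_M))."""
    M = len(points)
    return [[gpow(b, 2 ** i - 1) for b in points] for i in range(M)]

def rank(mat):
    """Rank via Gaussian elimination over GF(8), as suggested in Remark 48."""
    mat = [row[:] for row in mat]
    r = 0
    for c in range(len(mat[0])):
        piv = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        inv = ginv(mat[r][c])
        mat[r] = [gmul(inv, v) for v in mat[r]]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                f = mat[i][c]
                mat[i] = [mat[i][j] ^ gmul(f, mat[r][j])
                          for j in range(len(mat[0]))]
        r += 1
    return r

# {1, z, z+1} spans a 2-dimensional GF(2)-space, so its P-closure has rank 2;
# {1, z, z^2} is GF(2)-independent, so its P-closure has rank 3.
assert rank(vandermonde([1, 2, 3])) == 2
assert rank(vandermonde([1, 2, 4])) == 3
```

Any 2 rows and 2 columns meeting the pivots of the first computation give a P-basis of the closure of {1, z, z+1}, which is how Remark 48's Gaussian-elimination method recovers P-bases in practice.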