
Annales Univ. Sci. Budapest., Sect. Comp. 23 (2004) 25-40

THE STRUCTURE OF THE BOOLEAN-ZHEGALKIN TRANSFORM

J. Gonda (Budapest, Hungary)

Dedicated to Professor I. Kátai on his 65th birthday

Abstract. In [6] a linear algebraic aspect is given for the transformation of a Boolean function to its Zhegalkin representation. In this article we investigate the linear-algebraic structure of that transform.

In this article the elements of the field with two elements are denoted by 0 and 1; $\mathbb{N}_0$ denotes the non-negative integers, and $\mathbb{N}$ the positive ones.

In [6] we pointed out that if we consider the coefficients of a Boolean function of $n$ variables and the coefficients of the Zhegalkin polynomial of $n$ variables, respectively, as the components of an element of a $2^n$-dimensional linear space over $\mathbb{F}_2$, then the relation between the vectors belonging to the two representations of the same Boolean function of $n$ variables can be given by $k = A^{(n)}\alpha$. Here $k$ is the vector containing the coefficients of the Zhegalkin polynomial, $\alpha$ is the vector composed of the coefficients of the Boolean representation of the given function, and $A^{(n)}$ is the matrix of the transform in the natural basis. In the article mentioned above it is proved that
$$
A^{(n)} =
\begin{cases}
(1), & \text{if } n = 0,\\[4pt]
\begin{pmatrix}
A^{(n-1)} & 0^{(n-1)}\\
A^{(n-1)} & A^{(n-1)}
\end{pmatrix}, & \text{if } n \in \mathbb{N},
\end{cases}
$$
and, as a consequence, that
$$
\left(A^{(n)}\right)^2 = I^{(n)},
$$
where $I^{(n)}$ and $0^{(n)}$ denote the $2^n$-dimensional identity and zero matrix, respectively. From this it follows that if $k = A^{(n)}\alpha$, then $\alpha = A^{(n)}k$.

In the following part of our article we consider the transform given above by $A^{(n)}$.

Theorem 1.
$$
A^{(n)} + \lambda I^{(n)} \equiv
\begin{cases}
(1+\lambda)I^{(n)}, & \text{if } n = 0,\\[4pt]
\begin{pmatrix}
I^{(n-1)} & 0^{(n-1)}\\
0^{(n-1)} & (1+\lambda^2)I^{(n-1)}
\end{pmatrix}, & \text{if } n \in \mathbb{N},
\end{cases}
$$
where $U(\lambda) \equiv V(\lambda)$ means that the two $\lambda$-matrices are equivalent, that is, there are invertible $\lambda$-matrices $R(\lambda)$ and $L(\lambda)$ such that $V(\lambda) = L(\lambda)U(\lambda)R(\lambda)$.

Proof. If $n = 0$, then
$$
A^{(n)} + \lambda I^{(n)} = A^{(0)} + \lambda I^{(0)} = (1) + \lambda(1) = (1+\lambda)(1) = (1+\lambda)I^{(0)} = (1+\lambda)I^{(n)}.
$$
Now let $n \in \mathbb{N}_0$, let
$$
L^{(n+1)}(\lambda) =
\begin{pmatrix}
0^{(n)} & A^{(n)}\\
I^{(n)} & I^{(n)} + \lambda A^{(n)}
\end{pmatrix},
\qquad
R^{(n+1)}(\lambda) =
\begin{pmatrix}
I^{(n)} & A^{(n)} + \lambda I^{(n)}\\
0^{(n)} & A^{(n)}
\end{pmatrix},
$$
and let $C^{(n+1)}(\lambda)$ denote
$$
\begin{pmatrix}
I^{(n)} & 0^{(n)}\\
0^{(n)} & (1+\lambda^2)I^{(n)}
\end{pmatrix};
$$
then
$$
L^{(n+1)}(\lambda)\left(A^{(n+1)} + \lambda I^{(n+1)}\right)R^{(n+1)}(\lambda) =
\begin{pmatrix}
0^{(n)} & A^{(n)}\\
I^{(n)} & I^{(n)} + \lambda A^{(n)}
\end{pmatrix}
\begin{pmatrix}
A^{(n)} + \lambda I^{(n)} & 0^{(n)}\\
A^{(n)} & A^{(n)} + \lambda I^{(n)}
\end{pmatrix}
\begin{pmatrix}
I^{(n)} & A^{(n)} + \lambda I^{(n)}\\
0^{(n)} & A^{(n)}
\end{pmatrix}
=
\begin{pmatrix}
I^{(n)} & 0^{(n)}\\
0^{(n)} & (1+\lambda^2)I^{(n)}
\end{pmatrix}
= C^{(n+1)}(\lambda).
$$
$\left(A^{(n)}\right)^2 = I^{(n)}$, so $1 = \det\left(I^{(n)}\right) = \det\left(\left(A^{(n)}\right)^2\right) = \left(\det\left(A^{(n)}\right)\right)^2$, and then $\det\left(A^{(n)}\right) = 1$. As $\det\left(L^{(n+1)}(\lambda)\right) = \det\left(A^{(n)}\right)$, we get that $\det\left(L^{(n+1)}(\lambda)\right) = 1$. With a similar calculation we get that $\det\left(R^{(n+1)}(\lambda)\right) = 1$, so $L(\lambda)$ and $R(\lambda)$ are invertible $\lambda$-matrices, and
$$
C^{(n+1)}(\lambda) = L^{(n+1)}(\lambda)\left(A^{(n+1)} + \lambda I^{(n+1)}\right)R^{(n+1)}(\lambda),
$$
i.e. the two matrices are equivalent.

From the previous theorem we can get many results. First we can read off the minimal polynomial and the characteristic polynomial of $A^{(n)}$.

Corollary 2. If $\mu^{(n)}$ denotes the minimal polynomial of $A^{(n)}$, and $c^{(n)}$ denotes its characteristic polynomial, then
$$
\mu^{(n)} =
\begin{cases}
\lambda + 1, & \text{if } n = 0,\\
\lambda^2 + 1, & \text{if } n \in \mathbb{N},
\end{cases}
\qquad
c^{(n)} = \lambda^{2^n} + 1.
$$

Proof. The minimal polynomial of a square matrix is its last invariant factor, in our case the polynomials given above. The characteristic polynomial is the product of the invariant factors. If $n = 0$, then the only invariant factor is $\lambda + 1$, and $\lambda^{2^0} + 1 = \lambda + 1$. In the case when $n \in \mathbb{N}$, that is, when $n \geq 1$, there are $2^{n-1}$ invariant factors equal to $1$, and each of the further $2^{n-1}$ invariant factors is equal to $\lambda^2 + 1$, so $c^{(n)} = \left(\lambda^2 + 1\right)^{2^{n-1}} = \left(\lambda^2\right)^{2^{n-1}} + 1 = \lambda^{2^n} + 1$.
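The recursive block form of $A^{(n)}$ and the two polynomial statements above are easy to check by machine for small $n$. The following Python/NumPy sketch is an illustration added here, not part of the original paper; the name zhegalkin_matrix and the choice $n = 3$ are ours. It builds $A^{(n)}$ from the recursion, checks that the transform is an involution over $\mathbb{F}_2$, and checks that $\left(A^{(n)} + I^{(n)}\right)^2 = 0^{(n)}$ and $\left(A^{(n)} + I^{(n)}\right)^{2^n} = 0^{(n)}$, which agree with the minimal polynomial $\lambda^2 + 1$ and, via the Cayley-Hamilton theorem, with the characteristic polynomial $\lambda^{2^n} + 1$.

```python
import numpy as np

def zhegalkin_matrix(n):
    """A^(0) = (1); A^(n) = [[A', 0], [A', A']] with A' = A^(n-1), over F_2."""
    A = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        A = np.block([[A, np.zeros_like(A)], [A, A]])
    return A

n = 3
A = zhegalkin_matrix(n)
I = np.eye(2 ** n, dtype=np.uint8)

# The transform is an involution: (A^(n))^2 = I^(n) over F_2,
# so the same matrix converts in both directions (k = A alpha, alpha = A k).
assert np.array_equal(A.dot(A) % 2, I)

# (A + I)^2 = 0 over F_2, matching the minimal polynomial x^2 + 1 = (x + 1)^2.
N = (A + I) % 2
assert np.array_equal(N.dot(N) % 2, np.zeros_like(N))

# Consequently (A + I)^(2^n) = 0 as well, consistent (via Cayley-Hamilton)
# with the characteristic polynomial x^(2^n) + 1 = (x + 1)^(2^n).
P = I
for _ in range(2 ** n):
    P = P.dot(N) % 2
assert np.array_equal(P, np.zeros_like(P))
```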
The results mentioned above are not surprising. $A^{(0)} + \lambda I^{(0)} = (1 + \lambda)$ and $\det((1 + \lambda)) = \lambda + 1$, so $\lambda + 1 \in \mathbb{F}_2[\lambda]$ is the characteristic polynomial of $A^{(0)}$. The degree of that polynomial is equal to $1$, which is the order of the matrix $A^{(0)}$. As there is no nonzero polynomial of degree less than $1$ of which $A^{(0)}$ is a root, $\lambda + 1$ is the minimal polynomial of $A^{(0)}$ as well.

Now let $n > 0$. Then $A^{(n)} \neq I^{(n)}$ and $A^{(n)} \neq 0^{(n)}$, so neither $\lambda$ nor $\lambda + 1$ can be the minimal polynomial of the matrix. On the other hand, $\left(A^{(n)}\right)^2 = I^{(n)}$ shows that $A^{(n)}$ is a root of the monic polynomial $\lambda^2 + 1$, and the minimal polynomial of a matrix is uniquely determined. Now let us consider the characteristic polynomial of $A^{(n)}$. The degree of that polynomial is equal to $2^n$, and the set of the roots of the characteristic polynomial is equal to the set of the roots of the minimal polynomial. As $\lambda^2 + 1 = (\lambda + 1)^2$ over $\mathbb{F}_2$, the only root of the minimal polynomial is $1$. From this it follows that the characteristic polynomial of $A^{(n)}$ is a polynomial of degree $2^n$ with exactly one root, namely $1$. The only (monic) polynomial with these properties is $(\lambda + 1)^{2^n} = \lambda^{2^n} + 1$, and then the characteristic polynomial of $A^{(n)}$ is $\lambda^{2^n} + 1$.

Another simple way to prove that $\lambda^{2^n} + 1$ is the characteristic polynomial of $A^{(n)}$ is as follows. We saw above that $c^{(0)} = \lambda + 1 = \lambda^{2^0} + 1$. If $c^{(n-1)} = \lambda^{2^{n-1}} + 1$, then
$$
c^{(n)} = \det\left(A^{(n)} + \lambda I^{(n)}\right)
= \det\begin{pmatrix}
A^{(n-1)} + \lambda I^{(n-1)} & 0^{(n-1)}\\
A^{(n-1)} & A^{(n-1)} + \lambda I^{(n-1)}
\end{pmatrix}
= \left(\det\left(A^{(n-1)} + \lambda I^{(n-1)}\right)\right)^2
= \left(c^{(n-1)}\right)^2
= \left(\lambda^{2^{n-1}} + 1\right)^2
= \lambda^{2^n} + 1,
$$
so for any nonnegative integer $n$, $c^{(n)} = \lambda^{2^n} + 1$.

Corollary 3. For any $n \in \mathbb{N}_0$ the $2^{n+1}$-dimensional linear space over $\mathbb{F}_2$ is a direct sum of $2^n$ two-dimensional cyclic subspaces invariant with respect to $A^{(n+1)}$.

Proof. The only invariant factor of $A^{(n+1)}$ different from $1$ is $\lambda^2 + 1$, and the multiplicity of that invariant factor is $2^n$. From these two facts the statement above follows immediately.

Let $A \sim B$ denote that the matrices $A$ and $B$ are similar, that is, there is an invertible matrix $T$ such that $B = T^{-1}AT$.

Corollary 4.
$$
A^{(n)} \sim B^{(n)} =
\begin{cases}
(1) = A^{(0)}, & \text{if } n = 0,\\[4pt]
\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, & \text{if } n = 1,\\[4pt]
\begin{pmatrix} B^{(n-1)} & 0^{(n-1)}\\ 0^{(n-1)} & B^{(n-1)} \end{pmatrix}, & \text{if } 1 < n \in \mathbb{N},
\end{cases}
$$
where for any nonnegative integer $n$, $B^{(n)}$ is the Jordan matrix of $A^{(n)}$.

Proof. $c^{(0)}(\lambda) = \lambda + 1 = \mu^{(0)}(\lambda)$ and $c^{(1)}(\lambda) = \lambda^2 + 1 = \mu^{(1)}(\lambda)$, that is, the $2^0$- and the $2^1$-dimensional linear spaces over $\mathbb{F}_2$ are cyclic and invariant with respect to $A^{(0)}$ and $A^{(1)}$, respectively. In such a case the Jordan matrix of $A^{(0)}$ and $A^{(1)}$ is equal to the companion matrix of their minimal polynomial, and then $B^{(0)} = (1)$ and $B^{(1)} = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}$.

Now if $n > 1$, then by Corollary 3 the $2^n$-dimensional linear space over $\mathbb{F}_2$ is the direct sum of $2^{n-1}$ two-dimensional cyclic subspaces invariant with respect to the transform represented by $A^{(n)}$ in the canonical basis of the space. The Jordan matrix of such a transform is the hypermatrix containing $2^{n-1}$ blocks equal to $B^{(1)}$ in the main diagonal and the zero matrix of order two in the other positions. But the structure of $B^{(1)}$ corresponds to that form, and if the structure of $B^{(n)}$, where $n \in \mathbb{N}$, satisfies this rule, then $B^{(n+1)}$ satisfies it, too.
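Corollaries 3 and 4 can also be verified numerically for small $n$. The sketch below is again an illustration added here (the helper names zhegalkin_matrix and gf2_rank are ours); it relies on the standard fact, not taken from the paper, that a matrix $X$ with $X^2 = I$ over $\mathbb{F}_2$ is determined up to similarity by the rank of $X + I$, that rank being the number of two-dimensional cyclic (Jordan) blocks. Checking that $A^{(n)} + I^{(n)}$ and $B^{(n)} + I^{(n)}$ both have rank $2^{n-1}$ over $\mathbb{F}_2$ therefore confirms the decomposition of Corollary 3 and the similarity of Corollary 4 for the chosen $n$.

```python
import numpy as np

def zhegalkin_matrix(n):
    """A^(0) = (1); A^(n) = [[A', 0], [A', A']] with A' = A^(n-1), over F_2."""
    A = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        A = np.block([[A, np.zeros_like(A)], [A, A]])
    return A

def gf2_rank(M):
    """Rank of a 0/1 matrix over F_2, by Gaussian elimination modulo 2."""
    M = M.copy() % 2
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move the pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # clear column c in the other rows
        rank += 1
    return rank

n = 4
dim = 2 ** n
A = zhegalkin_matrix(n)
I = np.eye(dim, dtype=np.uint8)

# B^(n) unrolls to 2^(n-1) diagonal copies of B^(1), the companion matrix of x^2 + 1.
B1 = np.array([[0, 1], [1, 0]], dtype=np.uint8)
B = np.kron(np.eye(dim // 2, dtype=np.uint8), B1)

# Both matrices are involutions over F_2 ...
assert np.array_equal(A.dot(A) % 2, I) and np.array_equal(B.dot(B) % 2, I)

# ... and X + I has rank 2^(n-1) for both, i.e. the space splits into 2^(n-1)
# two-dimensional cyclic invariant subspaces, so A^(n) and B^(n) are similar.
assert gf2_rank((A + I) % 2) == gf2_rank((B + I) % 2) == dim // 2
```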
Corollary 5.
$$
A^{(n)} \sim C^{(n)} =
\begin{cases}
(1) = A^{(0)}, & \text{if } n = 0,\\[4pt]
\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}, & \text{if } n = 1,\\[4pt]
\begin{pmatrix} C^{(n-1)} & 0^{(n-1)}\\ 0^{(n-1)} & C^{(n-1)} \end{pmatrix}, & \text{if } 1 < n \in \mathbb{N},
\end{cases}
$$
where $C^{(n)}$ is the classical canonical matrix of $A^{(n)}$.

Proof. $c^{(0)}(\lambda) = \lambda + 1 = \mu^{(0)}(\lambda)$, and then the classical canonical matrix of $A^{(0)}$ is the identity matrix of order $1$, that is, $C^{(0)} = (1)$. Over $\mathbb{F}_2$, $\mu^{(1)}(\lambda) = \lambda^2 + 1 = (\lambda + 1)^2$, so the classical canonical matrix of $A^{(1)}$ is $C^{(1)} = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}$, and for $n > 1$ we can argue similarly as in the proof of Corollary 4, substituting the Jordan matrix by the classical canonical matrix, and $B^{(1)}$, $B^{(n)}$ and $B^{(n+1)}$ by $C^{(1)}$, $C^{(n)}$ and $C^{(n+1)}$, respectively.

Now we can give a basis of the $2^n$-dimensional linear space over $\mathbb{F}_2$ in which the matrix of the transform represented by $A^{(n)}$ in the canonical basis of the space is equal to $B^{(n)}$. For $n \in \mathbb{N}$ and $2^n > i \in \mathbb{N}_0$ let $e^{(n,i)}$ be the $i$-th vector of the canonical basis of the $2^n$-dimensional linear space over $\mathbb{F}_2$, that is, the $j$-th component of $e^{(n,i)}$, where $2^n > j \in \mathbb{N}_0$, is equal to
$$
e^{(n,i)}_j = \delta_{i,j} =
\begin{cases}
1, & \text{if } i = j,\\
0, & \text{if } i \neq j.
\end{cases}
$$
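One way to exhibit such a basis concretely (an added illustration, not necessarily the construction the paper goes on to give) is to pair suitable canonical basis vectors $e^{(n,i)}$ with their images under $A^{(n)}$: if $v_1, \dots, v_{2^{n-1}}$ are chosen so that the vectors $\left(A^{(n)} + I^{(n)}\right)v_k$ are linearly independent over $\mathbb{F}_2$, then $v_1, A^{(n)}v_1, \dots, v_{2^{n-1}}, A^{(n)}v_{2^{n-1}}$ form a basis in which the transform has the matrix $B^{(n)}$. The Python sketch below (the helper names are ours) picks the $v_k$ as the canonical basis vectors belonging to pivot columns of $A^{(n)} + I^{(n)}$ and verifies $A^{(n)}T = TB^{(n)}$ with $T$ invertible over $\mathbb{F}_2$, i.e. $T^{-1}A^{(n)}T = B^{(n)}$.

```python
import numpy as np

def zhegalkin_matrix(n):
    """A^(0) = (1); A^(n) = [[A', 0], [A', A']] with A' = A^(n-1), over F_2."""
    A = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        A = np.block([[A, np.zeros_like(A)], [A, A]])
    return A

def gf2_pivot_columns(M):
    """Column indices of a maximal F_2-independent set of columns of a 0/1 matrix."""
    M = M.copy() % 2
    rows = M.shape[0]
    pivots, rank = [], 0
    for c in range(M.shape[1]):
        pivot_row = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot_row is None:
            continue
        M[[rank, pivot_row]] = M[[pivot_row, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        pivots.append(c)
        rank += 1
    return pivots

n = 3
dim = 2 ** n
A = zhegalkin_matrix(n)
I = np.eye(dim, dtype=np.uint8)

# Pick v_k = e^(n,j) for the pivot columns j of A + I; pair each v_k with A v_k.
pivots = gf2_pivot_columns((A + I) % 2)
columns = []
for j in pivots:
    v = I[:, j]
    columns += [v, A.dot(v) % 2]
T = np.column_stack(columns)

# B^(n): 2^(n-1) diagonal copies of B^(1), the companion matrix of x^2 + 1.
B = np.kron(np.eye(dim // 2, dtype=np.uint8),
            np.array([[0, 1], [1, 0]], dtype=np.uint8))

assert len(pivots) == dim // 2                       # 2^(n-1) cyclic subspaces
assert len(gf2_pivot_columns(T)) == dim              # T is invertible over F_2
assert np.array_equal(A.dot(T) % 2, T.dot(B) % 2)    # A T = T B, so T^(-1) A T = B^(n)
```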