ELEC 8501: The Fourier Transform and Its Applications
Zhao Li
September 21, 2011

Fourier Transform in Numerical Rank Revealing for Sylvester Matrices

In this article we propose a fast FFT-based algorithm for computing the numerical rank of Sylvester matrices, a crucial step in approximate GCD computation. Given two polynomials p(x), q(x) ∈ C[x] of degrees m and n respectively, with

\[
p(x) = p_0 + p_1 x + \cdots + p_m x^m, \qquad q(x) = q_0 + q_1 x + \cdots + q_n x^n,
\]
suppose that p(x) and q(x) have the following factorization,

\[
p = uv \quad \text{and} \quad q = uw,
\]
where u = gcd(p, q) is the greatest common divisor and v, w are the cofactors. Then the degree of u (after regularization) equals the numerical null space dimension of the Sylvester matrix S(p, q) generated by p(x) and q(x) (see any textbook on numerical linear algebra, or [4]): the numerical null space dimension is m + n − rank(S, ε), where rank(S, ε) is the numerical rank of S(p, q) with respect to a user-given threshold ε. The numerical rank of a matrix with respect to ε is defined as the number of singular values of the matrix that are larger than ε.
\[
S(p,q) = \bigl(S(p)\,\big|\,S(q)\bigr) =
\left(
\begin{array}{ccc|ccc}
p_0 &        &     & q_0    &        &     \\
p_1 & \ddots &     & q_1    & \ddots &     \\
\vdots & \ddots & p_0 & \vdots & \ddots & q_0 \\
p_m &        & p_1 & q_n    &        & q_1 \\
    & \ddots & \vdots &     & \ddots & \vdots \\
    &        & p_m &        &        & q_n
\end{array}
\right),
\]
where the left block has n columns and the right block has m columns. Note that the Sylvester matrix is composed of two rectangular Toeplitz blocks S(p) ∈ C^{(m+n)×n} and S(q) ∈ C^{(m+n)×m}, where
\[
S(p) =
\begin{pmatrix}
p_0 &        &     \\
p_1 & \ddots &     \\
\vdots & \ddots & p_0 \\
p_m &        & p_1 \\
    & \ddots & \vdots \\
    &        & p_m
\end{pmatrix}_{(m+n)\times n},
\qquad
S(q) =
\begin{pmatrix}
q_0 &        &     \\
q_1 & \ddots &     \\
\vdots & \ddots & q_0 \\
q_n &        & q_1 \\
    & \ddots & \vdots \\
    &        & q_n
\end{pmatrix}_{(m+n)\times m}.
\]
A Toeplitz matrix can be embedded into a circulant matrix, so its multiplication with a vector can be implemented in O(n log n) flops using the FFT (a standard technique for accelerating Toeplitz matrix computations; see [2]). For a column vector y_1, let C(y_1) denote the circulant matrix whose first column equals y_1. Then S(p) and S(q) can be embedded into C(s_p) and C(s_q) respectively, where s_p = (p, 0)^T and s_q = (q, 0)^T are the coefficient vectors zero-padded to length m + n.
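As an illustration (not part of the original text), the following numpy/scipy sketch builds S(p) and S(q) for a small example and checks that the first n columns of the circulant matrix C(s_p) coincide with S(p); the coefficient vectors and the helper name sylvester_block are purely illustrative.

```python
import numpy as np
from scipy.linalg import circulant, toeplitz

def sylvester_block(c, n_cols):
    """Banded Toeplitz block of size (len(c)-1+n_cols) x n_cols whose first
    column is the coefficient vector c padded with zeros."""
    first_col = np.concatenate([c, np.zeros(n_cols - 1)])
    first_row = np.concatenate([[c[0]], np.zeros(n_cols - 1)])
    return toeplitz(first_col, first_row)

# illustrative low-degree example, coefficients in ascending order p0, p1, ...
p = np.array([2.0, 3.0, 1.0])        # p(x) = 2 + 3x + x^2,  m = 2
q = np.array([-1.0, 0.0, 0.0, 1.0])  # q(x) = -1 + x^3,      n = 3
m, n = len(p) - 1, len(q) - 1

S_p = sylvester_block(p, n)          # (m+n) x n
S_q = sylvester_block(q, m)          # (m+n) x m
S = np.hstack([S_p, S_q])            # Sylvester matrix S(p, q), (m+n) x (m+n)

# circulant embedding: s_p = (p, 0)^T of length m+n;
# the first n columns of C(s_p) reproduce S(p) exactly (similarly for q)
s_p = np.concatenate([p, np.zeros(n - 1)])
s_q = np.concatenate([q, np.zeros(m - 1)])
assert np.allclose(circulant(s_p)[:, :n], S_p)
assert np.allclose(circulant(s_q)[:, :m], S_q)
```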

Recall that the Discrete Fourier Transform (DFT) of an n-dimensional vector {x_i} is defined as
\[
\hat{x}_i = \sum_{k=1}^{n} x_k \, e^{-j \cdot 2\pi (k-1)(i-1)/n}, \qquad 1 \le i \le n,
\]
and the Inverse Discrete Fourier Transform (IDFT) is defined as

\[
x_i = \frac{1}{n} \sum_{k=1}^{n} \hat{x}_k \, e^{\,j \cdot 2\pi (k-1)(i-1)/n}, \qquad 1 \le i \le n.
\]
The FFT algorithm implements both the DFT and the IDFT in O(n log n) flops.
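As a quick sanity check (added here, assuming numpy is available): numpy.fft.fft and numpy.fft.ifft use the same sign convention and the same 1/n factor as the definitions above, so a naive O(n²) evaluation of the sum should agree with the FFT result.

```python
import numpy as np

def dft_naive(x):
    """Direct O(n^2) evaluation of the DFT definition above (0-based indices)."""
    n = len(x)
    i, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * i * k / n) @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# numpy's fft/ifft follow exactly the conventions written above
assert np.allclose(dft_naive(x), np.fft.fft(x))
assert np.allclose(np.fft.ifft(np.fft.fft(x)), x)
```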

For two column vectors y_1 and y_2 of the same length, the circulant matrix-vector product can be computed as
\[
C(y_1)\, y_2 = \mathrm{IDFT}\bigl(\mathrm{DFT}(y_1) \odot \mathrm{DFT}(y_2)\bigr),
\]
where ⊙ denotes the componentwise (Hadamard) product.
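A minimal sketch of this identity, using scipy.linalg.circulant only to build the dense C(y_1) for comparison; the vectors are random and purely illustrative.

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(1)
y1 = rng.standard_normal(6)
y2 = rng.standard_normal(6)

dense = circulant(y1) @ y2                           # O(n^2) reference product
fast = np.fft.ifft(np.fft.fft(y1) * np.fft.fft(y2))  # O(n log n) via FFT
assert np.allclose(dense, fast.real)                 # imaginary part is round-off here
```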

Therefore, for any column vector y ∈ C^{m+n}, written as y = (y_p^T, y_q^T)^T with y_p ∈ C^n and y_q ∈ C^m,
\[
S(p, q)\, y = \mathrm{IDFT}\bigl(\mathrm{DFT}(s_p) \odot \mathrm{DFT}(\tilde{y}_p)\bigr)
            + \mathrm{IDFT}\bigl(\mathrm{DFT}(s_q) \odot \mathrm{DFT}(\tilde{y}_q)\bigr),
\]
where \tilde{y}_p and \tilde{y}_q denote y_p and y_q zero-padded to length m + n.
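Putting the pieces together, here is a sketch of the fast Sylvester matrix-vector product based on the identity above; the function name sylvester_matvec and the small test polynomials are illustrative choices, not taken from the original text.

```python
import numpy as np
from scipy.linalg import toeplitz

def sylvester_matvec(p, q, y):
    """Compute S(p, q) @ y in O((m+n) log(m+n)) flops via circulant embedding.

    p, q : coefficient vectors in ascending order, degrees m and n;
    y    : vector of length m+n, whose first n entries multiply S(p)
           and whose last m entries multiply S(q).
    """
    m, n = len(p) - 1, len(q) - 1
    N = m + n
    s_p = np.concatenate([p, np.zeros(N - len(p))])   # (p, 0)^T, length m+n
    s_q = np.concatenate([q, np.zeros(N - len(q))])   # (q, 0)^T, length m+n
    y_p = np.concatenate([y[:n], np.zeros(m)])        # zero-padded first block
    y_q = np.concatenate([y[n:], np.zeros(n)])        # zero-padded second block
    out = (np.fft.ifft(np.fft.fft(s_p) * np.fft.fft(y_p))
           + np.fft.ifft(np.fft.fft(s_q) * np.fft.fft(y_q)))
    return out.real if np.isrealobj(p) and np.isrealobj(q) and np.isrealobj(y) else out

# check against a dense Sylvester matrix on a tiny example
p = np.array([2.0, 3.0, 1.0]); q = np.array([-1.0, 0.0, 0.0, 1.0])
m, n = len(p) - 1, len(q) - 1
S = np.hstack([toeplitz(np.concatenate([p, np.zeros(n - 1)]), np.concatenate([[p[0]], np.zeros(n - 1)])),
               toeplitz(np.concatenate([q, np.zeros(m - 1)]), np.concatenate([[q[0]], np.zeros(m - 1)]))])
y = np.arange(1.0, m + n + 1)
assert np.allclose(S @ y, sylvester_matvec(p, q, y))
```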

Since we now have a fast matrix-vector multiplication algorithm for the Sylvester matrix, we can apply it within an Arnoldi algorithm to reduce S(p, q) to upper Hessenberg form in O(n^2 log n) flops (for the Arnoldi algorithm, see [1]; a sketch of this step is given after the remark below). Then, through m + n − 1 Givens rotations, the upper Hessenberg form can be further reduced to an upper triangular form. Finally, we use the null vector finder algorithm proposed by Zeng to compute the null space dimension of S(p, q). Sylvester matrices arising from numerical polynomial computation usually have a low-dimensional null space (which means the computational problem is well conditioned), so the null space finder usually takes O(n^2) flops to terminate. In summary, the whole process takes O(n^2 log n) flops instead of the O(n^3) required by the current algorithm in ApaTools (a state-of-the-art software package for numerical polynomial problems developed by Zeng [5]), which uses a QR decomposition to reduce S(p, q) to upper triangular form.

Remark. To make the algorithm stable, the Arnoldi iteration may need restarts or extra partial reorthogonalization, especially as the size of S(p, q) grows. Even so, it will still be faster than the QR decomposition, since QR needs full reorthogonalization when the size of S(p, q) gets large [3].
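For concreteness, here is a compact sketch of the Arnoldi step driven by a generic matrix-vector product callback; this is not the author's implementation, and the Givens-rotation and null-vector-finder stages discussed above are omitted.

```python
import numpy as np

def arnoldi(matvec, size, k, rng=None):
    """k steps of the Arnoldi iteration (real arithmetic, modified Gram-Schmidt).

    matvec : callable computing A @ v for the (size x size) matrix A.
    Returns Q (size x (k+1)) with orthonormal columns and H ((k+1) x k)
    upper Hessenberg such that A @ Q[:, :k] = Q @ H up to round-off.
    """
    rng = np.random.default_rng() if rng is None else rng
    Q = np.zeros((size, k + 1))
    H = np.zeros((k + 1, k))
    q0 = rng.standard_normal(size)
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for j in range(k):
        w = matvec(Q[:, j])
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # lucky breakdown: invariant subspace found
            return Q[:, :j + 2], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```

With the fast product from the previous sketch, matvec would be, e.g., lambda y: sylvester_matvec(p, q, y), and running k = m + n − 1 steps costs O((m+n)^2 log(m+n)) flops, consistent with the estimate above.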

References

[1] J. W. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.

[2] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore and London, 1996.

[3] L. Giraud, J. Langou, and M. Rozloznik, The loss of orthogonality in the Gram-Schmidt orthogonalization process, Comput. Math. Appl., 50 (2005), pp. 1069-1075. DOI: 10.1016/j.camwa.2005.08.009.

[4] Z. Zeng, On approximate polynomial greatest common divisors, J. Symb. Comput., 26 (1998), pp. 653-666.

[5] Z. Zeng, ApaTools: A Maple and Matlab toolbox for approximate polynomial algebra, in Software for Algebraic Geometry, IMA Volume 148, M. Stillman and N. Takayama, eds.
