FACTORIZATION of MATRICES


Let's begin by looking at various decompositions of matrices (see also Matrix Factorization). In computer science, these decompositions are used to implement efficient matrix algorithms. Indeed, as Lay says in his book: "In the language of computer science, the expression of A as a product amounts to a pre-processing of the data in A, organizing that data into two or more parts whose structures are more useful in some way, perhaps more accessible for computation." Nonetheless, the origin of these factorizations lies long in the past, when they were first introduced by various mathematicians after whom they are frequently named (though not always, and not always chronologically accurately, either!). Their importance in math, science, engineering and economics quite generally cannot be over-stated, however. As Strang says in his book: "many key ideas of linear algebra, when you look at them closely, are really factorizations of a matrix. The original matrix A becomes the product of 2 or 3 special matrices."

But factorization is really what you've done for a long time in different contexts. For example, each positive integer n, say 72, can be factored as a product 72 = 2^3 × 3^2 of prime integers, while each polynomial such as P(x) = x^4 − 16 can be factored as a product P(x) = (x − 2)(x + 2)(x^2 + 4) of linear factors with real roots and a quadratic factor for which no further factorization over the reals can be carried out. In this lecture we shall look at the first of these matrix factorizations - the so-called LU-Decomposition and its refinement the LDU-Decomposition - where the basic factors are the elementary matrices of the last lecture and the factorization stops at the reduced row echelon form. Let's start.

Some simple hand calculations show that for each 2 × 2 matrix we have the Gauss Decomposition:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}
  = \begin{bmatrix} 1 & 0 \\ \tfrac{c}{a} & 1 \end{bmatrix}
    \begin{bmatrix} a & 0 \\ 0 & \tfrac{ad-bc}{a} \end{bmatrix}
    \begin{bmatrix} 1 & \tfrac{b}{a} \\ 0 & 1 \end{bmatrix}
  = \begin{bmatrix} 1 & 0 \\ \tfrac{c}{a} & 1 \end{bmatrix}
    \begin{bmatrix} a & b \\ 0 & \tfrac{ad-bc}{a} \end{bmatrix},
  \qquad a \neq 0 .$$

Notice that in the 3-term factorization the first and third factors are triangular matrices with 1's along the diagonal, the first L(ower) and the third U(pper), while the middle factor is a D(iagonal) matrix. This is an example of the so-called LDU-decomposition of a matrix. On the other hand, in the 2-term factorization both factors are triangular matrices, the first Lower and the second Upper, but now the second one allows diagonal entries which need not be 1. It is an example of the important LU-decomposition of a matrix. As we shall see shortly, this decomposition - possibly the most important factorization of all - comes from the method of elimination for solving systems of linear equations.

But what information might we get from such decompositions? Well, note the particular factors occurring in the Gauss Decomposition of a 2 × 2 matrix: if ad − bc ≠ 0, then each factor is an elementary matrix as defined in the previous lecture, and so invertible. In fact, by the known inverses given in Lecture 06,

$$\begin{bmatrix} 1 & 0 \\ \tfrac{c}{a} & 1 \end{bmatrix}^{-1}
   = \begin{bmatrix} 1 & 0 \\ -\tfrac{c}{a} & 1 \end{bmatrix},
 \qquad
 \begin{bmatrix} a & 0 \\ 0 & \tfrac{ad-bc}{a} \end{bmatrix}^{-1}
   = \begin{bmatrix} \tfrac{1}{a} & 0 \\ 0 & \tfrac{a}{ad-bc} \end{bmatrix},
 \qquad
 \begin{bmatrix} 1 & \tfrac{b}{a} \\ 0 & 1 \end{bmatrix}^{-1}
   = \begin{bmatrix} 1 & -\tfrac{b}{a} \\ 0 & 1 \end{bmatrix}.$$

On the other hand, (AB)^{-1} = B^{-1}A^{-1}. Thus by the Gauss Decomposition,

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1}
  = \begin{bmatrix} 1 & -\tfrac{b}{a} \\ 0 & 1 \end{bmatrix}
    \begin{bmatrix} \tfrac{1}{a} & 0 \\ 0 & \tfrac{a}{ad-bc} \end{bmatrix}
    \begin{bmatrix} 1 & 0 \\ -\tfrac{c}{a} & 1 \end{bmatrix}
  = \begin{bmatrix} \dfrac{d}{ad-bc} & -\dfrac{b}{ad-bc} \\[4pt] -\dfrac{c}{ad-bc} & \dfrac{a}{ad-bc} \end{bmatrix},$$

as a matrix product calculation shows. Why bother? We knew this already from a simple direct calculation! That's the point: except for the 2 × 2 case, there's no formula for computing the inverse of a general n × n matrix.
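As a quick numerical check of the 2 × 2 Gauss Decomposition and the inverse-as-a-product idea, here is a minimal Python/NumPy sketch. The helper name gauss_decomposition_2x2 is ours, chosen for illustration; it is not part of any standard library.

```python
import numpy as np

def gauss_decomposition_2x2(a, b, c, d):
    """Return L, D, U with [[a, b], [c, d]] = L @ D @ U, assuming a != 0."""
    if a == 0:
        raise ValueError("Gauss Decomposition requires a != 0")
    L = np.array([[1.0, 0.0], [c / a, 1.0]])
    D = np.array([[a, 0.0], [0.0, (a * d - b * c) / a]])
    U = np.array([[1.0, b / a], [0.0, 1.0]])
    return L, D, U

a, b, c, d = 3.0, -7.0, -3.0, 5.0            # any entries with a != 0 and ad - bc != 0
A = np.array([[a, b], [c, d]])
L, D, U = gauss_decomposition_2x2(a, b, c, d)

print(np.allclose(L @ D @ U, A))             # True: the 3-term LDU factorization
print(np.allclose(L @ (D @ U), A))           # True: D @ U is the upper factor of the 2-term LU form

# Inverse as the product of the inverted factors, U^{-1} D^{-1} L^{-1}:
A_inv = np.linalg.inv(U) @ np.linalg.inv(D) @ np.linalg.inv(L)
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```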
But if we can decompose a general matrix A into simpler components, each of which we know how to invert, then A^{-1} can be calculated in terms of a product, just as we've shown using the Gauss Decomposition. What's more, as in the 2 × 2 case where the condition ad − bc ≠ 0 was needed for the diagonal matrix to be inverted, use of a decomposition often shows what conditions need to be imposed for A^{-1} to exist.

As we saw in Lecture 05, each of the elementary matrices in the 2 × 2 Gauss Decomposition determines a particularly special geometric transformation of the plane: in the 3-term factorization, the lower and upper triangular matrices correspond to shearing the xy-plane in the y- and x-directions, while the diagonal matrix provides a stretching of the plane away from, or towards, the origin (dilation). But without the Gauss Decomposition, would you have guessed that every invertible 2 × 2 matrix with a ≠ 0 determines a transformation of the plane that can be written as the composition of shearings and a dilation?

To handle the case when a = 0 and c ≠ 0, interchange rows by writing:

$$\begin{bmatrix} 0 & b \\ c & d \end{bmatrix}
  = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
    \begin{bmatrix} c & d \\ 0 & b \end{bmatrix}
  = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
    \begin{bmatrix} c & 0 \\ 0 & b \end{bmatrix}
    \begin{bmatrix} 1 & \tfrac{d}{c} \\ 0 & 1 \end{bmatrix}.$$

Do the calculation to see why the lower triangular matrix didn't appear! Notice that the first matrix in this decomposition is again an elementary matrix - the one interchanging row 1 and row 2. As a geometric transformation it reflects the xy-plane in the line y = x. So both of the previous interpretations still apply whether a = 0 or a ≠ 0.

What happens, however, if a = c = 0? Well,

$$\begin{bmatrix} 0 & b \\ 0 & d \end{bmatrix}$$

cannot be reduced to echelon form! This suggests hypotheses for general results:

Fundamental Theorem 1: if an n × n matrix A can be reduced to row echelon form without row interchanges, then A has an LU-decomposition, where L is lower triangular with entries 1 on the diagonal and U is upper triangular.

Fundamental Theorem 2: if an n × n matrix A can be reduced to row echelon form possibly with row interchanges, then A has a PLU-decomposition, where P is a product of row interchange elementary matrices, L is lower triangular with entries 1 on the diagonal and U is upper triangular.

Fundamental Theorem 2 is the version that's most often used in large-scale computations. But rather than prove the existence of either decomposition in generality, let's concentrate on using a given decomposition in solving a system of linear equations.

Example 1: solve the equation Ax = b when

$$A = \begin{bmatrix} 3 & -7 & -2 \\ -3 & 5 & 1 \\ 6 & -4 & 0 \end{bmatrix},
 \qquad
 b = \begin{bmatrix} -7 \\ 5 \\ 2 \end{bmatrix},$$

and A has an LU-decomposition

$$A = LU = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 2 & -5 & 1 \end{bmatrix}
           \begin{bmatrix} 3 & -7 & -2 \\ 0 & -2 & -1 \\ 0 & 0 & -1 \end{bmatrix}.$$

Solution: set y = Ux. Then Ax = Ly = b and so y = L^{-1}b. Now by the known inverse of 3 × 3 lower triangular matrices given in Lecture 06,

$$L^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 3 & 5 & 1 \end{bmatrix},$$

in which case the equation Ax = b reduces to

$$Ux = L^{-1}b
   = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 3 & 5 & 1 \end{bmatrix}
     \begin{bmatrix} -7 \\ 5 \\ 2 \end{bmatrix}
   = \begin{bmatrix} -7 \\ -2 \\ 6 \end{bmatrix}.$$

But then

$$Ux = \begin{bmatrix} 3 & -7 & -2 \\ 0 & -2 & -1 \\ 0 & 0 & -1 \end{bmatrix}
      \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
    = \begin{bmatrix} -7 \\ -2 \\ 6 \end{bmatrix}.$$

Because U is in upper triangular form, this last equation can be solved in several different ways. For example, the associated augmented matrix

$$\left[\begin{array}{ccc|c} 3 & -7 & -2 & -7 \\ 0 & -2 & -1 & -2 \\ 0 & 0 & -1 & 6 \end{array}\right]$$

is in echelon form, so the solutions can be read off by back substitution as we did earlier in Lecture 01:

$$x_3 = -6, \qquad x_2 = 4, \qquad x_1 = 3 .$$

Thus solving the equation Ax = b has been reduced to computations with triangular matrices, which are always much simpler to handle than general matrices, even for matrices beyond 3 × 3.
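Here is a short Python/NumPy sketch of Example 1, carrying out the two triangular solves explicitly: forward substitution for Ly = b, then back substitution for Ux = y. The helper functions are our own illustrative names (a library routine such as scipy.linalg.solve_triangular would do the same job), but the loops make the arithmetic of back substitution visible.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve Ly = b for a lower triangular L with nonzero diagonal."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve Ux = y for an upper triangular U with nonzero diagonal."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[1., 0., 0.], [-1., 1., 0.], [2., -5., 1.]])
U = np.array([[3., -7., -2.], [0., -2., -1.], [0., 0., -1.]])
b = np.array([-7., 5., 2.])

y = forward_substitution(L, b)    # y = L^{-1} b = [-7, -2, 6]
x = back_substitution(U, y)       # x = [3, 4, -6]
print(y, x)
print(np.allclose(L @ U @ x, b))  # True: Ax = b
```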
The price to be paid is that first we must compute LU-decompositions such as

$$A = \begin{bmatrix} 3 & -7 & -2 \\ -3 & 5 & 1 \\ 6 & -4 & 0 \end{bmatrix}
   = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 2 & -5 & 1 \end{bmatrix}
     \begin{bmatrix} 3 & -7 & -2 \\ 0 & -2 & -1 \\ 0 & 0 & -1 \end{bmatrix}
   = LU$$

in Example 1. How can this be done? The key is to remember that U is in echelon form, so the U term should come from row reduction of A (Lecture 02):

$$A \;\xrightarrow{R_2 + R_1}\;
  \begin{bmatrix} 3 & -7 & -2 \\ 0 & -2 & -1 \\ 6 & -4 & 0 \end{bmatrix}
  \;\xrightarrow{R_3 - 2R_1}\;
  \begin{bmatrix} 3 & -7 & -2 \\ 0 & -2 & -1 \\ 0 & 10 & 4 \end{bmatrix}
  \;\xrightarrow{R_3 + 5R_2}\;
  \begin{bmatrix} 3 & -7 & -2 \\ 0 & -2 & -1 \\ 0 & 0 & -1 \end{bmatrix} = U.$$

But this corresponds to left multiplication by the appropriate elementary matrices (Lecture 06), giving

$$E_{23}(5)\,E_{13}(-2)\,E_{12}(1)\,A
  = \begin{bmatrix} 3 & -7 & -2 \\ 0 & -2 & -1 \\ 0 & 0 & -1 \end{bmatrix} = U .$$

Since elementary matrices are invertible, we thus see that A = LU where (Lecture 06)

$$L = E_{12}(1)^{-1}\,E_{13}(-2)^{-1}\,E_{23}(5)^{-1}
  = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
    \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}
    \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -5 & 1 \end{bmatrix}
  = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 2 & -5 & 1 \end{bmatrix},$$

as a computation shows. These hand computations show how U and L can be computed using only the ideas we've developed in previous lectures. In practice, of course, computer algebra systems like Mathematica, MATLAB, Maple and Wolfram Alpha all contain routines for carrying out the calculations electronically for m × n matrices far beyond m = n = 3. Use them!! Nonetheless, these hand calculations can be turned into a proof of Fundamental Theorems 1 and 2. We omit the details!
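The elimination above can be automated directly: zero out the entries below each pivot, and record each multiplier (with its sign flipped) in the corresponding entry of L. The following is a minimal Python/NumPy sketch under the assumption of Fundamental Theorem 1, i.e. that no row interchanges are needed; for the general PLU case one would typically reach for a library routine such as scipy.linalg.lu instead.

```python
import numpy as np

def lu_no_pivot(A):
    """LU decomposition by Gaussian elimination without row interchanges.

    Returns (L, U) with A = L @ U, L unit lower triangular and U upper
    triangular. Assumes every pivot encountered is nonzero.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for j in range(n - 1):                 # eliminate below the pivot U[j, j]
        if U[j, j] == 0:
            raise ValueError("zero pivot: a row interchange (PLU) is needed")
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]          # row operation is R_i - m R_j
            U[i, j:] -= m * U[j, j:]
            L[i, j] = m                    # undoing that operation puts +m into L
    return L, U

A = np.array([[3., -7., -2.], [-3., 5., 1.], [6., -4., 0.]])
L, U = lu_no_pivot(A)
print(L)                      # [[1, 0, 0], [-1, 1, 0], [2, -5, 1]]
print(U)                      # [[3, -7, -2], [0, -2, -1], [0, 0, -1]]
print(np.allclose(L @ U, A))  # True
```

Running this on the matrix of Example 1 reproduces exactly the L and U found by hand above.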