
H. Riesel, Prime Numbers and Computer Methods for Factorization: Second Edition, Modern Birkhäuser Classics, DOI 10.1007/978-0-8176-8298-9, © Springer Science+Business Media, LLC 2012

BASIC CONCEPTS IN HIGHER ALGEBRA

nd < x < (n + 1)d. But in such a case the integer

y = x - d - d - ... - d = x - nd

is also a member of M, because of its construction. But then obviously we also have 0 < x - nd = y < d, which implies that there is a positive number y < d in M, contradicting the definition of d as the smallest positive integer of M. Hence it is impossible for the module to contain any integer ≠ nd, which proves the theorem.

If a module contains the numbers a_i, i = 1, 2, 3, ..., it also contains all (integer) multiples of these numbers and all sums and differences of these multiples (this is what is termed all linear combinations with integer coefficients of the numbers a_i). If this completely exhausts the module, every element x of the module can be written in the form

x = n_1 a_1 + n_2 a_2 + n_3 a_3 + ...,   (A1.1)

where the n_i's are arbitrary integers. If none of the numbers a_i can be written as a linear combination with integer coefficients of the other a_i's, then the a_i's are called generators of the module, and the module (A1.1) is also the smallest module containing these generators. In this case the representation (A1.1) for a general element of the module is also the simplest possible.

Example. The module having the generators 1, √2 and π has the elements x = m + n√2 + pπ, where m, n and p are arbitrary integers.

Theorem A1.1 above can also be re-formulated as: Every integer module is generated by its smallest positive element.

Euclid's Algorithm

An important problem is: Given the integers a and b, what is the smallest module M containing a and b as elements? Since M is obviously an integer module and since each integer module consists of the integral multiples of its generator d, the problem is to find d.
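The statement that the smallest module containing two integers consists of their integer linear combinations can be checked numerically. The following sketch is in Python rather than the book's PASCAL, and the sample values 12 and 18 are illustrative choices, not taken from the text: a finite window of the combinations m·a + n·b already shows that the smallest positive element divides every other element, i.e. that it generates the module.

```python
# A finite window of the module {m*a + n*b : m, n integers}.
# Sample values 12 and 18 are illustrative assumptions.
a, b = 12, 18
elems = {m * a + n * b for m in range(-20, 21) for n in range(-20, 21)}

d = min(e for e in elems if e > 0)          # smallest positive element
print(d)                                    # prints 6
print(all(e % d == 0 for e in elems))       # every element a multiple of d: True
```

Here d comes out as 6 = GCD(12, 18), in agreement with the discussion that follows.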
Since both a and b must be multiples of d, a = a_1 d and b = b_1 d, d certainly is a common divisor of a and b. Since we are looking for the smallest possible M, d obviously has to be as large as possible. Thus, d is the greatest common divisor (GCD) of a and b, denoted by d = GCD(a, b). Effective computation of d from a and b is carried out by Euclid's algorithm which, by repeated subtractions of integers known to belong to the module M, finally arrives at d.

The technicalities of Euclid's algorithm may be described in this way: If a = b, obviously d = GCD(a, b) = a, and the problem is solved. If a ≠ b, we can, if necessary, re-name the integers and call the larger of the two integers a. (Without loss of generality we consider only positive integers a and b, since -n has the same divisors as n.) Now subtract b from a as many times as possible to find the least non-negative remainder:

a - b - b - ... - b = r.

These subtractions are performed as a division of a by b, giving the quotient q and the remainder r:

a = bq + r,   with 0 ≤ r < b.   (A1.2)

The integer r belongs to the module M by construction. If r > 0, a still smaller member r_1 of M can be found by a similar procedure from b and r, which both belong to M:

b = r q_1 + r_1,   with 0 ≤ r_1 < r.   (A1.3)

In this manner, a decreasing sequence of remainders can be found until some remainder r_n becomes = 0. We then have

a > b > r > r_1 > ... > r_{n-1} > r_n = 0.   (A1.4)

Some r_n will = 0 sooner or later, because every strictly decreasing sequence of positive integers can have only a finite number of elements. The integer r_{n-1} is the desired GCD of a and b. This follows from two facts: Firstly, every common divisor of a and b is found in each of the remainders r_i. Secondly, all the integers a, b, r, r_1, r_2, ..., r_{n-2} are multiples of r_{n-1}.

Example. Find GCD(8991, 3293). The computation runs:

8991 = 2 · 3293 + 2405
3293 = 1 · 2405 + 888
2405 = 2 · 888 + 629
 888 = 1 · 629 + 259
 629 = 2 · 259 + 111
 259 = 2 · 111 + 37
 111 = 3 · 37.

Thus d = 37, and we find a = 37 · 243, b = 37 · 89.

Since -n has the same divisors as n, Euclid's algorithm operates equally well when using the smallest absolute remainder instead of the smallest positive remainder, as we have done above. Normally this cuts down the number of divisions required. This variant, performed on the same integers as above, runs:

8991 = 3 · 3293 - 888
3293 = 4 · 888 - 259
 888 = 3 · 259 + 111
 259 = 2 · 111 + 37
 111 = 3 · 37.

The Labour Involved in Euclid's Algorithm

The worst possible case in Euclid's algorithm is when all successive quotients = 1, because in this case the remainders decrease as slowly as they possibly can, and the computation will take a maximal number of steps for integers of a given size. It can be shown that this case occurs when a/b ≈ λ = (1 + √5)/2 ≈ 1.618, where the maximal number of steps is about log_λ a, which is 4.8 log₁₀ a. The average number of steps for randomly chosen integers a and b is much smaller and turns out to be 1.94 log₁₀ a. This slow increase of the computational labour as a grows is very favourable indeed, because it implies that if we double the length (in digits) of a and b, the maximal number of steps needed to find GCD(a, b) is only doubled. If we also take into account that the multiplication and division labour, using reasonably simple multiple-precision algorithms in a computer, grows quadratically with the length of the numbers involved, then the computing labour for Euclid's algorithm in total is at most

O((log max(a, b))³).   (A1.5)

This growth of the labour involved in executing an algorithm is called polynomial growth (because it is of polynomial order of log N, N being the size of the number(s) involved). The best algorithms found in number-theoretic computations are of polynomial order of growth.
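Both division chains above are easy to reproduce. The following sketch is in Python (the book's own programs are in PASCAL); it prints the successive divisions a = q·b + r and counts the steps for the ordinary and the least-absolute-remainder variants, using the example values GCD(8991, 3293) from the text.

```python
def euclid_chain(a, b):
    """Return GCD(a, b) and the list of division steps a = q*b + r."""
    steps = []
    while b != 0:
        q, r = divmod(a, b)
        steps.append((a, q, b, r))
        a, b = b, r
    return a, steps

def euclid_abs(a, b):
    """Least-absolute-remainder variant: quotient rounded to nearest."""
    count = 0
    while b != 0:
        q = (a + b // 2) // b        # quotient rounded to the nearest integer
        a, b = b, abs(a - q * b)     # -n has the same divisors as n
        count += 1
    return a, count

d, chain = euclid_chain(8991, 3293)
for a_, q, b_, r in chain:
    print(f"{a_} = {q} * {b_} + {r}")
print("GCD =", d, "in", len(chain), "divisions")       # 37 in 7 divisions

d_abs, n_abs = euclid_abs(8991, 3293)
print("GCD =", d_abs, "in", n_abs, "divisions")        # 37 in 5 divisions
```

The step counts, 7 divisions against 5, match the two chains worked out above.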
A Definition Taken From the Theory of Algorithms

The amount of computational labour required to solve a certain mathematical problem usually depends on the size of the problem given, i.e., on the number of variables or on the size of the numbers given, etc. For instance, the work needed to solve a system of n linear equations is proportional to n³, and the work involved in computing the value of the product of two very large numbers, using the normal way to calculate products, is proportional to the product of the lengths of the numbers, which means that numbers twice as large will demand computations four times as laborious.

Within the theory of algorithms, it is important to study how much computational labour is required to carry out various existing algorithms. In this context, the size of a variable is usually defined using its information content, which is given by a number proportional to the length of the variable measured in digits. The amount of computational labour needed to carry out the algorithm is then deduced as a function of the information content of each of the variables occurring as input to the problem in question. That is the reason why the work involved in Euclid's algorithm is said to grow cubically (tacitly understood: "with the length of the larger of the two numbers a and b given as input to the algorithm"). The most important algorithms in number theory are those which have only polynomial order of growth, since for these algorithms the work needed grows only in proportion to some power of log N. This is very favourable indeed, and such algorithms are applicable to very large numbers without leading to impossibly long execution times for computer runs. As examples, we may mention simple compositeness tests, such as Fermat's test on p. 85, and the solution of a quadratic congruence with prime modulus on p. 284.
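The worst case discussed above, in which all quotients equal 1, is realized by consecutive Fibonacci numbers, since their ratio tends to λ = (1 + √5)/2. A quick numerical check (a Python sketch, not from the book) confirms that the step count for such a pair stays within the 4.8 log₁₀ a bound:

```python
import math

def euclid_steps(a, b):
    """Count the division steps in Euclid's algorithm."""
    n = 0
    while b != 0:
        a, b = b, a % b
        n += 1
    return n

# Consecutive Fibonacci numbers make every quotient 1 -- the slowest case.
f, g = 1, 1
while g < 10**6:
    f, g = g, f + g          # f, g are consecutive Fibonacci numbers

worst = euclid_steps(g, f)
bound = 4.8 * math.log10(g)
print(worst, "steps; bound about", round(bound, 1))
```

For the first Fibonacci pair above 10⁶ this gives 29 steps against a bound of about 29.4, so the estimate is essentially sharp in the worst case.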
For many algorithms there is a trade-off between the computing time and the storage requirements when implementing the algorithm. This aspect is most important in all practical work with many of the algorithms described in this book. As an example we give factor search by trial division, in which very little storage is needed if implemented as on p. 143. If much storage is available, however, the algorithm can be speeded up considerably by storing a large prime table in the computer, as described on p. 8. Another example is the computation of φ(x, a) on pp. 14-17, where the function values are taken from a table for all small values of a and have to be computed recursively for the larger values of a. The border line between the two cases depends on the storage available in the computer, and affects the speed of the computation in the way that a larger storage admits a faster computation of φ(x, a) for any fixed, large value of a.

A Computer Program for Euclid's Algorithm

A PASCAL function for the calculation of GCD(a, b) by means of Euclid's algorithm is shown below:

FUNCTION Euclid(a,b : INTEGER) : INTEGER;
{Computes GCD(a,b) with Euclid's algorithm}
VAR m,n,r : INTEGER;
BEGIN
  m:=a; n:=b;
  WHILE n <> 0 DO
    BEGIN r:=m MOD n; m:=n; n:=r END;
  Euclid:=m
END {Euclid};

Exercise A1.1.
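Since GCD(a, b) generates the smallest module containing a and b, it can itself be written as an integer linear combination of a and b. The extended form of Euclid's algorithm computes such a combination; it is not part of the PASCAL program above, and the sketch below is in Python, carrying the quotients through the division chain to accumulate the coefficients.

```python
def extended_euclid(a, b):
    """Return (d, x, y) with d = GCD(a, b) and d = a*x + b*y."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b != 0:
        q, r = divmod(a, b)
        a, b = b, r
        x0, x1 = x1, x0 - q * x1   # update coefficient of the first argument
        y0, y1 = y1, y0 - q * y1   # update coefficient of the second argument
    return a, x0, y0

d, x, y = extended_euclid(8991, 3293)
print(d)                    # 37, as in the example above
print(8991 * x + 3293 * y)  # also 37: d lies in the module generated by a and b
```

Run on the example pair from the text, this expresses 37 as an integer combination of 8991 and 3293, exhibiting d as an element of the module.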