
Basic Algorithms in Number Theory

Joe Buhler, Stan Wagon

July 29, 2007

Abstract. A variety of number-theoretic algorithms are described and analyzed with an eye to providing background for other articles in this volume. Topics include Euclid's algorithm, continued fractions, factoring, primality, modular congruences, and quadratic fields.

1 Introduction

Our subject combines the ancient charms of number theory with the modern fascination with algorithmic thinking. Newcomers to the field can appreciate this juxtaposition by studying the many elementary pearls in the subject. The aim here is to describe a few of these gems with the combined goals of preparing the reader for subsequent articles in this volume, and luring him or her into pursuing full-length treatments of the subject, such as [2], [5], [10], [9], [16], [15], [37]. Many details will be left to the reader, and we will assume that he or she knows (or can look up) basic facts from number theory, algebra, and elementary programming.

2 Complexity, informally

Algorithms take input and produce output. The complexity of an algorithm A is a function C_A(n), defined to be the maximum, over all inputs I of size n, of the cost of running A on input I. Cost is often measured in terms of the number of "elemental operations" that the algorithm performs and is intended, in suitable contexts, to approximate the running time of actual computer programs implementing these algorithms.

A formalization of these ideas requires precise definitions for "algorithm," "input," "output," "cost," "elemental operation," and so on. We will give none. Instead, this section gives a series of algorithms answering number-theoretic questions, and then discusses their complexity. This permits a quick survey of some algorithms of interest in number theory, and a discussion of complexity theory from a naive number theorist's point of view. Fortunately, this informal and intuitive approach to complexity is usually sufficient for the purposes of algorithmic number theory. Precise foundations can be found in many texts on theoretical computer science or algorithmic complexity [13], [17], [19].

The first problem arises in elementary school.

Problem 1. Multiplication: Given integers x and y, find their product xy.

As stated, the question is woefully imprecise. We interpret it in the following natural (but by no means only possible) way. An algorithm to solve this problem takes the base-b representation of the integers x and y as input, for some base b > 1, given as strings of symbols. The algorithm (idealized computer program or Turing machine) follows a well-defined ("deterministic") procedure to produce a string of output symbols which is the base-b representation of the product xy. In practice, one might expect b to be 2, 10, 2^32, 2^64, etc.

The natural notion of the size of an integer x is the total number of symbols (base-b digits) in the input, perhaps augmented by a small constant to delimit the number and give its sign. For the sake of definiteness, we define the base-b size of x to be

    size_b(x) := 1 + ⌈log_b(1 + |x|)⌉,

where log_b is the logarithm to the base b, and ⌈u⌉ is the ceiling of u, i.e., the smallest integer greater than or equal to u. The size of an integer x is O(log(|x|)), where g(x) = O(f(x)) is a shorthand statement saying that g is in the class of functions such that there is a constant C with |g(x)| ≤ C|f(x)| for all sufficiently large x. Note that O(log_b(x)) = O(log_b′(x)) for any b, b′ > 1.
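To make the definition concrete, here is a minimal Python sketch of the base-b size function just defined; the name size_b is ours, and the floating-point logarithm is adequate only for modestly sized x (an exact computation would count base-b digits directly).

import math

def size_b(x, b=2):
    """Base-b size of an integer x: 1 + ceil(log_b(1 + |x|))."""
    # Floating-point log is adequate for modest x; for very large x one
    # would count base-b digits exactly instead.
    return 1 + math.ceil(math.log(1 + abs(x), b))

# The base-2 size tracks the bit length (up to the delimiting constant),
# and changing the base changes the size only by a constant factor.
for x in (0, 5, 100, 10**6):
    print(x, size_b(x, 2), size_b(x, 10))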
In particular, if we are interested in complexity only up to a constant factor, the choice of b is irrelevant.

The multiplication algorithm that we all learned as youngsters takes O(n^2) digit operations on input of size n. More precisely, if x and y have size at most n, then approximately n^2 digit-sized multiplications and n additions of n-digit intermediate products are required. Notice that the O() notation gracefully summarizes the running time of the algorithm; e.g., the complexity O(n^2) is independent of the base b, the precise details of measuring the size of an integer, the definition of the size of two inputs (as the maximum of the two input sizes, or as their total), etc.

An algorithm is said to take polynomial time if for some k its complexity is O(n^k) for input of size n. Although this is a very flexible definition, with unclear relevance to computational practice, it has proven to be remarkably robust. In fact, it is sometimes reasonable to take "polynomial time" as synonymous with "efficient," and in any case the notion has proved useful in both theory and practice. Of course, once it is known that a problem can be solved in polynomial time, it is interesting to find the smallest possible exponent k. For instance, in the 1970s Schönhage ([30]) discovered a striking algorithm for Multiplication that takes time O(n log(n) log log(n)) to multiply two n-digit integers. This unexpectedly low complexity led to a number of related "fast" algorithms; see [4] in this volume for arithmetic and algebraic examples.

The elemental operations above are operations on single digits. The resulting notion of the complexity (running time) of an algorithm is called bit complexity, since if b = 2 the underlying operations act on bits. In other contexts it might be more useful to assume that any arithmetic operation takes constant time (e.g., if all integers are known beforehand to fit into a single computer word). When the complexity of an algorithm is defined by counting arithmetic operations, the result is said to be the arithmetic complexity of the algorithm. In this model the cost of a single multiplication is O(1), showing that complexity depends dramatically on the underlying computational model.

Problem 2. Exponentiation: Given x and a nonnegative integer n, compute x^n.

Again, the problem is underspecified. We will assume that x is an element of a set that has a well-defined operation (associative with an identity element) that is written multiplicatively; moreover, we will measure cost as the number of such operations required to compute x^n on input x and n. We take the size of the input to be the size of the integer n.

Although x^16 can be computed with 15 multiplications in an obvious way, it is faster to compute it by 4 squarings. More generally, the binary expansion n = ∑_i a_i 2^i, a_i ∈ {0, 1}, implies that

    x^n = x^{a_0} (x^2)^{a_1} (x^4)^{a_2} ···,    (1)

which suggests a clever way to interleave multiplications and squarings:

Right-to-Left Exponentiation
Input: x as above, and a nonnegative integer n
Output: x^n
1. y := 1
2. While n > 0:
       if n is odd, y := xy   (a_i is 1)
       x := x^2, n := ⌊n/2⌋
3. Return y

Here ":=" denotes assignment of values to variables, "1" denotes the identity for the operation, and the floor ⌊u⌋ is the largest integer less than or equal to u.
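As a concrete companion to the pseudocode, the following Python sketch implements the right-to-left method for an arbitrary associative operation; the function name exp_right_to_left and the argument names mul and identity are ours.

def exp_right_to_left(x, n, mul, identity):
    # Right-to-left binary exponentiation: O(log n) applications of mul.
    y = identity
    while n > 0:
        if n % 2 == 1:      # the current bit a_i is 1
            y = mul(x, y)
        x = mul(x, x)       # square
        n //= 2             # n := floor(n/2)
    return y

# Example: modular exponentiation, cross-checked against Python's built-in pow.
mod = 1000003
print(exp_right_to_left(3, 1000, lambda a, b: (a * b) % mod, 1))
print(pow(3, 1000, mod))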
The correctness of the algorithm is reasonably clear from (1), since for k ≥ 0 the power x^{2^k} is multiplied into y if and only if the k-th bit a_k of the binary expansion of n is nonzero. It can be proved more formally by showing by induction that at the beginning of Step 2 the equation X^N = x^n y holds, where X and N denote the initial values of the variables x and n. When n is 0 the equation says that X^N = y, so that y is the desired power.

The usual inductive definition of Exp(x, n) := x^n gives an obvious recursive algorithm:

    Exp(x, n) = 1                            if n = 0
                Exp(x^2, n/2)                if n > 0 is even       (2)
                x · Exp(x^2, (n − 1)/2)      if n is odd.

Experienced programmers often implement recursive versions of algorithms because of their elegance and obvious correctness, and when necessary then convert them to equivalent, and possibly faster, iterative (non-recursive) algorithms. If this is done to the recursive program above, the result is the Right-to-Left algorithm. Curiously, if the clause for even n is changed to the mathematically equivalent

    Exp(x, n) = Exp(x, n/2)^2,   n even

(and similarly in the odd clause) then the corresponding iterative algorithm is genuinely different.

Left-to-Right Exponentiation
Input: x, a nonnegative integer n, and a power of two m such that n < m
Output: x^n
1. y := 1
2. While m > 1:
       m := m/2, y := y^2
       If n ≥ m then y := xy, n := n − m
3. Return y

Correctness follows inductively by proving that at the beginning of Step 2, n < m and y^m x^n = x^N, where N denotes the initial value of n. In contrast to the earlier algorithm, this version consumes the bits a_i (in the binary expansion of n) starting with the leftmost, i.e., most significant, bit.

The complexity of any of the versions of this algorithm (collectively called Exp in the sequel) is O(log(n)), since the number of operations is bounded by 2 · size_2(n). This remarkable efficiency is put to innumerable uses in algorithmic number theory, as will be seen. Note that the naive idea of repeatedly multiplying by x takes time O(n), which is exponential in the size of the input.

Remark 3. In a specific but important special case the left-to-right version of Exp is better than the right-to-left version. Suppose that our operation is "multiplying modulo N" and that x is small relative to N. Then multiplications by the original x are likely to take less time than modular multiplications by an arbitrary integer X, 0 ≤ X < N.
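To illustrate Remark 3, here is a Python sketch of the left-to-right method specialized to multiplication modulo N, following the pseudocode above; the helper name exp_left_to_right is ours. Every non-squaring step multiplies by the original x, which is cheap when x is small relative to N.

def exp_left_to_right(x, n, N):
    # Choose a power of two m with n < m.
    m = 1
    while m <= n:
        m *= 2
    y = 1
    # Invariant at the top of the loop: n < m and y^m * x^n = x^n0 (mod N),
    # where n0 is the original exponent.
    while m > 1:
        m //= 2
        y = (y * y) % N            # squaring
        if n >= m:
            y = (x * y) % N        # multiplication by the original (small) x
            n -= m
    return y

# Quick check against the built-in modular exponentiation.
print(exp_left_to_right(3, 1000, 1000003), pow(3, 1000, 1000003))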