
On the Complexity of Real Functions

Mark Braverman*
Department of Computer Science, University of Toronto
[email protected]

* Research is partially supported by an NSERC postgraduate scholarship.

Abstract

We establish a new connection between the two most common traditions in the theory of real computation, the Blum-Shub-Smale model and the Computable Analysis approach. We then use the connection to develop a notion of computability and complexity of functions over the reals that can be viewed as an extension of both models. We argue that this notion is very natural when one tries to determine just how "difficult" a certain function is, for a very rich class of functions.

1 Introduction

Questions of computability and complexity over the reals are of extreme importance for understanding the connections between nature and computations. Assessing the possibilities of scientific computing in simulating and predicting natural processes depends on an agreed upon and well studied notion of "real computation". Addressing issues of physics and the Church-Turing Thesis also requires a strong notion of computability over continuous spaces [23].

In the discrete setting, where the objects are strings from {0,1}*, the concepts of computability and complexity are very well studied. There are different approaches that have been proved to yield equivalent definitions of computability and (almost) equivalent definitions of complexity. From the Formal Logic point of view we have the notion of recursiveness, the Turing Machine gives a machine-based notion, and modeling the usual computer yields the notion of an abstract RAM. All these converge to the same well accepted concept of computability.

In the continuous setting, where the objects are real numbers, sets and functions over R, the situation is far less clear. Real Computation has been studied since Turing's original 1936 paper [21], where he introduced the Turing Machine. In that paper he dealt with the computability notion of a single real number. Today, we still do not have a unified and widely accepted notion of real computability and complexity.

Consider the two simplest objects over R^n: real sets and real functions. Several different approaches to the computability of these objects have been proposed over the last seven decades. Unlike the discrete case, most approaches deal with computability of functions before the computability of sets. In the current paper we consider the two approaches that are most common today: the tradition of Computable Analysis (the "bit model") and the Blum-Shub-Smale approach. These two approaches have been developed to capture different aspects of real computation, and in general they are not equivalent.

The bit model is very natural as a model for scientific computing when continuous functions on compact domains are involved. In particular, the "calculator" functions, such as √x, sin x and log x, are bit-computable, at least on some properly chosen domains. None of these functions are computable in the BSS model, because computability in the BSS model requires the function to have a very special algebraic structure (essentially being piecewise rational). This makes the BSS notion inapplicable to scientific computing unless some modifications are made. The BSS model mimics some aspects of the actual way numerical analysts think about problems, and in fact [4] contains many strong results on solving fundamental numerical problems in algebra, such as finding roots of a polynomial. The model's disadvantage is the difficulty in interpreting negative results. The function e^x and its graph are not BSS computable, and yet we can easily compute the exponential function and plot its graph. Modifications that can be made to the model to deal with these problems have been discussed in [20]. To our knowledge, these modifications have not been formalized.

On the other hand, there are very natural and simple functions that are BSS computable but not bit-computable. The simplest step function χ_[0,∞) is an example.

In the set setting, the models also often give fundamentally different answers to questions of computability. Consider, for example, the Koch snowflake K, the Julia set J = J_{z^2+1/4}, and the Mandelbrot set M (Fig. 1). In the BSS model none of these sets are computable (see [4]). In the bit model, K is very easy to draw and is computable, J is computable, although not as easily as K [3], and the question of whether M is computable is open and depends on some deep conjectures from complex dynamics [14]. For most "common" sets BSS is a more restrictive model, but this is not always the case, as demonstrated in Section 3.2.

Figure 1. The sets K (left), J (center), and M (right).

We propose three simple natural modifications to the BSS model. Firstly, we allow the machines to use only computable constants. This amounts to working over the smaller field of the computable reals. Secondly, we allow the machines to make some errors. This has been done informally before by analyzing the "error + condition number" of a problem in the BSS model. Thirdly, we require an a priori estimate on the running time, given as a function of the error parameter ε. In Section 3.2 we present an example showing the importance of this condition. We show that under these modifications the BSS model becomes equivalent to the bit model for sets. This involves simulating an infinite-precision machine with a finite-precision one. It can only be done symbolically, using the algebraic structure of the machine's constants and Q, followed by a general quantifier elimination over R.

The primary goals of this paper are:

• to show that under the reasonable modifications mentioned above, the two models become computationally equivalent for sets;

• to propose a new notion of computability for functions which extends both models, and takes advantage of some positive features of both; and

• based on the above, to propose the "right" notion of computational complexity for discontinuous functions, extending naturally from the continuous case.

This gives us the "right" notion of complexity for functions for which no such notion previously existed, for example the 2-valued function √ : C → C, as well as its single-valued branches defined on subsets of C.

The paper is organized as follows. In Section 2 we discuss the bit model, and present some old and some new results. In Section 3 we discuss the BSS model and possible modifications to it. In Section 4 we propose a new computability and complexity definition for some discontinuous and multi-valued functions.

2 The Bit Model

2.1 The Model of Computation

The computability of functions in the bit model as we know it today was first proposed by Grzegorczyk [13] and Lacombe [16]. It has since been developed and generalized. More recent references on the subject include [15], [18], and [22].

The basic model of computation here is an oracle Turing Machine. Denote by D = {m/2^k : m ∈ Z, k ∈ N} the set of dyadic numbers. These are the rationals that have a finite binary expansion. An oracle for a number x is a function φ : N → D such that |φ(n) − x| < 2^-n. The oracle terminology is just a natural way to separate the complexity of computing x from the complexity of computing on x as a parameter. For most purposes one can think of the oracle φ for x as an infinite tape containing the binary expansion of x.

Consider a function f : R → R. In plain language, a program M computing f would output a good approximation of f(x), provided approximations of the input x. More formally, the oracle machine M^φ(n) outputs a 2^-n-approximation of f(x) for any valid oracle φ for x. The definition extends naturally to a function f : R^k → R. Here M = M^{φ1,φ2,...,φk}(n) is allowed to query each of the k parameters with an arbitrarily good precision and is required to output the value of f with precision 2^-n.
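To make the oracle mechanics concrete, here is a minimal Python sketch (an illustration, not taken from the paper) of a machine computing f(x) = x^2 on [0, 2] in the sense above. The oracle is modeled as a callable returning dyadic approximations as exact Fractions; the only part specific to this f is the precision bookkeeping, i.e. querying the input at precision 2^-(n+3).

```python
from fractions import Fraction

def oracle_for(x):
    """Return an oracle phi: N -> dyadics with |phi(n) - x| < 2^-n.
    Here x is a Fraction, so rounding to 2^-(n+1) precision suffices."""
    def phi(n):
        scale = 2 ** (n + 1)
        return Fraction(round(x * scale), scale)   # dyadic within 2^-(n+1) of x
    return phi

def square_machine(phi, n):
    """Oracle machine for f(x) = x^2 on [0, 2]: outputs a dyadic q with
    |q - x^2| < 2^-n.  If |d - x| < 2^-m and 0 <= x <= 2, then
    |d^2 - x^2| = |d - x| * |d + x| < 5 * 2^-m, so m = n + 3 is enough."""
    d = phi(n + 3)                       # query the input with precision 2^-(n+3)
    q = d * d
    # round the exact square back to a dyadic with denominator 2^(n+1);
    # this adds at most 2^-(n+2) of error, keeping the total below 2^-n
    scale = 2 ** (n + 1)
    return Fraction(round(q * scale), scale)

if __name__ == "__main__":
    phi = oracle_for(Fraction(3, 2))     # x = 1.5
    for n in (5, 10, 20):
        approx = square_machine(phi, n)
        assert abs(approx - Fraction(9, 4)) < Fraction(1, 2 ** n)
        print(n, float(approx))
```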

2.2 Basic Properties and Examples

One of the main properties of computable functions is that they are continuous. For a computable f : R → R, the machine M_f^φ(n) can query the input x via φ with only a finite level of precision before it outputs an approximation q of f(x). This means that q should be a good approximation of f(x') for all x' in some small neighborhood of x. Thus f(x') should be close to f(x) whenever x' is close enough to x. A formal proof can be found in most standard references, e.g. in [22].

The computable ⇒ continuous property implies that even the simplest discontinuous function, the step function

    s0(x) = 0  if x < 0
    s0(x) = 1  if x ≥ 0                                          (1)

is not computable under this definition. This is contrary to the intuition that s0 must be a "simple" function. One could also argue that some physical systems, e.g. quantum energy levels, are best described using step functions and other simple discontinuous functions. One of the goals of the present work is to develop notions which deal with this problem.

We recall the classical definition of computable real numbers, introduced by Turing in [21]. Informally, this definition says that a real number x is computable if we can compute arbitrarily good approximations of x.

Definition 1 A number x ∈ R is computable if and only if a representation φ of x as described above can be computed.

A constant function f(x) = a is computable if and only if the number a is computable. Most "standard" continuous functions are computable in this model. For instance, the exponential function f(x) = e^x is computable on R using the Taylor series expansion e^x = Σ_{k=0}^∞ x^k/k!. It is not hard to estimate the number of terms and the precision of x we need to consider in order to get a 2^-n-approximation of e^x.

Denote by C the set of computable real numbers. Then C is closed under applications of any computable functions. In particular C is a field. Using the convergence of the Newton method for approximating roots of polynomials, one sees that the field C + iC of computable complex numbers is algebraically closed.

The time complexity T_f(n) for a function f is the worst-case time complexity of computing a 2^-n-approximation of f(x) given an oracle φ for x. We charge m time units for querying φ(m). Note that this definition is completely in the classical setting, and all the usual complexity function classes are defined here. For continuous functions, the complexity notion backs our intuitive perception of "hard" vs "easy" functions. For example, all the continuous "calculator" functions are computable in time not exceeding O(n^3).

2.3 Computability and Complexity of Real Sets

Definitions of effective subsets of R^n based on the concept of computability have been proposed as early as the mid 50's [17]. We refer the reader to [10] and [22] for a more detailed discussion. By "computing" a set S, we mean generating increasingly precise "images" of S. We restrict our attention to bounded sets. Suppose that we want to draw S on a display whose pixels have size 2^-n. A reasonable coloring of the picture should satisfy two conditions:

1. If a pixel contains a point from S, then it is colored black. This ensures that the entire set appears on the screen.

2. If a pixel is far (say 2^-n-far) from S, then it is colored white. This ensures that the picture is a faithful image of S.

We can take the pixels to be balls of radius 2^-n with a dyadic center d ∈ D^n. Formally,

Definition 2 We say that a bounded set S in R^n is bit-computable if there is a machine M(d, n) computing a function from the family

    f(d, n) = 1       if B(d, 2^-n) ∩ S ≠ ∅
    f(d, n) = 0       if B(d, 2·2^-n) ∩ S = ∅
    f(d, n) = 0 or 1  otherwise                                  (2)

On Fig. 2 we see some sample values of the function f. It should be noted that the definition remains the same if we take square pixels instead of the round ones. It is also unchanged if we replace the ratio between the inner and the outer radius with some α > 1 instead of 2.

Figure 2. Sample values of f. The radius of the inner circle is 2^-n.
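As an illustration of Definition 2 (not taken from the paper), the following sketch implements one valid choice of f(d, n) from the family (2) for the concrete set S = the closed disk of radius 1/2 centered at the origin in R^2. Because the distance comparison can be done on squared quantities, exact rational arithmetic suffices, and the "0 or 1 otherwise" slack is never exercised.

```python
from fractions import Fraction

R = Fraction(1, 2)   # S = closed disk of radius 1/2 centered at the origin

def pixel_color(d, n):
    """A function from the family (2) for the disk S:
    returns 1 whenever B(d, 2^-n) meets S, and 0 whenever B(d, 2*2^-n) misses S."""
    dx, dy = d                               # dyadic center of the pixel, as Fractions
    r = Fraction(1, 2 ** n)                  # inner pixel radius 2^-n
    dist_sq = dx * dx + dy * dy              # |d|^2, computed exactly
    # B(d, r) intersects the disk  iff  |d| <= R + r  iff  |d|^2 <= (R + r)^2
    return 1 if dist_sq <= (R + r) ** 2 else 0

if __name__ == "__main__":
    n = 3
    # sample a small dyadic grid of pixel centers crossing the boundary of S
    for i in range(-10, 11):
        d = (Fraction(i, 8), Fraction(1, 4))
        print(i, pixel_color(d, n), end="  ")
    print()
```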

The time complexity T_S(n) is defined as the worst-case running time of a machine M(d, n) computing a function from the family (2). Low time complexity means that it is easy to draw deep zoom-ins of the set S. We say that S is poly-time computable if T_S(n) is bounded by a polynomial. Suppose that we want to zoom into S with magnification 2^n, so that the pixels have size 2^-n; deciding the color of each pixel then costs at most T_S(n) steps. But we are drawing S on a finite-resolution display, and we will only need to draw 1000 · 1000 = 10^6 pixels. Thus the running time would be O(10^6 · T_S(n)) = O(T_S(n)). This running time is polynomial in n if and only if T_S(n) is polynomial. Hence T_S(n) reflects the 'true' cost of zooming into S.

So far, one might have been left with the impression that the computability definition above is not very robust. In fact, on the contrary, it is equivalent to several other reasonable definitions. For example, S is computable if and only if the distance function, d_S(x) = inf_{y∈S} |x − y|, is computable.
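One direction of this equivalence is easy to make concrete. The sketch below (an illustration, not the paper's proof) shows that any procedure evaluating d_S to prescribed precision yields a function from the family (2): compute d_S(d) to within 2^-(n+2) and compare against the threshold 1.5 · 2^-n. The parameter dist_S stands for any routine returning a rational q with |q − d_S(x)| ≤ 2^-prec; the example distance routine for a segment is likewise only an assumption of this sketch.

```python
from fractions import Fraction

def pixel_from_distance(dist_S, d, n):
    """Given dist_S(x, prec) with |dist_S(x, prec) - d_S(x)| <= 2^-prec,
    return a valid value of f(d, n) from the family (2).

    Correctness: if B(d, 2^-n) meets S then d_S(d) <= 2^-n, so
    q <= 1.25 * 2^-n and we output 1; if B(d, 2*2^-n) misses S then
    d_S(d) >= 2 * 2^-n, so q >= 1.75 * 2^-n and we output 0."""
    q = dist_S(d, n + 2)                        # approximate d_S(d) within 2^-(n+2)
    threshold = Fraction(3, 2) * Fraction(1, 2 ** n)
    return 1 if q <= threshold else 0

def dist_segment(d, prec):
    """Distance from d to the segment S = [0, 1] x {0}, to within 2^-prec."""
    x, y = d
    horiz = max(Fraction(0), -x, x - 1)         # horizontal distance to [0, 1]
    # the true distance sqrt(horiz^2 + y^2) is irrational in general;
    # a dyadic-precision upper bound found by bisection is enough here
    lo, hi = Fraction(0), horiz + abs(y)
    while hi - lo > Fraction(1, 2 ** prec):
        mid = (lo + hi) / 2
        if mid * mid >= horiz * horiz + y * y:
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    print(pixel_from_distance(dist_segment, (Fraction(1, 2), Fraction(1, 16)), 3))
```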

We now present a more subtle equivalent definition. This definition was first introduced by Chou and Ko in [12] under the name of strong recognizability. It wasn't known at the time that this definition is, in fact, equivalent to the usual bit-computability definition (Definition 2), as we will show below. The idea is to relax the conditions of Definition 2: we are given a point x as an oracle to M^φ(n), and we must output 1 if x ∈ S and 0 if x is 2^-n-far from S. Formally (renaming the concept to avoid confusion),

Definition 3 [12] We say that a set S is weakly computable if there is an oracle Turing Machine M^φ(n) such that if φ represents a real number x, then the output of M^φ(n) is

    M^φ(n) = 1       if x ∈ S
    M^φ(n) = 0       if B(x, 2^-n) ∩ S = ∅
    M^φ(n) = 0 or 1  otherwise                                   (3)

This definition is obviously weaker than the standard one, because we can output whatever answer we want if x ∉ S but x is very close to S. On the other hand, if we try to use a "weak computer" for S to draw S by running it on some grid, we might be left with an empty picture: this occurs, for example, if no point of the grid happens to lie exactly in S. Nevertheless, the two notions turn out to coincide.

Theorem 4 A bounded set S is bit-computable if and only if it is weakly computable.

Proof Outline: The "only if" direction is straightforward. For the other direction, suppose for simplicity that S ⊂ [0,1] and that we are given a weak machine M^φ(n) for S. We are given a point d in D and n > 0, and want to return 1 if (d − 2^-n, d + 2^-n) ∩ S ≠ ∅ and 0 if (d − 2·2^-n, d + 2·2^-n) ∩ S = ∅.

We consider the infinite tree T of all the oracles for all the points in (d − 2^-n, d + 2^-n). On Fig. 3, the first three levels of T are presented for d = 1/2 and n = 1 (all the numbers are written in binary). Each infinite path in T represents a real number in the interval we are interested in. For each real x in the interval there is a path converging to it; in fact, there is usually more than one such path.

Figure 3. The first three levels of the tree T.

We simulate the run of the weak machine M^φ(n). The goal is to make the simulation for all the real points in the interval (d − 2^-n, d + 2^-n) simultaneously. Whenever the machine asks for x with some precision 2^-m, we branch: the answer is taken to be a dyadic at level m of T consistent with the answers given so far, and since there are at most 3 such choices, we launch a subcomputation for each of them. Each subcomputation continues the simulation with its own answer and, when the simulated machine halts, returns the machine's output. Once all subcomputations have terminated, we output 1 if at least one of them returned 1, and 0 otherwise.
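The tree T itself is easy to generate. The sketch below (an illustration with one particular convention assumed, chosen to match the three-way branching used in the proof; the tree in Fig. 3 may differ in minor details) lists the first levels of T for d = 1/2, n = 1: a node at level m is a multiple of 2^-m lying within 2^-m of the interval, and its children are the three multiples of 2^-(m+1) within 2^-(m+1) of it.

```python
from fractions import Fraction

def oracle_tree(d, n, depth):
    """Levels 1..depth of a tree T of dyadic oracle answers for the points of
    the interval (d - 2^-n, d + 2^-n), under the convention described above."""
    lo, hi = d - Fraction(1, 2 ** n), d + Fraction(1, 2 ** n)

    def level_nodes(m):
        step = Fraction(1, 2 ** m)
        nodes = []
        k = int((lo - step) / step) - 1
        while k * step <= hi + step:
            v = k * step
            if lo - step < v < hi + step:      # within 2^-m of some point of the interval
                nodes.append(v)
            k += 1
        return nodes

    levels = {m: level_nodes(m) for m in range(1, depth + 1)}
    children = {}
    for m in range(1, depth):
        half = Fraction(1, 2 ** (m + 1))
        lvl = set(levels[m + 1])
        for v in levels[m]:
            children[(m, v)] = [w for w in (v - half, v, v + half) if w in lvl]
    return levels, children

if __name__ == "__main__":
    levels, children = oracle_tree(Fraction(1, 2), 1, 3)
    for m, nodes in levels.items():
        print("level", m, [str(v) for v in nodes])
    print("children of 1/2 at level 1:",
          [str(w) for w in children[(1, Fraction(1, 2))]])
```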

The subcomputations are organized into a tree: its nodes are the subcomputations described above, and a computation C_i is the parent of the 3 computations it launches. If the entire computation does not terminate, then there are two possibilities: either one of the computations C fails to terminate without calling any subcomputations, or the tree of all the computations to be performed is infinite.

In the first case, the points represented by the oracles leading to C would cause M^φ(n) to run forever, which is impossible. In the second case, by König's lemma, there must be an infinite branch in the computation tree. Denote the branch by C1, C2, C3, ...; that is, C1 calls C2, C2 calls C3, and so on. Note that each Ci works with a path p_i of T and p_{i+1} strictly extends p_i for each i, hence the infinite sequence of Ci corresponds to an infinite path p in T. The path converges to a real number x ∈ [0,1], and p gives rise to an oracle φ for x. By the construction, the sequence C1, C2, C3, ... simulates the computation of M^φ(n). Hence M^φ(n) does not terminate, a contradiction. This shows that the simulation terminates. Note that for the proof to work we need the fact that every Cauchy sequence in D converges to a limit in our domain R.

For the correctness, we see that if there is an x ∈ S in the interval, then there is an oracle φ for x which corresponds to an infinite path in T. The computation branch corresponding to this path will return 1, and the algorithm will output 1. If, on the other hand, (d − 2·2^-n, d + 2·2^-n) ∩ S = ∅, then every branch in the computation corresponds to a point which is at least 2^-n-far from S, so all of them will return 0, and the algorithm outputs 0 in this case.

A version of the following theorem was first proved in [24] (see also [9]). Theorem 4 allows us to give a simple proof of it, by showing that the graph Γf is weakly computable. The simplified proof can be found in [11].

Theorem 5 Suppose D is a computable closed and bounded set, and f is continuous on D. Then f is computable if and only if its graph Γf = {(x, f(x)) : x ∈ D} is computable.

3 The BSS Model and its Modifications

3.1 The Model

The BSS model is quite different from the bit model of computability. Like the bit model, it also extends the standard Turing machines to deal with the continuous reality. In the bit model the extension is through application of the standard machine to continuous problems using oracles and naming systems for continuous objects (cf. [22]). The BSS approach extends the model of computation itself. We present here an informal description of the model, which is equivalent to a formal one, but is simpler to comprehend for a reader who is new to the subject.

The BSS model in general is formulated for computation over an arbitrary ring or field R (for our purposes one can take R = R or R = C). The machines in this model are allowed to store an entire element of R in one memory cell. The operations the machine is allowed to perform on numbers are (i) the ring operations (+, −, ·, and ÷ if R is a field); and (ii) branching conditioned on exact comparisons between numbers (= and <, ≤ if R is ordered). Initially, the program is allowed to have only some finite number of constants from R. A machine computes a function f : R^k → R^ℓ on a domain D ⊂ R^k if, on an input x ∈ D, it outputs f(x) ∈ R^ℓ. A machine decides a set S ⊂ R^n if it computes the characteristic function χ_S(x) on R^n. If one takes R = Z_2 (or any other finite ring), the BSS model becomes equivalent to the standard (discrete) Turing Machine.

The BSS machines are closely related to the corresponding arithmetic circuits over R, just as ordinary Turing Machines correspond to boolean circuits.
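To contrast the two models concretely, here is a toy BSS-style program (a sketch, not part of the paper's formalism): registers hold exact field elements, the only operations are field arithmetic and branching on exact comparisons, and with these the step function s0, which is not bit-computable, is decided in a single comparison. Exact rationals stand in for arbitrary real register contents, which is the one simplification of this sketch.

```python
from fractions import Fraction

# A toy "BSS program": a list of instructions over exact field elements.
# Instruction kinds mirror the model: branching on an exact comparison,
# output of an exact value, and labels as jump targets.
STEP_PROGRAM = [
    ("blt", 0, Fraction(0), "neg"),    # if reg0 < 0 goto label "neg"
    ("out", Fraction(1)),              # x >= 0: output 1
    ("label", "neg"),
    ("out", Fraction(0)),              # x < 0:  output 0
]

def run_bss(program, x):
    """Execute a toy BSS program on input x stored in register 0."""
    regs = {0: x}
    labels = {ins[1]: i for i, ins in enumerate(program) if ins[0] == "label"}
    pc = 0
    while pc < len(program):
        ins = program[pc]
        if ins[0] == "blt":                    # exact comparison, branch if reg < const
            _, reg, const, target = ins
            if regs[reg] < const:
                pc = labels[target]
                continue
        elif ins[0] == "out":                  # halt with an exact output
            return ins[1]
        pc += 1                                # "label" and fall-through
    raise RuntimeError("program ended without output")

if __name__ == "__main__":
    for x in (Fraction(-3, 7), Fraction(0), Fraction(5, 2)):
        print(x, "->", run_bss(STEP_PROGRAM, x))   # the step function s0(x)
```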

3.2 Examples of BSS Computable and Uncomputable Sets

A big family of examples of BSS computable sets are the semi-algebraic sets. In particular, any singleton S = {c}, any line segment and any ball in R^n is BSS computable. Note that {c}, as well as the constant function f(x) = c, is computable regardless of whether the number c can actually be approximated or not. Unlike in the bit model, in the BSS model the step function s0(x) is easily computable in constant time.

On the other hand, as we have mentioned in the introduction, simple functions and sets that lack the specific algebraic structure are not BSS computable. Examples include the functions √x, sin x and log x, and sets such as Koch's snowflake K and the graph of the exponential function e^x (cf. [8]). This is caused by the advantage given to the algebraic operations (+, −, · and ÷) in the BSS model.

We will now present a somewhat more subtle example of a BSS computable set which is not bit-computable. It will be useful later in the discussion. First, let R(x, y) be a computable (binary) predicate such that the predicate H(x) = ∃y R(x, y) is uncomputable and, whenever H(x) holds, the y satisfying R(x, y) is unique. One can take x to be the encoding of a Turing machine, and R(x, y) = "the machine encoded by x halts after exactly y steps". Then R(x, y) is computable by a simple simulation, while H(x) is the halting problem, which is undecidable.

We construct the following closed set C0 ⊂ [0,1] × [0,1]. Denote I_i = [1/(i+1), 1/i] for i = 1, 2, .... Then [0,1] = ∪_{i=1}^∞ I_i ∪ {0}. Define

    C0 = ({0} × [0,1]) ∪ ( ∪_{R(i,j)=1} I_i × I_j ).

It is not hard to see that C0 is closed. There are no accumulation points on (0,1] × {0} because for each value of i there is at most one value of j such that R(i, j) = 1. See Fig. 4 for a schematic construction of C0.

C0 is BSS computable. We first check whether a given point (x, y) is on one of the axes. If it isn't, we can localize the rectangle I_i × I_j in which (x, y) lies, and output R(i, j).

Figure 4. A BSS computable set C0 one cannot draw.

Observe that one cannot draw a good image of the set C0. In fact, in order to decide whether to put a pixel in a small neighborhood of the point ((2i+1)/(2i(i+1)), 0), the middle of the interval I_i on the x-axis, one needs to essentially compute H(i), which is impossible.

3.3 Possible Modifications to the BSS Model

In this section we discuss possible modifications to the BSS model which address some issues raised in the previous section. Modifications to the model have been proposed in [6, 7], leading to a notion of "feasible real RAM" which is essentially equivalent to bit-computability. The main idea there was that exact comparisons are not possible on real-life devices, and should not be permitted in the model. We argue that this is not always the case. From the practical standpoint, some physical and other natural systems are best described with discontinuous functions (e.g. "switch on/off"). From the theoretical point of view, this restriction bars us from classifying discontinuous and multi-valued functions. The simple step function and a function solving the halting problem fall into the same "uncomputable" category.

3.3.1 Uncomputable Constants

The first concern to address is the use of uncomputable numbers. It is unreasonable to say that a function f(x) = a, where a is a constant encoding the halting problem, is computable. The simple solution is to restrict the BSS machines to use only computable constants.

Recall that the computable numbers, which we denote by C, are the numbers that can be approximated arbitrarily well on a computer. C is a countable real closed field, and the field C + iC of computable complex numbers is algebraically closed. Thus, it makes sense to discuss BSS machines over C rather than over the entire R. To emphasize the field C we are working with, we will denote this model by BSS_C. It now follows easily that simple geometric objects, such as singletons, line segments and balls, are BSS_C-computable if and only if they are bit-computable.

3.3.2 Computation Errors

The next possible modification addresses the problem of the BSS uncomputability of functions such as e^x. The BSS model is in part based on the fact that real-life computers usually use the four arithmetic operations as a base for performing real computations. e^x = Σ_{n=0}^∞ x^n/n! can be viewed as an infinite-degree polynomial, and is approximated arbitrarily well by the finite-degree polynomials p_n(x) = Σ_{i=0}^n x^i/i!. In fact, real-life programs never compute e^x, but only p_n(x) with some suitably chosen n. This can be done using only the four arithmetic operations.

We further modify BSS_C by allowing the machines to err within a given precision ε. We denote this model by BSS^ε_C. In this model, a BSS machine M(x, ε) is said to compute f(x) if on an input (x, ε), ε > 0, it outputs f(x) with an error of at most ε, using only computable constants. Note that the simple step function s0(x), as well as any other BSS_C computable function, is computable in BSS^ε_C.

One can now define the BSS^ε_C computability of sets in a natural way. A bounded set S is BSS^ε_C-computable if there is a BSS machine M(x, ε) which uses only computable constants, and on input (x, ε) it outputs 1 if x ∈ S and 0 if d(x, S) > ε. With this definition, the graph of e^x becomes computable. The sets K and J mentioned in the introduction also become computable.

It should be noted that if we drop the requirement of using only computable numbers, all the bounded sets are easily seen to be computable. It is not hard to encode all the "pictures" of any bounded set S into one (possibly uncomputable) number c. In other words, all the sets are computable in BSS^ε.
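The kind of machine BSS^ε_C admits is easy to sketch. The routine below (an illustration, not from the paper) approximates e^x on [0, 1] within a given ε using only the four arithmetic operations and comparisons, evaluating the Taylor polynomial p_n(x) with n chosen on the fly from ε. The tail bound used to stop (twice the next term dominates the remainder when 0 ≤ x ≤ 1) is an assumption of this sketch, not a claim from the text.

```python
from fractions import Fraction

def bss_eps_exp(x, eps):
    """Approximate e^x for 0 <= x <= 1 to within eps > 0 using only
    +, -, *, / and comparisons (the operations a BSS machine has).
    Adds Taylor terms x^k / k! until the tail is provably below eps:
    for 0 <= x <= 1 the tail after term k is at most 2 * (next term)."""
    assert Fraction(0) <= x <= Fraction(1) and eps > 0
    total = Fraction(0)
    term = Fraction(1)            # current term x^k / k!, starting with k = 0
    k = 0
    while True:
        total += term
        next_term = term * x / (k + 1)
        if 2 * next_term < eps:   # remaining tail is below eps, so we may stop
            return total
        term = next_term
        k += 1

if __name__ == "__main__":
    import math
    x = Fraction(1, 3)
    for eps in (Fraction(1, 10), Fraction(1, 10 ** 6)):
        approx = bss_eps_exp(x, eps)
        assert abs(float(approx) - math.exp(1 / 3)) < float(eps)
        print(eps, float(approx))
```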

3.3.3 Unbounded Computation Branches

The third modification addresses the problem highlighted by the example C0 on Fig. 4: the excessive power the BSS model gets from the possibility of having arbitrarily long computation paths (as happened in that example).

In the case of "simple" computations, such as e^x or its graph, we can easily estimate the number of steps the machine would have to perform as a function of ε. We include this condition as an additional restriction on the BSS^ε_C machines. We say that a function (or a set) is BSS^{ε,b}_C computable if it is BSS^ε_C computable by a machine M, and the running time of M can be bounded by τ(⌈1/ε⌉) for some computable function τ : N → N. We say that τ(2^n) is the time complexity of the set.

Under this restriction the set C0 on Fig. 4 is not computable, while the function e^x on [0,1], the graph of e^x and the step function s0(x) are computable. Other "calculator" functions, such as √x, sin x and log x, also become computable. As before, if we allow the use of uncomputable constants, all the sets become computable.

The restriction guarantees that a computation with any finite precision has a branching tree of a computable finite size. Note that the machine computing C0 has an infinite branching tree regardless of the precision: the time it takes to localize x > 0 into an interval [1/(n+1), 1/n] grows with n, regardless of the precision we are computing with.

We summarize the modifications to the BSS model in the following diagram:

    BSS_C                       ⊂   BSS^ε_C                 ⊃   BSS^{ε,b}_C
    (C0 comp., e^x not comp.)       (C0 comp., e^x comp.)       (C0 not comp., e^x comp.)
        ∩                               ∩                           ∩
    BSS                         ⊂   BSS^ε                   ⊃   BSS^{ε,b}
    (C0 comp., e^x not comp.)       (everything is comp.)       (everything is comp.)

We will now show that BSS^{ε,b}_C-computability is equivalent to bit-computability for bounded sets. Note that the two notions are still different for functions, since the step function s0(x) is BSS^{ε,b}_C-computable but not bit-computable. In Section 4 we will connect BSS^{ε,b}_C function computability to bit computability.

3.4 Computability of sets in BSS^{ε,b}_C

We prove the following theorem. Due to space constraints we only outline most of the proof here, leaving some details out. The complete proof can be found in [11].

Theorem 6 Let S ⊂ R^k be a bounded set. Then S is BSS^{ε,b}_C-computable if and only if it is bit-computable.

Proof Outline: S is bit-computable ⇒ S is BSS^{ε,b}_C-computable. This is the easier direction. A BSS machine over any R is more general than a regular Turing Machine, and it can simulate the bit machine computing S.

S is BSS^{ε,b}_C-computable ⇒ S is bit-computable. This is the more involved direction. The reduction we will give is not uniform in S. It cannot be uniform, due to the fact that BSS^{ε,b}_C uses arbitrary computable constants. Even the simplest questions, such as "a = b?", are not decidable for arbitrary computable reals a and b presented by Turing machines computing them. Denote the BSS machine computing S by M(x, ε).

The nonuniform information needed. Suppose that the BSS machine M uses ℓ constants a1, a2, ..., aℓ ∈ R. We would need the following algebraic information about a1, ..., aℓ, in addition to the Turing machines approximating them:

1. the algebraic degree d_i of a_i over the field Q(a1, ..., a_{i−1}), and

2. if this algebraic degree is finite (i.e. a_i is algebraic over Q(a1, ..., a_{i−1})), the minimal polynomial p_i(x) ∈ Q(a1, ..., a_{i−1})[x] of degree d_i with leading coefficient 1, such that p_i(a_i) = 0. Here p_i is presented symbolically, with non-leading coefficients given as rational functions with non-zero denominators.

The next two lemmas show that the nonuniform information above suffices to decide any polynomial relation.

Lemma 7 Provided the nonuniform information as above, for any symbolic polynomial p(x1, x2, ..., xℓ) ∈ Q[x1, x2, ..., xℓ] we can check whether p(a1, a2, ..., aℓ) = 0.

Proof Outline: By induction on ℓ. If aℓ is transcendental over Q(a1, a2, ..., a_{ℓ−1}), the equation becomes a set of deg_{xℓ} p equations in a1, a2, ..., a_{ℓ−1}, which we solve by the induction hypothesis. If aℓ is algebraic over Q(a1, a2, ..., a_{ℓ−1}), we use the nonuniform information to reduce the degree of p in aℓ below dℓ. Then the equation becomes a set of at most dℓ equations in a1, a2, ..., a_{ℓ−1}.
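For a single algebraic constant (ℓ = 1 over Q) the content of Lemma 7 reduces to polynomial division: p(a) = 0 exactly when the minimal polynomial of a divides p. The sketch below illustrates this special case only, not the paper's general inductive procedure; the example takes a = √2 with minimal polynomial x^2 − 2 and exact rational coefficient arithmetic.

```python
from fractions import Fraction

def poly_rem(p, m):
    """Remainder of p modulo m, both given as coefficient lists
    (lowest degree first) with Fraction coefficients; m is monic."""
    r = list(p)
    while len(r) >= len(m) and any(c != 0 for c in r):
        if r[-1] == 0:
            r.pop()
            continue
        shift = len(r) - len(m)
        factor = r[-1]                   # m is monic, so this is the quotient coefficient
        for i, c in enumerate(m):
            r[shift + i] -= factor * c
        r.pop()                          # leading coefficient is now exactly 0
    return r

def is_zero_at(p, minimal_poly):
    """Lemma 7, case of one algebraic constant a: p(a) = 0 iff minimal_poly | p."""
    return all(c == 0 for c in poly_rem(p, minimal_poly))

if __name__ == "__main__":
    F = Fraction
    min_poly = [F(-2), F(0), F(1)]               # x^2 - 2, minimal polynomial of sqrt(2)
    p1 = [F(-4), F(0), F(0), F(0), F(1)]         # x^4 - 4 = (x^2 - 2)(x^2 + 2): zero at sqrt(2)
    p2 = [F(-1), F(1)]                           # x - 1: nonzero at sqrt(2)
    print(is_zero_at(p1, min_poly))              # True
    print(is_zero_at(p2, min_poly))              # False
```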

Lemma 8 Provided the nonuniform information as above, for any symbolic polynomial p(x1, ..., xℓ) ∈ Q[x1, ..., xℓ] we can check whether p(a1, a2, ..., aℓ) > 0.

Proof: By Lemma 7 we can first check whether p(a1, a2, ..., aℓ) = 0. If yes, we output 'no'. Otherwise, using increasingly good approximations of the constants, we will eventually be able to tell whether p(a1, a2, ..., aℓ) > 0 or p(a1, a2, ..., aℓ) < 0.

Given a dyadic d ∈ D^k and n ∈ N, we would like to compute f(d, n) as in (2). In other words, we would like to check whether d is 2^-n-close to S or 2·2^-n-far from S. For the rest of the proof set ε = 2^-n. We know there is a computable bound on the running time of M(x, ε) in terms of ε. We compute this bound B = B(ε). This means that M(·, ε) can have at most 2^B different computation paths. Each potential path has an output (either 0 or 1), and a set of rational constraints on the input x = (x1, ..., xk) and the constants a1, ..., aℓ that ensure that this path is followed. If the constraints are not satisfiable by any (x1, ..., xk), it means that the path is never actually followed. The rational constraints can be rewritten as polynomial ones.

Choose some computation path γ on which M(·, ε) outputs 1. We denote the polynomial constraints to be satisfied in order to follow γ by Cγ(x1, ..., xk, a1, ..., aℓ). We are interested in whether there is an x ∈ B(d, 2^-n) ∩ S. In particular, we would like to know whether there is such an x that is accepted by the path γ. This is stated by the following quantified formula:

    fγ(a1, ..., aℓ) = ∃ x1, ..., xk : ((x1 − d1)^2 + ... + (xk − dk)^2 < 2^-2n) ∧ Cγ(x1, ..., xk, a1, ..., aℓ).

Using a quantifier elimination algorithm, we can convert fγ(a1, ..., aℓ) into a quantifier-free formula gγ(a1, ..., aℓ) which has the same truth value. We can then use Lemmas 7 and 8 to decide whether gγ(a1, ..., aℓ) is true or false (which is the same as deciding fγ(a1, ..., aℓ)). fγ has only some constant number of existential quantifiers with no alternations, hence the complexity of the quantifier elimination can be reduced to be exponential in k (and polynomial in the other parameters). See [2] and [19] for the algorithms and their analysis.

As an answer, we output the following:

    f(d, n) = ⋁_{γ a 1-valued path of M(·, ε)} fγ(a1, ..., aℓ).                  (4)

As there are at most 2^B such paths γ, the computation will involve computing fγ at most 2^B times. Now, if there is an x in B(d, 2^-n) ∩ S, it will satisfy fγ_x in (4) for the corresponding path γ_x of M(x, ε). If there is no x in B(d, 2·2^-n) ∩ S, (4) can never be satisfied.

Remark: From the complexity point of view, even the simplest simulation of BSS machines using Turing machines appears to be quite costly. In the case where the machine is an arithmetic straight-line program, a discussion of the complexity of such a simulation can be found in [1].

4 Complexity of Real Functions

In this section we propose a new definition for the computability and complexity of real functions, and establish its connections with bit and BSS computability.

4.1 Computability of Real Functions

The main idea arises from the equivalence between function computability and set computability in the case of a continuous function, which was established in Theorem 5. We have a good notion of bit-computability for sets, which coincides with BSS^{ε,b}_C-computability. We use it to define a computability notion for functions:

Definition 9 We say that a bounded real function f on a bounded domain D is graph computable if its graph Γf = {(x, f(x)) : x ∈ D} is computable as a set.

By Theorem 5, Definition 9 coincides with the bit-computability definition in the case of continuous functions on closed domains. In some sense, graph computability extending bit-computability for functions is similar to the notion of the Lebesgue integral extending the Riemann integral.

BSS^{ε,b}_C-computable functions on "nice" domains have BSS^{ε,b}_C-computable graphs. Here a "nice" domain is bounded, semi-algebraic and BSS_C-computable, e.g. D = [a1, b1] × [a2, b2] × ... × [ak, bk] with computable endpoints. It follows from Theorem 6 that these functions are graph computable. So graph-computability extends the modified BSS function computability on "nice" domains.
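As a concrete instance of Definition 9 (an illustration, not from the paper), consider the step function s0 restricted to [−1, 1]. Its graph is ([−1, 0) × {0}) ∪ ([0, 1] × {1}), whose closure adds the single point (0, 0) and is a union of two segments; a pixel test based on the exact distance to that closed set, with the threshold set at the inner radius 2^-n, is a valid function from the family (2) for the graph itself.

```python
from fractions import Fraction

def seg_dist_sq(p, a, b, c):
    """Squared distance from p = (x, y) to the horizontal segment [a, b] x {c}."""
    x, y = p
    horiz = max(Fraction(0), a - x, x - b)
    return horiz * horiz + (y - c) * (y - c)

def graph_pixel(d, n):
    """A function from the family (2) for the graph of s0 on [-1, 1]:
    output 1 iff dist(d, Gamma) <= 2^-n, where Gamma is the closure
    [-1, 0] x {0}  union  [0, 1] x {1}.  All comparisons are exact."""
    r_sq = Fraction(1, 2 ** (2 * n))
    dist_sq = min(seg_dist_sq(d, Fraction(-1), Fraction(0), Fraction(0)),
                  seg_dist_sq(d, Fraction(0), Fraction(1), Fraction(1)))
    return 1 if dist_sq <= r_sq else 0

if __name__ == "__main__":
    n = 4
    print(graph_pixel((Fraction(1, 2), Fraction(1)), n))        # on the graph -> 1
    print(graph_pixel((Fraction(1, 2), Fraction(1, 2)), n))     # far from the graph -> 0
```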

4.2 Complexity of Real Functions

By now we have established that graph computability is a useful notion extending both bit computability and a naturally modified version of BSS computability. Our next goal is to give a reasonable definition for the graph complexity of real functions. Should we take the complexity of computing Γf? Or the complexity of weakly computing it? It turns out that neither one extends the bit-complexity of f in the continuous case. In fact, we have the following theorem.

Theorem 10 Let f : [0,1]^k → [0,1] be a continuous function with a polynomial modulus of continuity µ(n) (that is, |f(x) − f(y)| < 2^-n whenever |x − y| < 2^-µ(n)). Denote the following properties:

(a) the graph Γf is poly-time computable as a set;

(b) f is a poly-time computable function;

(c) the graph Γf is weakly poly-time computable as a set.

Examples: The step function s0 defined in (1) is now computable in linear time. For an input x, output A_m = {0} if x < −2^-(m+1), output A_m = {1} if x > 2^-(m+1), and A_m = {0, 1} if −2^-m < x < 2^-m. Thus by approximating the set A_m, we can approximate f(x).
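A sketch of the linear-time procedure described above follows; it uses A_m as in the text for the set of candidate values at accuracy level m, and the choice of query precision 2^-(m+2) is an assumption of this illustration rather than a statement from the paper. One dyadic approximation of x decides which of the three cases applies.

```python
from fractions import Fraction

def step_values(phi, m):
    """Candidate-value set A_m for the step function s0 at accuracy level m,
    following the case analysis in the text: query x once with precision
    2^-(m+2) and branch on the sign of the answer."""
    d = phi(m + 2)                         # dyadic with |d - x| < 2^-(m+2)
    half = Fraction(1, 2 ** (m + 1))
    if d < -half:
        return {0}                         # then x < -2^-(m+2) < 0, so s0(x) = 0
    if d > half:
        return {1}                         # then x > 2^-(m+2) > 0, so s0(x) = 1
    return {0, 1}                          # then |x| < 3 * 2^-(m+2) < 2^-m

def oracle_for(x):
    def phi(n):
        scale = 2 ** (n + 1)
        return Fraction(round(x * scale), scale)
    return phi

if __name__ == "__main__":
    for x in (Fraction(-1, 3), Fraction(1, 1000), Fraction(0)):
        print(x, step_values(oracle_for(x), 5))
```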

Theorems 12 and 13 show that graph complexity is the natural complexity notion for graph computable functions. It also extends the natural complexity definitions for other extensions of the continuous functions, for example the piecewise continuous functions with finitely many computable discontinuity points.

5 Closing Remarks

In Section 3 it has been shown that under some quite natural modifications, the BSS model and the bit model are equivalent for sets. While these modifications make sense from the point of view of actual computations, some of the merits of the BSS model are inevitably lost. There are interesting complexity questions that can only be formulated in the original model, for example the question of P_R vs. NP_R, and other questions regarding complexity classes over R and C. Another example is the question of whether linear programming can be solved in polynomial time using only exact arithmetic and branching on < and =. Nonetheless, as we have seen above, modifications are necessary when one considers the practical applications of the model to computability.

In the functions case matters are more complicated. The traditional recursive analysis approach seems to work quite well when the underlying functions are continuous. In Section 4 the standard notion of function complexity has been generalized to a richer class which includes some discontinuous functions. This class might even be too rich in some cases. The exact class of functions F to which the notion should be applied depends on the specific applications of this notion.

Suggestions for a "second-generation BSS machine" included incorporating errors and condition numbers into the model [20]. It is possible that with such modifications both approaches would converge to a common notion of real-function computability, and maybe even complexity.

Acknowledgments

I would like to thank my graduate supervisor, prof. Stephen Cook, for his insights and support during the preparation of this paper, and for the many hours we spent discussing Real Computation. I would like to thank prof. Toniann Pitassi for her advice during the preparation of this paper.

References

[1] E. Allender, P. Bürgisser, J. Kjeldgaard-Pedersen, and P. B. Miltersen. On the complexity of numerical analysis. ECCC report TR05-037, 2005.
[2] S. Basu, R. Pollack, and M. F. Roy. Algorithms in Real Algebraic Geometry. Springer-Verlag, 2003.
[3] I. Binder, M. Braverman, and M. Yampolsky. Filled Julia sets with empty interior are computable. e-print math.DS/0410580, Oct. 2004.
[4] L. Blum, F. Cucker, M. Shub, and S. Smale. Complexity and Real Computation. Springer-Verlag, New York, 1998.
[5] L. Blum, M. Shub, and S. Smale. On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. Bulletin of the Amer. Math. Soc., 21:1–46, 1989.
[6] P. Boldi and S. Vigna. δ-uniform BSS machines. J. of Complexity, 14:234–256, 1998.
[7] V. Brattka. Feasible real random access machines. J. of Complexity, 14:490–526, 1998.
[8] V. Brattka. The emperor's new recursiveness: The epigraph of the exponential function in two models of computability. In M. Ito and T. Imaoka, editors, Words, Languages & Combinatorics III, pages 63–72, 2003.
[9] V. Brattka. Plottable real number functions. In M. Daumas et al., editors, RNC'5 Real Numbers and Computers, pages 13–30. INRIA, September 2003.
[10] V. Brattka and K. Weihrauch. Computability of subsets of Euclidean space I: Closed and compact subsets. Theoretical Computer Science, 219:65–93, 1999.
[11] M. Braverman. On the complexity of real functions. e-print cs.CC/0502066, Feb. 2005.
[12] A. Chou and K. Ko. Computational complexity of two-dimensional regions. SIAM J. Comput., 24:923–947, 1995.
[13] A. Grzegorczyk. Computable functionals. Fund. Math., 42:168–202, 1955.
[14] P. Hertling. Is the Mandelbrot set computable? Math. Log. Quart., 51(1):5–18, 2005.
[15] K. Ko. Complexity Theory of Real Functions. Birkhäuser, Boston, 1991.
[16] D. Lacombe. Classes récursivement fermés et fonctions majorantes. C. R. Acad. Sci. Paris, 240:716–718, 1955.
[17] D. Lacombe. Les ensembles récursivement ouverts ou fermés, et leurs applications à l'analyse récursive. C. R. Acad. Sci. Paris, 246:28–31, 1958.
[18] M. B. Pour-El and J. I. Richards. Computability in Analysis and Physics. Springer-Verlag, Berlin, 1989.
[19] J. Renegar. On the computational complexity and geometry of the first order theory of the reals. J. of Symb. Comp., 13:255–352, 1992.
[20] S. Smale. Complexity theory and numerical analysis. Acta Numerica, 6:523–551, 1997.
[21] A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, pages 230–265, 1936.
[22] K. Weihrauch. Computable Analysis. Springer-Verlag, Berlin, 2000.
[23] A. C. Yao. Classical physics and the Church-Turing thesis. ECCC report TR02-062, 2002.
[24] Q. Zhou. Computable real-valued functions on recursive open and closed subspaces of R^q. Math. Log. Quart., 42:379–409, 1996.
