arXiv:1901.09234v2 [cs.CG] 18 Apr 2019

Plantinga-Vegter algorithm takes average polynomial time

Felipe Cucker
Dept. of Mathematics, City University of Hong Kong
Tat Chee Avenue, Kowloon Tong, Hong Kong
[email protected]

Alperen A. Ergür
Institut für Mathematik, Technische Universität Berlin
Str. des 17. Juni 136, Berlin 10623, Germany
[email protected]

Josué Tonelli-Cueto
Institut für Mathematik, Technische Universität Berlin
Str. des 17. Juni 136, Berlin 10623, Germany
[email protected]

ABSTRACT

We exhibit a condition-based analysis of the adaptive subdivision algorithm due to Plantinga and Vegter. The first complexity analysis of the PV Algorithm is due to Burr, Gao and Tsigaridas, who proved an O(2^(τ d^4 log d)) worst-case cost bound for degree-d plane curves with maximum coefficient bit-size τ. This exponential bound, it was observed, is in stark contrast with the good performance of the algorithm in practice. More in line with this performance, we show that, with respect to a broad family of measures, the expected time complexity of the PV Algorithm is bounded by O(d^7) for real, degree-d, plane curves. We also exhibit a smoothed analysis of the PV Algorithm that yields similar complexity estimates. To obtain these results we combine robust probabilistic techniques coming from geometric functional analysis with condition numbers and the continuous amortization paradigm introduced by Burr, Krahmer and Yap. We hope that this will motivate a fruitful exchange of ideas between the different approaches to numerical computation.

CCS CONCEPTS

• Theory of computation → Computational geometry; Design and analysis of algorithms; • Mathematics of computing → Computations on polynomials; Interval arithmetic.

KEYWORDS

computational algebraic geometry, numerical methods, adaptive subdivision methods, isotopy of curves, complexity

ACKNOWLEDGMENTS

We thank Michael Burr for useful discussions. This work was supported by the Einstein Foundation. FC was partially supported by a GRF grant from the Research Grants Council of the Hong Kong SAR (project number CityU 11202017).

1 INTRODUCTION

In 2004 Plantinga and Vegter proposed an algorithm for computing a regularly isotopic piecewise-linear approximation of a plane curve or surface [16]. Their algorithm relied on a subdivision method enhanced with interval arithmetic (to ensure its correctness), and in their paper they provided some examples with their approximations and a record of how many cubes (squares in the case of plane curves) were in the description of these approximations. In these examples, the number of cubes appears to be proportional to, and to dominate, the cost of the computation. The paper contained, however, neither a complexity analysis nor even a formal setting fixing either the kind of functions implicitly defining the considered curves and surfaces or the arithmetic used.

An article doing so was published in 2017 by Burr, Gao and Tsigaridas [6]. The functions this article deals with are polynomials with integer coefficients and smooth zero set. Consistently with this choice of data, the arithmetic is infinite precision. The main result in the paper is a worst-case analysis for the number of cubes in the description of the approximation which, as we just mentioned, dominates the complexity of the computation. The bounds shown for this quantity were proved to be optimal. Yet, these bounds are exponential in the degree of the input and polynomial in its (logarithmic) height, a fact that motivates the comment at the end of their paper quoted below.

The authors further observe that, following from their Proposition 5.2 (see Theorem 6.3 below), an analysis of the algorithm yielding a cost that depends on the input at hand (i.e., an instance-based analysis) could be derived from the evaluation of a certain integral. And they conclude their paper by writing:
  "Even though our bounds are optimal, in practice, these are quite pessimistic [...] Since the complexity of the algorithm can be exponential in the [size] of the inputs, the integral must be described in terms of additional geometric and intrinsic parameters."

A number of features in this state of affairs suggest that a condition-based approach to the analysis of the PV Algorithm could be useful. To begin with, the fact that a condition number is a perfect fit for the notion of "geometric and intrinsic parameter." To which we may add the obvious fact that the "additional set of ill-posed inputs" — the set of polynomials having a non-smooth zero set — is precisely the set of data which are not allowed as inputs in [6]. A condition-based analysis would drop the assumption of integer coefficients and replace it by that of real coefficients. This is a common practice for numerical algorithms and, as we will see, it pays off in our case as it yields small (i.e., polynomial) average complexity bounds for a large class of probability measures.

Although our approach follows the condition-based ideas of, e.g., [1, 8, 9, 12, 18], the complexity analysis in this paper would have been impossible without the continuous amortization technique developed in the exact numerical context by Burr, Krahmer and Yap [4, 5]. We hope that this merging of techniques will start a fruitful exchange of ideas between different approaches to continuous computation.

1.1 Notation

Throughout the paper, we will assume some familiarity with the basics of differential geometry and with the sphere S^n as a Riemannian manifold. For scalar smooth maps f : R^m → R, we will write the tangent map at x ∈ R^m as ∂_x f : R^m → R when we want to emphasize it as a linear map, and as ∂f : R^m → R^m, x ↦ ∂f(x), when we want to emphasize it as a smooth function. For general smooth maps F : M → N, we will just write ∂_x F : T_x M → T_x N for the tangent map.

In what follows, P_{n,d} will denote the set of real polynomials in n variables with degree at most d, H_{n,d} the set of homogeneous real polynomials in n+1 variables of degree d, and ‖·‖ and ⟨·,·⟩ will denote the usual norm and inner product in R^m as well as the Weyl norm and inner product in P_{n,d} and H_{n,d}. Given a polynomial f ∈ P_{n,d}, f^h ∈ H_{n,d} will be its homogenization and ∂f the polynomial map given by its partial derivatives. For details about the concrete definition of each of these notions, see Section 4. Additionally, V_R(f) and V_C(f) will be, respectively, the real and complex zero sets of f.

We will denote by I_n the set of n-cubes of R^n and, for a given J ∈ I_n, m(J) will be its middle point, w(J) its width, and vol(J) = w(J)^n its volume.

Also, P(A) will denote the probability of the event A, E_{x∈K} g(x) the expectation of g(x) when x is sampled uniformly from K, and E_y g(y) the expectation of g(y) with respect to a previously specified probability distribution of y.

Regarding complexity parameters, n will be the number of variables, d the degree bound, and N = C(n+d, n) the dimension of P_{n,d}. Finally, ln will denote the natural logarithm and log the logarithm in base 2.

1.2 Outline

In Section 2, we discuss the PV Algorithm and the n-dimensional generalization of its subdivision method that we will analyze. In Section 3, we state the main complexity results of this paper. In Section 4, we present the geometric framework of polynomials we will work with. Following a common practice in condition-based analysis, we use homogenization to obtain many of our results. In Section 5, we introduce the condition number along with some of its main properties. In Section 6, we present the existing complexity results for the subdivision method of the PV Algorithm based on the local size bound functions from [6], and we relate them to the local condition number. In Section 7, we rely on the bounds for the condition number obtained in Section 5 to derive average and smoothed complexity bounds under (quite) general randomness assumptions.

2 THE PV ALGORITHM

Given a real smooth hypersurface in R^n described implicitly by a map f : R^n → R and a region [−a, a]^n, the PV Algorithm constructs a piecewise-linear approximation of the intersection of its zero set V_R(f) with [−a, a]^n isotopic to this intersection inside [−a, a]^n.

Recall that an interval approximation of a function F : R^m → R^{m′} is a map □[F] : I_m → I_{m′} such that for all J ∈ I_m, F(J) ⊆ □[F](J) (cf. [17]). We notice that if we see J as error bounds for the midpoint m(J), then □[F](J) is nothing more than error bounds for F(m(J)).

Assume that we have interval approximations of both f and its tangent map ∂f or, more generally, of hf and h′∂f for some positive maps h, h′ : R^n → (0, ∞). The PV Algorithm on [−a, a]^n will subdivide this region into smaller and smaller cubes until the condition

  C_f(J) : either 0 ∉ □[hf](J) or 0 ∉ ⟨□[h′∂f](J), □[h′∂f](J)⟩

is satisfied in each of the n-cubes J of the obtained subdivision of [−a, a]^n. In Section 4, we will be more precise about the assumptions on our interval approximations and about the functions h and h′ that we will use.

Algorithm 2.1: Subdivision routine of the PV Algorithm

  Input: a ∈ (0, ∞) and f : R^n → R with interval approximations □[hf] and □[h′∂f]

  Starting with the trivial subdivision S := {[−a, a]^n}, repeatedly subdivide each J ∈ S into 2^n cubes until the condition C_f(J) holds for all J ∈ S

  Output: a subdivision S ⊆ I_n of [−a, a]^n such that for all J ∈ S, C_f(J) is true

The procedure in Algorithm 2.1 is only the subdivision routine of the PV Algorithm, but it dominates its complexity in the sense that the remaining computations do not add to the final cost estimates in Landau notation. Moreover, these additional computations have been implemented only for n ≤ 3. So, proceeding as in [6], we will only analyze the complexity of the subdivision routine, keeping track of the dependency on n. Also as in [6], our complexity analysis will not deal with the precision needed by the algorithm.

3 MAIN RESULT

In this section, we outline without proofs the main results of this paper. In the first part, we describe our randomness assumptions for polynomials. In the second one, we give precise statements for our bounds on the average and smoothed complexity of the PV Algorithm.

3.1 Randomness Model

Most of the literature on random multivariate polynomials
considers polynomials with independent Gaussian coefficients and relies on techniques that are only useful for Gaussian measures. We will instead consider a general family of measures, relying on robust techniques coming from geometric functional analysis. Let us recall some basic definitions.

(P1) A random variable X is called centered if EX = 0.

(P2) A random variable X is called subgaussian if there exists a K such that for all p ≥ 1,

  (E|X|^p)^{1/p} ≤ K√p.

The smallest such K is called the Ψ₂-norm of X.

(P3) A random variable X satisfies the anti-concentration property with constant ρ if, for all ε > 0,

  max{P(|X − u| ≤ ε) | u ∈ R} ≤ ρε.

The subgaussian property (P2) has other equivalent formulations. We refer the interested reader to [22].

Definition 3.1. A dobro random polynomial f ∈ H_{n,d} with parameters K and ρ is a polynomial

  f := Σ_{|α|=d} c_α C(d, α)^{1/2} X^α    (3.1)

such that the c_α are independent centered subgaussian random variables with Ψ₂-norm K and anti-concentration property with constant ρ. A dobro random polynomial f ∈ P_{n,d} is a polynomial f such that its homogenization f^h is so.

Some dobro random polynomials of interest are the following three.

• A KSS random polynomial is a dobro random polynomial such that each c_α in (3.1) is Gaussian with unit variance. For this model we have Kρ = 1/√(2π).

• A Weyl random polynomial is a dobro random polynomial such that each c_α in (3.1) has uniform distribution in [−1, 1]. For this model we have Kρ ≤ 1.

• A p-random polynomial is a dobro random polynomial whose coefficients are independent identically distributed random variables with the density function g(t) = c_p e^{−|t|^p}, with c_p the appropriate constant and p ≥ 2.

Remark 3.2. When we are interested in integer polynomials, dobro random polynomials may seem inadequate. One may be inclined to consider random polynomials f ∈ P_{n,d} such that c_α is a random integer in the interval [−2^τ, 2^τ], i.e., c_α is a random integer of bit-size at most τ. As τ → ∞ and after we normalize the coefficients dividing by 2^τ, this random model converges to that of Weyl random polynomials. To have a more satisfactory understanding of random integer polynomials, one has to consider random variables without a continuous density function. The techniques used in this note have already been extended to include such random variables in the case of random matrices [19, 22]. We hope to pursue this delicate case in a more general setting (including complete intersections) in future work.

3.2 Average and Smoothed Complexity

The following two theorems give bounds for, respectively, the average and smoothed complexity of Algorithm 2.1. In both of them, c₁ and c₂ are, respectively, the universal constants in Theorems 7.2 and 7.4.

Theorem 3.1. Let f ∈ P_{n,d} be a dobro random polynomial with parameters K and ρ. The expected number of n-cubes in the final subdivision of Algorithm 2.1 on input (f, a) is at most

  d^{(n²+3n)/2} max{1, a}^n 2^{(n²+16n) log n} (2 c₁c₂Kρ)^{n+1}

if the interval approximations satisfy (4.4) and (4.5), and

  d^{(n²+5n)/2} max{1, a}^n 2^{(7n²+9n) log n} (2 c₁c₂Kρ)^{n+1}

if they satisfy the hypothesis of [6].

Theorem 3.2. Let f ∈ P_{n,d}, σ > 0, and g ∈ P_{n,d} a dobro random polynomial with parameters K and ρ. Then the expected number of n-cubes in the final subdivision of Algorithm 2.1 for input (q_σ, a), where q_σ = f + σ‖f‖g, is at most

  d^{(n²+3n)/2} max{1, a}^n 2^{(n²+16n) log n} (2 c₁c₂Kρ (1 + 1/σ))^{n+1}

if the interval approximations satisfy (4.4) and (4.5), and

  d^{(n²+5n)/2} max{1, a}^n 2^{(7n²+9n) log n} (2 c₁c₂Kρ (1 + 1/σ))^{n+1}

if they satisfy the hypothesis of [6].

If we compare the results above with the worst-case bound of [6, Theorem 4.3], which is

  2^{O(n d^{n+1} (nτ + nd log(nd) + 9n + d log a))}

with τ being the largest bit-size of the coefficients of f, we can see that our probabilistic bounds are exponentially better: they may provide an explanation of the efficiency of the PV Algorithm in practice.
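The randomness model above is easy to instantiate. The following sketch (helper names are ours, not from the paper) samples a dobro random polynomial in the sense of (3.1), with either KSS (standard Gaussian) or Weyl (uniform on [−1, 1]) coefficients:

```python
# Sketch: sampling a dobro random polynomial following (3.1),
#   f = sum_{|alpha| = d} c_alpha * sqrt(multinomial(d, alpha)) * X^alpha,
# with c_alpha i.i.d. standard Gaussian (KSS) or uniform on [-1, 1] (Weyl).
import math, random

def exponents(n_plus_1, d):
    """All alpha in N^{n+1} with |alpha| = d (the monomials of H_{n,d})."""
    if n_plus_1 == 1:
        yield (d,)
        return
    for k in range(d + 1):
        for rest in exponents(n_plus_1 - 1, d - k):
            yield (k,) + rest

def multinomial(d, alpha):
    m = math.factorial(d)
    for a in alpha:
        m //= math.factorial(a)
    return m

def sample_dobro(n, d, model="kss"):
    coeffs = {}
    for alpha in exponents(n + 1, d):
        c = random.gauss(0.0, 1.0) if model == "kss" else random.uniform(-1.0, 1.0)
        coeffs[alpha] = c * math.sqrt(multinomial(d, alpha))
    return coeffs  # monomial exponent -> coefficient of the homogeneous polynomial

f = sample_dobro(n=2, d=3)
assert len(f) == math.comb(3 + 2, 2)  # N = C(n + d, n) monomials
```

The number of coefficients is the dimension N = C(n+d, n) of P_{n,d}, matching the notation of §1.1.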
We note, however, that the bound in [6] and our bounds cannot be directly compared. Not only because the former is worst-case and the latter average-case (or smoothed), but because of the different underlying complexity settings: the bound in [6] applies to integer data, ours to real data. A first approach to bridge this difference relies on the approximation of distributions described in Remark 3.2. But, as mentioned there, this approach does not give completely satisfactory results. A more detailed study of how the PV Algorithm behaves on random integer polynomials is desirable.

4 GEOMETRIC FRAMEWORK

There is an extensive literature on norms of polynomials and their relation to norms of gradients in H_{n,d}. We can use homogenization to carry these results from H_{n,d} to P_{n,d}. To be more precise, let φ : R^n → S^n be given by

  x ↦ (x^T  1)^T / √(1 + ‖x‖²).

This gives a diffeomorphism between R^n and the upper half of S^n, and we have

  f^h(φ(x)) = f(x)/(1 + ‖x‖²)^{d/2}.    (4.1)

Using the chain rule, we can see that

  ∂_{φ(x)} f^h ∘ ∂_x φ = ∂_x f/(1 + ‖x‖²)^{d/2} − d f(x) x^T/(1 + ‖x‖²)^{d/2+1},    (4.2)

where ∂_x f : R^n → R, ∂_x φ : R^n → T_{φ(x)} S^n = φ(x)^⊥ and ∂_{φ(x)} f^h : R^{n+1} → R are, respectively, the tangent maps of f, φ and f^h.

As it will be useful later, we note that a direct computation shows

  ‖∂_x φ‖ = 1/√(1 + ‖x‖²)  and  ‖(∂_x φ)^{−1}‖ = √(1 + ‖x‖²).    (4.3)

An inner product on H_{n,d} with desirable geometric properties is known as the Weyl inner product, and it is given by

  ⟨f, g⟩_W := Σ_α f_α g_α C(d, α)^{−1}

for f = Σ_α f_α X^α, g = Σ_α g_α X^α ∈ H_{n,d}. This product is extended to P_{n,d} using ⟨f, g⟩ := ⟨f^h, g^h⟩ and to P_{n,d}^k using ⟨f, g⟩ := Σ_{i=1}^k ⟨f_i, g_i⟩. We will use ‖·‖ to denote both the Weyl norm for polynomials and the usual Euclidean norm in R^n.
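As a quick illustration of the Weyl inner product (a sketch with our own helper names; coefficients are indexed by exponent tuples of the homogeneous monomials):

```python
# Sketch of the Weyl inner product on H_{n,d}:
#   <f, g>_W = sum_alpha f_alpha * g_alpha / multinomial(d, alpha).
import math

def multinomial(d, alpha):
    m = math.factorial(d)
    for a in alpha:
        m //= math.factorial(a)
    return m

def weyl_inner(f, g, d):
    """f, g: {alpha: coeff} over monomials X^alpha with |alpha| = d."""
    return sum(c * g.get(alpha, 0.0) / multinomial(d, alpha)
               for alpha, c in f.items())

# The Weyl norm of X^2 + Y^2 - Z^2 is sqrt(3): each of its monomials has
# multinomial weight 2!/2! = 1.
f = {(2, 0, 0): 1.0, (0, 2, 0): 1.0, (0, 0, 2): -1.0}
assert abs(math.sqrt(weyl_inner(f, f, 2)) - math.sqrt(3)) < 1e-12
```

The multinomial weights are exactly what makes the norm invariant under orthogonal changes of variables, which is the "desirable geometric property" alluded to above.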
4.1 Lipschitz properties

Given f ∈ P_{n,d}, let us consider the maps

  f̂ : x ↦ f(x)/(‖f‖ (1 + ‖x‖²)^{(d−1)/2})

and

  ∂̌f : x ↦ ∂f(x)/(√d ‖f‖ (1 + ‖x‖²)^{d/2−1}),

which are just "linearized" versions of f and its derivative. The intuition behind this is that for large values of x a polynomial map of degree d grows like ‖x‖^d.

Proposition 4.1. Let f ∈ P_{n,d}^k be a polynomial map. Then the map

  x ↦ f̃(x) := f(x)/(‖f‖ (1 + ‖x‖²)^{(d−1)/2})

is (1 + √d)-Lipschitz and, for all x, ‖f̃(x)‖ ≤ √(1 + ‖x‖²).

Proof. For the Lipschitz property, it is enough to bound the norm of the derivative of the map by 1 + √d. Due to (4.1),

  f(x)/(‖f‖ (1 + ‖x‖²)^{(d−1)/2}) = √(1 + ‖x‖²) f^h(φ(x))/‖f‖

and so the derivative equals

  (x^T/√(1 + ‖x‖²)) f^h(φ(x))/‖f‖ + √(1 + ‖x‖²) ∂_{φ(x)}(f^h/‖f‖) ∘ ∂_x φ.

Now, ‖f^h(φ(x))‖/‖f‖ ≤ 1, by [1, Lemma 16.6], and

  ‖∂_{φ(x)} f^h ∘ ∂_x φ‖ ≤ ‖∂_{φ(x)} f^h |_{T_{φ(x)} S^n}‖ ‖∂_x φ‖

with ‖∂_{φ(x)} f^h |_{S^n}‖ ≤ √d ‖f‖, by the Exclusion Lemma [1, Lemma 19.22]. Thus taking norms and using (4.3) finishes the proof. The claim about ‖f̃(x)‖ is just [1, Lemma 16.6]. □

Corollary 4.2. Let f ∈ P_{n,d}. Then f̂ and ∂̌f are Lipschitz with Lipschitz constants 1 + √d and 1 + √(d−1), respectively, and for all x, |f̂(x)|, ‖∂̌f(x)‖ ≤ √(1 + ‖x‖²).

Proof. The claims about f̂ are immediate from Proposition 4.1. For the claims about ∂̌f, observe that ∂f ∈ P_{n,d−1}^n and that, by a direct computation, ‖∂f‖ ≤ d‖f‖. Thus Proposition 4.1 completes the proof. □

4.2 Interval approximations

The Lipschitz properties above (Corollary 4.2) ensure that the mapping

  J ↦ f̂(m(J)) + (1 + √d) √n w(J) [−1/2, 1/2]

is an interval approximation of f̂, and the mapping

  J ↦ ∂̌f(m(J)) + (1 + √(d−1)) √n w(J) [−1/2, 1/2]^n

one of ∂̌f. Also, letting

  h(x) = 1/(‖f‖ (1 + ‖x‖²)^{(d−1)/2})  and  h′(x) = 1/(√d ‖f‖ (1 + ‖x‖²)^{d/2−1}),

the mappings above become interval approximations □[hf] and □[h′∂f] of hf and h′∂f, respectively, satisfying that, for each J ∈ I_n,

  dist(□[hf](J), (hf)(m(J))) ≤ (1 + √d) √n w(J)/2    (4.4)

and

  dist(□[h′∂f](J), (h′∂f)(m(J))) ≤ (1 + √(d−1)) n w(J)/2.    (4.5)

With these conditions on our interval approximations, we can formulate a weaker, but easier to check, condition C′_f(J).

Theorem 4.3. Let J ∈ I_n and assume that our interval approximations satisfy (4.4) and (4.5). If the condition

  C′_f(J) : either |f̂(m(J))| > (1 + √d) √n w(J)  or  ‖∂̌f(m(J))‖ > √2 (1 + √(d−1)) n w(J)

is satisfied, then C_f(J) is true.

Lemma 4.4. Let x ∈ R^n. Then for all v, w ∈ B(x, ‖x‖/√2), ⟨v, w⟩ > 0.

Proof. Let W := {u ∈ R^n | ⟨x, u⟩ ≥ ‖x‖‖u‖/√2} be the convex cone of those vectors u whose angle with x is at most π/4. For v, w ∈ W, ∠(v, w) ≤ ∠(v, x) + ∠(x, w) ≤ π/2, by the triangle inequality. Thus cos ∠(v, w) ≥ 0. Now, dist(x, ∂W) = min{‖x − u‖ | ⟨x, u⟩ = ‖x‖‖u‖/√2}, where the latter equals the distance of x to a line having an angle π/4 with x, which is ‖x‖/√2. Hence B(x, ‖x‖/√2) ⊆ W and we are done. □

Proof of Theorem 4.3. When the condition on f̂(m(J)) is satisfied, (4.4) guarantees that 0 ∉ □[hf](J). Whenever the condition on ∂̌f(m(J)) is satisfied, (4.5) and Lemma 4.4 guarantee that 0 ∉ ⟨□[h′∂f](J), □[h′∂f](J)⟩. Hence C′_f(J) implies C_f(J) under the given assumptions. □

Remark 4.5. The interval approximations in [6] are based on Taylor expansion at the midpoint, so they are different from ours. However, our complexity analysis also applies to the interval approximations considered in [6]; see §6.2 below for the details.

5 CONDITION NUMBER

As with other numerical algorithms in computational geometry, the PV Algorithm has a cost which varies significantly among inputs of the same size. One wants to explain this variation in terms of geometric properties of the input. Condition numbers allow for such an analysis.

Definition 5.1. [2, 11] Given F ∈ H_{n,d}, the local condition number of F at y ∈ S^n is

  κ(F, y) := ‖F‖ / √( F(y)² + ‖∂_y F|_{T_y S^n}‖²/d ).

Given f ∈ P_{n,d}, the local affine condition number of f at x ∈ R^n is κ_aff(f, x) := κ(f^h, φ(x)).

5.1 What does κ_aff measure?

The nearer the hypersurface V_R(f) is to having a singularity at x ∈ R^n, the smaller are the boxes drawn by the PV Algorithm around x. A quantity controlling how close f is to having a singularity at x will therefore control the size of these boxes. This is precisely what κ_aff(f, x) does.

Theorem 5.2 (Condition Number Theorem). Let x ∈ R^n and Σ_x be the set of hypersurfaces having x as a singular point. That is, Σ_x := {g ∈ P_{n,d} | g(x) = 0, ∂_x g = 0}. Then, for every f ∈ P_{n,d},

  κ_aff(f, x) = ‖f‖ / dist(f, Σ_x),

where the distance is induced by the Weyl norm of P_{n,d}.

Proof. This follows from [1, Proposition 19.6], [2, Theorem 4.4] and the definition of κ_aff. □

Theorem 5.2 provides a geometric interpretation of the local condition number, and a corresponding "intrinsic" complexity parameter as desired by the authors of [6, 7].

Corollary 5.3. Let x ∈ R^n and let P_x : P_{n,d} → Σ_x^⊥ be the orthogonal projection onto the orthogonal complement of the linear subspace Σ_x. Then κ_aff(f, x) = ‖f‖/‖P_x f‖.

Proof. We have that dist(f, Σ_x) = ‖P_x f‖ since Σ_x is a linear subspace. Hence Theorem 5.2 finishes the proof. □

We notice that the above expression should not come as a surprise, since κ_aff is defined in such a way that the denominator is the norm of a vector depending linearly on f.

5.2 A fundamental proposition

The following result plays a fundamental role in our development. Despite having been used on many occasions within various proofs, it wasn't explicitly stated until recently.

Lemma 5.4. Let F ∈ H_{n,d} and y ∈ S^n. Then either

  |F(y)|/‖F‖ ≥ 1/(√2 κ(F, y))  or  ‖∂_y F|_{T_y S^n}‖/(√d ‖F‖) ≥ 1/(√2 κ(F, y)).

Proof. This is [3, Proposition 3.6]. □

Proposition 5.5. Let f ∈ P_{n,d} and x ∈ R^n. Then either

  |f̂(x)| > 1/(2√2 d κ_aff(f, x))  or  ‖∂̌f(x)‖ > 1/(2√2 d κ_aff(f, x)).

Proof. Without loss of generality assume that ‖f‖ = 1. Let y := φ(x), F := f^h, and assume that the first inequality does not hold. Then, by (4.1), |F(y)| ≤ 1/(2√2 d κ(F, y) √(1 + ‖x‖²)), and so, by Lemma 5.4, the bound on the tangential derivative must hold. By (4.2), (4.3) and Lemma 5.4, we get

  1/(√2 κ(F, y)) ≤ ‖∂_x f/(1 + ‖x‖²)^{d/2} − d f(x) x^T/(1 + ‖x‖²)^{d/2+1}‖ √(1 + ‖x‖²)/√d.

We divide by √d and use the triangle inequality to obtain

  1/(√2 d κ(F, y)) ≤ ‖∂_x f‖/(√d (1 + ‖x‖²)^{d/2−1}) + |f(x)| ‖x‖/((1 + ‖x‖²)^{(d−1)/2} √(1 + ‖x‖²)).

Using (4.1) and our initial assumption on the second term in the sum, which we subtract, we get the desired inequality, since ‖x‖ < √(1 + ‖x‖²). □
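Before turning to the complexity analysis, it may help to see Algorithm 2.1 run. The sketch below implements the subdivision routine for a plane curve (n = 2) using the midpoint test C′_f of Theorem 4.3 together with the normalized maps f̂ and ∂̌f of §4.1. It is an illustration with a safety depth cap, not a certified implementation:

```python
# Sketch of the subdivision routine (Algorithm 2.1) for a plane curve,
# using the midpoint test C'_f of Theorem 4.3 with the normalized maps
# fhat and dfcheck of Section 4.1. Depth is capped for safety; the
# constants follow the text, but treat this as an illustration only.
import math

def make_curve(coeffs, d):
    """coeffs: {(i, j): c} with i + j <= d."""
    # Weyl norm via the homogenization (i, j) -> (i, j, d - i - j).
    w2 = sum(c * c * math.factorial(i) * math.factorial(j)
             * math.factorial(d - i - j) / math.factorial(d)
             for (i, j), c in coeffs.items())
    norm = math.sqrt(w2)

    def f(x, y):
        return sum(c * x**i * y**j for (i, j), c in coeffs.items())

    def grad(x, y):
        gx = sum(c * i * x**(i - 1) * y**j for (i, j), c in coeffs.items() if i > 0)
        gy = sum(c * j * x**i * y**(j - 1) for (i, j), c in coeffs.items() if j > 0)
        return gx, gy

    def fhat(x, y):    # f(x) / (||f|| (1 + |x|^2)^((d - 1)/2))
        return f(x, y) / (norm * (1 + x*x + y*y) ** ((d - 1) / 2))

    def dfcheck(x, y):  # |grad f(x)| / (sqrt(d) ||f|| (1 + |x|^2)^(d/2 - 1))
        gx, gy = grad(x, y)
        return math.hypot(gx, gy) / (math.sqrt(d) * norm * (1 + x*x + y*y) ** (d / 2 - 1))

    return d, fhat, dfcheck

def pv_subdivide(curve, a, max_depth=10):
    d, fhat, dfcheck = curve
    n = 2
    final, stack = [], [(-a, -a, 2 * a, 0)]  # squares (x0, y0, width, depth)
    while stack:
        x0, y0, w, depth = stack.pop()
        mx, my = x0 + w / 2, y0 + w / 2
        # Condition C'_f(J) from Theorem 4.3:
        ok = (abs(fhat(mx, my)) > (1 + math.sqrt(d)) * math.sqrt(n) * w
              or dfcheck(mx, my) > math.sqrt(2) * (1 + math.sqrt(d - 1)) * n * w)
        if ok or depth >= max_depth:
            final.append((x0, y0, w))
        else:
            h = w / 2
            stack += [(x0, y0, h, depth + 1), (x0 + h, y0, h, depth + 1),
                      (x0, y0 + h, h, depth + 1), (x0 + h, y0 + h, h, depth + 1)]
    return final

circle = make_curve({(2, 0): 1.0, (0, 2): 1.0, (0, 0): -1.0}, 2)
boxes = pv_subdivide(circle, a=2.0)
```

On the unit circle over [−2, 2]², the routine refines to moderate depth near the curve and stops early elsewhere, which is exactly the adaptive behaviour that the condition-based analysis below quantifies.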

6 ADAPTIVE COMPLEXITY ANALYSIS

As stated in Section 2 (and as done in [6]), our complexity analysis will focus on the number of subdivision steps of the subdivision routine of the PV Algorithm (Algorithm 2.1). That is, our cost measure will be the number of n-cubes in the final subdivision of [−a, a]^n. We note that we do not deal with the precision needed to run the algorithm. This issue should be treated in the future.

6.1 Local size bound framework [6]

The original analysis in [6] was based on the notion of local size bound.

Definition 6.1. A local size bound for f is a function b_f : R^n → [0, ∞) such that for all x ∈ R^n,

  b_f(x) ≤ inf{vol(J) | x ∈ J ∈ I_n and C_f(J) false}.

Arguing as in [6, Proposition 4.1], one can easily get the following general bound.

Proposition 6.2. [6] The number of n-cubes in the final subdivision of Algorithm 2.1 on input (f, a), regardless of how the subdivision step is done, is at most

  (2a)^n / inf{b_f(x) | x ∈ [−a, a]^n}. □

The bound above is worst-case: it considers the worst b_f(x) among the x ∈ [−a, a]^n. Continuous amortization, developed by Burr, Krahmer and Yap [4, 5], provides the following refined complexity estimate [6, Proposition 5.2], which is adaptive.

Theorem 6.3. [4–6] The number of n-cubes in the final subdivision of Algorithm 2.1 on input (f, a) is at most

  ∫_{[−a,a]^n} max(1, 2^n/b_f(x)) dx.

Moreover, the bound is finite if and only if the algorithm terminates. □

To effectively use Theorem 6.3 we need explicit estimates for the local size bound.

6.2 Construction in [6]

In [6], the authors use the function

  C(f, x) := min{ (2^{n−1} d/ln(1 + 2^{2−2n}) + √n/2) / dist(x, V_C(f)),
                  (2^{2n} (d−1)/ln(1 + 2^{2−4n}) + n/2) / dist((x, x), V_C(g_f)) },

where g_f is the polynomial ⟨∂f(X), ∂f(Y)⟩, to construct a local size bound.

Theorem 6.4. [6] Assume that the interval approximation satisfies the hypothesis of [6]. Then x ↦ 1/C(f, x)^n is a local size bound function for f. □

Looking at the definition of C(f, x) in [6], one can see that 1/C measures how near x is to being a singular zero of f. This is similar to 1/κ_aff which, by Theorem 5.2, measures how near f is to having a singular zero at x. The following result relates these two quantities.

Theorem 6.5. Let d > 1 and f ∈ P_{n,d}. Then, for all x ∈ R^n, C(f, x) ≤ 2^{3n} d² κ_aff(f, x).

Proof. Note that Corollary 4.2 holds over the complex numbers as well. Due to this, and the fact that V_C(f) = V_C(f̂), we have that

  |f̂(x)| ≤ (1 + √d) dist(x, V_C(f)).

Now, if √2 (1 + √(d−1)) dist((y₁, y₂), (x, x)) < ‖∂̌f(x)‖, then √2 (1 + √(d−1)) ‖y_i − x‖ < ‖∂̌f(x)‖. Thus, by Corollary 4.2, √2 ‖∂̌f(y_i) − ∂̌f(x)‖ < ‖∂̌f(x)‖ and so, by Lemma 4.4, 0 ≠ ⟨∂̌f(y₁), ∂̌f(y₂)⟩. Hence

  ‖∂̌f(x)‖ ≤ √2 (1 + √(d−1)) dist((x, x), V_C(g_f)).

The bound now follows from Proposition 5.5, together with the estimates bounding the two numerators in the definition of C(f, x), for which we use that 1/ln(1 + 2^{2−2n}) ≤ 2^{2n−3} and 1/ln(1 + 2^{2−4n}) ≤ 2^{4n−3}. □

The main difference between C(f, x) and κ_aff(f, x) is that C(f, x) is a non-linear quantity and is hard to compute, while the local condition number κ_aff(f, x) — as indicated in Corollary 5.3 — is a linear quantity and is rather easy to compute. Theorem 6.6 below and the complexity analysis in Section 7 show that the local condition number κ_aff(f, x) is easily amenable to the adaptive complexity analysis techniques developed by Burr, Krahmer and Yap [4, 5].

6.3 Condition-based complexity

The following result expresses a local size bound in terms of the local condition number κ_aff(f, x) directly, without using the construction in [6].

Theorem 6.6. Assume that the interval approximations satisfy (4.4) and (4.5). Then

  x ↦ (2^{5/2} d^{3/2} n κ_aff(f, x))^{−n}

is a local size bound for f.

Proof. Let x ∈ R^n. As, by Theorem 4.3, C′_f(J) implies C_f(J), it is enough to compute the minimum volume of J ∈ I_n containing x such that C′_f(J) is false. This will still give a local size function for f. Since x ∈ J, ‖x − m(J)‖ ≤ √n w(J)/2. Hence, by Corollary 4.2 and Proposition 5.5, either

  |f̂(m(J))| ≥ 1/(2√2 d κ_aff(f, x)) − (1 + √d) √n w(J)/2

or

  ‖∂̌f(m(J))‖ ≥ 1/(2√2 d κ_aff(f, x)) − (1 + √(d−1)) √n w(J)/2.

This means that C′_f(J) is true if either

  2√2 d (1 + √d) √n κ_aff(f, x) w(J) < 1  or  2√2 d (1 + √(d−1)) n κ_aff(f, x) w(J) < 1.

Hence we get that C′_f(J) is true when both conditions are satisfied, and the inequality 1 + √d ≤ 2√d finishes the proof. □

Using the results above, we get the following theorem, exhibiting a condition-based complexity analysis of Algorithm 2.1.

Theorem 6.7. The number of n-cubes in the final subdivision of Algorithm 2.1 on input (f, a) is at most

  d^{3n/2} max{1, a}^n 2^{n log n + 9n/2} E_{x∈[−a,a]^n} κ_aff(f, x)^n

if the interval approximations satisfy (4.4) and (4.5), and at most

  d^{2n} max{1, a}^n 2^{3n²+2n} E_{x∈[−a,a]^n} κ_aff(f, x)^n

if the interval approximation satisfies the hypothesis of [6].

Proof. This is just Theorems 6.3, 6.6 and 6.5, combined with the fact that the integral ∫_{[−a,a]^n} κ_aff(f, x)^n dx is nothing more than (2a)^n E_{x∈[−a,a]^n} κ_aff(f, x)^n. □

We observe that, in contrast with the complexity analyses (of condition numbers closely related to κ_aff) in the literature (see, e.g., [2, 3, 8–11]), the bounds in Theorem 6.7 depend on E_{x∈[−a,a]^n} κ_aff(f, x)^n and not on max_{x∈[−a,a]^n} κ_aff(f, x)^n. Whereas the former has finite expectation (over f), the latter has not. This shows that condition-based analysis, combined with adaptive complexity techniques such as continuous amortization, may lead to substantial improvements.
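The expectation E_{x∈[−a,a]^n} κ_aff(f, x)^n appearing in Theorem 6.7 can be probed numerically. The sketch below evaluates κ_aff through Definition 5.1 for a plane curve and estimates the expectation on a grid (helper names are ours):

```python
# Sketch: the local condition number of Definition 5.1,
#   kappa(F, y) = ||F|| / sqrt(F(y)^2 + ||grad F(y)|_{T_y S^n}||^2 / d),
# with kappa_aff(f, x) = kappa(f^h, phi(x)), and a grid estimate of the
# quantity E_{x in [-a,a]^2} kappa_aff(f, x)^2 that drives Theorem 6.7.
import math

def kappa_aff(coeffs, d, x, y):
    """coeffs: {(i, j, k): c} with i + j + k = d, the homogenization f^h."""
    w2 = sum(c * c * math.factorial(i) * math.factorial(j) * math.factorial(k)
             / math.factorial(d) for (i, j, k), c in coeffs.items())  # Weyl norm^2
    s = math.sqrt(1 + x * x + y * y)
    p = (x / s, y / s, 1 / s)                        # phi(x) in S^2
    F = sum(c * p[0]**i * p[1]**j * p[2]**k for (i, j, k), c in coeffs.items())
    g = [sum(c * e[m] * p[0]**(e[0] - (m == 0)) * p[1]**(e[1] - (m == 1))
             * p[2]**(e[2] - (m == 2)) for e, c in coeffs.items() if e[m] > 0)
         for m in range(3)]                          # gradient of f^h at p
    dot = sum(gi * pi for gi, pi in zip(g, p))
    tang2 = sum((gi - dot * pi) ** 2 for gi, pi in zip(g, p))  # tangential part
    return math.sqrt(w2) / math.sqrt(F * F + tang2 / d)

# Homogenized unit circle: X^2 + Y^2 - Z^2 (d = 2, Weyl norm sqrt(3)).
circ = {(2, 0, 0): 1.0, (0, 2, 0): 1.0, (0, 0, 2): -1.0}

# A near-singular curve x^2 + y^2 - 0.001 has a huge condition number at 0,
# reflecting Theorem 5.2 (it is Weyl-close to the singular x^2 + y^2).
near_singular = kappa_aff({(2, 0, 0): 1.0, (0, 2, 0): 1.0, (0, 0, 2): -1e-3},
                          2, 0.0, 0.0)
assert near_singular > 1000

# Grid estimate of E_{x in [-2,2]^2} kappa_aff(f, x)^2 for the circle.
a, m = 2.0, 100
vals = [kappa_aff(circ, 2, -a + 2*a*(i + 0.5)/m, -a + 2*a*(j + 0.5)/m) ** 2
        for i in range(m) for j in range(m)]
estimate = sum(vals) / len(vals)
assert 1.4 < estimate < 3.1  # the circle is well conditioned on all of [-2,2]^2
```

For this well-conditioned curve the expectation is a small constant, so the condition-based bound of Theorem 6.7 stays small, in contrast to the worst-case bound of [6].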

Proof. Œis is just Œeorems 6.3, 6.6 and 6.5 combined with loss of generality, we will assume c1c2Kρ 1. Moreover, since , n ≥ the fact that the integral a,a n κaff f x dx is nothing more κaff is scale invariant, we can assume, again without loss of n E [− ] n ( )  than 2a x a,a n κaff f , x . generality, that c1K 1. ( ) ∈[− ] ( ∫ ( ) ) ≥ We observe that in contrast with the complexity analyses Proof of Theorem 7.1. κ f , x = f Px f by Corol- (of condition numbers closely related to κ ) in the literature aff( ) k k/k k aff lary 5.3. So, by an union bound, for all u,s > 0, (see, e.g., [2, 3, 8–11]), the bounds in Œeorem 6.7 depend on E n n x a,a n κaff f , x and not on maxx a,a κaff f , x . Whereas P P + P ( ( ) ) ( ) κaff f , x s f u Px f u s . (7.1) the∈[− former] has finite expectation (over f∈[−), the] laŠer has not. ( ( ) ≥ )≤ (k k ≥ ) (k k ≤ / ) Œis shows that condition-based analysis combined with adap- By Œeorems 7.2 and 7.4, we have tive complexity techniques such as continuous amortization n+1 may lead to substantial improvements. 2 2 uc2ρ P κaff f , x s exp 1 u c1K + . ( ( )≥ )≤ ( − /( ) ) s√n + 1 7 PROBABILISTIC ANALYSES   N n+1 In this section, we prove Œeorems 3.1 and 3.2 stated in Sec- We set u = c1K N ln s and use s s , so we get ( ) − ≤ −( ) tion 3. p n+1 n+1 c c Kρ√N ln s 2 P , 1 2 . 7.1 Average Complexity Analysis κaff f x s 4 (n)+1 ( ( )≥ )≤ √n + 1 ! s Œe following theorem is the main technical result from which 1 the average complexity bound will follow. By substituting s = t n we are done.  Theorem 7.1. Let f , be a dobro random polynomial ∈ Pn d with parameters K and ρ. For all x Rn and t en, Combining Œeorem 6.7 with the next theorem, we get the ∈ ≥ n+1 n+1 proof of Œeorem 3.1. c c Kρ√N ln t 2 P , n 1 2 κaff f x t 4 ( +) 1 ( ) ≥ ≤ n n + 1 1 n Theorem 7.7. Let f n,d be a dobro random polynomial ! t ∈ P  ( ) with parameters K and ρ. Šen where c1 and c2 are, respectively,p the universal constants of Še- 2 orems 7.2 and 7.4. 
The proof of Theorem 7.1 relies on two basic results from geometric functional analysis.

Theorem 7.2. [22, Theorems 2.6.3 and 3.1.1] There is a universal constant c1 ≥ 1 with the following property. For all random vectors X = (X1, …, X_N)^T with each X_i centered and sub-Gaussian with ψ2-norm K, and for all t ≥ c1K√N, the following inequality is satisfied:

    P( ‖X‖ ≥ t ) ≤ exp( 1 − t²/(c1K)² ).

Definition 7.3. The concentration function of a random vector X ∈ ℝ^k is the function

    L_X(ε) := max_{u∈ℝ^k} P( ‖X − u‖ ≤ ε ).

Theorem 7.4. [20, Corollary 1.4] There is a universal constant c2 ≥ 1 with the following property. For every random vector X = (X1, …, X_N)^T with independent random variables X_i, and every k-dimensional linear subspace S of ℝ^N, we have

    L_{P_S X}( ε√k ) ≤ ( c2 max_{1≤i≤N} L_{X_i}(ε) )^k,

where P_S is the orthogonal projection onto S.

Remark 7.5. In [15] and references therein one can find information about the optimal value of the absolute constant c2 in Theorem 7.4.

Remark 7.6. We notice that for a dobro random polynomial f with parameters K and ρ we have Kρ ≥ 1 [13, (1)]. Actually, the product Kρ is invariant under scaling in the following sense: for t > 0, tf is again a dobro polynomial with parameters tK and ρ/t. Hence, for the sake of simplicity and without loss of generality, we will assume c1c2Kρ ≥ 1. Moreover, since κ_aff is scale invariant, we can assume, again without loss of generality, that c1K ≥ 1.

Proof of Theorem 7.1. κ_aff(f, x) = ‖f‖/‖P_x f‖ by Corollary 5.3. So, by a union bound, for all u, s > 0,

    P( κ_aff(f, x) ≥ s ) ≤ P( ‖f‖ ≥ u ) + P( ‖P_x f‖ ≤ u/s ).    (7.1)

By Theorems 7.2 and 7.4, we have

    P( κ_aff(f, x) ≥ s ) ≤ exp( 1 − u²/(c1K)² ) + ( u c2 ρ / (s √(n+1)) )^{n+1}.

We set u = c1K √(N ln s) and use s^{−N} ≤ s^{−(n+1)}, so we get

    P( κ_aff(f, x) ≥ s ) ≤ 4 ( c1c2Kρ √(N ln s) / √(n+1) )^{n+1} (1/s)^{n+1}.

By substituting s = t^{1/n} we are done. □

Combining Theorem 6.7 with the next theorem, we get the proof of Theorem 3.1.

Theorem 7.7. Let f ∈ P_{n,d} be a dobro random polynomial with parameters K and ρ. Then

    E_f E_{x∈[−a,a]^n} κ_aff(f, x)^n ≤ d^{(n²+n)/2} 2^{n(3 log n + 9)/2} (c1c2Kρ)^{n+1},

where c1 and c2 are the universal constants of Theorems 7.2 and 7.4.

Proof. By the Fubini–Tonelli theorem,

    E_f E_{x∈[−a,a]^n} κ_aff(f, x)^n = E_{x∈[−a,a]^n} E_f κ_aff(f, x)^n,

so it is enough to have a uniform bound for

    E_f κ_aff(f, x)^n = ∫_0^∞ P( κ_aff(f, x)^n ≥ t ) dt.

Now, by Theorem 7.1, this is bounded by

    e^n + 4 ∫_{e^n}^∞ ( c1c2Kρ √(N ln t) / √(n+1) )^{n+1} dt / t^{1+1/n}.

After the change of variables t = e^{ns} the integral becomes, up to the constant factors above,

    ∫_0^∞ n s^{(n+1)/2} e^{−ns} ds = n^{−(n+1)/2} Γ( (n+3)/2 ),

where Γ is Euler's Gamma function. Using the Stirling estimates for it, we obtain

    Γ( (n+3)/2 ) ≤ √(2π) ( (n+3)/(2e) )^{(n+2)/2},

and N ≤ (2d)^n. Combining all these inequalities, we obtain the desired upper bound. □
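The first step in the proof of Theorem 7.7 writes an expectation as the integral of tail probabilities. This layer-cake identity can be checked numerically on a distribution whose tail is known in closed form; the sketch below is ours (illustrative only, Python standard library, with exponential and uniform tails as examples).

```python
import math

# Illustration of the layer-cake formula used in the proof of Theorem 7.7:
# for a nonnegative random variable X,  E[X] = ∫_0^∞ P(X >= t) dt.
# For an exponential variable with rate 1, P(X >= t) = e^{-t} and E[X] = 1.

def integral_of_tail(tail, upper=40.0, steps=100000):
    # composite trapezoid rule; the truncated tail beyond `upper` is negligible
    h = upper / steps
    total = 0.5 * (tail(0.0) + tail(upper))
    for i in range(1, steps):
        total += tail(i * h)
    return total * h

mean = integral_of_tail(lambda t: math.exp(-t))
assert abs(mean - 1.0) < 1e-5  # matches E[X] = 1 for the exponential
```

The same routine applied to the uniform distribution on [0, 1], whose tail is max(0, 1 − t), recovers its mean 1/2.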

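The Gamma-integral identity appearing in the proof of Theorem 7.7 after the change of variables can likewise be verified numerically. The quadrature below is our own illustrative check, not part of the paper's argument.

```python
import math

# Numerical check of the identity from the proof of Theorem 7.7:
#     ∫_0^∞ n s^{(n+1)/2} e^{-n s} ds = n^{-(n+1)/2} Γ((n+3)/2).
# It follows from the substitution v = n s in Γ(z) = ∫_0^∞ v^{z-1} e^{-v} dv.

def lhs_integral(n, upper=60.0, steps=100000):
    # composite trapezoid rule on [0, upper]; the truncated tail is negligible
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        s = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * n * s ** ((n + 1) / 2) * math.exp(-n * s)
    return total * h

for n in (1, 2, 3):
    closed_form = n ** (-(n + 1) / 2) * math.gamma((n + 3) / 2)
    assert abs(lhs_integral(n) - closed_form) < 1e-4
```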
7.2 Smoothed Complexity Analysis

The tools used for our average complexity analysis also yield a smoothed complexity analysis (see [21] or [1, §2.2.7]). We provide this analysis following the lines of [14].

The main idea of smoothed complexity is to have a complexity measure interpolating between worst-case complexity and average-case complexity. More precisely, we are interested in the maximum, over f ∈ P_{n,d}, of the average cost of Algorithm 2.1 with input

    q_σ := f + σ ‖f‖ g,    (7.2)

where g ∈ P_{n,d} is a dobro random polynomial with parameters K and ρ, and σ ∈ (0, ∞). Notice that the perturbation σ‖f‖g of f is proportional to both σ and ‖f‖.

Lemma 7.8. Let q_σ be as in (7.2). Then for t > 1 + σ√N,

    P( ‖q_σ‖ ≥ t ‖f‖ ) ≤ exp( 1 − (t−1)²/(σc1K)² ),

and for every x ∈ ℝ^n,

    P( ‖P_x q_σ‖ ≤ ε ) ≤ ( c2 ρ ε / (σ ‖f‖ √(n+1)) )^{n+1},

where P_x is as in Corollary 5.3.

Proof. By the triangle inequality we have P(‖q_σ‖ ≥ t‖f‖) ≤ P(‖g‖ ≥ (t−1)/σ). Then we apply Theorem 7.2, which finishes the proof of the first claim. The second claim is a direct consequence of Theorem 7.4. □

As in the average case, this leads to a tail bound.

Theorem 7.9. Let q_σ be as in (7.2). Then for σ > 0 and t ≥ e^n, P( κ_aff(q_σ, x)^n ≥ t ) is bounded by

    4 ( c1c2Kρ √(N ln t) / √(n+1) )^{n+1} · (1/t^{1+1/n}) · (1 + 1/σ)^{n+1},

where c1 and c2 are, respectively, the universal constants of Theorems 7.2 and 7.4.

Proof. We proceed as in the proof of Theorem 7.1, but with Lemma 7.8, using u = ‖f‖ ( σc1K √(N ln t) + 1 ). This gives the desired bound, arguing as in that proof, after noticing that

    u ≤ ‖f‖ (1 + σ) c1K √(N ln t),

which holds since c1K √(N ln t) ≥ 1. □

Finally, the following theorem, together with Theorem 6.7, gives the proof of Theorem 3.2.

Theorem 7.10. Let q_σ be as in (7.2). Then for all σ > 0, E_g E_{x∈[−a,a]^n} κ_aff(q_σ, x)^n is bounded by

    d^{(n²+n)/2} 2^{n(3 log n + 9)/2} (c1c2Kρ)^{n+1} (1 + 1/σ)^{n+1},

where c1 and c2 are the universal constants of Theorems 7.2 and 7.4.

Proof. The proof is as that of Theorem 7.7, but using Theorem 7.9 instead of Theorem 7.1. Actually, the integrand one ends up with is the same, up to a constant, so the calculation of the integral is the same up to that constant. □

REFERENCES
[1] Peter Bürgisser and Felipe Cucker. 2013. Condition. Grundlehren der mathematischen Wissenschaften, Vol. 349. Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-38896-5
[2] Peter Bürgisser, Felipe Cucker, and Pierre Lairez. 2018. Computing the Homology of Basic Semialgebraic Sets in Weak Exponential Time. J. ACM 66, 1, Article 5 (2018), 30 pages. https://doi.org/10.1145/3275242
[3] Peter Bürgisser, Felipe Cucker, and Josué Tonelli-Cueto. 2018. Computing the Homology of Semialgebraic Sets. I: Lax Formulas. (2018). arXiv:1807.06435. To appear in Found. Comput. Math.
[4] Michael Burr, Felix Krahmer, and Chee Yap. 2009. Continuous amortization: A non-probabilistic adaptive analysis technique. Electronic Colloquium on Computational Complexity, Report No. 136 (2009).
[5] Michael A. Burr. 2016. Continuous amortization and extensions: with applications to bisection-based root isolation. J. Symbolic Comput. 77 (2016), 78–126. https://doi.org/10.1016/j.jsc.2016.01.007
[6] Michael A. Burr, Shuhong Gao, and Elias P. Tsigaridas. 2017. The complexity of an adaptive subdivision method for approximating real curves. In Proceedings of the 2017 ACM International Symposium on Symbolic and Algebraic Computation (ISSAC '17). ACM, New York, 61–68. https://doi.org/10.1145/3087604.3087654
[7] Michael A. Burr, Shuhong Gao, and Elias P. Tsigaridas. 2018. The Complexity of Subdivision for Diameter-Distance Tests. (2018). arXiv:1801.05864
[8] Felipe Cucker, Teresa Krick, Gregorio Malajovich, and Mario Wschebor. 2008. A numerical algorithm for zero counting. I: Complexity and accuracy. J. Complexity 24 (2008), 582–605. https://doi.org/10.1016/j.jco.2008.03.001
[9] Felipe Cucker, Teresa Krick, Gregorio Malajovich, and Mario Wschebor. 2009. A numerical algorithm for zero counting. II: Distance to ill-posedness and smoothed analysis. J. Fixed Point Theory Appl. 6 (2009), 285–294. https://doi.org/10.1007/s11784-009-0127-4
[10] Felipe Cucker, Teresa Krick, Gregorio Malajovich, and Mario Wschebor. 2012. A numerical algorithm for zero counting. III: Randomization and condition. Adv. Applied Math. 48 (2012), 215–248. https://doi.org/10.1016/j.aam.2011.07.001
[11] Felipe Cucker, Teresa Krick, and Michael Shub. 2018. Computing the Homology of Real Projective Sets. Found. Comput. Math. 18 (2018), 929–970. https://doi.org/10.1007/s10208-017-9358-8
[12] James Demmel. 1988. The probability that a numerical analysis problem is difficult. Math. Comp. 50 (1988), 449–480. https://doi.org/10.2307/2008617
[13] Alperen A. Ergür, Grigoris Paouris, and J. Maurice Rojas. 2018. Probabilistic Condition Number Estimates for Real Polynomial Systems I: A Broader Family of Distributions. Found. Comput. Math. (2018). https://doi.org/10.1007/s10208-018-9380-5
[14] Alperen A. Ergür, Grigoris Paouris, and J. Maurice Rojas. 2018. Probabilistic Condition Number Estimates for Real Polynomial Systems II: Structure and Smoothed Analysis. (2018). arXiv:1809.03626
[15] Galyna Livshyts, Grigoris Paouris, and Peter Pivovarov. 2016. On sharp bounds for marginal densities of product measures. Israel Journal of Mathematics 216, 2 (2016), 877–889. https://doi.org/10.1007/s11856-016-1431-5
[16] Simon Plantinga and Gert Vegter. 2004. Isotopic Approximation of Implicit Curves and Surfaces. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing (SGP '04). ACM, New York, NY, USA, 245–254. https://doi.org/10.1145/1057432.1057465
[17] Helmut Ratschek and Jon Rokne. 1984. Computer methods for the range of functions. Ellis Horwood Ltd., Chichester; Halsted Press [John Wiley & Sons, Inc.], New York. 168 pages.
[18] James Renegar. 1995. Incorporating condition measures into the complexity theory of linear programming. SIAM J. Optim. 5 (1995), 506–524. https://doi.org/10.1137/0805026
[19] Mark Rudelson and Roman Vershynin. 2008. The Littlewood-Offord problem and invertibility of random matrices. Adv. Math. 218, 2 (2008), 600–633. https://doi.org/10.1016/j.aim.2008.01.010
[20] Mark Rudelson and Roman Vershynin. 2015. Small ball probabilities for linear images of high-dimensional distributions. Int. Math. Res. Not. IMRN 19 (2015), 9594–9617. https://doi.org/10.1093/imrn/rnu243
[21] Daniel A. Spielman and Shang-Hua Teng. 2002. Smoothed Analysis of Algorithms. In Proceedings of the International Congress of Mathematicians, Vol. I. 597–606.
[22] Roman Vershynin. 2018. High-dimensional probability: An introduction with applications in data science. Cambridge Series in Statistical and Probabilistic Mathematics, Vol. 47. Cambridge University Press, Cambridge. https://doi.org/10.1017/9781108231596