arXiv:1502.01561v1 [math.FA] 5 Feb 2015

An implicit function theorem for non-smooth maps between Fréchet spaces

Ivar Ekeland, Eric Séré
CEREMADE, Université Paris-Dauphine, 75016 Paris, France

November 2013

1 Introduction

In this paper, we prove a "hard" inverse function theorem, that is, an inverse function theorem for maps which lose derivatives. Such theorems have a long history, starting with Kolmogorov ([2], [3], [4]) in the Soviet Union and with Nash ([15]) in the United States, and important contributions have been made since by Mather, Zehnder, Sergeraert, Tougeron, Hamilton, Hörmander, Craig, Dacorogna, Bourgain, Herman, Berti and Bolle, and lately by Villani and Mouhot. In a short paper such as this, it would be impossible to give a full account of all the results and developments which have occurred since; see the survey by Hamilton [10] for an account up to 1982.

However, all the results we are aware of require the function F to be inverted to be at least C²; in the Kolmogorov–Arnol'd–Moser tradition, for instance, one uses the fast convergence of Newton's method to overcome the loss of derivatives. In contrast, we make no smoothness assumption on F: we only require that it is continuous and Gâteaux-differentiable. We will overcome the loss of derivatives by using a new version of the "soft" inverse function theorem (between Banach spaces) which is given in [12], namely:

Theorem 1 Let X and Y be Banach spaces, with respective norms ‖·‖ and ‖·‖′. Let F : X → Y be continuous and Gâteaux-differentiable, with F(0) = 0. Assume that the derivative DF(x) has a right-inverse [DF(x)]⁻¹_r, uniformly bounded in a neighbourhood of 0: there are R > 0 and m > 0 such that

‖x‖ ≤ R  ⟹  sup { ‖[DF(x)]⁻¹_r y‖ : ‖y‖′ ≤ 1 } ≤ m

Then, for every ȳ ∈ Y such that ‖ȳ‖′ ≤ R/m, there is some x̄ ∈ X such that

F(x̄) = ȳ  and  ‖x̄‖ ≤ m ‖ȳ‖′

As we just said, we will make no attempt to review the literature on hard inverse function theorems; see the survey by Hamilton [10] for an account up to 1982. We have drawn inspiration from the version in [1], which itself is inspired from Hörmander's result [11]. We have also learned much from the work on the nonlinear wave equation by Berti and Bolle, [5], [6], [7], [8], whom we thank for extensive discussions.

In Section 2, we state our main result, Theorem 5, and we derive it from an approximation procedure, which is described in Theorem 10. We also give some variants, for instance an implicit function theorem, and we describe a particular case when we can gain some regularity. Theorem 10 is proved in Section 3, and the theoretical part is thus complete. The next two sections are devoted to applications. Section 5 revisits the classical isometric imbedding problem, which was the purpose of Nash's original work. This is somewhat academic, since it is known now that it can be treated without resorting to a hard theorem (see [9]), but it gives us the opportunity to show on a simple example how our result improves, for instance, on those of Moser [14].

2 The setting

2.1 The spaces

Let (X_s, ‖·‖_s)_{s≥0} be a scale of Banach spaces:

0 ≤ s₁ ≤ s₂  ⟹  ( X_{s₂} ⊂ X_{s₁} and ‖·‖_{s₁} ≤ ‖·‖_{s₂} )

We shall assume that there exists a sequence of projectors Π_N : X₀ → E_N, where E_N ⊂ ∩_{s≥0} X_s is the range of Π_N, with Π₀ = 0, E_N ⊂ E_{N+1}, and ∪_{N≥1} E_N is dense in each space X_s for the norm ‖·‖_s. We assume that for any finite constant A there is a constant C₁^A > 0 such that, for all nonnegative numbers s, d satisfying s + d ≤ A:

‖Π_N u‖_{s+d} ≤ C₁^A N^d ‖u‖_s    (1)

‖(1 − Π_N) u‖_s ≤ C₁^A N^{−d} ‖u‖_{s+d}    (2)

Note that these properties imply some interpolation inequalities, for 0 ≤ t ≤ 1 and 0 ≤ s₁, s₂ ≤ A, and for a new constant C₂^A (see e.g. [8]):

‖x‖_{t s₁ + (1−t) s₂} ≤ C₂^A ‖x‖_{s₁}^t ‖x‖_{s₂}^{1−t}    (3)

If all these properties are satisfied, we shall say that the scale (X_s), s ≥ 0, is regular, and we shall refer to the Π_N as smoothing operators. Let (Y_s, ‖·‖′_s)_{s≥0} be another regular scale of Banach spaces. We shall denote by Π′_N : Y₀ → E′_N ⊂ ∩_{s≥0} Y_s the smoothing operators. In the sequel, B_s(R) (resp. B′_s(R)) will denote the open ball of center 0 and radius R in X_s (resp. Y_s).
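As a concrete model of such a regular scale (ours, not the paper's), one can take Fourier truncation on weighted ℓ² sequence spaces, with Π_N keeping the frequencies k ≤ N. The sketch below checks instances of (1), (2) and (3) numerically; the constants 2^d and 1 used here are specific to this model.

```python
import numpy as np

def norm_s(coeffs, s):
    """Sobolev-type norm ||u||_s = (sum (1+k^2)^s |u_k|^2)^(1/2) on Fourier coefficients."""
    k = np.arange(len(coeffs))
    return np.sqrt(np.sum((1.0 + k ** 2) ** s * np.abs(coeffs) ** 2))

def project(coeffs, N):
    """Smoothing operator Pi_N: keep frequencies k <= N, zero out the rest."""
    out = coeffs.copy()
    out[np.arange(len(coeffs)) > N] = 0.0
    return out

rng = np.random.default_rng(0)
u = rng.standard_normal(256) / (1.0 + np.arange(256.0)) ** 3  # a smooth-ish test vector

N, s, d = 16, 1.0, 2.0
# (1): ||Pi_N u||_{s+d} <= C N^d ||u||_s   (C = 2^d works for truncation)
lhs1 = norm_s(project(u, N), s + d)
rhs1 = 2.0 ** d * N ** d * norm_s(u, s)
# (2): ||(1 - Pi_N) u||_s <= N^{-d} ||u||_{s+d}
lhs2 = norm_s(u - project(u, N), s)
rhs2 = N ** (-d) * norm_s(u, s + d)
# (3): interpolation, here with constant 1 by Hoelder's inequality
t, s1, s2 = 0.3, 0.5, 3.0
lhs3 = norm_s(u, t * s1 + (1 - t) * s2)
rhs3 = norm_s(u, s1) ** t * norm_s(u, s2) ** (1 - t)

print(lhs1 <= rhs1, lhs2 <= rhs2, lhs3 <= rhs3)
```

In this model (2) and (3) hold with constant 1, since the frequencies above N dominate (1+N²)^d and Hölder's inequality applies term by term.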

2.2 The map

Recall that a map F : X → Y, where X and Y are Banach spaces, is Gâteaux-differentiable at x if there is a linear map DF(x) : X → Y such that:

∀h ∈ X,  lim_{t→0} (1/t) [ F(x + th) − F(x) ] = DF(x) h

In the following, R > 0 and S > 0 are prescribed, with possibly S = ∞.
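As a finite-dimensional toy illustration of this definition (ours, not from the paper), one can watch the difference quotient converge to DF(x)h for a smooth map, where Gâteaux and Fréchet derivatives coincide:

```python
import numpy as np

def F(x):
    # a simple nonlinear map R^2 -> R^2
    return np.array([x[0] ** 2 + x[1], np.sin(x[0] * x[1])])

def DF(x):
    # its derivative, as a 2x2 Jacobian matrix
    return np.array([
        [2 * x[0], 1.0],
        [x[1] * np.cos(x[0] * x[1]), x[0] * np.cos(x[0] * x[1])],
    ])

x = np.array([0.7, -0.3])
h = np.array([0.2, 0.5])

# (1/t)(F(x + t h) - F(x)) approaches DF(x) h as t -> 0
errs = [np.linalg.norm((F(x + t * h) - F(x)) / t - DF(x) @ h)
        for t in (1e-2, 1e-4, 1e-6)]
print(errs)
```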

Definition 2 We shall say that F : B₀(R) → Y₀ is roughly tame with loss of regularity µ if:

(a) F is continuous and Gâteaux-differentiable from B₀(R) ∩ X_s to Y_s for any s ∈ [0, S).

(b) For any A ∈ [0, S) there is a finite constant K^A such that, for all s < A and x ∈ B₀(R):

∀h ∈ X,  ‖DF(x) h‖′_s ≤ K^A ( ‖h‖_s + ‖x‖_s ‖h‖₀ )    (4)

(c) For x ∈ B₀(R) ∩ E_N, the linear maps L_N(x) : E_N → E′_N defined by L_N = Π′_N DF(x)|_{E_N} have a right-inverse, denoted by [L_N(x)]⁻¹_r. There is a constant µ > 0 and, for any A ∈ [0, S), a positive constant γ_A, such that, for all s < A and x ∈ B₀(R) we have:

∀k ∈ E′_N,  ‖[L_N(x)]⁻¹_r k‖_s ≤ (N^µ / γ_A) ( ‖k‖′_s + ‖x‖_s ‖k‖′₀ )    (5)

In our assumptions, there is no regularity loss between x and F(x); or, more exactly, the regularity loss, if there is one, has been absorbed by translating the indexation of the spaces Y_s. The number S represents the maximum regularity available, and the constant µ may be interpreted as the loss of derivatives incurred when solving the linearized equation L_N(x)h = k. Note that we need S > µ.

2.3 The main result

We will need an assumption relating µ, δ and S, with µ < δ and S > µ. Here it is:

Condition 3 There is some κ such that:

1 < κ < 2  and  min{κ², κ + 1} µ < δ    (6)

(κ² / (κ − 1)) µ < S    (7)

Note that (7) imposes S > 4µ, since 4 is the minimum value of κ²/(κ−1), attained when κ = 2. On the other hand, min{κ², κ+1} is an increasing function of κ, which coincides with κ² when κ ≤ (1+√5)/2 and coincides with κ+1 when κ > (1+√5)/2. So inequality (6) imposes δ > µ, the bound being attained in the limit κ → 1.

Let us represent Condition 3 geometrically. Define a real function ϕ on (4, +∞) by:

ϕ(x) = (x/2) (1 − √(1 − 4/x)) + 1      if 4 < x ≤ (3+√5)/(√5−1)
ϕ(x) = (x²/4) (1 − √(1 − 4/x))²       if x ≥ (3+√5)/(√5−1)    (8)

Proposition 4 (µ, δ, S) ∈ R³₊ satisfies Condition 3 if and only if either δ/µ ≥ 3, or: δ/µ ≤ 3 and δ/µ ≥ ϕ(S/µ).

Proof. Follows immediately from inverting formulas (6) and (7).

We have represented the admissible region for (S/µ, δ/µ) on Figure 1: it is the region Ω above the curve.

Figure 1: The function ϕ (x)
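The closed form (8) can be checked against the parametric description of the curve (a sketch of ours, not from the paper): for κ ∈ (1, 2), the point x = κ²/(κ−1) should satisfy ϕ(x) = min{κ², κ+1}.

```python
import math

GOLDEN = (1 + math.sqrt(5)) / 2
KINK = (3 + math.sqrt(5)) / (math.sqrt(5) - 1)  # = 2 + sqrt(5), where the branches of (8) meet

def phi(x):
    """The function of formula (8), defined for x > 4."""
    root = (x / 2) * (1 - math.sqrt(1 - 4 / x))  # the root kappa in (1, 2] of kappa^2/(kappa-1) = x
    if x <= KINK:
        return root + 1
    return root ** 2

for kappa in (1.2, 1.5, GOLDEN, 1.9):
    x = kappa ** 2 / (kappa - 1)
    assert abs(phi(x) - min(kappa ** 2, kappa + 1)) < 1e-9, (kappa, x, phi(x))
print("phi matches the parametric description; kink at x =", KINK)
```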

The parameter κ decreases from κ = 2 (corresponding to δ/µ = 3) to κ = 1 (corresponding to S/µ → ∞) along the curve. Note the kink at S/µ = (3+√5)/(√5−1), δ/µ = (3+√5)/2, corresponding to κ = (1+√5)/2 (the golden ratio). In the sequel, we will separate the case κ ≤ (1+√5)/2 (to the right) from the case κ ≥ (1+√5)/2 (to the left):

1 < κ ≤ (1+√5)/2 :   δ/µ > κ² ≥ 1,   S/µ > κ²/(κ−1) ≥ (3+√5)/(√5−1)
(1+√5)/2 ≤ κ < 2 :   δ/µ > κ + 1 ≥ (3+√5)/2,   S/µ > κ²/(κ−1) ≥ 4

Theorem 5 Assume F : B₀(R) ∩ X_s → Y_s, 0 ≤ s < S, is roughly tame with loss of regularity µ, and F(0) = 0. Let δ > 0 and α > 0 be such that:

δ/µ > ϕ(S/µ)    (9)

α/µ < min{ δ/µ − ϕ(S/µ),  ϕ⁻¹(δ/µ) − S/µ }    (10)

Then one can find ρ > 0 and C > 0 such that, for any y ∈ Y_δ with ‖y‖′_δ ≤ ρ, there is some x ∈ X_α such that:

F(x) = y
‖x‖₀ ≤ 1
‖x‖_α ≤ C ‖y‖′_δ

It follows that the map F sends X₀ into Y₀, while F⁻¹ sends Y_δ into X₀. This parallels the situation with the linearized operator DF(0), which sends X₀ into Y₀ while DF(0)⁻¹ sends Y_µ into X₀, with µ < δ. More precisely, we have:

• if F(x̄) = ȳ, then F(X₀) contains some δ-neighbourhood of ȳ
• if F(x̄) = ȳ, then F⁻¹(ȳ + Y_δ) contains some α-neighbourhood of x̄

S is the maximal regularity on x and δ is the minimal regularity on y that we will need in the approximation procedure, bearing in mind that F(0) = 0 (see Corollary 6 below for the case when F(0) = ȳ ≠ 0 has finite regularity). Note that we may have δ > S: this simply means that the right-hand side y is more regular than the sequence of approximate solutions x_n that we will construct. α > 0 is the regularity of the solution x.

Note the significance of (9) and (10) taken together. The inequality δ/µ > ϕ(S/µ) tells us that the right-hand side y is more regular than needed (or, alternatively, that the full range of S has not been used), and this "excess regularity", measured by the difference δ/µ − ϕ(S/µ) (or, alternatively, ϕ⁻¹(δ/µ) − S/µ), can be diverted to x. For instance, if S/µ → ∞, the total loss of regularity δ − α between y and x satisfies

δ − α > µ ϕ(S/µ)

and can be made as close to µ as one wishes: we can start the iterative procedure x_n from a very regular initial point. However, we lose control on ρ (which goes to zero) and C (which goes to infinity). On the other hand, when δ/µ > 3, that is, when we are not worried about the loss of regularity, then S can be any number larger than µ; that is, we need very little regularity to start with.

We now go from the case F(0) = 0 to the case F(x̄) = ȳ. There is some subtlety there, because 0 belongs to all the X_s, while ȳ does not, and this puts additional limits on the regularity. We shall say that F is roughly tame at x̄ if the map x ↦ F(x + x̄) − ȳ is roughly tame at 0.

Corollary 6 Suppose x̄ ∈ X_{S₁}, ȳ ∈ Y_{S₂} and F(x̄) = ȳ. Assume F sends X_s ∩ B₀(x̄, R) into Y_s for every s ≥ 0, and is roughly tame at x̄ with loss of regularity µ. Set S = min{S₁, S₂}. Let δ and α satisfy (9) and (10). Then one can find ρ > 0 and C > 0 such that, for any y with ‖ȳ − y‖′_δ ≤ ρ, there is a solution x of the equation F(x) = y, with ‖x − x̄‖₀ ≤ 1 and ‖x − x̄‖_α ≤ C ‖y − ȳ‖′_δ.

Proof. Consider the map Φ(x) := F(x + x̄) − ȳ. It is roughly tame, with Φ(0) = 0, and we can apply the preceding Theorem with S = min{S₁, S₂}. The result follows.

We now deduce an implicit function theorem. Let V be a Banach space (the space of parameters) and let F : B(R, X₀ × V) ∩ (X_s × V) → Y_s, 0 ≤ s < S, satisfy:

(a′) F is continuous and Gâteaux-differentiable with respect to x from B(R, X₀ × V) ∩ (X_s × V) to Y_s for any s ∈ [0, S).

(b′) For any A ∈ [0, S) there is a finite constant K^A such that, for all s < A and (x, v) ∈ B(R, X₀ × V):

∀h ∈ X,  ‖D_x F(x, v) h‖′_s ≤ K^A ( ‖h‖_s + ‖x‖_s ‖h‖₀ )

(c′) For (x, v) ∈ B(R, X₀ × V) ∩ (E_N × V), the linear maps L_N(x, v) : E_N → E′_N defined by L_N = Π′_N D_x F(x, v)|_{E_N} have a right-inverse, denoted by [L_N(x, v)]⁻¹_r. There is a constant µ > 0 and, for any A ∈ [0, S), a positive constant γ_A, such that, for all s < A and (x, v) ∈ B(R, X₀ × V) we have:

∀k ∈ E′_N,  ‖[L_N(x, v)]⁻¹_r k‖_s ≤ (N^µ / γ_A) ( ‖k‖′_s + ‖x‖_s ‖k‖′₀ )

Corollary 8 Assume (a′), (b′), (c′) are satisfied and F(0, 0) = 0. Take any α such that (10) holds for some δ satisfying (9). Then one can find ρ > 0 and C > 0 such that, for any v with ‖v‖ ≤ ρ, there is some x such that:

F(x, v) = 0
‖x‖₀ ≤ 1
‖x‖_α ≤ C ‖v‖

Proof. Consider the Banach scales X_s × V and Y_s × V with the natural norms. Consider the map Φ(x, v) = (F(x, v), v) from X_s × V, 0 ≤ s < S, to Y_s × V. It is roughly tame with loss of regularity µ, and Φ(0, 0) = 0, so the result follows from Theorem 5 applied to Φ.

2.4 A particular case

In the case when F(x) = Ax + G(x), where A is linear, we can improve the regularity.

Proposition 9 Suppose F(x) = Ax + G(x), where A : X_{s+ν} → Y_s is a continuous linear operator, independent of the base point x, and G satisfies G(0) = 0. Suppose moreover that:

(a) G is continuous and Gâteaux-differentiable from B₀(R) ∩ X_s to Y_s for any s ∈ [0, S).

(b) For any A ∈ [0, S) there is a finite constant K^A such that, for all s ≤ A and x ∈ B₀(R):

∀h ∈ X,  ‖DG(x) h‖′_s ≤ K^A ( ‖h‖_s + ‖x‖_s ‖h‖₀ )

(c) For x ∈ B₀(R) ∩ E_N, the linear maps L_N(x) : E_N → E′_N have a right-inverse, denoted by [L_N(x)]⁻¹_r. There is a constant µ > 0 and, for any A ∈ [0, S), a positive constant γ_A, such that, for all s ∈ [0, S) and x ∈ B₀(R) we have:

∀k ∈ E′_N,  ‖[L_N(x)]⁻¹_r k‖_s ≤ (N^µ / γ_A) ( ‖k‖′_s + ‖x‖_s ‖k‖′₀ )

(d) The E_N are A-invariant.

Let δ > 0 and α > 0 be such that:

δ/µ > ϕ(S/µ)

α/µ < min{ δ/µ − ϕ(S/µ),  ϕ⁻¹(δ/µ) − S/µ }

Then one can find ρ > 0 and C > 0 such that, for any y ∈ Y_δ with ‖y‖′_δ ≤ ρ, there is some x ∈ X_α such that:

F(x) = y
‖x‖₀ ≤ 1
‖x‖_α ≤ C ‖y‖′_δ

In this situation, a direct application of Theorem 5 would give a loss of regularity of µ + ν. Proposition 9 tells us that µ is enough: the loss of regularity due to the linear part can be circumvented.

2.5 The approximating sequence

Theorem 5 is proved by an approximation procedure: we construct by induction a sequence x_n having certain properties, and we show that it converges to the desired solution. We now describe that sequence, and give the proof of convergence. The actual construction of the sequence is postponed to the next section.

Given an integer N and a real number α > 1, we shall denote by E[N^α] the integer part of N^α:

E[N^α] ≤ N^α < E[N^α] + 1

Theorem 10 Assume that µ, δ, S and κ satisfy Condition 3. Choose σ and β such that:

(κ²/(κ−1)) µ < κβ < σ < S    (11)

• For 1 < κ ≤ (1+√5)/2 :

κβ > κµ + σ − δ/κ    (12)

• For (1+√5)/2 ≤ κ < 2 :

β > µ + σ − δ    (13)

Then one can find N₀ ≥ 2, ρ > 0 and c > 0 such that, for any y ∈ Y_δ with ‖y‖′_δ ≤ ρ, there are sequences (x_n)_{n≥1} in B₀(1) and N_n := E[N₀^{κⁿ}] satisfying:

• For 1 < κ ≤ (1+√5)/2 :

Π′_{N_n} F(x_n) = Π′_{N_{n−1}} y  and  x_n ∈ E_{N_n}    (14)

• For (1+√5)/2 ≤ κ < 2 :

Π′_{N_n} F(x_n) = Π′_{N_n} y  and  x_n ∈ E_{N_n}    (15)

And in both cases:

‖x₁‖₀ ≤ c N₁^µ ‖y‖′_δ  and  ‖x_{n+1} − x_n‖₀ ≤ c N_n^{κβ−σ} ‖y‖′_δ    (16)

‖x₁‖_σ ≤ c N₁^β ‖y‖′_δ  and  ‖x_{n+1} − x_n‖_σ ≤ c N_n^{κβ} ‖y‖′_δ    (17)
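To see the scheme's bookkeeping in numbers, the following sketch (with illustrative parameter values of ours, not the paper's) generates the doubly exponential sequence N_n = E[N₀^{κⁿ}] and contrasts the two estimates: the ‖·‖₀ increments N_n^{κβ−σ} shrink, while the ‖·‖_σ factors N_n^{κβ} blow up.

```python
import math

kappa, mu = 1.5, 1.0
# illustrative choices satisfying kappa^2/(kappa-1)*mu < kappa*beta < sigma, as in (11)
beta, sigma = 3.2, 5.0
assert kappa ** 2 / (kappa - 1) * mu < kappa * beta < sigma

N0 = 2
N = [math.floor(N0 ** kappa ** n) for n in range(1, 8)]  # N_n = E[N0^(kappa^n)]
print("N_n:", N)

small = [n ** (kappa * beta - sigma) for n in N]  # exponent kappa*beta - sigma < 0
big = [n ** (kappa * beta) for n in N]
print("0-norm increments:", small)
print("last sigma-norm factor:", big[-1])
```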

The set of admissible σ and β is non-empty. Indeed, because of (7), we have (κ²/(κ−1)) µ < S, so σ and κβ can satisfy (11). For 1 < κ ≤ (1+√5)/2, we have δ > µκ², so κµ − δ/κ < 0 and κβ can satisfy both (11) and (12). For κ ≥ (1+√5)/2, we have δ − µ > κµ, so µ + σ − δ < σ − κµ and condition (13) is satisfied provided β > σ − κµ, or κβ > κσ − κ²µ. If σ satisfies (11), we have κσ − κ²µ > (κ−1)σ > 0, so we can find κβ satisfying (11) and (13).

Note that the estimate on ‖x_{n+1} − x_n‖_σ blows up very fast when n → ∞, while the estimate on ‖x_{n+1} − x_n‖₀ goes to zero very fast, since κβ − σ < 0. Using the interpolation inequality (3), this will enable us to maintain control of some intermediate norms.

The proof of Theorem 10 is postponed to the next section. We now show that it implies Theorem 5. Let us begin with an estimate:

Lemma 11 Given 0 < s ≤ A, there is a constant C₃^A such that, for every N and every x ∈ B₀(R) ∩ X_s:

‖(1 − Π′_N) F(x)‖′₀ ≤ C₃^A N^{−s} ‖x‖_s  and  ‖(1 − Π′_N) F(x)‖′_s ≤ C₃^A ‖x‖_s

The case 1 < κ ≤ (1+√5)/2. We have, by (14):

F(x_n) = (1 − Π′_{N_n}) F(x_n) + Π′_{N_{n−1}} ȳ.    (18)

Then Lemma 11 gives the estimate

‖(1 − Π′_{N_n}) F(x_n)‖′₀ ≤ C₃ N_n^{−s} ‖x_n‖_s  for all s < δ

Substituting ‖x_n‖_σ ≤ C′ N_n^{κβ} ‖y‖′_δ, we get:

‖(1 − Π′_{N_n}) F(x_n)‖′₀ ≤ C′ C₃ N_n^{κβ−σ} ‖y‖′_δ

By (11), the exponent κβ − σ is negative. So (1 − Π′_{N_n}) F(x_n) converges to zero in Y₀. Now, using the inequality (2), we get

‖(1 − Π′_{N_{n−1}}) ȳ‖′₀ ≤ C₁ N_{n−1}^{−δ} ‖ȳ‖′_δ

so Π′_{N_{n−1}} ȳ converges to ȳ in Y₀. So both terms on the right-hand side of (18) converge to zero, and we get F(x̄) = ȳ, as announced.

Together with the interpolation inequality (3), conditions (16) and (17) imply:

‖x_{n+1} − x_n‖_{(1−t)σ} ≤ c_t N_n^{κβ−tσ} ‖y‖′_δ

The exponent on the right-hand side is negative for t > κβ/σ, so that (1−t)σ < σ − κβ. Arguing as above, it follows that ‖x̄‖_α ≤ C ‖y‖′_δ, provided:

α < sup_{A₁} { σ − κβ }

where:

A₁ = { (κ, β, σ) |  σ − κβ < δ/κ − κµ,  (κ²/(κ−1)) µ < κβ < σ < S }

In the rest of this computation, we normalize by µ and write β′ = β/µ, σ′ = σ/µ, δ′ = δ/µ, S′ = S/µ and α′ = α/µ, so that:

α′ < sup_{A′₁} { σ′ − κβ′ }

A′₁ = { (κ, β′, σ′) |  σ′ − κβ′ < δ′/κ − κ,  κ²/(κ−1) < κβ′ < σ′ < S′ }

For given κ, Figure 3 gives the admissible (β′, σ′) region in the case S′ > δ′/κ − κ (upper horizontal line) and in the case S′ < δ′/κ − κ (lower horizontal line). The admissible region is to the right of the vertical β′ = κ/(κ−1), both in the case S′ > δ′/κ − κ (right line) and in the case S′ < δ′/κ − κ (left line).

Figure 3: The admissible (β′, σ′) region in the first case

The maximum is attained at the upper left corner of the admissible region, which is the point (β′, min{S′, κβ′ − κ + δ′/κ}), with β′ = κ (κ−1)⁻¹. Hence:

sup_{A′₁} { σ′ − κβ′ } = min{ S′ − κ²/(κ−1),  δ′/κ − κ }    (19)
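Formula (19) can be sanity-checked by brute force (a sketch with arbitrary sample values of ours, not from the paper): maximize σ′ − κβ′ over a fine grid of the admissible region and compare with min{S′ − κ²/(κ−1), δ′/κ − κ}.

```python
import math

kappa, delta_p, S_p = 1.5, 6.0, 10.0   # sample values of kappa, delta', S'
lower = kappa ** 2 / (kappa - 1)       # lower bound on kappa*beta'

best = -math.inf
steps = 400
for i in range(1, steps):
    kb = lower + (S_p - lower) * i / steps            # kappa*beta' in (lower, S')
    sig_max = min(S_p, kb + delta_p / kappa - kappa)  # sigma' < min(S', kb + delta'/kappa - kappa)
    for j in range(1, steps):
        sig = kb + (sig_max - kb) * j / steps         # sigma' in (kb, sig_max)
        if kb < sig < sig_max:
            best = max(best, sig - kb)

formula = min(S_p - lower, delta_p / kappa - kappa)   # right-hand side of (19)
print(best, formula)
```

The grid search approaches the supremum from below, so `best` should sit just under `formula`.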

The case (1+√5)/2 ≤ κ < 2. The argument is the same, except that we have to replace Π′_{N_{n−1}} ȳ by Π′_{N_n} ȳ in (18). We now have:

α′ < sup_{A′₂} { σ′ − κβ′ }

A′₂ = { (κ, β′, σ′) |  σ′ < β′ + δ′ − 1,  κ²/(κ−1) < κβ′ < σ′ < S′ }

For given κ, Figure 4 gives the admissible (β′, σ′) region in the case S′ > δ′ − 1 + κ/(κ−1) (upper horizontal line) and in the case S′ < δ′ − 1 + κ/(κ−1) (lower horizontal line). The vertical is β′ = κ/(κ−1).

Figure 4: The admissible (β′, σ′) region in the second case

Again the maximum is attained at the upper left corner, which is the point (β′, min{S′, δ′ − 1 + β′}), with β′ = κ/(κ−1). This gives:

sup_{A′₂} { σ′ − κβ′ } = min{ S′ − κ²/(κ−1),  δ′ − 1 − κ }    (20)

Putting (19) and (20) together gives formula (10).

3 Proof of Theorem 10

We work under the assumptions of Theorem 10. So µ, δ, S, κ, σ, β, ȳ are given. Note that we may have σ < δ. We assume ȳ ≠ 0 (the case ȳ = 0 is obvious). We fix A = σ, and the constants C₁^A, C₂^A, C₃^A, K^A, γ_A of (1), (2), (3), (4), (5) and Lemma 11 are simply denoted C₁, C₂, C₃, K, γ. The proof will make use of a certain number of constants, which we list here to make sure that they do not depend on the iteration step and can be fixed at the beginning.

Recall that E[2^α] is the integer part of 2^α, and set P_n = E[2^{κⁿ}]. There is a constant g > 1 such that, for all n ≥ 0:

g⁻¹ P_n^κ ≤ P_{n+1} ≤ g P_n^κ    (21)

We define constants B₀, B₁ and B₂ by:

B₀ := ( P₁^µ + Σ_{n≥1} P_n^{κβ−σ} )⁻¹    (22)

B₁ := sup { P_n^{−β} ( P₁^β + Σ_{1≤i≤n−1} P_i^{κβ} )  |  n ≥ 1 }    (23)

B₂ := sup { B₁ P_n^{−(κ−1)β} + 1  |  n ≥ 1 }    (24)

We shall use Theorem 1 to construct inductively the sequence x_n, thanks to a sequence of carefully chosen norms. For this purpose, we will have to take c large and ρ small.
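A quick numerical sketch of (21) (ours, not from the paper): for P_n = E[2^{κⁿ}], the ratios P_{n+1}/P_n^κ indeed stay within a fixed band [g⁻¹, g], with g = 2 sufficing for the sample value of κ used here.

```python
import math

kappa = 1.5
P = [math.floor(2 ** kappa ** n) for n in range(1, 12)]  # P_n = E[2^(kappa^n)]

ratios = [P[i + 1] / P[i] ** kappa for i in range(len(P) - 1)]
g = 2.0  # a constant g > 1 that works here
print(ratios)
assert all(1 / g <= r <= g for r in ratios)
```

The ratios tend to 1: the integer-part corrections become relatively negligible as P_n grows doubly exponentially.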

4 Choice of N0

For 1 < κ ≤ (1+√5)/2, we consider the following function of n:

ϕ₂(n) := 2 [ B₁ C₃ c P_n^{β−κ(β−µ)} + C₁ ( g^{δ/κ} P_n^{σ−δ/κ−κ(β−µ)} + P_n^{(σ−δ)₊−(δ−σ)₊/κ−κ(β−µ)} ) ]    (25)

By conditions (11) and (12), all the exponents are negative. So we may pick n₀ so large that:

ϕ₂(n) ≤ γ c (B₂ + 2)⁻¹ g^{−µ}  for all n ≥ n₀    (26)

For (1+√5)/2 ≤ κ < 2, we consider the following function of n:

ϕ₁(n) := 2 [ B₁ C₃ c P_n^{β−κ(β−µ)} + C₁ ( P_n^{σ−δ−κ(β−µ)} + g^{(σ−δ)₊} P_n^{κ(σ−δ)₊−(δ−σ)₊−κ(β−µ)} ) ]    (27)

By conditions (11) and (13), all the exponents are negative. So we may pick n₀ so large that:

ϕ₁(n) ≤ γ c (B₂ + 2)⁻¹ g^{−µ}  for all n ≥ n₀    (28)

In both cases we set N₀ = P_{n₀}, and N_n = E[N₀^{κⁿ}] = E[2^{κ^{n+n₀}}]. So the expressions (25) and (27) are less than γ c (B₂ + 2)⁻¹ g^{−µ} when one substitutes N_n for P_n.

4.1 Construction of the initial point

The case 1 < κ ≤ (1+√5)/2. Thanks to inequality (1), ‖Π′_{N₀} ȳ‖′₀ ≤ C₁ ‖ȳ‖′_δ and ‖Π′_{N₀} ȳ‖′_σ ≤ C₁ N₀^{(σ−δ)₊} ‖ȳ‖′_δ, where t₊ denotes the positive part of the real number t. We choose the norm

𝒩₀(x) = ‖x‖₀ + N₀^{−(σ−δ)₊} ‖x‖_σ

on E_{N₁} and the norm:

𝒩′₀(y) = ‖y‖′₀ + N₀^{−(σ−δ)₊} ‖y‖′_σ

on E′_{N₁}. For these norms, E_{N₁} and E′_{N₁} are Banach spaces. Note that

𝒩′₀(Π′_{N₀} y) < 2 C₁ ‖y‖′_δ  for all y ∈ Y_δ    (29)

For 𝒩₀(x) ≤ 1, we define

f(x) := Π′_{N₁} F(x) ∈ E′_{N₁}

The function f is continuous and Gâteaux-differentiable for the norms 𝒩₀ and 𝒩′₀, with f(0) = 0. Moreover, using the tame estimate (5) and applying assumption (1) to ‖x‖_σ, we find that:

sup { ‖[Df(x)]⁻¹_r k‖₀ | 𝒩₀(x) ≤ 1 } < (2 N₁^µ / γ) ‖k‖′₀    (30)

sup { ‖[Df(x)]⁻¹_r k‖_σ | 𝒩₀(x) ≤ 1 } < (N₁^µ / γ) ( ‖k‖′_σ + N₀^{(σ−δ)₊} ‖k‖′₀ )    (31)

hence:

sup { 𝒩₀([Df(x)]⁻¹_r k) | 𝒩₀(x) ≤ 1 } < (3 N₁^µ / γ) 𝒩′₀(k)    (32)

By Theorem 1, we can solve f(ū) = Π′_{N₀} ȳ with 𝒩₀(ū) ≤ 1 if 𝒩′₀(Π′_{N₀} ȳ) ≤ γ (3 N₁^µ)⁻¹. By (29), this is fulfilled provided:

‖ȳ‖′_δ ≤ γ / (6 C₁ N₁^µ) =: ρ    (33)

In addition, Theorem 1 tells us that we have the estimate:

𝒩₀(ū) ≤ 3 N₁^µ γ⁻¹ 𝒩′₀(Π′_{N₀} ȳ) ≤ 6 C₁ N₁^µ γ⁻¹ ‖ȳ‖′_δ    (34)

If (33) is satisfied, x₁ := ū is the desired solution in E_{N₁} of the projected equation Π′_{N₁} F(x₁) = Π′_{N₀} ȳ, with 𝒩₀(ū) ≤ 1. Let us check conditions (17) and (16). We have, by (34):

𝒩₀(x₁) = ‖x₁‖₀ + N₀^{−(σ−δ)₊} ‖x₁‖_σ ≤ R₁ := 6 C₁ N₁^µ γ⁻¹ ‖ȳ‖′_δ

Since N₀ ≤ g^{1/κ} N₁^{1/κ}, we find:

‖x₁‖₀ + (g N₁)^{−(σ−δ)₊/κ} ‖x₁‖_σ ≤ 6 C₁ N₁^µ γ⁻¹ ‖ȳ‖′_δ

Since µ + κ⁻¹ (σ−δ)₊ < β, this yields ‖x₁‖₀ ≤ c N₁^µ ‖ȳ‖′_δ and ‖x₁‖_σ ≤ c N₁^β ‖ȳ‖′_δ as required, with

c := 6 C₁ g^{(σ−δ)₊/κ} γ⁻¹    (35)

The case (1+√5)/2 ≤ κ < 2. Very few modifications are needed in the above arguments. Replace N₀ by N₁, so that the norms become:

𝒩₀(x) = ‖x‖₀ + N₁^{−(σ−δ)₊} ‖x‖_σ

𝒩′₀(y) = ‖y‖′₀ + N₁^{−(σ−δ)₊} ‖y‖′_σ

and define as above f(x) := Π′_{N₁} F(x) ∈ E′_{N₁}. Because (14) is replaced by (15), we now consider Π′_{N₁} ȳ ∈ E′_{N₁}. Estimates (29) and (32) still hold. Using Theorem 1 as before, we will be able to find some ū ∈ E_{N₁} with 𝒩₀(ū) ≤ 1 solving f(ū) = Π′_{N₁} ȳ, provided ȳ satisfies (33). The estimate (34) still holds:

𝒩₀(x₁) = ‖x₁‖₀ + N₁^{−(σ−δ)₊} ‖x₁‖_σ ≤ 6 C₁ N₁^µ γ⁻¹ ‖ȳ‖′_δ

Since µ + (σ−δ)₊ < β, this yields ‖x₁‖₀ ≤ c N₁^µ ‖ȳ‖′_δ and ‖x₁‖_σ ≤ c N₁^β ‖ȳ‖′_δ as above, with

c := 6 C₁ γ⁻¹    (36)

Diminishing ρ if necessary, we can always assume that, in both cases, ρ and c satisfy the constraint (where B₀ is defined by (22)):

c ρ < B₀    (37)

4.2 Induction

The case 1 < κ ≤ (1+√5)/2. Assume that the result has been proved up to n. In other words, define c by (35), and assume we have found ρ with cρ < B₀ such that, for ȳ ∈ Y_δ with ‖ȳ‖′_δ ≤ ρ, there exists a sequence x₁, ..., x_n satisfying (14), (17), and (16). To be precise:

‖x_{i+1} − x_i‖₀ ≤ c N_i^{κβ−σ} ‖ȳ‖′_δ  for i ≤ n − 1    (38)

‖x_{i+1} − x_i‖_σ ≤ c N_i^{κβ} ‖ȳ‖′_δ  for i ≤ n − 1    (39)

Since x₁, ..., x_n satisfy (16), and ‖ȳ‖′_δ ≤ ρ, this will imply that ‖x_n‖₀ ≤ 1 − η_n with:

η_n := ( Σ_{i≥n} N_i^{κβ−σ} ) / ( N₁^µ + Σ_{i≥1} N_i^{κβ−σ} ) ≥ c ρ Σ_{i≥n} N_i^{κβ−σ}    (40)

Since x₁, ..., x_n satisfy (17), we also have:

‖x_n‖_σ ≤ c ( N₁^β + Σ_{1≤i≤n−1} N_i^{κβ} ) ‖ȳ‖′_δ

Using the constant B₁ defined in (23), this becomes:

‖x_n‖_σ ≤ B₁ c N_n^β ‖ȳ‖′_δ    (41)

We are going to construct x_{n+1} so that (14), (17), and (16) hold for i ≤ n. Write:

x_{n+1} = x_n + ∆x_n

By the induction hypothesis, x_n ∈ E_{N_n} and Π′_{N_n} F(x_n) = Π′_{N_{n−1}} ȳ. The equation to be solved by ∆x_n may be written in the following form:

f_n(∆x_n) = e_n + ∆y_{n−1}    (42)

f_n(u) := Π′_{N_{n+1}} ( F(x_n + u) − F(x_n) )    (43)

e_n := Π′_{N_{n+1}} (Π′_{N_n} − 1) F(x_n)    (44)

∆y_{n−1} := Π′_{N_n} (1 − Π′_{N_{n−1}}) ȳ    (45)

The function f_n is continuous and Gâteaux-differentiable, with f_n(0) = 0. We will solve equation (42) by applying Theorem 1. We choose the norms:

𝒩_n(u) = ‖u‖₀ + N_n^{−σ} ‖u‖_σ  on E_{N_{n+1}}    (46)

𝒩′_n(v) = ‖v‖′₀ + N_n^{−σ} ‖v‖′_σ  on E′_{N_{n+1}}    (47)

Let R_n := c N_n^{κβ−σ} ‖ȳ‖′_δ. If 𝒩_n(u) ≤ R_n, we have, by (40):

‖u‖₀ ≤ 𝒩_n(u) ≤ R_n ≤ c ρ N_n^{κβ−σ} < η_n

so that ‖x_n + u‖₀ ≤ 1 and the function f_n is well-defined by (43). Using (41) and (46), we find that, for 𝒩_n(u) ≤ R_n, we have ‖u‖_σ ≤ N_n^σ R_n = c N_n^{κβ} ‖ȳ‖′_δ, and hence, with the constants B₁ and B₂ defined by (23) and (24):

‖x_n + u‖_σ ≤ c ( B₁ N_n^β + N_n^{κβ} ) ‖ȳ‖′_δ ≤ B₂ c N_n^{κβ} ‖ȳ‖′_δ    (48)

Plugging this into the tame inequality (5), and taking into account that c ‖ȳ‖′_δ = R_n N_n^{σ−κβ}, then gives:

sup { ‖[Df_n(u)]⁻¹_r k‖₀ | 𝒩_n(u) ≤ R_n } < 2 N_{n+1}^µ γ⁻¹ ‖k‖′₀    (49)

sup { ‖[Df_n(u)]⁻¹_r k‖_σ | 𝒩_n(u) ≤ R_n } < N_{n+1}^µ γ⁻¹ ( ‖k‖′_σ + B₂ c N_n^{κβ} ‖ȳ‖′_δ ‖k‖′₀ )    (50)

Hence:

sup { 𝒩_n([Df_n(u)]⁻¹_r k) | 𝒩_n(u) ≤ R_n } < γ⁻¹ (B₂ + 2) N_{n+1}^µ 𝒩′_n(k)

By Theorem 1, we will be able to solve (42) with 𝒩_n(∆x_n) ≤ R_n provided:

𝒩′_n(e_n + ∆y_{n−1}) ≤ γ (B₂ + 2)⁻¹ N_{n+1}^{−µ} R_n = γ c (B₂ + 2)⁻¹ N_{n+1}^{−µ} N_n^{κβ−σ} ‖ȳ‖′_δ    (51)

We now compute 𝒩′_n(e_n + ∆y_{n−1}). Applying inequality (2) to (45), we get:

‖∆y_{n−1}‖′₀ ≤ C₁ N_{n−1}^{−δ} ‖ȳ‖′_δ    (52)

‖∆y_{n−1}‖′_σ ≤ C₁ N_{n−1}^{−(δ−σ)₊} N_n^{(σ−δ)₊} ‖ȳ‖′_δ    (53)

Because of the induction hypothesis, we get

‖x_n‖_σ ≤ c ( N₁^β + Σ_{1≤i≤n−1} N_i^{κβ} ) ‖ȳ‖′_δ ≤ B₁ c N_n^β ‖ȳ‖′_δ    (54)

So, combining Lemma 11 with conditions (17) and (16), we find that

‖e_n‖′₀ ≤ C₃ N_n^{−σ} ‖x_n‖_σ ≤ B₁ C₃ c N_n^{β−σ} ‖ȳ‖′_δ    (55)

‖e_n‖′_σ ≤ C₃ ‖x_n‖_σ ≤ B₁ C₃ c N_n^β ‖ȳ‖′_δ    (56)

We now check (51). Using the estimates (52), (53), (55), (56), and remembering (21), we have:

𝒩′_n(e_n) ≤ 2 B₁ C₃ c N_n^{β−σ} ‖ȳ‖′_δ    (57)

𝒩′_n(∆y_{n−1}) ≤ C₁ ‖ȳ‖′_δ ( N_{n−1}^{−δ} + N_n^{−σ} N_{n−1}^{−(δ−σ)₊} N_n^{(σ−δ)₊} )    (58)

𝒩′_n(e_n) + 𝒩′_n(∆y_{n−1}) ≤ 2 [ B₁ C₃ c N_n^{β−σ} + C₁ ( N_{n−1}^{−δ} + N_{n−1}^{−(δ−σ)₊} N_n^{(σ−δ)₊−σ} ) ] ‖ȳ‖′_δ    (59)

Condition (51) is satisfied provided:

2 [ B₁ C₃ c N_n^β + C₁ ( g^{δ/κ} N_n^{σ−δ/κ} + N_n^{(σ−δ)₊−(δ−σ)₊/κ} ) ] ≤ γ c (B₂ + 2)⁻¹ g^{−µ} N_n^{κ(β−µ)}    (60)

Dividing by N_n^{κ(β−µ)} on both sides, we find that the sequence on the left-hand side is just a subsequence of ϕ₂(n), where ϕ₂ is defined by (25), and so (60) follows directly from (26), that is, from the construction of N₀. So we may apply Theorem 1, and we find u = ∆x_n with 𝒩_n(∆x_n) ≤ R_n solving (42). Since 𝒩_n(∆x_n) ≤ R_n, we have ‖∆x_n‖_σ ≤ N_n^σ R_n = c N_n^{κβ} ‖ȳ‖′_δ and ‖∆x_n‖₀ ≤ R_n = c N_n^{κβ−σ} ‖ȳ‖′_δ, so inequalities (16) and (17) are satisfied. The induction is proved.

The case (1+√5)/2 ≤ κ < 2. The induction hypothesis becomes Π′_{N_n} F(x_n) = Π′_{N_n} ȳ. The system (42), (43), (44), (45) becomes:

f_n(∆x_n) = e_n + ∆y_n

f_n(u) := Π′_{N_{n+1}} ( F(x_n + u) − F(x_n) )

e_n := Π′_{N_{n+1}} (Π′_{N_n} − 1) F(x_n)

∆y_n := Π′_{N_{n+1}} (1 − Π′_{N_n}) ȳ

Using Theorem 1 in the same way, we find that we can find ∆x_n satisfying these equations and the estimates (16) and (17) provided:

2 [ B₁ C₃ c N_n^β + C₁ ( N_n^{σ−δ} + g^{(σ−δ)₊} N_n^{κ(σ−δ)₊−(δ−σ)₊} ) ] ≤ γ c (B₂ + 2)⁻¹ g^{−µ} N_n^{κ(β−µ)}    (61)

But the left-hand side, divided by N_n^{κ(β−µ)}, is just a subsequence of ϕ₁(n), where ϕ₁ is defined by (27), and so (61) follows directly from (28), that is, from the construction of N₀. The induction is proved in this case as well.

4.3 Proof of Proposition 9

We will take advantage of the special form of e_n := Π′_{N_{n+1}} (Π′_{N_n} − 1) F(x_n). The proof is the same, with the estimates (55) and (56) derived as follows. Since x_n ∈ E_{N_n}, and E_{N_n} is A-invariant, we have Π′_{N_n} A x_n = A x_n and:

e_n := Π′_{N_{n+1}} (Π′_{N_n} − 1) F(x_n)
    = Π′_{N_{n+1}} (Π′_{N_n} − 1) (A x_n + G(x_n))
    = Π′_{N_{n+1}} (Π′_{N_n} − 1) G(x_n)

Lemma 11 holds for G (though no longer for F), so that estimates (55) and (56) follow readily.

5 An isometric imbedding

We will use the same example as Moser in his seminal paper [14], who himself follows Nash [15]. Suppose we are given a Riemannian structure g⁰ on the two-dimensional torus T² = (R/Z)², and an isometric imbedding into Euclidean R⁵. In other words, we know x⁰ = (x⁰₁, ..., x⁰₅) with:

( ∂x⁰/∂θ_i , ∂x⁰/∂θ_j ) = g⁰_{ij}(θ₁, θ₂)

where g⁰_{ij} = g⁰_{ji}, so there are three equations for five unknown functions. If we slightly perturb the Riemannian structure, does the imbedding still exist? If we replace g⁰ on the right-hand side by some g sufficiently close to g⁰, can we still find some x : T² → R⁵ which solves the system?

We consider the Sobolev spaces H^s(T²; R⁵) and we assume that x⁰ ∈ H^µ, with µ > 3, and g⁰ ∈ H^σ. Define F = (F_{ij}) by:

F_{ij}(x) = ( ∂(x + x⁰)/∂θ_i , ∂(x + x⁰)/∂θ_j ) − g⁰_{ij}    (62)

Clearly F(0) = 0. For s ≥ 3/2, we know H^s is an algebra, so if s ≥ µ and x ∈ H^s, the first term on the right is in H^{s−1}. On the other hand, the right-hand side cannot be more regular than g⁰, which is in H^σ. So F sends H^s

into H^{s−1} for µ ≤ s ≤ σ + 1. The function F is quadratic, hence smooth, and we have:

[DF(x) u]_{ij} = ( ∂x/∂θ_i , ∂u/∂θ_j ) + ( ∂u/∂θ_i , ∂x/∂θ_j )    (63)

We need to invert the derivative DF(x), that is, to solve the system

DF(x) u = (v_{ij})    (64)
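Since the metric map is quadratic in x, formula (63) can be checked exactly on a periodic grid: a symmetric difference quotient of a quadratic map reproduces its derivative with no O(t) error. A sketch of ours (spectral differentiation on T², not the paper's construction):

```python
import numpy as np

n = 32
grid = np.arange(n) / n
t1, t2 = np.meshgrid(grid, grid, indexing="ij")

def d(u, axis):
    # spectral derivative along a periodic axis of period 1
    k = 2j * np.pi * np.fft.fftfreq(n, d=1.0 / n)
    shape = [1] * u.ndim
    shape[axis] = n
    return np.fft.ifft(np.fft.fft(u, axis=axis) * k.reshape(shape), axis=axis).real

def metric(x):
    # g_ij = (dx/dtheta_i, dx/dtheta_j), summing over the 5 components (axis 0)
    dx = [d(x, 1), d(x, 2)]
    return np.array([[(dx[i] * dx[j]).sum(axis=0) for j in range(2)] for i in range(2)])

def DF(x, u):
    # formula (63): [DF(x)u]_ij = (dx/dtheta_i, du/dtheta_j) + (du/dtheta_i, dx/dtheta_j)
    dx = [d(x, 1), d(x, 2)]
    du = [d(u, 1), d(u, 2)]
    return np.array([[(dx[i] * du[j] + du[i] * dx[j]).sum(axis=0)
                      for j in range(2)] for i in range(2)])

# a smooth R^5-valued map on the torus and a smooth perturbation
x = np.stack([np.cos(2 * np.pi * (a * t1 + t2)) for a in range(1, 6)])
u = np.stack([np.sin(2 * np.pi * (t1 + a * t2)) for a in range(1, 6)])

# the metric map is quadratic, so a symmetric quotient recovers DF exactly
t = 0.1
err = np.abs((metric(x + t * u) - metric(x - t * u)) / (2 * t) - DF(x, u)).max()
print("max deviation:", err)
```

The deviation is at the level of floating-point roundoff, independently of t.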

It is an underdetermined system, since there are three equations for five unknowns. Following Nash and Moser, we impose two additional conditions:

( ∂x/∂θ₁ , u ) = ( ∂x/∂θ₂ , u ) = 0    (65)

Differentiating (65) and substituting into (63), we find:

−2 ( ∂²x/∂θ_i ∂θ_j , u ) = v_{ij}    (66)

So any solution u of (65), (66) is also a solution of (64). The five equations (65), (66) can be written as:

M(x(θ)) u(θ) = ( 0, v(θ) )

with u = (u_i), 1 ≤ i ≤ 5, v = (v₁₁, v₁₂, v₂₂), and M(x(θ)) a 5 × 5 matrix. These are no longer partial differential equations. If

det M(x(θ)) = det ( ∂x/∂θ₁, ∂x/∂θ₂, ∂²x/∂θ₁², ∂²x/∂θ₁∂θ₂, ∂²x/∂θ₂² ) ≠ 0    (67)

they can be solved pointwise. Set L(x(θ)) := M(x(θ))⁻¹, and denote by M(x) and L(x) the operators u(θ) ↦ M(x(θ)) u(θ) and v(θ) ↦ L(x(θ)) v(θ). Since x⁰ ∈ H^µ, with µ > 3, we have x⁰ ∈ C², so M(x⁰(θ)) is well-defined and continuous with respect to θ. If det M(x⁰(θ)) ≠ 0 for all θ ∈ T², then there will be some R > 0 and some ε > 0 such that |det M(x(θ))| ≥ ε for all x with ‖x − x⁰‖_µ ≤ R. So the operator L(x) is a right-inverse of DF(x) on ‖x − x⁰‖_µ ≤ R, and we have the uniform estimates, valid on x⁰ + B_µ(R) and s ≥ 0:

‖DF(x) u‖₀ ≤ C₀ ‖x‖₁ ‖u‖₁
‖DF(x) u‖_s ≤ C_s ( ‖x‖_{s+1} ‖u‖₁ + ‖x‖₁ ‖u‖_{s+1} )
‖L(x) v‖₀ ≤ C₀ ‖x‖_µ ‖v‖₀
‖L(x) v‖_s ≤ C′_s ( ‖x‖_{µ+s} ‖v‖₀ + ‖x‖_µ ‖v‖_s )

This means that the tame estimate (4) is satisfied. However, (5) requires a proof. For this, we have to build a sequence of projectors Π_N satisfying the estimates (1) and (2). We use a multiresolution analysis of L²(R) (see [13]). Recall that it is an increasing sequence V_N, N ∈ Z, of closed subspaces of L²(R) with the following properties:

• ∩_{N=−∞}^{+∞} V_N = {0} and ∪_{N=−∞}^{+∞} V_N is dense in L²
• u(t) ∈ V_N ⟺ u(2t) ∈ V_{N+1}
• ∀k ∈ Z, u(t) ∈ V₀ ⟺ u(t − k) ∈ V₀
• there is a function ϕ(t) ∈ V₀ such that the ϕ(t − k), k ∈ Z, constitute a Riesz basis of L².

It is known ([13], Theorem III.8.3) that for every r ≥ 0 there is a multiresolution analysis of L²(R) such that:

• the ϕ(t − k), k ∈ Z, constitute an orthonormal basis of V₀
• ϕ(t) is C^r and has compact support: there is some a (depending on r) such that |t| ≥ a ⟹ ϕ(t) = 0.

We choose r so large that C^r ⊂ H^S. Set ϕ_N(t) := 2^{N/2} ϕ(2^N t). For N large enough, say N ≥ N₀, the function ϕ_N has its support in ]−1/2, 1/2[, so we can consider it as a function on R/Z, and the ϕ_{N,k}(θ) := 2^{N/2} ϕ(2^N θ − k), for 0 ≤ k ≤ 2^N − 1, constitute an orthonormal basis of V_N. In this way, we get a multiresolution analysis on L²(R/Z). Setting

Φ_{N,k}(θ) = 2^N ϕ(2^N θ₁ − k₁) ϕ(2^N θ₂ − k₂)

E_N = Span { Φ_{N,k}(θ) | k = (k₁, k₂), 0 ≤ k₁, k₂ ≤ 2^N − 1 }

we get a multiresolution analysis of L²(T²), and the Φ_{N,k}(θ) are an orthonormal basis of E_N. Finally, the E_N⁵ constitute a multiresolution analysis of L²(T²)⁵ = L²(T², R⁵). Denote by Π_N the orthogonal projection:

(Π_N u)^i = Σ_k ⟨Φ_{N,k}, u^i⟩ Φ_{N,k}

Introduce an orthonormal basis of wavelets associated with this multiresolution analysis (E_N⁵)_{N≥N₀} of L²(T², R⁵). More precisely, the Φ_{N₀,k}, 0 ≤ k₁, k₂ ≤ 2^{N₀} − 1, span E_{N₀}, and one can find a C^r function Ψ with compact support such that the Ψ_{N,k} = 2^N Ψ(2^N θ₁ − k₁, 2^N θ₂ − k₂) span the orthogonal complement of E_{N−1} in E_N. The Φ_{N₀,k} and the Ψ_{N,k} for N ≥ N₀ constitute an orthonormal basis for L²(T²). We have, for u ∈ L²(T²):

u^i = Σ_k ⟨u^i, Φ_{N₀,k}⟩ Φ_{N₀,k} + Σ_{n≥N₀} Σ_k ⟨u^i, Ψ_{n,k}⟩ Ψ_{n,k}

‖u‖²_{L²} = Σ_{i=1}^5 ( Σ_k ⟨u^i, Φ_{N₀,k}⟩² + Σ_{n≥N₀} Σ_k ⟨u^i, Ψ_{n,k}⟩² )

It follows from the definition that, for N ≥ N₀, we have:

‖Π_N u‖² = Σ_{i=1}^5 ( Σ_k ⟨u^i, Φ_{N₀,k}⟩² + Σ_{N₀≤n≤N} Σ_k ⟨u^i, Ψ_{n,k}⟩² )

‖Π_N u − u‖²_{L²} = Σ_{i=1}^5 Σ_{n>N} Σ_k ⟨u^i, Ψ_{n,k}⟩²

It is known (see [13], Theorem III.10.4) that:

C₁ ‖u‖²_{H^s} ≤ 2^{2N₀s} Σ_k ⟨u, Φ_{N₀,k}⟩² + Σ_{n≥N₀} 2^{2ns} Σ_k ⟨u, Ψ_{n,k}⟩² ≤ C₂ ‖u‖²_{H^s}

If v = Π_N u, we must have ⟨v, Ψ_{n,k}⟩ = 0 for all n ≥ N + 1, so that:

C₁ ‖Π_N u‖²_{H^s} ≤ 2^{2Ns} ‖Π_N u‖²_{L²} ≤ 2^{2Ns} ‖u‖²_{L²}

‖Π_N u − u‖²_{L²} ≤ 2^{−2Ns} Σ_{n≥N} 2^{2ns} Σ_k ⟨u, Ψ_{n,k}⟩² ≤ 2^{−2Ns} C₂ ‖u‖²_{H^s}

So estimates (1) and (2) have been proved. Finally, we prove (5). We have:

(Π_N u)^i(θ) = Σ_k u^i_k Φ_{N,k}(θ),  1 ≤ i ≤ 5

(M(x(θ)) Π_N u)^j(θ) = Σ_k Φ_{N,k}(θ) Σ_i M_i^j(x(θ)) u^i_k

So Π_N M(x) Π_N is represented, in the basis (Φ_{N,k}), by a matrix of 5 × 5 blocks m_{k,k′}, with:

m_{k,k′} = ∫_{T²} Φ_{N,k}(θ) Φ_{N,k′}(θ) M(x(θ)) dθ

Looking at the supports of Φ_{N,k} and Φ_{N,k′}, we see that there is a band along the diagonal outside which m_{k,k′} vanishes:

m_{k,k′} = 0  if  max_{i=1,2} |k_i − k′_i| > 2a    (68)

Choose some $\varepsilon > 0$ and $N$ so large that $\max_{i=1,2} |\theta_i - 2^{-N}k_i| \leq 2^{1-N}a$ implies $\|M(x(\theta)) - M(x(2^{-N}k))\| \leq \varepsilon$. Then:
\[
\Big\| m_{k,k'} - \int_{\mathbb{T}_2} \varphi_{N,k}(\theta)\,\varphi_{N,k'}(\theta)\, M(x(2^{-N}k))\, d\theta \Big\| \leq \varepsilon \int_{\mathbb{T}_2} |\varphi_{N,k}(\theta)|\, |\varphi_{N,k'}(\theta)|\, d\theta
\]

Using the fact that the system $\varphi_{N,k}$ is orthonormal, we get:

\[
\big\| m_{k,k'} - \delta_{k,k'}\, M(x(2^{-N}k)) \big\| \leq \varepsilon\, (\max \varphi)^2
\]
In addition, for every $k$, (68) gives us at most $4a^2$ non-zero values of $k'$. So the matrix $(m_{k,k'})$ is a small perturbation of the diagonal matrix $\Delta_N := \big( \delta_{k,k'}\, M(x(2^{-N}k)) \big)$, $k \in K_N$, which is invertible by (67). More precisely, we have:
\[
(\Pi_N M(x)\Pi_N)^{-1} = \big( I + \Delta_N^{-1} (\Pi_N M(x)\Pi_N - \Delta_N) \big)^{-1} \Delta_N^{-1}
\]
with $\|\Pi_N M(x)\Pi_N - \Delta_N\| \leq 4\varepsilon a^2 (\max \varphi)^2$ and $\|\Delta_N^{-1}\| \leq \max_\theta \|M(x(\theta))^{-1}\|$. So $\Pi_N M(x)\Pi_N$ is invertible for $\varepsilon$ small enough, for instance:

\[
4\varepsilon a^2 (\max \varphi)^2 \max_\theta \|M(x(\theta))^{-1}\| < \frac{1}{2},
\]
and we have:

\[
\big\| (\Pi_N M(x)\Pi_N)^{-1} \big\| \leq 2\, \|\Delta_N^{-1}\| \leq 2 \max_\theta \|M(x(\theta))^{-1}\|
\]
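The inversion argument above is a standard Neumann-series perturbation bound, which one can sanity-check numerically. In the sketch below, a random diagonal matrix stands in for $\Delta_N$ and a banded perturbation for $\Pi_N M(x)\Pi_N - \Delta_N$, scaled so that $\|\Delta_N^{-1}E\| \leq 1/2$; the dimension and band width are our own choices.

```python
import numpy as np

# Perturbation bound: if A = Delta + E with ||Delta^{-1} E|| <= 1/2, then
# A^{-1} = (I + Delta^{-1} E)^{-1} Delta^{-1} exists (Neumann series) and
# ||A^{-1}|| <= 2 ||Delta^{-1}||.
rng = np.random.default_rng(0)
n = 40

Delta = np.diag(rng.uniform(1.0, 2.0, n))        # invertible diagonal, like Delta_N
Dinv = np.diag(1.0 / np.diag(Delta))

E = rng.uniform(-1.0, 1.0, (n, n))
band = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 3
E = E * band                                     # banded perturbation
E *= 0.4 / np.linalg.norm(Dinv @ E, 2)           # enforce ||Dinv E|| = 0.4 < 1/2

A = Delta + E
norm_Ainv = np.linalg.norm(np.linalg.inv(A), 2)
bound = 2.0 * np.linalg.norm(Dinv, 2)
print(norm_Ainv, bound)                          # norm_Ainv stays below bound
```

The bound follows from $\|(I+X)^{-1}\| \leq 1/(1-\|X\|)$ for $\|X\| < 1$, applied to $X = \Delta^{-1}E$.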

Estimate (5) then follows immediately.

We now introduce the new spaces $X^s := H^{s+\mu}$ and $Y^s := H^{s+\mu-1}$. We denote their norms by $\|x\|_s^* := \|x\|_{s+\mu}$ and $\|y\|_s^* := \|y\|_{s+\mu-1}$ respectively, so the above estimates become:

\[
\|DF(x)u\|_s^* \leq C_s \big( \|x\|_s^* \|u\|_0^* + \|x\|_0^* \|u\|_s^* \big) \quad \text{for } s \geq 0
\]
\[
\|L(x)v\|_s^* \leq C_s' \big( \|x\|_{s+\mu}^* \|v\|_1^* + \|x\|_0^* \|v\|_{s+1}^* \big) \quad \text{for } s \geq 0
\]
and the range for $s$ becomes $0 \leq s \leq S$ with $S = \sigma - \mu + 1$. We have proved that all the conditions of Corollary 6 are satisfied, with $x^0 \in H^\mu = X^0$, $g^0 \in H^\sigma = Y^{\sigma-\mu+1}$ and $S = \sigma - \mu + 1$. Hence:

Theorem 12  Take any $\mu > 3$. Suppose $x^0 \in H^\mu$ and $g^0 \in H^\sigma$ with $\sigma \geq 5\mu - 1 > 14$. Suppose the determinant (67) does not vanish. Set $S := \sigma - \mu + 1$. Then, for any $\delta$ and $\alpha$ such that:
\[
\frac{\delta}{\mu} \geq \varphi\Big(\frac{S}{\mu}\Big), \qquad
\frac{\alpha}{\mu} < \min\Big\{ \varphi\Big(\frac{\delta}{\mu}\Big) - \frac{S}{\mu},\ \varphi^{-1}\Big(\frac{S}{\mu}\Big) - \frac{\delta}{\mu} \Big\},
\]
there is some $\rho > 0$ and some $C > 0$ such that, for any $g$ with $\|g - g^0\|_{\delta+\mu-1} \leq \rho$, there is $x \in H^{\mu+\alpha}$ such that:
\[
\Big( \Big\langle \frac{\partial x}{\partial \theta_i}, \frac{\partial x}{\partial \theta_j} \Big\rangle \Big)_{i,j} = g(\theta_1, \theta_2)
\]
\[
\|x - x^0\|_\mu \leq 1, \qquad \|x - x^0\|_{\mu+\alpha} \leq C\, \|g - g^0\|_{\mu+\delta-1}
\]

Moser [14] finds that if $g, g^0 \in C^{r+40}$ and $x^0 \in C^r$ for some $r \geq 2$, and if $\|g - g^0\|_r$ is sufficiently small, then we can solve the problem. Although he made no effort to get optimal differentiability assumptions, we note that our loss of regularity is substantially smaller ($\sigma - \mu \geq 4\mu - 1 > 11$ instead of 40). Note that when $\mu \to 3$, we have $\sigma \to 14$ and $\delta \to \infty$. In another direction:

Corollary 13  Suppose $x^0 \in C^\infty$, $g^0 \in C^\infty$, and the determinant (67) does not vanish. Then, for any $\delta > \mu > 3$ and any $\alpha < \delta - \mu$, there is some $\rho > 0$ and some $C > 0$ such that, for any $g$ with $\|g - g^0\|_{\delta+\mu-1} \leq \rho$, there is some $x \in H^{\alpha+\mu}$ such that:
\[
\Big( \Big\langle \frac{\partial x}{\partial \theta_i}, \frac{\partial x}{\partial \theta_j} \Big\rangle \Big)_{i,j} = g(\theta_1, \theta_2)
\]
\[
\|x - x^0\|_\mu \leq 1, \qquad \|x - x^0\|_{\mu+\alpha} \leq C\, \|g - g^0\|_{\mu+\delta-1}
\]
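Concretely, the system solved in Theorem 12 and Corollary 13 is the isometric embedding equation for maps $x : \mathbb{T}_2 \to \mathbb{R}^5$: the matrix of inner products of the partial derivatives of $x$ must equal the prescribed metric $g$. A quick numerical sanity check, using a flat torus embedded in $\mathbb{R}^5$ as our own example (whose induced metric is the identity) and central finite differences:

```python
import numpy as np

# Isometric embedding system:  < dx/dtheta_i , dx/dtheta_j > = g_ij(theta_1, theta_2)
# for x : T_2 -> R^5. Here x is a flat torus, so the induced metric is the identity.

def x(th1, th2):
    c = 1.0 / (2.0 * np.pi)
    return np.array([c * np.cos(2 * np.pi * th1), c * np.sin(2 * np.pi * th1),
                     c * np.cos(2 * np.pi * th2), c * np.sin(2 * np.pi * th2),
                     0.0])

def induced_metric(th1, th2, h=1e-6):
    """g_ij = <d_i x, d_j x>, computed by central finite differences."""
    d1 = (x(th1 + h, th2) - x(th1 - h, th2)) / (2 * h)
    d2 = (x(th1, th2 + h) - x(th1, th2 - h)) / (2 * h)
    return np.array([[d1 @ d1, d1 @ d2],
                     [d2 @ d1, d2 @ d2]])

print(induced_metric(0.3, 0.7))   # close to the 2x2 identity matrix
```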

References

[1] Alinhac, Serge, and Gérard, Patrick, "Opérateurs Pseudo-différentiels et Théorème de Nash-Moser", Interéditions et Éditions du CNRS, Paris (1991). English translation: "Pseudo-differential Operators and the Nash-Moser Theorem", Graduate Studies in Mathematics vol. 82, AMS, Rhode Island (2000)
[2] Arnol'd, Vladimir I., "Small divisors", Dokl. Akad. Nauk SSSR 137 (1961), 255-257 and 138 (1961), 13-15 (Russian)
[3] Arnol'd, Vladimir I., "Small divisors I", Izvestia Akad. Nauk SSSR 25 (1961), 21-86 (Russian)
[4] Arnol'd, Vladimir I., "Small divisors II", Uspekhi Matematicheskikh Nauk 18 (1963), 81-192 (Russian)
[5] Berti, Massimiliano, and Bolle, Philippe, "Cantor families of periodic solutions for completely resonant nonlinear wave equations", Duke Mathematical Journal 134 (2006), 359-419
[6] Berti, Massimiliano, and Bolle, Philippe, "Cantor families of periodic solutions for completely resonant nonlinear wave equations", NoDEA 15 (2008), 247-276
[7] Berti, Massimiliano, and Bolle, Philippe, "Sobolev periodic solutions of nonlinear wave equations in higher spatial dimensions", Archive for Rational Mechanics and Analysis 195 (2010), 609-642
[8] Berti, Massimiliano, Bolle, Philippe, and Procesi, Michela, "An abstract Nash-Moser theorem with parameters and applications to PDEs", Ann. IHP C 27, no. 1 (2010), 377-399

[9] Günther, Matthias, "Isometric embeddings of Riemannian manifolds", Proc. ICM Kyoto (1990), 1137-1143
[10] Hamilton, Richard, "The Inverse Function Theorem of Nash and Moser", Bull. AMS 7 (1982), 165-222
[11] Hörmander, Lars, "The Boundary Problems of Physical Geodesy", Arch. Rat. Mech. An. 62 (1976), 1-52
[12] Ekeland, Ivar, "An inverse function theorem in Fréchet spaces", Ann. IHP C 28, no. 1 (2011), 91-105
[13] Meyer, Yves, "Ondelettes", Hermann, Paris (1990)
[14] Moser, Jürgen, "A new technique for the construction of solutions to nonlinear differential equations", Proc. Nat. Acad. Sci. USA 47 (1961), 1824-1831
[15] Nash, John, "C¹ isometric imbeddings", Ann. of Math. (2) 60 (1954), 383-396
