Ambient Approximation
of Functions and Functionals
on Embedded Submanifolds
Dissertation
approved by the Department of Mathematics
of the Technische Universität Darmstadt
in fulfilment of the requirements for the degree of
Doctor rerum naturalium
(Dr. rer. nat.)
by Dipl.-Math. Lars-Benjamin Maier
First referee: Prof. Dr. Ulrich Reif
Second referee: Prof. Dr. Armin Iske
Third referee: Prof. Dr. Oleg Davydov
Darmstadt 2018

Maier, Lars-Benjamin: Ambient Approximation of Functions and Functionals on Embedded Submanifolds
Darmstadt, Technische Universität Darmstadt
Year of publication of the dissertation on TUprints: 2018
Date of the oral examination: 31.08.2018
Published under CC BY-SA 4.0 International https://creativecommons.org/licenses/
Contents
1 Introduction and Related Work 1
2 Embedded Submanifolds and Function Spaces 9
2.1 Embedded Submanifolds ...... 9
2.1.1 Closed Submanifolds, Normal Foliation and Normal Extension 10
2.1.2 Open Submanifolds and General Foliations ...... 14
2.2 Function Spaces on Embedded Submanifolds ...... 18
2.2.1 Trace Theorems ...... 21
2.2.2 Friedrichs’ Inequality ...... 24
3 Ambient Approximation Theory 25
3.1 General Approximation Operators ...... 26
3.2 Quasi-Projection and Polynomial Reproduction ...... 29
3.2.1 Definition and Approximation Power ...... 29
3.2.2 Interpolating Quasi-Projections ...... 32
3.3 Tensor Product B-Splines ...... 34
3.3.1 Definition and Initial Approximation Results ...... 35
3.3.2 Locality of Approximation and the Continuity of Operators ... 37
3.3.3 Improved Approximation Results ...... 41
3.3.4 Approximation in Fractional Orders and under fixed Interpolation Constraints ...... 48
3.3.5 Approximating Normal Derivatives ...... 51
3.3.6 Practical Examples ...... 57
4 Tangential Calculus, Function Spaces and Functionals 63
4.1 Tangential Derivatives, Gradients and Hessians ...... 64
4.1.1 Definition and Basic Properties ...... 64
4.1.2 Intrinsic Characterisation of Sobolev Spaces ...... 70
4.2 Unisolvency in Tangential Calculus ...... 75
4.2.1 Unisolvency and Curvature ...... 76
4.2.2 Unisolvency and Isometry ...... 81
4.3 Tangential Bilinear Functionals and Tangential Energies ...... 89
4.3.1 Some Specific Energy Functionals ...... 91
4.3.2 Functional Residuals and Norm Distances ...... 95
4.4 Approximately Intrinsic Functionals ...... 96
5 Ambient Functional Approximation Methods 105
5.1 Penalty Approximation in Functional Optimisation ...... 106
5.2 Penalty Approximation for Energies in Convex Sets ...... 109
5.3 Penalty Approximation for Energy Residuals ...... 115
5.4 Penalty Approximation of Energies with Linear Portion ...... 119
6 Scattered Data Problems on Embedded Submanifolds 125
6.1 Sparse Data Extrapolation ...... 125
6.1.1 Problem Statement and Naive Approaches ...... 126
6.1.2 Extrapolation by Penalty Based Energy Minimisation ...... 128
6.2 Smoothing with Scattered Data Sites ...... 142
6.3 Irregular Samplings ...... 149
6.3.1 A Two-Stage Approximation Approach ...... 150
6.3.2 A Bilevel Algorithm ...... 156
7 Partial Differential Equations on Embedded Submanifolds 159
7.1 Elliptic Problems on Closed Submanifolds ...... 159
7.2 Ideas for Open Subdomains of Submanifolds ...... 170
8 Conclusion and Prospects 173
9 Appendix 177
9.1 An Atlas for Embedded Submanifolds ...... 177
9.1.1 The Exponential Map ...... 177
9.1.2 The Construction of an Atlas ...... 178
9.2 A Theory of Function Spaces on Embedded Submanifolds ...... 181
9.2.1 Norm Equivalences for Integer Order Spaces ...... 181
9.2.2 Embeddings in Sobolev Spaces ...... 183
9.2.3 A Note on Besov Spaces ...... 184
9.2.4 Consequences of Sobolev Space Interpolation ...... 185
9.2.5 Trace Theorems ...... 189
9.2.6 Friedrichs’ Inequality ...... 192
9.3 Notes on Riemannian Geometry ...... 197
Bibliography 201
Index 209
Acknowledgments
First of all, I would like to thank Prof. Ulrich Reif for supervising the voyage that ultimately culminated in this thesis. In particular, I would like to thank him for the opportunity to follow my own agenda and investigate what I found interesting. I would also like to thank Prof. Oleg Davydov and Prof. Armin Iske for agreeing to co-referee this thesis. My special thanks go to Prof. Karsten Große-Brauckmann for his support and his patience with my questions about Riemannian geometry. I would further like to thank my colleagues at TU Darmstadt for some very helpful ideas, for proofreading and for their companionship, the team of the DFS section of Cenit AG for giving me a bit of distraction when my thoughts were stuck and — last but not least — my wife for her patience and understanding.
Abstract of the Thesis
While many problems of approximation theory are already well-understood in Euclidean space and its subdomains, much less is known about problems on submanifolds of that space. And this knowledge is even more limited when the approximation problem presents certain difficulties like sparsity of data samples or noise on function evaluations, both of which can be handled successfully in Euclidean space by minimisers of certain energies. On the other hand, such energies give rise to a considerable number of techniques for handling various other approximation problems, in particular certain partial differential equations. The present thesis provides an in-depth analysis of approximation results on submanifolds and of the approximate representation of intrinsic functionals: It provides a method to approximate a given function on a submanifold by suitable extension of this function into the ambient space, followed by approximation of this extension on the ambient space and restriction of the approximant to the manifold, and it investigates further properties of this approximant. Moreover, a differential calculus for submanifolds via standard calculus on the ambient space is deduced from Riemannian geometry, and various energy functionals are presented and handled by an approximate application of this calculus. This approximate handling of functionals is then employed in several penalty-based methods to solve problems such as interpolation in sparse data sites, smoothing and denoising of function values and approximate solution of certain partial differential equations.
German Summary — Deutsche Zusammenfassung
The present dissertation is concerned with problems of approximation on embedded submanifolds of Euclidean space.
After a short introduction to suitable submanifolds and to function spaces on them, it first extends known convergence results for the so-called ambient approximation method and its hitherto most important special case, the ambient B-spline method. In particular, it generalises these to submanifolds of higher codimension and complements the respective results with statements on approximation under a finite, fixed number of interpolation constraints and with approximation results for the derivative along the normal bundle of the submanifold.
Subsequently, an intrinsic, tangential calculus for such submanifolds is introduced, based on established concepts from Riemannian geometry. In particular, the relation between the intrinsic, tangential derivative and the corresponding Euclidean derivative along elements of the tangent space is examined. Moreover, a number of intrinsic functionals are introduced in this context, and in particular the concept of polynomial unisolvency is transferred to the setting of the tangential calculus.
Next, a methodology is presented that allows the approximation of the optima of said functionals by means of essentially extrinsic methods, which relate to the so-called penalty methods. Convergence results for the resulting ambient penalty approximation are presented, and various interesting functionals are discussed by way of example, which allow, among other things, extrapolation from few scattered data points on the submanifold Ϻ, smoothing of function values over Ϻ and the approximate solution of elliptic partial differential equations on Ϻ.
Chapter 1
Introduction and Related Work
The problem of approximation of functions on manifolds, particularly surfaces, has gained both relevance and attention over the last years. The relevance came with the increased capability and popularity of computer systems in representation, processing and simulation of problems in engineering, manufacturing and the natural and medical sciences. Computer graphics (CG), medical imaging, computer aided design (CAD) and manufacturing (CAM) or recently Industry 4.0 are just some of the keywords that one frequently encounters in this area. And the problems that need to be solved are as diverse as
• simulation of biological, physical or manufacturing processes on computer models, for example in terms of distributions of heat or material, deformation of models or calibration and registration between different states of one model during a manufacturing process,
• reasonable extrapolation of data measurements on real world surfaces from a comparably sparse set of data sites to the whole surface,
• high-detail approximation and processing of dense measurements on real world surfaces, including data reduction and the elimination of noise and measurement errors,
• reduction of data measurements for fast evaluation, compression and easy processing,
• and many more.
In mathematical terms, these lead to problems of approximation theory as diverse as scattered data approximation, sparse data extrapolation, smoothing and noise reduction and the solution of partial differential equations (PDE). Within this thesis, we are going to address all of these problems by essentially novel approaches to obtain approximate solutions.
Related Work
The problem of approximating a function 푓 ∶ Ϻ → ℝ for a given embedded submanifold Ϻ is quite a delicate matter even if the function is explicitly known or sampled in a very dense set of data sites Ξ ⊆ Ϻ without any measurement errors. It has thus attracted numerous researchers over the past decades. Effectively, three types of approaches seem to exist:

The first approach is to parameterise the submanifold suitably by a finite number of parameter spaces with corresponding parameterisations and to solve the approximation problem on each parameter space independently before blending the local solutions with the help of the parameterisations. This approach was for example followed in [25, 26, 30, 31]: The authors use projections on the tangent plane of a submanifold to solve the approximation problem locally.
The second approach is to provide some set of truly intrinsic functions like spherical harmonics [8], spherical splines [10, 47], other functions on the sphere [86, 101, 104] or on more general manifolds [56]. Many of these are tailored for the sphere and thereby already illustrate the problem of this approach: for an arbitrary submanifold it can be quite complicated. Actually, the determination of suitable intrinsic functions can be a very difficult task in its own right — it may even be as complex as, or even more complex than, the given approximation problem. The authors of [43] provide a kind of hybrid of the first two approaches and employ a projection onto the sphere, where the approximation problem is solved afterwards. Their method is thus applicable only to sphere-like surfaces and effectively uses the sphere as its "chart". Similarly restricted to sphere-like surfaces are for example methods proposed in [4, 5, 91].
The third approach is to solve the approximation problem in the ambient space of the submanifold Ϻ and to restrict the solution to Ϻ afterwards. Such a method will benefit from the fact that there is often a well understood theory for approximation methods in Euclidean spaces, for example all kinds of splines, polynomials, kernel functions and the like: In particular, important properties like smoothness can be deduced directly from the corresponding properties in the ambient space. For radial basis functions (RBFs), this approach was recently investigated in [49], where the authors also made use of the fact that the restriction of such a kernel function is an intrinsic kernel as well. They found that for many kernels, this approach gave the same convergence behaviour as the corresponding kernel in a Euclidean space of the same dimension as that of the submanifold.

For tensor product splines and other methods that meet certain locality requirements, this approach was also investigated recently for the important case of hypersurfaces in [74, 75, 76, 82], where the authors proved that the approximation order achievable in the ambient space is essentially reproduced by the restrictions — at least under some mild further restriction of the applicable norms. This approach will play a major role in the present thesis as well; it is based on extending an intrinsic function constantly along the normals of the submanifold, followed by application of the respective approximation method to this extension on the ambient space and restriction of the approximation to the submanifold.
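To make the extend, approximate and restrict pipeline concrete, here is a minimal numerical sketch for the unit circle in ℝ2 (the sample function, grid resolution and spline routine are our own illustrative choices, not the thesis's setup): the intrinsic function is extended constantly along normals via the closest point map 푧 ↦ 푧/||푧||, a tensor product spline is fitted to the extension on an ambient grid, and the fit is restricted back to the circle.

```python
# Illustrative sketch of ambient approximation on the unit circle in R^2
# (toy example; names and discretisation are our own assumptions).
import numpy as np
from scipy.interpolate import RectBivariateSpline

f = lambda theta: np.sin(3 * theta)          # intrinsic function on S^1

# Normal extension F = f o Pi_M: constant along each normal ray z/|z|.
def F(x, y):
    return f(np.arctan2(y, x))

# Sample the extension on an ambient grid and fit a bicubic spline.
s = np.linspace(-1.5, 1.5, 201)
X, Y = np.meshgrid(s, s, indexing="ij")
spline = RectBivariateSpline(s, s, F(X, Y))

# Restrict the ambient approximant to the circle and measure the error.
theta = np.linspace(0, 2 * np.pi, 1000)
err = np.max(np.abs(spline.ev(np.cos(theta), np.sin(theta)) - f(theta)))
print(err)   # small, since F is smooth near the circle
```

Since the extension is constant along normals, it is as smooth near the circle as 푓 itself, so the ambient spline inherits the full approximation order there; only near the centre, far away from the submanifold, does the extension degenerate.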
While the problem of suitable approximation to function values in scattered data sites on submanifolds is a nontrivial issue in itself even if the data is sufficiently dense, it can nonetheless be solved satisfactorily by some of the above approaches. But the problem becomes even more involved when the data is sparse. In fact, the literature on that matter is quite sparse itself. Of course, for the Euclidean situation, there exist well understood approaches that provide pleasant results: One example are polyharmonic splines, with the particular cases of thin-plate splines in ℝ2 and cubic splines in ℝ1, see for example [17, 40, 105]. These minimise certain energies under fixed interpolation constraints in a finite set of points and can also be considered to be special cases of RBFs. Other examples include inverse distance weighting [46, 48, 53, 70, 92, 105], sometimes also called Shepard's method, and Kriging [94]. Of course, these methods can be employed in the ambient space, just ignoring the geometry of the submanifold, and the solution on the submanifold would then be obtained by simple restriction. This can yield reasonable results if the geometry of the submanifold is very nice, for example a sphere, but as soon as the geometry is more intricate, significant artifacts will appear — we will present examples of this problem later in this thesis.

Considering the other approaches to approximation on submanifolds in general, we first see that any application of a method based on charts is effectively pointless in this setting: one could easily end up with parameter spaces that contain no data sites at all. On the other hand, the Euclidean approaches are essentially transferable to a purely intrinsic setting, particularly for the case of the sphere (cf. [58, 86, 101, 104]), but also for other specific (cf. [62]) and even more general manifolds (cf. [59, 60, 61]), like submanifolds that are compact and have no boundary.
Further options include the spherical splines and spherical harmonics mentioned before. However, the case of more general manifolds often involves the solution of certain differential equations on that manifold. This means that first and foremost the submanifold needs to be known explicitly, not just by some discretisation, which can be a very challenging requirement for arbitrary manifolds. Moreover, the evaluation of these functions can be very costly: For example, even something as simple as a direct generalisation of the inverse distance weighting approach would require the calculation of 푛 geodesic distances for 푛 data sites in each evaluation. This can be quite significant if geodesic distances do not have a closed form expression like
in the spherical case, even if the nontrivial issue of existence of geodesics is solved at a satisfactory level. And for some of the other approaches mentioned above, there do not even exist closed form expressions for suitable functions on general manifolds, as the involved equations are far too complex.
In the Euclidean case, the problem of smooth extrapolation often leads to corresponding solutions for problems of smoothing of (possibly noisy or oversampled) data: In the univariate case, the smoothing splines (cf. [33]) appear as generalisations of interpolating cubic splines, and for the multivariate setting smoothing problems are often addressed by methods that have a counterpart in extrapolation as well (cf. [33, 40, 48]). Consequently, the same holds also for the sphere (cf. [33, 101]) and presumably also for other manifolds. But with these approaches, the drawbacks of the corresponding extrapolation methods will of course remain present as well.
Finally, the solution of intrinsic partial differential equations on a given submanifold has also gained increased attention in recent years and decades. Several approaches exist in the literature: The first is based on fairly obvious generalisations of standard methods in the Euclidean setting to the situation of an embedded submanifold, particularly finite elements over a suitable triangulation of the submanifold (cf. e.g. [35, 36]). However, this approach has the significant drawback that the discretisation of the submanifold is either very coarse — and in this case only a rough approximation of the actual surface — or the appearing linear systems are, though sparse, very large. Furthermore, the quantities of the equation need to be discretised appropriately, which is a possibly nontrivial task in its own right — although for common operators there are suitable discretisations, cf. e.g. [36, 103].

The second approach is based on the idea that models, often stemming from CAD or CG, and the spaces or sets of functions used for their parameterisations can be employed in the solution of the PDE. It goes by the name of isogeometric analysis, cf. [11, 28], and is consequently a representative of the parametric approximation approach family: It exploits the presence of exact parameterisations for many CAD and CG models and uses a suitable Galerkin formulation based on the space or set of functions that is also employed in the parameterisation, typically tensor products of B-splines or NURBS. But as a direct consequence, this method is limited to these specific representations. Nonetheless, the approach has applications for problems both on submanifolds and in various domains of Euclidean space, and an overview of some approaches and applications is given in [22].
A third approach is based on so-called collocation, where the respective differential equation is solved in a finite but sufficiently dense set of points in the respective domain or submanifold, called collocation points, cf. [38, 39, 45]. This approach has been transferred to the sphere [72]. The concept is further adapted in [50] for application on general surfaces by suitable discretisation of intrinsic differential operators, which can otherwise be hard to handle on arbitrary surfaces. In contrast to this, [84] approaches the intrinsic operators by enforcing certain constraints on the functions in collocation points in terms of normal derivatives that are similar to the constraints occurring in this thesis.

Ultimately, a fourth approach involves the extension of the intrinsic equation into the ambient space, thereby introducing a new problem in the ambient space whose restriction solves the intrinsic problem. Methods of this kind are often based on a representation of the submanifold by an implicit function (cf. [15, 21, 27, 36, 54, 82]). Unfortunately, the extension introduces new difficulties: In particular, the problem may lose important properties like ellipticity (e.g. [36]) or introduce discontinuities (cf. [15]), and the extended domain introduces further boundaries for which appropriate boundary conditions may be required or have to be circumvented (cf. e.g. [15, 82]). Furthermore, essentially all of these methods are presented only for hypersurfaces.
In this thesis, we present approaches to the problems stated above that are capable of overcoming many of the issues and difficulties stated so far.
Outline of the Thesis
This thesis is organised as follows: In the following second chapter, we will introduce some basic facts and concepts for the treatment of embedded submanifolds within their ambient space. This includes in particular a method to extend functions defined on the submanifold into this ambient space. And we will also briefly revise and enhance certain facts about function spaces, particularly Sobolev spaces, for the treatment of functions on submanifolds.
The third chapter features results of approximation theory for submanifolds: We will generalise ideas from [74, 75, 76, 82] in various ways, particularly to a setting with codimension greater than one. There, we are able to deduce that under mild restrictions on the applicable norms we will be able to reproduce the rates of convergence available in the ambient space for approximations on submanifolds. These can be based on tensor product B-splines or other approximation methods that meet certain locality requirements. The general framework we propose, featured for codimension one in [76] as the ambient approximation method (AAM) and in [74, 82] for tensor product splines as the ambient B-spline method (ABM), is essentially applicable to any kind of embedded submanifold and easily implemented in practice.
Further, we will investigate certain other properties of this and related approaches in terms of achievable rates of convergence. In particular, we will find conditions under which the rate of convergence provided by tensor product splines is maintained under a finite, fixed set of interpolation constraints. And we will investigate rates of convergence that can be obtained simultaneously for the derivative along the normals of the submanifold, so an essentially extrinsic property. In the end, we support our results on the convergence order in the intrinsic setting with numerical examples, in particular for a surface embedded in ℝ4.
The fourth chapter introduces an intrinsic calculus for functions defined on submanifolds and its relation to the calculus of functions in the ambient space that yield intrinsic functions by restriction. This calculus is to some degree a degeneralisation of concepts from Riemannian geometry. Here, we are particularly interested in intrinsic versions of first and second order derivatives of a sufficiently (weakly) differentiable function 푓 on a submanifold and their relation to extrinsic derivatives of a function 퐹 on the ambient space such that 푓 is the restriction of 퐹 to the submanifold. We will find that the first order derivative of 퐹 along the normals of the submanifold plays a crucial role therein.

Further, we will introduce certain intrinsic functionals on submanifolds and investigate their properties. In particular, we will be interested in finding conditions under which a finite set Ξ of points guarantees that any function with vanishing intrinsic second order derivative that vanishes in these points must vanish itself. Thereby we transfer the concept of polynomial unisolvency on Euclidean space to the intrinsic setting. And we will introduce a couple of model functionals that will in the end lead to solutions for problems as diverse as the "optimal" extrapolation from sparse data, noise reduction, smoothing and the solution of certain elliptic PDEs.
Chapter five introduces a general framework for the formulation and approximate solution of minimisation problems for the intrinsic functionals introduced in the fourth chapter. We will present a penalty-based minimisation approach called ambient penalty approximation (APA) that is capable of handling various functionals; in particular, we can thereby treat the specific functionals we have introduced in the previous chapter for extrapolation from sparse data, noise reduction and the solution of elliptic PDEs. We are able to present upper bounds on the convergence rate in the intrinsic energy norm induced by those functionals, which are at least in some cases optimal.
Chapter six applies the concepts of the fifth chapter to problems of extrapolation, smoothing and noise reduction. We will apply the results of chapter three to obtain theoretical rates of convergence and find that in fact the results are even better than the theory implies. Also, we present various numerical examples for both curves in ℝ2 and surfaces in ℝ3 that verify the validity of our method. And we will additionally present a two-stage approximation method in terms of radial basis functions (RBFs) and tensor product splines for scattered data problems that is afterwards integrated into a bilevel algorithm to solve scattered data problems with irregular samplings.
Finally, chapter seven presents the application of the results of chapter five to the partial differential ”model” equation
ΔϺ푓 − 휆푓 = 푔
for scalar 휆 ≥ 0 and intrinsic Laplacian ΔϺ of the submanifold Ϻ. By various numerical examples we verify that the optimal convergence rate on compact submanifolds without boundary in the energy norm which chapter five has implied is indeed achievable. And we also provide examples for submanifolds with boundary, where the same approach under suitable boundary conditions leads to pleasant approximations of the solution with optimal convergence of the residual ΔϺ푓 − 휆푓 − 푔.
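As a small consistency check of the calculus behind this model problem (our own illustration, not taken from the thesis), one can verify symbolically that for the unit circle the ambient Laplacian of the constant-along-normals extension, restricted to the circle, reproduces the intrinsic Laplacian 푓″(θ):

```python
# Symbolic sanity check (toy example, our own): on the unit circle, the
# ambient Laplacian of the constant-along-normals extension restricted
# to the circle equals the intrinsic Laplacian f''(theta).
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)
f = sp.sin(3 * theta)                 # sample intrinsic function f(theta)

# Constant-along-normals extension in polar coordinates: F(r, theta) = f(theta).
F = f

# Ambient Laplacian in polar coordinates: F_rr + F_r / r + F_thetatheta / r^2.
lap = sp.diff(F, r, 2) + sp.diff(F, r) / r + sp.diff(F, theta, 2) / r**2

# On the circle r = 1 this is exactly f''(theta).
print(sp.simplify(lap.subs(r, 1)))    # -9*sin(3*theta)
```

For general hypersurfaces the same identity holds up to a curvature term multiplying the normal derivative of the extension, which vanishes here precisely because the extension is constant along normals.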
We conclude our argumentation in chapter eight by summing up our main results, and we also discuss open problems and possible further developments there. Ultimately, a subsequent appendix features auxiliary statements and proofs left out in the main part of the thesis.
Chapter 2
Embedded Submanifolds and Function Spaces
In this chapter, we will give a basic introduction of the kinds of embedded submanifolds we are going to work with. And we also give a brief introduction to the basics of the theory we need for them. In addition, we will present a way to parameterise both the submanifold and its ambient space simultaneously. By this, we will be able to define extension operators for functions given only on the submanifold. In particular, we will encounter the normal extension operator that extends constantly along the normals of hypersurfaces, and constantly in the normal space for submanifolds of higher codimension.
Further, we will briefly introduce and revise some important aspects of function spaces both in Euclidean space and on submanifolds, particularly Sobolev spaces. Most proofs and additional results are postponed to the appendix; we restrict ourselves here to the most important results that are frequently referred to in the thesis.
2.1 Embedded Submanifolds
In this section, we will introduce what types of embedded submanifolds we are going to deal with, and we will introduce the aforementioned extension operators. We will first clarify the concepts for compact submanifolds without boundary and the normal extension operator before we generalise the ideas to certain subdomains of such compact submanifolds and also to more general extension operators.
2.1.1 Closed Submanifolds, Normal Foliation and Normal Extension
We will be developing approximation concepts on embedded submanifolds (ESMs) later in this thesis, so we first have to make some introductory considerations about these structures. In particular, we need some basic definitions and certain important properties. Within this subsection, we are going to introduce or revise a collection of relevant properties for submanifolds that are compact, so they are closed, bounded and have no boundary. Standard examples of these include in particular the sphere and the torus. Certain embedded submanifolds with boundary will be treated later due to their increased complexity, once we have clarified our main ideas in the simpler case of a compact submanifold. In any case, we presume that our submanifold Ϻ is embedded into some ℝ푑, has dimension 푘 ∈ {1, ..., 푑 − 1} and codimension ĸ = 푑 − 푘.
2.1 Notation A smooth embedded submanifold Ϻ of ℝ푑 of dimension 푘, compact and without boundary, is denoted by Ϻ ∈ 필푘cp(ℝ푑). More generally speaking, a smooth embedded submanifold Ϻ, compact and without boundary, of a smooth embedded submanifold Ϻ̂ is denoted by Ϻ ∈ 필푘cp(Ϻ̂).

Before we come to the specific properties we demand for our embedded submanifolds, we will have to introduce some basic concepts: The first is a suitable version of the well-known concept of a bounded (strong) Lipschitz domain (cf. [1, 2, 64, 100]) that we will restrict ourselves to. If we follow [2], this type of domain is given by demanding that there is a finite open cover {Θ푗}푗∈퐽 of the compact boundary ∂Ω such that ∂Ω ∩ Θ푗 is the graph of a Lipschitz function for each 푗 ∈ 퐽.
We will use this kind of Lipschitz domains when Sobolev spaces are concerned. Therefore, they will soon be particularly important as parameter spaces of our submanifolds in order to define Sobolev spaces on submanifolds. In the remaining course of this thesis, we will use the following notation:
2.2 Notation If Ω is a bounded Lipschitz domain in ℝ푑, this is denoted by Ω ∈ 핃핚핡푑. We also use the notation Ω ∈ 핃핚핡∗푑 to express that either Ω ∈ 핃핚핡푑 or Ω = ℝ푑.
Most importantly, any Euclidean ball B푑푟(푥) ∶= {푦 ∈ ℝ푑 ∶ ||푥 − 푦||2 < 푟} of fixed radius 푟 is such a bounded (strong) Lipschitz domain, any maximum norm ball B푑푟[푥] ∶= {푦 ∈ ℝ푑 ∶ ||푥 − 푦||∞ < 푟}, so any hypercube, is such a bounded (strong) Lipschitz domain, and any Cartesian product of the form B푘푟1(푥1) × Bĸ푟2[푥2] is also a bounded (strong) Lipschitz domain.
The second concept is a useful concept of boundedness of (diffeomorphic) maps, namely the concept of C-boundedness:
2.3 Definition
1. A smooth map φ ∈ C∞(Ω, ℝ) on an open domain Ω ⊆ ℝ푑 is said to be C-bounded if for each 푛 ∈ ℕ0 there is a constant c푛 such that any partial derivative of φ up to total order 푛 is bounded by c푛 on Ω.
2. If φ ∶ Ω → ℝ푑 is a diffeomorphism, then it is said to be a bidirectionally C-bounded diffeomorphism if both φ and φ⁻¹ are C-bounded maps.
Particularly important is that C-bounded diffeomorphisms preserve Lipschitz domains. That is, the following proposition of [64] holds:
2.4 Proposition If φ ∶ U → Û is a C-bounded smooth diffeomorphism and Ω ∈ 핃핚핡푑 with clos(Ω) ⊆ U, then φ(Ω) ∈ 핃핚핡푑.

Now we come to the additional assumptions we will make for compact embedded submanifolds. Namely, we will demand that the embedded submanifold is smoothly parameterised in a suitable way. So we presume that the ESM is equipped with a finite inverse atlas 픸Ϻ = (ψ푖, ω푖)푖∈퐼 providing parameterisations ψ푖 and parameter spaces ω푖 such that the following holds for any 푖 ∈ 퐼:

• ω푖 ∈ 핃핚핡푘 and there is a superset ω⋆푖 ∈ 핃핚핡푘 of ω푖 such that clos(ω푖) ⊆ ω⋆푖 and ψ푖 ∶ ω⋆푖 → Ϻ is well-defined and injective on ω⋆푖.

• There is a smooth map T푖 that maps each 푥 ∈ ω푖 to an orthonormalised tangent frame (τ1, ..., τ푘) of Ϻ in ψ푖(푥).

• There is a smooth map N푖 that maps each 푥 ∈ ω푖 to an orthonormalised normal frame (ν1, ..., νĸ) of Ϻ in ψ푖(푥).
2.5 Notation Any inverse atlas 픸Ϻ of an embedded submanifold Ϻ that meets these requirements is denoted by 픸Ϻ ∈ ℿ(Ϻ).

2.6 Remark: In the following, we will identify T푖(푥) with T푖(ψ푖(푥)) and N푖(푥) with N푖(ψ푖(푥)). Furthermore, we will frequently omit the index "푖" in T푖 or N푖. We can do so because we will usually be either within a fixed parameter space of an inverse atlas as introduced above or in a situation where the quantities depending on T푖, N푖 turn out to be invariant under rotations.
2.7 Important: By arguments provided in Section 9.1.2 of the appendix, we can choose any ω푖 or ω⋆푖 as a ball or a cylinder of fixed height and radius, so

ω푖 = B푘휚(0) or ω푖 = B푘−1휚(0) × ]−휚, 휚[
ω⋆푖 = B푘휚⋆(0) or ω⋆푖 = B푘−1휚⋆(0) × ]−휚⋆, 휚⋆[

with 휚⋆ > 휚 > 0. This gives us C-boundedness of ψ푖, T푖, N푖 on ω푖 directly by compactness. For the rest of this thesis, we will presume to be equipped with an inverse atlas that meets these requirements.
As we are investigating ESMs in ℝ^푑, the portion of the ambient space near an ESM will play a crucial role for us. More specifically, we will be particularly interested in sufficiently narrow tubular neighbourhoods:
2.8 Definition Let Ϻ ∈ 필^푘_cp(ℝ^푑) and 푥 ∈ Ϻ. With N_푥Ϻ the normal space to Ϻ in 푥, we define the set B^ɴ_휚(푥) for 휚 > 0 as

    B^ɴ_휚(푥) ∶= B^푑_휚(푥) ∩ N_푥Ϻ.

The union of all B^ɴ_휚(푥) of fixed radius 휚 is called the tubular neighbourhood of Ϻ and denoted by U_휚(Ϻ). It is said to have the closest point property if any element 푧 ∈ U_휚(Ϻ) has a uniquely determined closest point on Ϻ. This definition leads us directly to the following proposition of [44]:
2.9 Proposition — Tubular Neighbourhood Theorem —
If Ϻ ∈ 필^푘_cp(ℝ^푑), then there is a fixed ԑ > 0 such that B^ɴ_ԑ(푥) and B^ɴ_ԑ(푧) are disjoint for any 푥 ≠ 푧 ∈ Ϻ. Consequently, U_ԑ(Ϻ) has the closest point property. Further, the projection Π_Ϻ onto Ϻ, defined for any 푧 ∈ U_ԑ(Ϻ) by

    Π_Ϻ(푧) = arg min_{푥∈Ϻ} ||푥 − 푧||_2,

is smooth in U_ԑ(Ϻ).

Now we make some further considerations on the ambient space in which our ESM is embedded, and on normal maps. In later chapters, we will be interested in extending functions defined only on the ESM into the ambient space. So we would prefer to transfer as many intrinsic properties of such a function into the extrinsic setting as possible. Our objective will then be to reduce an intrinsic version of the calculus on an ESM to a slightly adapted Euclidean calculus in the ambient space. As a consequence, we will be able to deduce numerous further features.
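For the simplest compact ESM, the closest point projection of the Tubular Neighbourhood Theorem has a closed form, which makes the arg-min characterisation easy to check. The following small numerical sketch (our illustration, not part of the thesis; all names are ours) does this for the unit circle in ℝ², where Π_Ϻ(푧) = 푧/||푧||_2 is well-defined on any tube of extent ԑ < 1:

```python
import numpy as np

# Illustrative sketch (assumed example): for the unit circle
# M = {x in R^2 : |x| = 1}, the closest-point projection is
# Pi(z) = z / |z|, well defined on the tube U_eps(M) with eps < 1.

def project_circle(z):
    """Closest-point projection onto the unit circle."""
    return z / np.linalg.norm(z)

# Check the arg-min property against a dense sample of circle points.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)

for _ in range(100):
    # random point in the tube of extent eps = 0.5
    t = rng.uniform(0.0, 2.0 * np.pi)
    r = 1.0 + rng.uniform(-0.5, 0.5)
    z = r * np.array([np.cos(t), np.sin(t)])
    p = project_circle(z)
    dists = np.linalg.norm(circle - z, axis=1)
    # the projection is at least as close as every sampled candidate
    assert np.linalg.norm(p - z) <= dists.min() + 1e-12
```

For ԑ ≥ 1 the closest point property fails at the origin, matching the requirement of a sufficiently narrow tube.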
We will see that a very good choice for this step from the ESM into the ambient space is an extension that is constant in the normal space. This can be accomplished by forming 푓 ∘ Π_Ϻ for given 푓 ∶ Ϻ → ℝ and the orthogonal projection Π_Ϻ onto Ϻ. While this is already well defined by the existence and sufficient smoothness of the projection implied by [44] in a sufficiently narrow tubular neighbourhood, we will have to create a slightly more elaborate framework for our theoretical treatment of the problem in some contexts. In particular, we will create a suitable link from the ambient space to the specific inverse atlas we have just introduced.
We presume from now on that ԑ > 0 is so small that the tubular neighbourhood U_ԑ(Ϻ) has the closest point property and choose an inverse atlas 픸_Ϻ = (ψ_푖, ω_푖)_{푖∈퐼} ∈ ℿ(Ϻ) for the ESM Ϻ ∈ 필^푘_cp(ℝ^푑). Then we choose the set Ω_푖^ԑ ∶= ω_푖 × B^ĸ_ԑ(0) and define

    Ψ_푖(푥, 푧_1, ..., 푧_ĸ) ∶= ψ_푖(푥) + ∑_{푗=1}^{ĸ} 푧_푗 ν^푗(푥)

for the 푗-th normal ν^푗(푥) of the normal frame N^푖(푥). The resulting function has a Jacobian of maximal rank and is thus a diffeomorphism that parameterises the tubular neighbourhood U_ԑ(ψ_푖(ω_푖)). Moreover, we can deduce directly that for a suitable choice of ԑ this diffeomorphism is bidirectionally C-bounded: By our requirements on (ψ_푖, ω_푖) ∈ 픸_Ϻ and the superset ω_푖^⋆ of ω_푖, we can see that Ψ_푖 is smooth and diffeomorphic on ω_푖^⋆ × B^ĸ_훿(0) for sufficiently small 훿 > ԑ. Thereby its restriction to ω_푖 × B^ĸ_ԑ(0) is obviously a bidirectionally C-bounded diffeomorphism. We obtain the following conclusion:
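The extended parameterisation Ψ_푖 can be made concrete in a case of codimension ĸ = 2. The sketch below (our added illustration; the circle, frame and projection are assumed examples, not from the thesis) takes the unit circle embedded in ℝ³ (푘 = 1, ĸ = 2) and checks numerically that each leaf Ψ_푖({푥} × B^ĸ_ԑ(0)) projects back onto its base point:

```python
import numpy as np

# Assumed example: unit circle in R^3 with psi(t) = (cos t, sin t, 0)
# and the orthonormal normal frame nu1(t) = (cos t, sin t, 0),
# nu2 = (0, 0, 1).  The extended chart is
#   Psi(t, z1, z2) = psi(t) + z1 * nu1(t) + z2 * nu2.

def psi(t):
    return np.array([np.cos(t), np.sin(t), 0.0])

def Psi(t, z1, z2):
    nu1 = np.array([np.cos(t), np.sin(t), 0.0])
    nu2 = np.array([0.0, 0.0, 1.0])
    return psi(t) + z1 * nu1 + z2 * nu2

def project(y):
    """Closest-point projection onto the circle (normalise horizontally)."""
    r = np.hypot(y[0], y[1])
    return np.array([y[0] / r, y[1] / r, 0.0])

# Psi parameterises the tube: projecting Psi(t, z) recovers psi(t),
# i.e. the whole leaf {Psi(t, z) : |z| < eps} sits over one point.
rng = np.random.default_rng(1)
for _ in range(200):
    t = rng.uniform(0.0, 2.0 * np.pi)
    z1, z2 = rng.uniform(-0.4, 0.4, size=2)  # stay inside eps < 1
    assert np.allclose(project(Psi(t, z1, z2)), psi(t), atol=1e-12)
```

The bound |푧| < ԑ < 1 mirrors the requirement that Ψ_푖 stays diffeomorphic on the chosen tube.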
2.10 Conclusion For any Ϻ ∈ 필^푘_cp(ℝ^푑) and any sufficiently small ԑ > 0, there is an extended inverse atlas 핌_Ϻ = (Ψ_푖, ω_푖 × B^ĸ_ԑ(0))_{푖∈퐼} of U_ԑ(Ϻ) such that each Ψ_푖 is a bidirectionally C-bounded diffeomorphism. Moreover, any N^ԑ_휉 ∶= Ψ_푖({푥} × B^ĸ_ԑ(0)) for 푥 ∈ ω_푖 and 휉 = ψ_푖(푥) is a normal space ball B^ɴ_ԑ(ψ_푖(푥)).

2.11 Remark: For any 푖 ∈ 퐼, the family of these balls {N^ԑ_{ψ_푖(푥)}}_{푥∈ω_푖} forms a so-called foliation〈1〉 of U_ԑ(ψ_푖(ω_푖)). Consequently, by considering all pairs of the inverse atlas, we obtain a foliation of U_ԑ(Ϻ) that we call the normal foliation 픽_N. Therein, each normal space ball N^ԑ_휉 = B^ɴ_ԑ(휉) for 휉 ∈ Ϻ is called a leaf of the foliation.
2.12 Remark: (1) A direct consequence of the smoothness of Ϻ and Prop. 2.4 is that Ψ_푖(Ω_푖^ԑ) ∈ 핃핚핡_푑 for each 푖 ∈ 퐼, and obviously U_ԑ(Ϻ) ∈ 핃핚핡_푑 for sufficiently small ԑ > 0.
(2) In the following, we will simply write U(Ϻ) and extended inverse atlas 핌_Ϻ = (Ψ_푖, Ω_푖)_{푖∈퐼} whenever the exact extent ԑ does not matter, provided it is sufficiently small and fixed.
(3) Note in particular that the foliation does not depend on the orientation of the normal frame. Consequently, any local rotation or even reflection of the frame does not modify the foliation. Thereby, it is also well-defined for nonorientable surfaces like a Möbius strip.
2.13 Notation Any extended inverse atlas 핌_Ϻ = (Ψ_푖, Ω_푖)_{푖∈퐼} that satisfies the stated requirements and is determined by an ԑ > 0 small enough that Conclusion 2.10 applies is denoted by 핌_Ϻ ∈ ℿ^ex_N(Ϻ).
As mentioned before, we have a projection which is well-defined and smooth on any sufficiently narrow tubular neighbourhood. Consequently, we can use this projection Π_Ϻ ∶ U(Ϻ) → Ϻ to define an extension of functions on Ϻ into U(Ϻ):
〈1〉We will not require anything of the profound theory on foliations of Riemannian manifolds here; we just borrow the name from there. The interested reader is referred to [95].
2.14 Definition For any Ϻ ∈ 필^푘_cp(ℝ^푑), the operation E_N ∶ 푓 ↦ 푓 ∘ Π_Ϻ determines an extension of a function 푓 ∈ C(Ϻ, ℝ) into a sufficiently narrow tubular neighbourhood U_ԑ(Ϻ). This extension is called the normal extension. We will also denote E_N푓 simply by 퐹⃡.

2.15 Remark: (1) The regularity and smoothness of 퐹⃡ is governed by the regularity and smoothness of 푓 and Π_Ϻ.
(2) The function 퐹⃡ is constant on each leaf of 픽_N by construction. Consequently, if N^ԑ_휉 = B^ɴ_ԑ(휉) is the leaf of 픽_N to 휉 ∈ Ϻ, and thus an embedded submanifold of dimension ĸ in its own right, the directional derivative of E_N푓 at any 푥 ∈ N^ԑ_휉 along any 휈 ∈ T_푥N^ԑ_휉 must vanish. This is a consequence of the chain rule and the fact that D_휈Π_Ϻ vanishes there by definition. More specifically, we have for any 푥, 푧 ∈ N^ԑ_휉

    Π_Ϻ(푧) − Π_Ϻ(푥) = 0.

By the smoothness of the leaf N^ԑ_휉, this remains valid under taking limits of difference quotients.
(3) Like the foliation, the extension is independent of the orientation of the normal frame. In particular, its smoothness does not depend on the transition between local choices of the normal frames.
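The vanishing of the directional derivative of E_N푓 along the leaves can also be observed numerically. In the following sketch (our illustration with an assumed example, not from the thesis), the leaves of the normal foliation of the unit circle are radial segments, so a finite-difference derivative of 퐹⃡ = 푓 ∘ Π_Ϻ in the radial direction vanishes while the tangential one does not:

```python
import numpy as np

# Assumed example: unit circle in R^2; leaves of the normal foliation
# are radial segments, so E_N f = f o Pi_M is constant along them.

def f(p):
    """A smooth function on the circle, expressed via the angle."""
    theta = np.arctan2(p[1], p[0])
    return np.sin(3.0 * theta)

def EN_f(y):
    """Normal extension: evaluate f at the closest circle point."""
    return f(y / np.linalg.norm(y))

y = 1.2 * np.array([np.cos(0.7), np.sin(0.7)])   # a point in the tube
nu = y / np.linalg.norm(y)                        # normal (radial) direction
tau = np.array([-nu[1], nu[0]])                   # tangential direction
s = 1e-5

d_nu = (EN_f(y + s * nu) - EN_f(y - s * nu)) / (2.0 * s)
d_tau = (EN_f(y + s * tau) - EN_f(y - s * tau)) / (2.0 * s)

assert abs(d_nu) < 1e-9      # derivative along the leaf vanishes
assert abs(d_tau) > 1e-3     # tangential derivative does not
```

Note that the check is made at a point off the surface, illustrating that the derivative vanishes along the whole leaf, not just on Ϻ itself.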
A further direct consequence of our recent considerations is the local existence of normal frames for our ESMs that are smooth in the ambient neighbourhood as well, which is the purpose of the next lemma. This fact will later be used to define a form of differential calculus intrinsic to an ESM:
2.16 Lemma Let Ϻ ∈ 필^푘_cp(ℝ^푑) and let 핌_Ϻ = (Ψ_푖, ω_푖 × B^ĸ_ԑ(0))_{푖∈퐼} ∈ ℿ^ex_N(Ϻ). Then for any sufficiently small ԑ > 0, any 푖 ∈ 퐼 and any 푧 ∈ Ψ_푖(ω_푖 × B^ĸ_ԑ(0)), the maps

    푧 ↦ N^푖(Π_Ϻ(푧)),   푧 ↦ T^푖(Π_Ϻ(푧))

are C-bounded on each Ψ_푖(ω_푖 × B^ĸ_ԑ(0)).
Proof: As we can demand that the map 푧 ↦ N^푖(Π_Ϻ(푧)) is well-defined and smooth even on Ψ_푖(ω_푖^⋆ × B^ĸ_훿(0)) with sufficiently small 훿 > ԑ, this is a direct consequence of the compact containment Ψ_푖(ω_푖 × B^ĸ_ԑ(0)) ⋐ Ψ_푖(ω_푖^⋆ × B^ĸ_훿(0))〈2〉. The same arguments apply to the tangent frame. □
2.1.2 Open Submanifolds and General Foliations
While the reader can bear in mind compact ESMs as a role model on almost any occasion, we will also treat open ESMs to some degree within this thesis. So
〈2〉⋐ stands for a relatively compact subset, so 퐴 ⋐ 퐵 ∶⇔ clos(퐴) ⊆ int(퐵).
we need to prepare this suitably. Further, although it is effectively sufficient to treat foliations and extensions in terms of the normal extension and foliation, it may sometimes be convenient to choose other kinds of extensions and corresponding foliations. And while we present the results in the upcoming chapters just for the normal foliation, an equivalent treatment for more general foliations and extensions is possible on most occasions as well. So we give here a short sketch of these more general choices after introducing open ESMs.
Open Subdomains of Closed Submanifolds
Obviously, any open, connected subdomain of a compact ESM is an embedded submanifold in its own right, and we can define normal space balls, tubular neighbourhood, projection, extension and foliation just by restriction. However, to make all other upcoming considerations work, we will have to state further demands on such domains in the way we are going to employ them in this thesis:
First of all, we will demand that Ϻ is an open subdomain of some Ϻ̂ ∈ 필^푘_cp(ℝ^푑). Furthermore, we demand that the relative boundary is made up of at most finitely many connected components {Γ_푗}_{푗=1}^{ℓ} such that Γ_푗 ∈ 필^{푘−1}_cp(ℝ^푑) ∩ 필^{푘−1}_cp(Ϻ̂) for all 푗 ∈ 퐽. We also note that each Γ_푗 is orientable as a submanifold of Ϻ̂. Consequently, and by the required properties of Ϻ̂, one can deduce that Ϻ has a finite inverse atlas 픸_Ϻ = (ψ_푖, ω_푖)_{푖∈퐼} such that for any 푖 ∈ 퐼 the following holds:
• ω_푖 ∈ 핃핚핡_푘 and any ψ_푖 is C-bounded and injective on ω_푖.
• There is an inverse atlas 픸_Ϻ̂ ∈ ℿ(Ϻ̂) of Ϻ̂ such that any pair (ψ, ω) ∈ 픸_Ϻ can be obtained by restriction of a suitable pair (ψ̂, ω̂) ∈ 픸_Ϻ̂, so ω ⊆ ω̂ and ψ = ψ̂|_ω.
A suitable method for the construction of such an inverse atlas can, for example, be deduced from [55, Sect. 3] and is given in the appendix. There, we present the construction for only one boundary component, but it generalises naturally to finitely many components.
2.17 Definition
1. A smooth embedded submanifold Ϻ of ℝ^푑 with dimension 푘 that is a subdomain of some Ϻ̂ ∈ 필^푘_cp(ℝ^푑) as introduced above is denoted by Ϻ ∈ 필^푘_sd(ℝ^푑). If a smooth embedded submanifold Ϻ of dimension 푘 of ℝ^푑 is either in 필^푘_sd(ℝ^푑) or in 필^푘_cp(ℝ^푑), we denote this by Ϻ ∈ 필^푘_bd(ℝ^푑).
2. An inverse atlas of Ϻ̂ where the stated restrictability holds will be called an Ϻ-restrictable inverse atlas of Ϻ̂, or just restrictable inverse atlas. It will be denoted by 픸_Ϻ̂ ∈ ℿ_R(Ϻ). Furthermore, we will denote the restricted inverse atlas 픸_Ϻ of Ϻ again by 픸_Ϻ ∈ ℿ(Ϻ).
2.18 Remark: (1) We can presume by arguments presented in Sect. 9.1 of the appendix that we are equipped with such (restrictable) inverse atlases.
(2) Unless explicitly stated otherwise, the term ESM will also include such open domains. If they are not included, we refer to a compact ESM or an ESM without boundary, which remain equivalent expressions within this thesis.
(3) As any relevant properties of an extended inverse atlas 핌_Ϻ for Ϻ obtained by restriction of 핌_Ϻ̂ ∈ ℿ^ex_N(Ϻ̂) are retained, we can also generalise the notation ℿ^ex_N(Ϻ) to these: We write 핌_Ϻ ∈ ℿ^ex_N(Ϻ) if it is obtained by restriction of 핌_Ϻ̂ ∈ ℿ^ex_N(Ϻ̂).
Notes on other Foliations and Extensions
Although we will effectively restrict ourselves in the course of this thesis to the normal foliation and normal extension, this is not the only way to obtain a foliation and extension. For example, and as proposed in [76], if Ϻ = 휑^{-1}{0} for an implicit function 휑 ∶ ℝ^푑 → ℝ, then the gradient ∇휑 gives rise to the gradient vector field

    ү ∶ U(Ϻ) → ℝ^푑,   ү(푦) ∶= ∇휑(푦) / ‖∇휑(푦)‖²_2

on U^휑_훿(Ϻ) ⊆ U(Ϻ) ⊆ U^휑_휚(Ϻ), where 휚 > 훿 > 0 is chosen such that 휑 is a smooth submersion on

    U^휑_휚(Ϻ) ∶= {푦 ∈ ℝ^푑 ∶ |휑(푦)| ≤ 휚}.
Consequently, it gives rise in turn to a uniquely and well-defined gradient flow 휐 determined by the initial value problem

    휕_푡 휐(푦, 푡) = ү(휐(푦, 푡)),   휐(푦, 0) = 푦,   푦 ∈ U^휑_휚(Ϻ),

by the Picard–Lindelöf theorem, for any 푡 until we reach the boundary of U^휑_휚(Ϻ). Moreover, we obtain the further relation

    휑(휐(푦, 푡)) = 휑(푦) + 푡,   푡 ∈ 퐼_푦 ∶= ]−휑(푦) − 휚, −휑(푦) + 휚[.
The orbits 휐(휉, ⋅) for 휉 ∈ Ϻ then define our foliation 픽_휑. This foliation is still orthogonal to Ϻ in the sense that any leaf through a point 휉 ∈ Ϻ is orthogonal to Ϻ at that point: the tangent (normal) space of Ϻ and the tangent (normal) space of the leaf (as a submanifold) are mutually orthogonal at 휉. Furthermore, we can directly provide a diffeomorphic map between neighbourhoods obtained by this foliation and by the normal foliation: As the normal foliation must exist also in this case, we can define for an arbitrary 푦 = Ψ_푖(푥, 푧) ∈ U(Ϻ) a map via

    푦 = Ψ_푖(푥, 푧) ↦ 휐(ψ_푖(푥), 푧),

which is by conception diffeomorphic and C-bounded on a sufficiently narrow U(Ϻ). This relation leads to the following, more general definition:
2.19 Definition Let φ ∶ Ω → ℝ^푑 be a C-bounded diffeomorphism such that both U^ɴ_ԑ(Ϻ) ⋐ Ω and φ(푥) = 푥 for all 푥 ∈ Ϻ. Let 픽_N be the normal foliation of U^ɴ_ԑ(Ϻ) subordinate to Ϻ. Let (ψ_푖, ω_푖)_{푖∈퐼} ∈ ℿ(Ϻ) be an inverse atlas of Ϻ and (Ψ_푖, Ω_푖^ԑ)_{푖∈퐼} ∈ ℿ^ex_N(Ϻ) an inverse atlas of U^ɴ(Ϻ) for the normal foliation. Then we call the set

    픽 = {φ(Ν^ԑ_푥) ∶ 푥 ∈ Ϻ, Ν^ԑ_푥 = B^ɴ_ԑ(푥) leaf of 픽_N}

an Ϻ-foliation or just foliation of U^픽_ԑ(Ϻ) = φ(U^ɴ_ԑ(Ϻ)). Each F^ԑ_푥 ∶= φ(Ν^ԑ_푥) for 푥 ∈ Ϻ is called a leaf of 픽. We call this foliation an orthogonal foliation if for any 푥 ∈ Ϻ the tangent space T_푥Ϻ and the tangent space T_푥F^ԑ_푥 of the leaf F^ԑ_푥 of 픽 containing 푥 are mutually orthogonal. We call the foliation a linear foliation if any leaf is a subset of a ĸ-dimensional affine subspace of ℝ^푑 and φ is a linear map on each leaf of the normal foliation.
2.20 Example: A particularly interesting foliation that is not orthogonal but linear can be obtained for all Ϻ ∈ 필^푘_bd(ℝ^푑) that allow for a well-defined projection onto the sphere in ℝ^푑, for example a compact hypersurface Ϻ that contains a star-shaped domain: Normalising the position of any point gives a suitable vector field υ(푦) ∶= 푦/||푦||_2 that is never tangent and as smooth as Ϻ. By compactness it is then not hard to see that there is ԑ > 0 such that for any 푦 ∈ Ϻ the distance of υ(푦) to the tangent space T_푦Ϻ is at least ԑ. In the case of a hypersurface, this direction can be used directly to obtain a foliation, where one just replaces the normal direction ν(ψ(푥)) in the extended parameterisation for the normal foliation by the direction υ(ψ(푥)). Then we could parameterise even all of ℝ^푑\{0} in the form

    Ψ^υ_푖(푥, 푧) ∶= ψ_푖(푥) + 푧 υ(ψ_푖(푥)),   푧 ∈ ]−||ψ_푖(푥)||_2, ∞[.

In the case of higher codimension, we would just have to choose further (fixed) directions that are never tangent and linearly independent of the "point direction" υ(ψ_푖(푥)). This is easy, however, as by the required uniqueness of the projection of the ESM onto the sphere there are multiple valid choices. The resulting foliation can be made linear by suitable parameterisation. Moreover, it can yield considerably larger neighbourhoods if necessary, as in fact the whole line from zero through the point on the ESM can serve as a leaf.
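The two key properties of this example, uniform transversality and straight (hence linear) leaves, can be checked numerically for a concrete star-shaped hypersurface. The sketch below (our illustration; the ellipse is an assumed example, not from the thesis) uses ψ(푡) = (2 cos 푡, sin 푡):

```python
import numpy as np

# Assumed example: the ellipse psi(t) = (2 cos t, sin t) encloses a
# star-shaped domain around 0, so upsilon(y) = y / |y| is never tangent.
#   Psi_upsilon(t, z) = psi(t) + z * upsilon(psi(t))
# parameterises a neighbourhood by straight radial leaves through 0.

def psi(t):
    return np.array([2.0 * np.cos(t), np.sin(t)])

def upsilon(y):
    return y / np.linalg.norm(y)

def Psi_upsilon(t, z):
    return psi(t) + z * upsilon(psi(t))

# Transversality: the radial direction keeps a uniform distance from the
# tangent line (the compactness argument in the text).
ts = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
gaps = []
for t in ts:
    tau = np.array([-2.0 * np.sin(t), np.cos(t)])
    tau /= np.linalg.norm(tau)
    u = upsilon(psi(t))
    gaps.append(np.linalg.norm(u - np.dot(u, tau) * tau))
assert min(gaps) > 0.4   # bounded away from the tangent space

# The leaves are radial lines: every point on a leaf shares the direction
# of its base point, so projection onto the sphere identifies the leaf.
p = Psi_upsilon(1.0, 0.3)
assert np.allclose(upsilon(p), upsilon(psi(1.0)), atol=1e-12)
```

For this ellipse the minimal gap can be computed in closed form as 0.8, attained at 푡 = π/4, which is consistent with the asserted bound.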
We can in fact use any foliation to define a projection Π^픽_Ϻ ∶ U^픽(Ϻ) → Ϻ via

    Π^픽_Ϻ ∶ 푦 = Ψ^픽_푖(푥, 푧) ↦ ψ_푖(푥),

where we define Ψ^픽_푖(푥, 푧) = φ_픽(Ψ_푖(푥, 푧)) for Ψ_푖 according to a standard extended inverse atlas (Ψ_푖, Ω_푖^ԑ)_{푖∈퐼} ∈ ℿ^ex_N(Ϻ) based on the normal foliation and extension. Thereby, we can also define an extension E_픽 ∶ 푓 ↦ 푓 ∘ Π^픽_Ϻ.
2.21 Remark: (1) Although the preferred extension in our theory is the normal extension, it is sometimes more convenient in practice to use a different one. If, for example, the maximal extent of a tubular neighbourhood of Ϻ with the closest point property is small, then a different extension, based for instance on the gradient field construction above, can lead to a significantly larger U^픽(Ϻ).
(2) Again, the directional derivative of an extension E_픽푓 = 푓 ∘ Π^픽_Ϻ vanishes along the leaves of 픽.
2.22 Definition A regularity preserving extension 퐹 ∶ U(Ϻ) → ℝ of a function 푓 ∈ C^1(Ϻ, ℝ) is called an orthogonal extension if (∂퐹/∂ν)(푥) = 0 for any ν ∈ N_푥Ϻ and any 푥 ∈ Ϻ.
2.23 Remark: For an orthogonal extension, we do not demand that this directional derivative vanishes at points of the whole normal space — that property is effectively reserved for the normal foliation.
2.2 Function Spaces on Embedded Submanifolds
We now give a brief tour of function spaces. The most common spaces in approximation theory are either spaces of continuously differentiable functions of a certain order — the classical spaces in which approximation theory used to take place — or their closures under certain integral norms, i.e. Sobolev spaces. And as soon as one takes the restriction R퐹 of such a function 퐹 to subspaces and submanifolds (called a trace Ͳ_Ϻ퐹 of 퐹), one naturally comes across fractional Sobolev spaces. Since nowadays practical approximation theory usually takes place in Sobolev spaces, we shall concentrate on these, and give a short treatment of fractional order spaces as well.
We start by giving a definition based on completion — one among numerous ways to define these spaces — where we follow [1, 2]:
2.24 Definition 1. Let Ω ∈ 핃핚핡^∗_푑, 푚 ∈ ℕ_0, 푝 ∈ [1, ∞[ and let 휕^훼 be the partial derivative w.r.t. the multi-index 훼. Then we define the norm

    ‖푓‖_{W^푚_푝(Ω)} ∶= ( ∫_Ω ∑_{휇=0}^{푚} ∑_{|훼|=휇} |휕^훼푓|^푝 )^{1/푝}

on the space C^∞(Ω) of all smooth functions. We define the Sobolev space W^푚_푝(Ω) as the completion of the subspace of C^∞(Ω) where this norm is finite.
2. Let Ϻ ∈ 필^푘_bd(ℝ^푑) be equipped with a finite inverse atlas 픸_Ϻ = {(ψ_푖, ω_푖)}_{푖=1}^{푛} ∈ ℿ(Ϻ). Then we define the Sobolev space W^푚_푝(Ϻ) as the completion of the set of all smooth functions 푓 ∶ Ϻ → ℝ where

    ‖푓‖_{W^푚_푝(Ϻ)} ∶= ∑_{푖=1}^{푛} ‖푓 ∘ ψ_푖‖_{W^푚_푝(ω_푖)} < ∞.
3. In all cases, we will usually write H^푚 for W^푚_2. Furthermore, L_푝 denotes the case 푚 = 0, as any of these is in particular a Lebesgue space.
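The chart-sum norm of part 2 is entirely computable once an inverse atlas is fixed. The following sketch (our illustration; the two-chart atlas of the circle and the function are assumed examples, not from the thesis) evaluates ‖푓‖_{W^1_2(Ϻ)} for 푓(푥_1, 푥_2) = 푥_1 on the unit circle:

```python
import numpy as np

# Assumed two-chart atlas of the unit circle: psi_1(t) = (cos t, sin t)
# and psi_2(t) = (-cos t, -sin t), both on omega = ]-0.6*pi, 0.6*pi[,
# with m = 1, p = 2.  The atlas norm is the sum of the chart pullback
# norms ||f o psi_i||_{W^1_2(omega)}.

def sobolev_norm_1_2(g, a, b, n=20001):
    """W^1_2(]a,b[) norm of g via trapezoidal quadrature."""
    t = np.linspace(a, b, n)
    gt = np.asarray(g(t), dtype=float)
    dgt = np.gradient(gt, t)
    integrand = gt ** 2 + dgt ** 2
    step = t[1] - t[0]
    return np.sqrt(step * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])))

a, b = -0.6 * np.pi, 0.6 * np.pi
f = lambda x1, x2: x1
pull1 = lambda t: f(np.cos(t), np.sin(t))      # f o psi_1 = cos t
pull2 = lambda t: f(-np.cos(t), -np.sin(t))    # f o psi_2 = -cos t

atlas_norm = sobolev_norm_1_2(pull1, a, b) + sobolev_norm_1_2(pull2, a, b)

# Here (f o psi_i)^2 + (f o psi_i)'^2 = 1, so each chart contributes
# exactly sqrt(b - a).
assert abs(atlas_norm - 2.0 * np.sqrt(b - a)) < 1e-6
```

Changing the charts changes the value of the sum, which is exactly the point of Remark 2.25: different atlases yield different, yet equivalent, norms.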
2.25 Remark: Different choices of the inverse atlas yield different, yet equivalent, norms on a given ESM. In particular, if an ESM Ϻ ∈ 필^푘_sd(ℝ^푑) is an open domain in another ESM Ϻ̂ ∈ 필^푘_cp(ℝ^푑), then the definition of the Sobolev spaces for Ϻ is not directly obtained as a restriction of the definition for Ϻ̂, in contrast to the Euclidean case. But by changing to the restrictable inverse atlas, we can achieve this. In the following, we will always presume the inverse atlas to be fixed if not explicitly stated otherwise, and restrictable if open ESMs are considered.
For the relations and embeddings between these spaces, also on ESMs, we refer the reader to the appendix and state here just that the usual Sobolev and Rellich–Kondrachov embeddings hold. We proceed instead with some results that yield an equivalent norm for integer order Sobolev spaces on ESMs based on normal extensions as introduced above. The statement and proof are very similar to the results given in [76] or [82], and we postpone the proof to the appendix:
2.26 Theorem If Ϻ ∈ 필^푘_bd(ℝ^푑) is contained in a family U_ℎ(Ϻ) of tubular neighbourhoods with 0 < ℎ < ℎ_0, then there are fixed 푎_1, 푎_2 > 0 independent of ℎ such that for any 퐹 ∈ W^푚_푝(U_ℎ(Ϻ)) and inverse atlas 핌_Ϻ = (Ψ_푖, Ω_푖^ℎ)_{푖∈퐼} ∈ ℿ^ex_N(Ϻ) of U_ℎ(Ϻ)

    푎_1 ‖퐹‖_{W^푚_푝(U_ℎ(Ϻ))} ≤ ∑_{푖∈퐼} ‖퐹 ∘ Ψ_푖‖_{W^푚_푝(Ω_푖^ℎ)} ≤ 푎_2 ‖퐹‖_{W^푚_푝(U_ℎ(Ϻ))}.

Additionally, it holds for any 푓 ∈ W^푚_푝(Ϻ) and suitable 푏_1, 푏_2 independent of ℎ that

    푏_1 ℎ^{ĸ/푝} ‖푓‖_{W^푚_푝(Ϻ)} ≤ ‖E_N푓‖_{W^푚_푝(U_ℎ(Ϻ))} ≤ 푏_2 ℎ^{ĸ/푝} ‖푓‖_{W^푚_푝(Ϻ)}.
In particular, a specialised version of this theorem emphasises the relation between different extents of tubular neighbourhoods that are still proportional to some parameter 0 < ℎ < ℎ_0:

2.27 Corollary In the setting of the last result, let 푎, 푏 > 0. Then it holds for all 0 < ℎ < ℎ_0, any 푓 ∈ W^푚_푝(Ϻ) and suitable c_1, c_2 > 0 independent of ℎ, 푎, 푏 that

    c_1 푎^{−ĸ/푝} ℎ^{−ĸ/푝} ‖E_N푓‖_{W^푚_푝(U_{푎ℎ}(Ϻ))} ≤ ‖푓‖_{W^푚_푝(Ϻ)} ≤ c_2 푏^{−ĸ/푝} ℎ^{−ĸ/푝} ‖E_N푓‖_{W^푚_푝(U_{푏ℎ}(Ϻ))}.

Consequently, all three norms are equivalent for fixed choices of 푎 and 푏.
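The scaling factor ℎ^{ĸ/푝} in these norm equivalences can be observed directly in the simplest case. The following sketch (our illustration; the circle and the function are assumed examples, not from the thesis) takes the unit circle in ℝ² (ĸ = 1, 푝 = 2), where the tube integral factorises in polar coordinates, and checks that ‖E_N푓‖_{L_2(U_ℎ)} / (ℎ^{1/2} ‖푓‖_{L_2(Ϻ)}) stays bounded above and below as ℎ → 0:

```python
import numpy as np

# Assumed example: unit circle, kappa = 1, p = 2, m = 0.  On the circle
# the ratio ||E_N f||_{L2(U_h)} / (sqrt(h) ||f||_{L2(M)}) equals sqrt(2)
# exactly, for every h < 1.

f = lambda theta: np.sin(2.0 * theta) + 0.5

def norm_on_circle(g, n=4000):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.sqrt(np.sum(g(theta) ** 2) * (2.0 * np.pi / n))

def norm_extension_on_tube(g, h, n=4000, m=400):
    # E_N f(r, theta) = f(theta); area element is r dr dtheta
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = np.linspace(1.0 - h, 1.0 + h, m)
    F2 = g(theta)[None, :] ** 2 * r[:, None]
    dr, dth = r[1] - r[0], 2.0 * np.pi / n
    return np.sqrt(np.sum(F2) * dr * dth)

nf = norm_on_circle(f)
for h in [0.2, 0.1, 0.05, 0.025]:
    ratio = norm_extension_on_tube(f, h) / (np.sqrt(h) * nf)
    assert abs(ratio - np.sqrt(2.0)) < 0.02
```

The constants 푏_1, 푏_2 of Theorem 2.26 thus both equal √2 in this flat-curvature-free situation; for general ESMs the curvature only perturbs them by a fixed amount.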
As soon as it comes to ESMs, one is naturally confronted with the problem of taking traces of functions defined in the ambient space. In order to keep as much regularity as possible, this will necessarily lead us to leave the Sobolev spaces of integer order behind and turn to more elaborate constructions. This leads to certain fractional order spaces and in the end also to Besov spaces. One way to define suitable fractional order spaces — with the alternative naming Slobodeckij spaces — is as follows (cf. [1, 2, 32, 98, 96, 100]), where we restrict ourselves to the Hilbert case as the only one we are effectively interested in:
2.28 Definition Let Ω ∈ 핃핚핡^∗_푑. Then we define for 0 < 푠 < 1, 푚 ∈ ℕ_0 the norm

    ‖푓‖_{H^{푚+푠}} ∶= ‖푓‖_{H^푚} + ( ∫_{Ω×Ω} ∑_{|훼|=푚} |휕^훼푓(푥) − 휕^훼푓(푧)|² / ‖푥 − 푧‖^{푑+2푠}_2 푑푥 푑푧 )^{1/2}

on the space of all smooth functions where this norm is finite. We define the fractional Sobolev space (or Slobodeckij space) H^{푚+푠}(Ω) as their completion in that norm. We further define H^{푚+푠}(Ϻ) for Ϻ ∈ 필^푘_bd(ℝ^푑) as above by completion under

    ‖푓‖_{H^{푚+푠}(Ϻ)} ∶= ∑_{푖=1}^{푛} ‖푓 ∘ ψ_푖‖_{H^{푚+푠}(ω_푖)}

for a finite inverse atlas 픸_Ϻ = {(ψ_푖, ω_푖)}_{푖=1}^{푛} ∈ ℿ(Ϻ).
The following fact about Sobolev and Slobodeckij spaces, due to [32], [87], [93, §3, Thm. 5], [99], is of significant importance. Thereby, we can directly transfer a property that holds in ℝ^푑 for one of those spaces also into the respective space on a subdomain Ω ∈ 핃핚핡_푑:

2.29 Proposition If Ω ∈ 핃핚핡_푑 and 1 ≤ 푝 < ∞, then there is an extension operator E_Ω ∶ W^푚_푝(Ω) → W^푚_푝(ℝ^푑) independent of 0 ≤ 푚 < ∞ such that for any 푓 ∈ W^푚_푝(Ω) and a fixed c > 0

    ||E_Ω푓||_{W^푚_푝(ℝ^푑)} ≤ c ||푓||_{W^푚_푝(Ω)}.

The same holds also for H^푟(Ω), and the operator is independent of 0 ≤ 푟 < ∞.
As stated in the introduction, we present a more thorough treatment of these spaces on submanifolds in the appendix. Here, we restrict ourselves to the statement that the usual properties like Sobolev embeddings or norm equivalence under diffeomorphic maps hold. Proofs or references are given in the appendix. There, we also present some further consequences of function space interpolation in our setting, a short note on Besov spaces, and the proofs for the statements in the remaining sections of this chapter. At this point, we shall only give one very important property that is deduced in the appendix as well: the interpolation property.
2.30 Proposition Let Ω_1 ∈ 핃핚핡^∗_{푑_1}, Ω_2 ∈ 핃핚핡^∗_{푑_2}. Let 푟 = 휃푟_1 + (1−휃)푟_2 for 휃 ∈ ]0, 1[ and reals 0 ≤ 푟_1 ≤ 푟_2 < ∞, and let 휚 = 휃휚_1 + (1−휃)휚_2 for reals 0 ≤ 휚_1 ≤ 휚_2 < ∞. If Λ ∶ H^{푟_푖}(Ω_1) → H^{휚_푖}(Ω_2) is bounded for 푖 = 1, 2, then Λ ∶ H^푟(Ω_1) → H^휚(Ω_2) is bounded as well and

    ||Λ||_{H^푟→H^휚} ≤ c ||Λ||^휃_{H^{푟_1}→H^{휚_1}} ||Λ||^{1−휃}_{H^{푟_2}→H^{휚_2}}.

If a family {Λ_ℎ}_{0<ℎ<ℎ_0} of such operators satisfies

    ||Λ_ℎ||_{H^{푟_1}→H^{휚_1}} ≤ c_1 ℎ^{휆_1}   and   ||Λ_ℎ||_{H^{푟_2}→H^{휚_2}} ≤ c_2 ℎ^{휆_2},

then we have in particular the relation ||Λ_ℎ||_{H^푟→H^휚} ≤ c ℎ^{휆_1휃 + 휆_2(1−휃)}.
2.2.1 Trace Theorems
The spaces just introduced give us the key to taking traces of functions 퐹 ∶ U(Ϻ) → ℝ on ESMs. These results are usually given for Ω ∈ 핃핚핡_푑 or ℝ^푑 and Ω^푘 = Ω ∩ (ℝ^푘 × {0}^{푑−푘}) or ℝ^푘 × {0}^{푑−푘}, but since we have a finite set of Lipschitz parameter spaces and C-bounded parameterisations, they generalise directly to ESMs.
To this end, we now define the restriction Ͳ_Ϻ ∶ C^∞(ℝ^푑) → C^∞(Ϻ) pointwise, and for the fractional and integer Sobolev spaces via completion. Then we obtain the following trace theorems, for which we present literature references and proofs of certain generalising aspects in Sect. 9.2.5 of the appendix.
2.31 Theorem — Integer Trace Theorem —
1. Let Ϻ ∈ 필^푘_bd(ℝ^푑) be equipped with a finite inverse atlas 픸_Ϻ = {(ψ_푖, ω_푖)}_{푖∈퐼} ∈ ℿ(Ϻ) and let U(Ϻ) ∈ 핃핚핡_푑 be some ambient tubular neighbourhood of Ϻ with fixed extent 휚 > 0. Let 퐹 ∈ W^푚_푝(U(Ϻ)) for some 1 ≤ 푝 < ∞ and 푚 ∈ ℕ. Then Ͳ_Ϻ퐹 ∈ W^휇_푝(Ϻ) for any 휇 ∈ ℕ_0 with 휇 < 푚 − (푑−푘)/푝 (휇 ≤ 푚 − (푑−푘)/푝 in case 푝 = 1) and

    ‖Ͳ_Ϻ퐹‖_{W^휇_푝(Ϻ)} ≤ c ‖퐹‖_{W^푚_푝(U(Ϻ))}.

2. Let 푓 ∈ W^푚_푝(Ϻ) for some Ϻ ∈ 필^푘_bd(ℝ^푑) and some 1 ≤ 푝 < ∞ and 푚 ∈ ℕ. Take some open and bounded Ϻ_0 ∈ 필^푘_bd(ℝ^푑) with Ϻ_0 ⋐ Ϻ that has nonempty smooth boundary Γ. Then Ͳ_Γ푓 ∈ W^휇_푝(Γ) for any 휇 < 푚 and

    ‖Ͳ_Γ푓‖_{W^휇_푝(Γ)} ≤ c ‖푓‖_{W^푚_푝(Ϻ_0)}.
Things become more involved when one is willing to consider fractional orders. The gain we can hope for is that in fractional spaces the loss of regularity can be considerably lower than implied by the integer case, so we can hope for tighter relations in future approximation results.
Unfortunately, taking traces of Sobolev functions does not necessarily land in a Sobolev space as the most regular possible target, not even a fractional one. But there is one important exception: Whenever 푝 = 2, the relevant scales coincide and we end up in a Slobodeckij space. So this is the case we go for:
2.32 Theorem — Fractional Trace Theorem —
Let Ϻ ∈ 필^푘_bd(ℝ^푑) have a finite inverse atlas 픸_Ϻ = {(ψ_푖, ω_푖)}_{푖∈퐼} ∈ ℿ(Ϻ), and let U(Ϻ) ∈ 핃핚핡_푑 be some tubular neighbourhood of Ϻ with fixed extent 휚 > 0 and extended inverse atlas {(Ψ_푖, Ω_푖)}_{푖∈퐼} ∈ ℿ^ex_N(Ϻ) subordinate to 픸_Ϻ. Let 퐹 ∈ H^푟(U(Ϻ)) for 푟 > (푑−푘)/2. Then Ͳ_Ϻ퐹 ∈ H^휚(Ϻ) for 휚 = 푟 − (푑−푘)/2 > 0 and

    ‖Ͳ_Ϻ퐹‖_{H^휚(Ϻ)} ≤ c ‖퐹‖_{H^푟(U(Ϻ))}.

Conversely, there is also a bounded extension operator E^U_Ϻ ∶ H^휚(Ϻ) → H^푟(U(Ϻ)), so for any 푔 ∈ H^휚(Ϻ)

    ||E^U_Ϻ푔||_{H^푟(U(Ϻ))} ≤ c ||푔||_{H^휚(Ϻ)}.

Again, this extension operator is independent of 휚 and thus universally applicable for any 휚 > 0 and 푟 = 휚 + ĸ/2. All statements remain valid if one replaces U(Ϻ) by Ϻ and Ϻ by some ESM Γ ∈ 필^ℓ_bd(Ϻ) for 0 < ℓ < 푘.
The proof of this theorem for ESMs, given in the appendix by deduction from results on linear subspaces, yields in particular the following additional result:
2.33 Corollary — Chart Trace Theorem —
In the setting of the last theorem, it holds for any pair (ψ, ω) from the inverse atlas 픸_Ϻ ∈ ℿ(Ϻ), corresponding (Ψ, Ω) ∈ 핌_Ϻ ∈ ℿ^ex_N(Ϻ) and 퐹 ∈ H^푟(U(Ϻ)) that

    ‖Ͳ_ω(퐹 ∘ Ψ)‖_{H^휚(ω)} ≤ c ‖퐹‖_{H^푟(Ψ(Ω))} ≤ c ‖퐹‖_{H^푟(U(Ϻ))}.

Conversely, there is a bounded extension operator E^U_ω ∶ H^휚(ω) → H^푟(U(Ϻ)) such that for any 푔 with 푔 ∘ ψ ∈ H^휚(ω)

    ||E^U_ω푔||_{H^푟(U(Ϻ))} ≤ c ||푔 ∘ ψ||_{H^휚(ω)}.
2.34 Remark: (1) All results on traces also hold for any other Lipschitz neighbourhood of an ESM that contains a tubular neighbourhood of fixed extent, and the extension results for any Lipschitz neighbourhood of an ESM that is itself contained in such a tubular neighbourhood. Both follow directly by restriction.
(2) Note that we cannot say anything about traces when 휚 = 0. There, the respective results will in fact fail; and while there are other conditions under which one can indeed achieve a trace (cf. [88, Cor. 3.17 etc.]), these conditions no longer fit into our theory as they apply to 푝 ≤ 1 only, so we omit them here.
Nonetheless, there is at least one specific situation where we have a result of this kind even in the case 휚 = 0: namely if we start from Ϻ in the right manner. Then we can give a result on traces concerned with extensions based on foliations. Its main purpose is to point out that the trace of a function extended from an ESM into a suitable neighbourhood by the normal (or any other foliation-based) extension is the function we started with:
2.35 Theorem — Foliation Trace Theorem —
Let Ϻ ∈ 필^푘_bd(ℝ^푑) have the finite inverse atlas 픸_Ϻ = {(ψ_푖, ω_푖)}_{푖=1}^{푛} ∈ ℿ(Ϻ) and let U(Ϻ) ∈ 핃핚핡_푑 be some tubular neighbourhood of Ϻ with fixed extent 휚 > 0. If then 푓 ∈ W^푚_푝(Ϻ), it holds Ͳ_ϺE_N푓 ∈ W^푚_푝(Ϻ) and

    ||푓 − Ͳ_ϺE_N푓||_{W^푚_푝(Ϻ)} = 0.
We also have the following result, which gives us a universal extension operator for ESMs that corresponds to the extensions from Euclidean domains to ℝ^푑:
2.36 Theorem — Manifold Extension Theorem —
Let Ϻ ∈ 필^푘_bd(ℝ^푑) be equipped with an inverse atlas 픸_Ϻ = {(ψ_푖, ω_푖)}_{푖=1}^{푛} ∈ ℿ(Ϻ). Let further Ϻ_0 ∈ 필^푘_bd(ℝ^푑) be an open subdomain Ϻ_0 ⋐ Ϻ that is an ESM of dimension 푘 in its own right. Then there is an extension operator E ∶ H^푟(Ϻ_0) → H^푟(Ϻ) such that for any 푟 > 0 and any 푔 ∈ H^푟(Ϻ_0)

    ||E푔||_{H^푟(Ϻ)} ≤ c ||푔||_{H^푟(Ϻ_0)}.

This holds also if one replaces Ϻ_0 by an arbitrary parameter space ω with corresponding parameterisation ψ according to the inverse atlas, in the sense

    ||E^Ϻ_ω푔||_{H^푟(Ϻ)} ≤ c ||푔 ∘ ψ||_{H^푟(ω)}.
2.2.2 Friedrichs’ Inequality
Friedrichs’ Inequality is an important tool in functional analysis, particularly in the treatment of partial differential equations. For us, it will be important in bounding 퐹 − E_NͲ_Ϻ퐹 for suitable functions 퐹 defined on some U(Ϻ). However, a clear drawback, in particular in this application, is the restriction on the dimension of the ambient space: the offset to the ESM dimension must not be greater than one in the classical formulation of the inequality. Since we are going to investigate ESMs also in cases where the codimension is greater than one, there is a natural need for an extended result. We present it here, while we postpone the technical proof to Sect. 9.2.6 in the appendix:
2.37 Theorem — Multicodimensional Friedrichs’ Inequality —
Let ω ∈ 핃핚핡_푘 and let Ω_ℎ = ω × B^ĸ_ℎ[0] for 0 < ℎ < ℎ_0. Let 퐹 ∈ W^ĸ_푝(Ω_ℎ) for some 1 ≤ 푝 < ∞ be a continuous function that vanishes on ω. Then we have the relation

    ‖퐹‖^푝_{L_푝(Ω_ℎ)} ≤ c ∑_{ℓ=1}^{ĸ} ℎ^{푝ℓ} ∑_{|훼|=ℓ, 훼_1=...=훼_푘=0} ‖휕^훼퐹‖^푝_{L_푝(Ω_ℎ)}.
We also give a tightened variant of this relation for functions that are polynomials of fixed degree on each {푥} × B^ĸ_ℎ[0], where the codimension no longer plays a role. Again, the proof is postponed to the appendix.
2.38 Theorem — Leafwise Polynomial Friedrichs’ Inequality —
Let ω ∈ 핃핚핡_푘 and let Ω_ℎ = ω × B^ĸ_ℎ[0]. Let 퐹 ∈ W^1_푝(Ω_ℎ) for some 1 ≤ 푝 < ∞ be a continuous function that vanishes on ω. Let further the restriction of 퐹 to any {푥} × B^ĸ_ℎ[0] be a polynomial of maximal degree 푛 ∈ ℕ. Then we have the relation

    ‖퐹‖^푝_{L_푝(Ω_ℎ)} ≤ c ℎ^푝 ∑_{|훼|=1, 훼_1=...=훼_푘=0} ‖휕^훼퐹‖^푝_{L_푝(Ω_ℎ)}

for a constant c = c(푛) > 0 independent of 퐹 and ℎ.
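For the leafwise polynomial variant, the sharpest constant can even be computed explicitly in the simplest case. The following sketch (our illustration; the strip and test function are assumed examples, not from the thesis) takes 푘 = ĸ = 1, 푝 = 2 and 퐹(푥, 푧) = 푧 푔(푥), which vanishes on ω and has degree 1 on each leaf; a direct computation gives the ratio ℎ/√3, consistent with the stated bound c ℎ:

```python
import numpy as np

# Assumed example: Omega_h = ]0,1[ x [-h, h] (k = kappa = 1, p = 2),
# F(x, z) = z * g(x) vanishes on omega = ]0,1[ x {0} and is a degree-1
# polynomial on every leaf {x} x [-h, h].  The only admissible derivative
# in Theorem 2.38 is d/dz (alpha_1 = 0), and the exact norm ratio is
# ||F|| / ||dF/dz|| = h / sqrt(3).

g = lambda x: np.cos(5.0 * x) + 2.0

def l2_norm_on_strip(values, dx, dz):
    return np.sqrt(np.sum(values ** 2) * dx * dz)

for h in [0.5, 0.1, 0.02]:
    x = np.linspace(0.0, 1.0, 400)
    z = np.linspace(-h, h, 400)
    dx, dz = x[1] - x[0], z[1] - z[0]
    X, Z = np.meshgrid(x, z, indexing="ij")
    F = Z * g(X)          # vanishes on z = 0, linear on each leaf
    dFdz = g(X)           # leafwise derivative
    ratio = l2_norm_on_strip(F, dx, dz) / l2_norm_on_strip(dFdz, dx, dz)
    assert abs(ratio - h / np.sqrt(3.0)) < 0.01 * h
```

The factor 1/√3 is the Friedrichs constant of a linear function on [−ℎ, ℎ] vanishing at 0; for higher leaf degrees 푛 the constant c(푛) grows, but remains independent of ℎ.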
2.39 Remark: While Theorem 2.37 would carry over in the same manner to other foliations, the latter result, Theorem 2.38, requires the foliation to be linear, as only then is the restriction of a polynomial to a leaf always a polynomial of the same order in both image and preimage of the foliation parameterisation.
Chapter 3
Ambient Approximation Theory
The core of this chapter is the investigation of approximation operators A_Ϻ for functions on ESMs that are defined by a roundtrip over some ambient neighbourhood U(Ϻ). Essentially, they all follow a common concept: Given an approximation operator A on some function space defined in U(Ϻ), we make the choice

    A^ɴ = Ͳ_Ϻ A E_N,

or more generally A^픽 = Ͳ_Ϻ A E_픽 for some other foliation-based extension E_픽. By this approach, we will be able to transfer many of the extrinsic properties of A to the intrinsic setting. In particular, if we have a family (A_ℎ)_{0<ℎ<ℎ_0} with approximation order ℎ^휆, we ask what we can say about the approximation order of the corresponding (A^ɴ_ℎ)_{0<ℎ<ℎ_0}. We will see that under mild additional assumptions and for suitable approximation spaces like tensor product splines (TP-splines), we will be able to maintain the approximation order also in the intrinsic setting.
To be more precise, we will first present results that can be stated for general classes of approximation operators, before we turn to the important concepts of quasi-interpolation and quasi-projection. These will be discussed in further detail and exemplified by suitable operators for TP-splines, and we will finish this chapter by providing results for approximation of the derivatives along the normals of Ϻ, which will turn out to be crucial in upcoming chapters.
In the whole course of this chapter, Ϻ is an ESM of dimension 푘 and codimension ĸ, and U(Ϻ) is an ambient tubular neighbourhood with the closest point property (or its equivalent for another foliation-based projection). Both are equipped with their respective finite C-bounded Lipschitz inverse atlases. All results presented for E_N remain valid if one replaces our standard example E_N by some other foliation-based extension E_픽, if not explicitly stated otherwise.
3.1 General Approximation Operators
We already have two tools at hand that can provide us with approximation results for restrictions: One is the trace theorem, the other is our version of Friedrichs’ inequality.
However, if we simply apply the trace theorem, then we lose some regularity and would therefore also expect to lose some approximation order: If we have A^ɴ_ℎ = Ͳ_ϺA_ℎE_N and

    ‖E_N푓 − A_ℎE_N푓‖_{H^휚(U(Ϻ))} ≤ c ℎ^{푟−휚} ‖E_N푓‖_{H^푟(U(Ϻ))},

we could only deduce the presumably suboptimal relation

    ‖푓 − A^ɴ_ℎ푓‖_{H^{휚−ĸ/2}(Ϻ)} ≤ c ℎ^{푟−휚} ‖푓‖_{H^푟(Ϻ)}.
We could overcome this by application of the universal extension EϺ ∶ H푟(Ϻ) → H^{푟+(푑−푘)/2}(U(Ϻ)) and deduce from

‖EϺ푓 − AℎEϺ푓‖_{H^{휚+ĸ/2}(U(Ϻ))} ≤ c ℎ^{푟−휚} ‖EϺ푓‖_{H^{푟+ĸ/2}(U(Ϻ))}

that for A^Ϻ_ℎ ∶= ͲϺAℎEϺ

‖푓 − A^Ϻ_ℎ푓‖_{H휚(Ϻ)} ≤ c ℎ^{푟−휚} ‖푓‖_{H푟(Ϻ)},

but this extension is hardly ever known in practice. And that approach has another drawback: In contrast to the use of EN, the use of EϺ usually does not imply approximately vanishing normal derivatives, which will turn out to be crucial in forthcoming chapters. So these direct results are rather unpleasant: We either have to expect some loss in the approximation order, or we have to rely on an extension operator that is hardly ever known explicitly. Therefore, we need to take another approach. This will indeed be capable of maintaining approximation orders while employing extensions that are practically applicable, and it relies on the following definition:
3.1 Definition Let (U푎ℎ(Ϻ))_{ℎ<ℎ0}, (U푏ℎ(Ϻ))_{ℎ<ℎ0} be two families of tubular neighbourhoods for fixed 0 < 푎 ≤ 푏 < ∞. Let F(U푏ℎ(Ϻ)), G(U푎ℎ(Ϻ)) be function spaces. Let 퐹 be an arbitrary function that is contained in F(U푏ℎ(Ϻ)) and G(U푎ℎ(Ϻ)) for all ℎ < ℎ0. Then a family (Aℎ)_{ℎ<ℎ0} of approximation operators mapping F(U푏ℎ(Ϻ)) to G(U푎ℎ(Ϻ)) for each ℎ < ℎ0 is called local relative to Ϻ with approximation order 휆 if it satisfies the relation

‖퐹 − Aℎ퐹‖_{G(U푎ℎ(Ϻ))} ≤ c ℎ^휆 ‖퐹‖_{F(U푏ℎ(Ϻ))}

with a generic constant c > 0 that is independent of 퐹 and ℎ in particular.
3.2 Remark: We will see later that for appropriate choices of the neighbourhoods, suitable quasi-interpolation approaches for TP-splines are local relative to Ϻ for ESMs without boundary.
The next theorem, which generalises results from [75, 76, 82] to a setting with higher codimensions, now gives us the desired reproduction of approximation orders. But this comes at the price of a range of admissible orders that becomes increasingly limited as the codimension increases:
3.3 Theorem Let (U푎ℎ(Ϻ))_{ℎ<ℎ0}, (U푏ℎ(Ϻ))_{ℎ<ℎ0} be two families of tubular neighbourhoods with 0 < 푎 ≤ 푏 < ∞ and let 푚 ∈ ℕ, 1 ≤ 푝 < ∞. Let (Aℎ)_{ℎ<ℎ0} be a family of approximation operators local relative to Ϻ that map W^푚_푝(U푏ℎ(Ϻ)) into W^휇_푝(U푎ℎ(Ϻ)) for fixed integer 휇 ∈ {0, ..., 푚 − ĸ}. Let further for any 휆 ∈ {휇, ..., 휇 + ĸ} hold

‖퐹 − Aℎ퐹‖_{W^휆_푝(U푎ℎ(Ϻ))} ≤ c ℎ^{푚−휆} ‖퐹‖_{W^푚_푝(U푏ℎ(Ϻ))}.

Then A^ɴ_ℎ ∶= ͲϺAℎEN satisfies for any 푓 ∈ W^푚_푝(Ϻ) the relation

‖푓 − A^ɴ_ℎ푓‖_{W^휇_푝(Ϻ)} ≤ c ℎ^{푚−휇} ‖푓‖_{W^푚_푝(Ϻ)}.
Proof: We give the proof using ideas of [75, 76, 82] for the case of codimension one. To simplify notation in the subsequent arguments, we make the abbreviations

퐹⃡ = EN푓, 퐹ℎ = Aℎ퐹⃡, 퐹⃡ℎ = ENͲϺ퐹ℎ, Uℎ = Uℎ(Ϻ), U푎ℎ = U푎ℎ(Ϻ), U푏ℎ = U푏ℎ(Ϻ).

Then we choose an arbitrary single pair (ψ, ω) from an inverse atlas 픸Ϻ ∈ ℿ(Ϻ) and corresponding pairs (Ψ, Ω푎ℎ), (Ψ, Ω푏ℎ) for the parameterisations of the neighbourhoods by the normal foliation. By the norm equivalences of Theorem 2.26 and the fact that 푎 > 0 is fixed, we obtain the relation

‖푓 − A^ɴ_ℎ푓‖_{W^휇_푝(ψ(ω))} ≤ c ℎ^{−ĸ/푝} ‖퐹⃡ − 퐹⃡ℎ‖_{W^휇_푝(Ψ(Ω푎ℎ))} ≤ c ℎ^{−ĸ/푝} (‖퐹⃡ − 퐹ℎ‖_{W^휇_푝(Ψ(Ω푎ℎ))} + ‖퐹ℎ − 퐹⃡ℎ‖_{W^휇_푝(Ψ(Ω푎ℎ))}).   (3.3.1)
The first summand there can be bounded by our hypothesis as
‖퐹⃡ − 퐹ℎ‖_{W^휇_푝(Ψ(Ω푎ℎ))} ≤ ‖퐹⃡ − 퐹ℎ‖_{W^휇_푝(U푎ℎ)} ≤ c ℎ^{푚−휇} ‖퐹⃡‖_{W^푚_푝(U푏ℎ)} ≤ c ℎ^{ĸ/푝} ℎ^{푚−휇} ‖푓‖_{W^푚_푝(Ϻ)} ≤ c ℎ^{푚−휇+ĸ/푝} ‖푓‖_{W^푚_푝(Ϻ)}.   (3.3.2)
For the second summand we have to work a bit harder: We obtain once again by equivalence of Sobolev norms under diffeomorphisms that
‖퐹ℎ − 퐹⃡ℎ‖_{W^휇_푝(Ψ(Ω푎ℎ))} = ∑_{|훽|≤휇} ‖휕^훽(퐹ℎ − 퐹⃡ℎ)‖_{L푝(Ψ(Ω푎ℎ))} ≤ c ∑_{|훼|≤휇} ‖휕^훼(퐹ℎ ∘ Ψ − 퐹⃡ℎ ∘ Ψ)‖_{L푝(Ω푎ℎ)}.   (3.3.3)
Now we partition the multi-index 훼 = (훼1, ..., 훼푑) into 훼 = (훼k⃮, 훼ĸ⃯), where 훼k⃮ collects the first 푘 entries and 훼ĸ⃯ the last ĸ entries. Then we distinguish two cases:
Case one is that |훼ĸ⃯| > 0. In that case, we first clarify that for 푓ℎ ∶= ͲϺ퐹ℎ, the very definition of EN yields (EN푓ℎ)(Ψ(푥, 푧)) = 푓ℎ(ψ(푥)). Hence we have
휕푥^{훼k⃮} 휕푧^{훼ĸ⃯} (퐹⃡ℎ ∘ Ψ) = 0.

Using the same argument again we obtain 휕푥^{훼k⃮} 휕푧^{훼ĸ⃯} (퐹⃡ ∘ Ψ) = 0, so in particular

휕푥^{훼k⃮} 휕푧^{훼ĸ⃯} (퐹ℎ ∘ Ψ − 퐹⃡ℎ ∘ Ψ) = 휕푥^{훼k⃮} 휕푧^{훼ĸ⃯} (퐹ℎ ∘ Ψ − 퐹⃡ ∘ Ψ).
Now we can exploit the approximation power of Aℎ: Because of norm equivalence under diffeomorphisms, the norm equivalences of Theorem 2.26 and because |훼| ≤ 휇 we have
‖휕^훼(퐹ℎ ∘ Ψ − 퐹⃡ ∘ Ψ)‖_{L푝(Ω푎ℎ)} ≤ c ‖퐹ℎ − 퐹⃡‖_{W^휇_푝(Ψ(Ω푎ℎ))} ≤ c ‖퐹ℎ − 퐹⃡‖_{W^휇_푝(U푎ℎ(Ϻ))} ≤ c ℎ^{푚−휇} ‖퐹⃡‖_{W^푚_푝(U푏ℎ(Ϻ))} ≤ c ℎ^{푚−휇+ĸ/푝} ‖푓‖_{W^푚_푝(Ϻ)}.   (3.3.4)
In the second case we have |훼ĸ⃯| = 0. We abbreviate 푅ℎ ∶= 퐹ℎ ∘ Ψ − 퐹⃡ℎ ∘ Ψ and find all the requirements of the multicodimensional Friedrichs' inequality from Theorem 2.37 fulfilled. So we obtain

‖휕푥^{훼k⃮}푅ℎ‖^푝_{L푝(Ω푎ℎ)} ≤ c ∑_{ℓ=1}^{ĸ} ℎ^{푝⋅ℓ} ∑_{|훽|=ℓ} ‖휕푥^{훼k⃮}휕푧^훽푅ℎ‖^푝_{L푝(Ω푎ℎ)}.
Proceeding for any such 훽, since we still have |훼| + |훽| ≤ |훼| + ĸ ≤ 휇 + ĸ but now with some derivative in the last ĸ coordinates, we obtain as in (3.3.4) that
‖휕푥^{훼k⃮}휕푧^훽푅ℎ‖^푝_{L푝(Ω푎ℎ)} ≤ ‖휕푥^{훼k⃮}휕푧^훽(퐹ℎ ∘ Ψ − 퐹⃡ ∘ Ψ)‖^푝_{L푝(Ω푎ℎ)} ≤ c ℎ^{(푚−휇−|훽|)푝+ĸ} ‖푓‖^푝_{W^푚_푝(Ϻ)},

and so by summation

‖휕푥^{훼k⃮}푅ℎ‖^푝_{L푝(Ω푎ℎ)} ≤ c ∑_{ℓ=1}^{ĸ} ℎ^{푝⋅ℓ} ∑_{|훽|=ℓ} ℎ^{(푚−휇−ℓ)푝+ĸ} ‖푓‖^푝_{W^푚_푝(Ϻ)} ≤ c ℎ^{(푚−휇)푝+ĸ} ‖푓‖^푝_{W^푚_푝(Ϻ)}.
Again by the norm equivalence of Theorem 2.26 and regardless of the particular choice of 훼, we obtain the relation
‖휕^훼(퐹ℎ ∘ Ψ − 퐹⃡ℎ ∘ Ψ)‖_{L푝(Ω푎ℎ)} ≤ c ℎ^{푚−휇+ĸ/푝} ‖푓‖_{W^푚_푝(Ϻ)}.

Reinserting into (3.3.1) and taking all necessary sums over all inverse atlas elements yields the desired result, as the inverse atlas was finite by assumption. K
3.4 Remark: The restriction in terms of ĸ is the main drawback of this result, as it can restrict the applicable norms significantly. Later in this chapter, we will be able to give an improved result for TP-splines and linear foliations that reduces this loss to 1, regardless of the actual codimension ĸ.
3.2 Quasi-Projection and Polynomial Reproduction
3.2.1 Definition and Approximation Power
The concept of quasi-interpolation (and more specifically quasi-projection, cf. e.g. [68, 89]) is vital to many approximation results in univariate and multivariate settings, and it is inevitably encountered alongside the concept of polynomial reproduction. The basic idea for both is simple, and the theoretical results achievable thereby have important practical consequences in many approximation methods; one of these, based on tensor product splines, we will review in further detail hereafter. But before we come to that, we briefly review the general concepts of quasi-interpolation and quasi-projection. We begin with the following definition (cf. [68]):
3.5 Definition Let F be an arbitrary Banach space of functions. Let (휑푖)푖∈퐼 be a family of linearly independent elements of F that span a subspace F1 and satisfy ‖휑푖‖_F ≤ 푏 for fixed 푏 and any 푖 ∈ 퐼. Let further (Λ푖)푖∈퐼 be a family of uniformly bounded functionals on F, so ‖Λ푖‖ < c for some global constant c > 0. Then the corresponding quasi-interpolant for any 푓 ∈ F is given as

Ϙ푓 = ∑_{푖∈퐼} Λ푖(푓) ⋅ 휑푖.
If we take the special case of Lebesgue spaces with index 푝 ∈ ]1, ∞[ over ℝ푑 and choose its Hölder dual 푝⋆ = 푝/(푝 − 1), then a quasi-projection is the operation
Q푓 = ∑_{푖∈퐼} ⟨푓, 휑푖⋆⟩ ⋅ 휑푖 with ⟨푓, 휑푖⋆⟩ = ∫ 푓 ⋅ 휑푖⋆,

for functions {휑푖⋆}푖∈퐼 ⊆ L푝⋆(ℝ푑) with ‖휑푖⋆‖_{L푝⋆} ≤ 푏. The quasi-projection is further called a local quasi-projection if there is a fixed 푎0 > 0 and a set of points {ζ푖}푖∈퐼 such that any cube of unit length in ℝ푑 contains at most ո0 of these points for a fixed ո0 ∈ ℕ and it holds

supp 휑푖 ∪ supp 휑푖⋆ ⊆ [−푎0, 푎0]^푑 + ζ푖.
3.6 Remark: Fixed quasi-interpolation operators Ϙ can yield others by simple scaling, and provided the operators are scaled suitably, the operator norms of Λ푖 and its ℎ-scaled version Λ푖,ℎ coincide. In particular (cf. [68]), fixed quasi-projection operators Q yield arbitrary ones by scaling and we obtain for 1/푝 + 1/푝⋆ = 1
Qℎ푓 = ∑_{푖∈퐼} ⟨푓, ℎ^{−푑/푝⋆} 휑푖⋆(⋅/ℎ)⟩ ⋅ ℎ^{−푑/푝} 휑푖(⋅/ℎ).

The respective spaces spanned by the {휑푖(⋅/ℎ)} will consequently be denoted by

Fℎ ∶= spɑn {휑푖(⋅/ℎ) ∶ 푖 ∈ 퐼}.
Common examples of function spaces and quasi-interpolation techniques include, in particular, splines (cf. [81, 89] and see below) and also, for example, moving least squares (cf. [105, Ch. 4]). Both examples also come along with the second concept featured in the section title, the reproduction of polynomials. In fact, it is this property that is the key ingredient of their approximation power:
3.7 Definition A quasi-interpolation operator Ϙ is said to provide polynomial reproduction of order 푚 ∈ ℕ if
Ϙ 푝 = 푝 for all 푝 ∈ P푚(ℝ푑).
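A minimal illustration of polynomial reproduction (our own sketch, not taken from the text): the quasi-interpolant with integer-shifted hat functions 휑푖 = hat(⋅ − 푖) and point-evaluation functionals Λ푖(푓) = 푓(푖) reproduces all affine polynomials, i.e. it provides polynomial reproduction of order 2.

```python
def hat(x):
    # piecewise linear B-spline ("hat"), support [-1, 1], hat(0) = 1
    return max(0.0, 1.0 - abs(x))

def quasi_interpolant(f, x, radius=3):
    # Qf(x) = sum_i Λ_i(f) φ_i(x) with Λ_i(f) = f(i) and φ_i = hat(· - i)
    i0 = int(round(x))
    return sum(f(i) * hat(x - i) for i in range(i0 - radius, i0 + radius + 1))

p = lambda x: 2.0 * x + 1.0               # an affine polynomial, p ∈ P^2(R)
for x in (0.0, 0.3, 2.75, -1.5):
    assert abs(quasi_interpolant(p, x) - p(x)) < 1e-12   # Q p = p
```

For quadratics the hat-function operator with point evaluations no longer reproduces exactly; higher reproduction orders require higher-degree B-splines together with suitably constructed dual functionals, as discussed below.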
Following this definition, a suitable result for approximation in fractional Sobolev spaces is given in the literature, to be found in [68, Sect. 3/5]. There, the result is presented in terms of integer Sobolev spaces and Besov spaces, but fractional Sobolev spaces appear as special cases of these (cf. Appendix Sect. 9.2.3ff.).
3.8 Theorem Let {Qℎ}_{0<ℎ<ℎ0} be a family of local quasi-projection operators reproducing P푚 on ℝ푑.

1. Let the basis functions {휑푖}푖∈퐼 satisfy ‖휑푖‖_{H푟(ℝ푑)} ≤ 푏0 and ‖휑푖⋆‖_{L2(ℝ푑)} ≤ 푏0 for fixed 푏0 and some 0 < 푟 < 푚.〈1〉 Then we have for 0 < ℎ < ℎ0, 0 ≤ 휚 < 푟 and 퐹 ∈ H푟(ℝ푑) that

‖퐹 − Qℎ퐹‖_{H휚(ℝ푑)} ≤ c ℎ^{푟−휚} ‖퐹‖_{H푟(ℝ푑)}.

〈1〉 The condition on 휑푖 featured in [68] was actually ‖휑푖‖_{B^푟_{푝,∞}} ≤ 푏0 for the Besov space B^푟_{푝,∞}, but this is clearly implied in our setting by our condition ‖휑푖‖_{H푟(ℝ푑)} ≤ 푏0 due to the embedding stated in [96, Sect. 2.3.2].

2. Let the basis functions {휑푖}푖∈퐼 satisfy ‖휑푖‖_{H^{푚−1}(ℝ푑)} ≤ 푏0 and ‖휑푖⋆‖_{L2(ℝ푑)} ≤ 푏0 for fixed 푏0 > 0. Then we have for 0 < ℎ < ℎ0, 휚 ∈ [0, 푚 − 1] and 퐹 ∈ H푚(ℝ푑) that

‖퐹 − Qℎ퐹‖_{H휚(ℝ푑)} ≤ c ℎ^{푚−휚} ‖퐹‖_{H푚(ℝ푑)}.
Proof: The proof for the first case is given in [68]. The second relation is also given in [68] for ℝ푑 and integer orders, so 휚 = 휇 ∈ ℕ0. We use the interpolation property from Prop. 2.30 to generalise these to reals: As the relation is valid for integers, we can in particular deduce that the operator norm of Id − Qℎ as an operator from H푚 to H휇 for any integer 0 ≤ 휇 < 푚 is bounded by c ℎ^{푚−휇}. So we have that relation for ⌊휚⌋ and ⌈휚⌉ in particular. Then we obtain with 휃 ∈ ]0, 1[ and 휚 = 휃⌊휚⌋ + (1 − 휃)⌈휚⌉ that

‖Id − Qℎ‖_{H푚→H휚} ≤ c (ℎ^{푚−⌊휚⌋})^휃 (ℎ^{푚−⌈휚⌉})^{1−휃} = c ℎ^{푚−휃⌊휚⌋−(1−휃)⌈휚⌉} = c ℎ^{푚−휚}.
K
The results of this theorem can be generalised, at least theoretically, to obtain the same convergence rates also on arbitrary Ω ∈ 핃핚핡푑, and there are several options to do so: The first is that for any 휑푖 such that supp 휑푖(⋅/ℎ) ∩ Ω ≠ ⌀ it holds supp 휑푖⋆(⋅/ℎ) ⊆ Ω. In this case, the results generalise by definition and we obtain in particular for admissible choices of 휚

‖퐹 − Qℎ퐹‖_{H휚(Ω)} ≤ c ℎ^{푟−휚} ‖퐹‖_{H푟(Ω)},

and equivalent adaptions of the second statement of the theorem. The second option is that 퐹 is actually known on some Ω¤ ∈ 핃핚핡푑 such that Ω ⋐ Ω¤. In this case, we can apply the extension operator EΩ¤ and obtain 퐹¤ = EΩ¤퐹, which coincides with 퐹 on Ω. By the locality property, we then have for sufficiently small ℎ > 0 that whenever

supp 휑푖(⋅/ℎ) ∩ Ω ≠ ⌀ and supp 휑푖⋆(⋅/ℎ) ∩ Ω ≠ ⌀,

we also have

supp 휑푖(⋅/ℎ) ⊆ Ω¤ and supp 휑푖⋆(⋅/ℎ) ⊆ Ω¤.
In this case, we obtain with the identification of Qℎ with QℎEΩ¤ (which does not affect the result on Ω by locality) for admissible choices of 휚 that

‖퐹 − Qℎ퐹‖_{H휚(Ω)} ≤ c ℎ^{푟−휚} ‖퐹¤‖_{H푟(Ω¤)}.
Again, comparable adaptions of the second statement of the theorem can be deduced similarly. The third option is to define Qℎ directly by identifying Qℎ = QℎEΩ for the universal continuous extension operator EΩ. Then we obtain, at least theoretically, for admissible choices of 휚

‖퐹 − Qℎ퐹‖_{H휚(Ω)} ≤ c ℎ^{푟−휚} ‖퐹‖_{H푟(Ω)}.
As before, we can deduce equivalent adaptions of the second statement of the theorem.
3.9 Remark: These results will prove useful in a two-stage approximation method presented in a later chapter, while we will hardly ever use them otherwise. There, we will be in a situation of a compactly supported objective function, and thus the extension to all of ℝ푑 is easily accomplished.
3.2.2 Interpolating Quasi-Projections
As our final objective in the treatment of general quasi-projection operators, we are now going to investigate whether we can enhance them such that Qℎ푓 interpolates 푓 in some finite, fixed set Ξ as long as ℎ is small enough. We will see that we can indeed do so, and this will become very important later on: Once we study approximation in terms of energy functionals in a later chapter, we wish to use the convergence orders that quasi-projection operators provide as benchmarks. But sometimes our functionals are only given under strict interpolation constraints, for example if we want to minimise the ESM-equivalent of the energy

∫_{ℝ푑} ∑_{|훼|=2} (휕^훼퐹)^2

within the convex set of all functions in H^2(Ϻ) that interpolate given function values in a given finite set Ξ = {ξ1, ..., ξ푛} ⊆ Ϻ.
Consequently, our objective is now to determine how such an operator Q^Ξ_ℎ can be constructed, at least theoretically, to provide us with a benchmark for the "achievable" approximation order. The key ingredient for this is the idea of suitably blending the operator Qℎ with an interpolation operator I^Ξ_ℎ in the form

Q^Ξ_ℎ ∶= Qℎ + I^Ξ_ℎ − I^Ξ_ℎ ⋅ Qℎ.

This idea was for example proposed in [102], and one directly verifies that Q^Ξ_ℎ푓 will indeed interpolate a given function 푓 in all ξ ∈ Ξ.
What we still need now is the interpolation operator I^Ξ_ℎ, and we require it for all 0 < ℎ < ℎ0 as long as ℎ0 is sufficiently small. We will now present a suitable construction method:
First of all, we restrict ourselves to local quasi-projection operators, which will be sufficient for our later treatment. Due to the C∞-version of Urysohn's Lemma (cf. [78, Sect. 4.4]), we can find an arbitrarily smooth function 휑ξ ∶ Ω → [0, 1] for any ξ ∈ Ξ that is constantly 1 on B^푑_{푞Ξ/4}(ξ) and has support in B^푑_{푞Ξ/2}(ξ), where we define as usual the separation distance 푞Ξ by

푞Ξ ∶= min_{ξ1,ξ2∈Ξ, ξ1≠ξ2} ‖ξ1 − ξ2‖2.
Now we define an interpolation operator I^Ξ_0 as

I^Ξ_0푓 ∶= ∑_{ξ∈Ξ} 푓(ξ) 휑ξ.
Since Ξ does not vary when changing ℎ, this operator is fixed for any scaling factor ℎ of the operator Qℎ, and moreover we have for the Sobolev Hilbert space H푟

‖I^Ξ_0푓‖_{H푟} ≤ max_{ξ∈Ξ} |푓(ξ)| ∑_{ξ∈Ξ} ‖휑ξ‖_{H푟}.

Furthermore, as long as 푟 > 푑/2 we have H푟 ↪ C by the Sobolev embedding theorem, and we can thereby deduce that

‖I^Ξ_0푓‖_{H푟} ≤ cΞ ‖푓‖_{H푟}.
Now we have to choose ℎ0 small enough such that QℎI^Ξ_0푓 indeed interpolates 푓 at the points of Ξ. That this is actually possible is a consequence of Qℎ being a local quasi-projection operator by assumption and of the fact that Ξ is fixed while ℎ can be chosen small: We only have to wait until any basis function and functional of Qℎ relevant for the function value at a specific ξ has its whole support contained in B^푑_{푞Ξ/4}(ξ). If we choose ℎ0 that small, we obtain that QℎI^Ξ_0푓 interpolates 푓 at any point of Ξ for any ℎ < ℎ0, thereby defining a suitable operator I^Ξ_ℎ ∶= QℎI^Ξ_0. With this operator, we can now define Q^Ξ_ℎ as

Q^Ξ_ℎ ∶= Qℎ + I^Ξ_ℎ − I^Ξ_ℎ ⋅ Qℎ.
3.10 Remark: It is also worth noting that if Qℎ is a projection operator, so Qℎ푓 = 푓 for 푓 ∈ Fℎ, then Q^Ξ_ℎ is a projection operator as well, as for any 푓 ∈ Fℎ it holds

Q^Ξ_ℎ푓 = (Qℎ + I^Ξ_ℎ − I^Ξ_ℎ ⋅ Qℎ)푓 = Qℎ푓 + I^Ξ_ℎ푓 − I^Ξ_ℎ푓 = Qℎ푓 = 푓.
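The interpolation property of the blended operator can be checked numerically. The sketch below uses made-up toy operators (a deliberately non-interpolating Q and an interpolation operator I built from bump functions of width below 푞Ξ/2); it only relies on I reproducing point values on Ξ, exactly as in the construction above.

```python
def make_I(Xi, width):
    # interpolation operator I f = sum_ξ f(ξ) φ_ξ with bumps satisfying φ_ξ(ξ') = δ_{ξξ'}
    phi = lambda x, xi: max(0.0, 1.0 - abs(x - xi) / width)
    return lambda f: (lambda x: sum(f(xi) * phi(x, xi) for xi in Xi))

def blend(Q, I):
    # Q^Ξ := Q + I - I·Q  (interpolating blend of a quasi-projection)
    return lambda f: (lambda x: Q(f)(x) + I(f)(x) - I(Q(f))(x))

Xi = [0.0, 1.0, 2.5]                    # separation distance q_Ξ = 1
I = make_I(Xi, width=0.4)               # width < q_Ξ / 2, so bumps separate the nodes
Q = lambda f: (lambda x: 0.5 * f(x))    # toy non-interpolating "approximation"
QXi = blend(Q, I)

f = lambda x: x**2 + 1.0
for xi in Xi:
    assert abs(QXi(f)(xi) - f(xi)) < 1e-12   # Q^Ξ f interpolates f on Ξ
```

At each node ξ the terms combine as Q푓(ξ) + 푓(ξ) − Q푓(ξ) = 푓(ξ), independently of how poor the underlying approximation Q is; away from Ξ, Q^Ξ inherits the behaviour of Q up to the bounded correction by I.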
If we are now interested in the approximation power in some space H푟, then we see that if I^Ξ_ℎ is bounded in the respective space, the approximation power is indeed reproduced:

‖푓 − Q^Ξ_ℎ푓‖_{H푟} ≤ ‖푓 − Qℎ푓‖_{H푟} + ‖I^Ξ_ℎ푓 − I^Ξ_ℎQℎ푓‖_{H푟} ≤ (1 + ‖I^Ξ_ℎ‖_{H푟}) ⋅ ‖푓 − Qℎ푓‖_{H푟}.
Taking the construction into account, I^Ξ_ℎ is bounded at least if Qℎ is bounded and if

max_{ξ∈Ξ} |푓(ξ)| ∑_{ξ∈Ξ} ‖휑ξ‖_{H푟} ≤ c max_{ξ∈Ξ} ‖휑ξ‖_{H푟} ‖푓‖_{H푟},

with the latter being the case precisely if H푟 ↪ C. So we can deduce that in our case convergence orders are reproduced at least whenever 푟 > 푑/2, which we summarise in the following theorem:
3.11 Theorem The operator Q^Ξ_ℎ constructed as above from a local quasi-projection operator Qℎ for a fixed set Ξ reproduces the convergence order of Qℎ on H휚 whenever Qℎ is bounded, 휚 > 푑/2 and 0 < ℎ < ℎ0 is sufficiently small.
3.12 Remark: Of course, the {휑ξ} need not be C∞ functions to make the above construction work; they only need to be bounded in H휚 at least.
3.3 Tensor Product B-Splines
The primary example of an approximation space we are going to review is that of (uniform) tensor product (B-)splines, henceforth abbreviated as usual as TP-splines. We will employ these as our main technique when it comes to certain other practical applications of our theory, particularly concerning functional minimisations. This choice is due to several factors, most importantly:
1. TP-splines have nice approximation properties, particularly when applied in gridded regions. Most importantly, they admit suitable quasi-projection operators.
2. There is a considerable number of numerical techniques available that can handle TP-splines, so we can rely on well tested and efficient software pack- ages or components, e.g. in Matlab®.
3. When increasing the number of degrees of freedom, we can rely on the fact that the locality of the basis functions can be increased accordingly without loss of approximation power.
3.3.1 Definition and Initial Approximation Results
Now it is time to give a definition of B-splines as we are going to use them in the subsequent course of this thesis; we restrict ourselves to the case that is commonly known as a uniform B-spline, as there seems to be no real advantage in the application of nonuniform B-splines in the forthcoming chapters, and uniformity simply makes notation and certain other aspects far easier. We follow [89] to define them.
Let ℎ > 0 and 푚 ∈ ℕ. Then we define a function 퐵1 as

퐵1(푥) ∶= 1 for −1/2 ≤ 푥 < 1/2, and 퐵1(푥) ∶= 0 else,

and inductively for 푚 > 1 the function 퐵푚 as

퐵푚(푥) ∶= ∫_ℝ 퐵푚−1(푥 − 푡) 퐵1(푡) 푑푡.
3.13 Definition Let now ℎ > 0 and 푛 ∈ ℤ. Then we define the uniform B-spline b^푚_{푛,ℎ} by

b^푚_{푛,ℎ}(푥) ∶= 퐵푚(푥/ℎ + 푚/2 − 푛).

For 푛 ∈ ℤ푑, the tensor product

b^푚_{푛,ℎ}(푥) ∶= b^푚_{푛1,ℎ}(푥1) ⋯ b^푚_{푛푑,ℎ}(푥푑)

of 푑 such B-splines {b^푚_{푛푗,ℎ}(푥푗)}_{푗=1}^{푑} is called a tensor product B-spline. A linear combination 푠 of these basis functions is called a tensor product spline, and the space of all those splines is denoted by S^푚_ℎ(ℝ푑). If Ω ⊆ ℝ푑, then the set of all TP-B-splines whose support has nonempty intersection with Ω is called the set of active B-splines, and their span is denoted by S^푚_ℎ(Ω).
The set of cells obtained by considering all 푑-dimensional cubes Cℎ(ℝ푑) ∶= {ᴄℎ[ƶ] ∶= [0, ℎ]^푑 + ƶ ∶ ƶ ∈ ℎ ⋅ ℤ푑} gives all the regions where the restriction of a spline is by definition a polynomial of fixed degree. The subset Cℎ(Ω) ⊆ Cℎ(ℝ푑) of active cells for some Ω ⊆ ℝ푑 is defined by

Cℎ(Ω) ∶= {ᴄ ∈ Cℎ(ℝ푑) ∶ ᴄ ∩ Ω ≠ ⌀}.
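The inductive convolution definition lends itself to a direct numerical check. The sketch below (our own discretisation choices, not part of the text) computes 퐵푚 by 푚 − 1 discrete convolutions of 퐵1 with itself and also verifies the well-known fact that the integer translates of 퐵푚 sum to one.

```python
import numpy as np

def B(m, x, n=8193):
    # B_m via m-1 numerical convolutions of the indicator of [-1/2, 1/2)
    t = np.linspace(-m / 2.0, m / 2.0, n)   # odd n, so t = 0 is a grid point
    dt = t[1] - t[0]
    b1 = ((t >= -0.5) & (t < 0.5)).astype(float)
    Bm = b1.copy()
    for _ in range(m - 1):
        # 'same' keeps the centred window; dt turns the sum into an integral
        Bm = np.convolve(Bm, b1, mode='same') * dt
    return float(np.interp(x, t, Bm))

print(round(B(2, 0.0), 2))   # 1.0  (peak of the hat function)
print(round(B(3, 0.0), 2))   # 0.75 (quadratic B-spline at the centre)
# partition of unity of the integer shifts B_m(· - k):
print(round(sum(B(3, 0.3 - k) for k in range(-2, 3)), 2))  # 1.0
```

In practice one would of course use the closed-form piecewise polynomials or a recurrence instead of numerical convolution; the point here is only that the values agree with the definition above.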
The most important fact for us is that TP-B-splines admit local quasi-projection operators that reproduce P푚. To see this, we make the choice 휑푖(⋅) = b^푚_푖(⋅) for 푖 ∈ ℤ푑 and ℎ = 1. Then we start with an initial quasi-projection operator S1 that we can explicitly construct. Afterwards, we can go along the lines of Remark 3.6 for scaling to obtain Sℎ.
We will now determine the initial local quasi-projection operator S1 by choosing the 휑푖⋆ as follows: Since the basis functions are just uniform shifts of b0 = b^푚_{ퟎ,1} for the zero vector ퟎ ∈ ℤ푑, we fix this b0 and deduce the general behaviour by shifting. Denote now the other basis functions active on the support of b0 by bℓ1, ..., bℓ푛 for suitable ℓ1, ..., ℓ푛 ∈ ℤ푑. Next, we apply L2(supp b0)-Gram-Schmidt orthogonalisation to the sequence bℓ1, ..., bℓ푛, b0 in that order to obtain mutually orthogonal b̃ℓ1, ..., b̃ℓ푛, b̃0. The resulting b̃0 is now our choice for the dual function, but we still have to justify that this choice is valid:
First of all, one can directly see that the support of b̃0 is bounded, and that by construction b̃0 is L2-orthogonal to the initial bℓ1, ..., bℓ푛, as

spɑn {bℓ1, ..., bℓ푛} = spɑn {b̃ℓ1, ..., b̃ℓ푛} ⟂ b̃0.
The resulting functional Λ0 is then defined by

Λ0푓 = ∫_{supp b0} 푓 ⋅ b̃0,

and it is bounded on L푝, as by Hölder's inequality

∣ ∫_{supp b0} 푓 ⋅ b̃0 ∣ ≤ ‖푓‖_{L푝(supp b0)} ⋅ ‖b̃0‖_{L푝⋆(supp b0)}.
Suitable rescaling of b̃0 gives ∫ b0 ⋅ b̃0 = 1 without any harm. Since the basis function b0 was chosen arbitrarily, this construction will do for any B-spline by simple translation of our given functional. Then we define the functionals {Λ푛,ℎ}_{푛∈ℤ푑} for arbitrary ℎ and S^푚_ℎ as proposed in Remark 3.6 by appropriate scaling. A simple calculation and an application of the transformation law then show that the operator norms of Sℎ and S1 on L푝 coincide.
Furthermore, the resulting operator is a true projector onto S^푚_ℎ and thus in particular provides reproduction of polynomials: This can directly be deduced from the fact that by construction each functional reproduces the coefficient of the corresponding basis function, and so in particular any polynomial of total order at most 푚 is reproduced, as we clearly have P푚(ℝ푑) ⊆ S^푚_ℎ(ℝ푑). In fact, even tensor product polynomials of order at most 푚 are reproduced by the same argument.
3.14 Remark: (1) When it comes to practical implementation, it is convenient to see that in the case 푝 = 2 the above process gives us Λ0푓 as the value 훽_{ℓ0} for ℓ0 = ퟎ obtained from the quadratic minimisation problem

∫_{supp bퟎ} ( ∑_{푖=0}^{푛} 훽_{ℓ푖} bℓ푖 − 푓 )^2 → min!
This holds because the ONB gives a projection operator in this Hilbert space case, and then the two definitions of the functional turn out to be effectively equal.
(2) One can also perform the prescribed projection / minimisation separately on each cell of the support, and take the average of the results as the ultimate choice for the coefficient. Theoretically, this results in 푚^푑 independent quasi-projections, namely one for each fixed cell position in a support, whose average is taken afterwards, but all relevant properties remain unaffected. Similarly, one could also use arbitrary suitable subsets of supp bퟎ, in particular just a single cell.
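The Gram-Schmidt construction of the dual function b̃0 can be made concrete in the univariate quadratic case 푚 = 3 (a numerical sketch under our own discretisation; the closed-form expression used for the centred quadratic B-spline is standard):

```python
import numpy as np

def B3(x):
    # centred quadratic B-spline, supp B3 = [-1.5, 1.5]
    ax = np.abs(x)
    return np.where(ax <= 0.5, 0.75 - ax**2,
                    np.where(ax <= 1.5, 0.5 * (1.5 - ax)**2, 0.0))

x = np.linspace(-1.5, 1.5, 30001)            # quadrature grid on supp b_0
dx = x[1] - x[0]
ip = lambda f, g: float(np.sum(f * g) * dx)  # discrete L2(supp b_0) inner product

b0 = B3(x)
neighbours = [B3(x - l) for l in (-2, -1, 1, 2)]  # basis functions active on supp b_0

# Gram-Schmidt: orthogonalise the neighbours first, then b_0 against them
ortho = []
for v in neighbours + [b0]:
    g = v.copy()
    for q in ortho:
        g -= ip(g, q) / ip(q, q) * q
    ortho.append(g)
b0_dual = ortho[-1] / ip(ortho[-1], b0)      # rescale so that ∫ b_0 · b̃_0 = 1

# the functional Λ_0 f = ∫ f · b̃_0 picks out the coefficient of b_0:
print(round(ip(b0, b0_dual), 3))             # 1.0
print(round(abs(ip(B3(x - 1), b0_dual)), 3)) # 0.0
```

By construction b̃0 annihilates all neighbouring basis functions and pairs to one with b0, so Λ0 applied to a spline returns exactly the coefficient of b0, which is the projection property used in the text.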
Since we have now defined suitable local quasi-projections (even projections), we can in particular deduce that the results of Theorem 3.8 hold for TP-splines. However, still undetermined in this context remains the exact regularity of a TP-B-spline in the Hilbert Sobolev sense. Clearly, any such B-spline is in H^{푚−1}, but the question is whether we can actually obtain H^{푚−훿} for some 0 < 훿 < 1. The answer is indeed affirmative for 훿 > 1/2, but we need the definition of Besov spaces in terms of moduli of smoothness for this, which we briefly revised in the appendix: By Example 9.13 presented there, we obtain that a univariate uniform B-spline is in H^{푚−1/2−ԑ} for any ԑ > 0. Consequently, a multivariate TP-B-spline is as well at least in H^{푚−1/2−ԑ}; we can deduce this directly by Fubini. So the supremal choice for 푟 in the first statement of Theorem 3.8 is 푚 − 1/2.

3.15 Corollary Let {Sℎ}_{0<ℎ<ℎ0} be a family of local quasi-projection operators on S^푚_ℎ(ℝ푑) as introduced above. Then the results of Theorem 3.8 are applicable. Therein, the supremum of the admissible choices for 푟 > 0 in the first statement is at least 푚 − 1/2.
3.3.2 Locality of Approximation and the Continuity of Operators
We proceed with some more elaborate considerations on the local approximation power of local quasi-interpolants based on splines, using [89, §12.3, §13.3, §13.4]:
3.16 Lemma Let 1 ≤ 푝 < ∞ and 0 < 푛 ≤ 푚 ∈ ℕ. Let ᴄℎ[ƶ] be an arbitrary cell of the spline space S^푚_ℎ(ℝ푑), and let 휉 ∈ ᴄℎ[ƶ] and 푓 ∈ W^푛_푝(B푚ℎ[ƶ]). Let further 0 ≤ 휇 < 푛 and let 푇푛,휉 be the averaged Taylor polynomial of (tensorial or total) maximal order 푛 to 푓 in 휉. Then holds the relation

‖푓 − 푇푛,휉‖_{W^휇_푝(ᴄℎ[ƶ])} ≤ ‖푓 − 푇푛,휉‖_{W^휇_푝(B푚ℎ[ƶ])} ≤ c ℎ^{푛−휇} ‖푓‖_{W^푛_푝(B푚ℎ[ƶ])}.
37 Proof: This is just (averaged) Taylor remainder estimation due to [89, §13.4]. K
3.17 Lemma In the setting of the last lemma, any local quasi-projection operator Sℎ reproducing polynomials of order 푚 will satisfy

‖푓 − Sℎ푓‖_{W^휇_푝(ᴄℎ[ƶ])} ≤ c ‖푓 − 푇푛,휉‖_{W^휇_푝(B푚ℎ[ƶ])} ≤ c ℎ^{푛−휇} ‖푓‖_{W^푛_푝(B푚ℎ[ƶ])}.

If furthermore 푛 < 푚, then it even holds

‖푓 − Sℎ푓‖_{W^푛_푝(ᴄℎ[ƶ])} ≤ c ‖푓‖_{W^푛_푝(B푚ℎ[ƶ])}.
Proof: The first claim can be deduced from the last lemma by the polynomial reproduction properties of Sℎ via

‖푓 − Sℎ푓‖_{W^휇_푝(ᴄ[ƶ])} ≤ ‖푓 − 푇푛,휉‖_{W^휇_푝(ᴄ[ƶ])} + ‖푇푛,휉 − Sℎ푓‖_{W^휇_푝(ᴄ[ƶ])}
= ‖푓 − 푇푛,휉‖_{W^휇_푝(ᴄ[ƶ])} + ‖Sℎ푇푛,휉 − Sℎ푓‖_{W^휇_푝(ᴄ[ƶ])}.
There, the first summand is bounded by c ℎ^{푛−휇} ‖푓‖_{W^푛_푝(B푚ℎ[ƶ])} for any 휇 < 푛, but the second will need some arguments: We take, for an arbitrary suitable ordering and appropriate index set 퐼∗, the basis functions {b^푚_{ℓ푖}}_{푖∈퐼∗} and functionals {Λ_{ℓ푖,ℎ}}_{푖∈퐼∗} of Sℎ active on ᴄ[ƶ]. Then we obtain for 푅 = 푓 − 푇푛,휉 by the triangle inequality

‖Sℎ푅‖_{W^휇_푝(ᴄ[ƶ])} ≤ ∑_{푖∈퐼∗} |Λ_{ℓ푖,ℎ}푅| ⋅ ‖b^푚_{ℓ푖}‖_{W^휇_푝(ᴄ[ƶ])}.
We can bound ‖b^푚_{ℓ푖}‖_{W^휇_푝(ᴄ[ƶ])} by c ⋅ ℎ^{−휇} due to the chain rule and the construction of the b^푚_{ℓ푖} by scaling and shifting. By continuity and locality of the Λ_{ℓ푖,ℎ}, we obtain

|Λ_{ℓ푖,ℎ}푅| ≤ c ‖푓 − 푇푛,휉‖_{L푝(supp b_{ℓ푖})} ≤ c ‖푓 − 푇푛,휉‖_{L푝(B푚ℎ[ƶ])} ≤ c ℎ^푛 ‖푓‖_{W^푛_푝(B푚ℎ[ƶ])},

which finishes the first claim by insertion. For proving the second claim, we suppose the order of the Taylor polynomial to be total and observe that it still holds
‖푓 − Sℎ푓‖_{W^푛_푝(ᴄ[ƶ])} ≤ ‖푓 − 푇푛,휉‖_{W^푛_푝(ᴄ[ƶ])} + ‖Sℎ푇푛,휉 − Sℎ푓‖_{W^푛_푝(ᴄ[ƶ])}.

Now the first summand satisfies, due to totality of the order, the relation D^푛푇푛,휉 = 0, and so we can deduce that

‖푓 − 푇푛,휉‖_{W^푛_푝(ᴄ[ƶ])} ≤ ‖푓 − 푇푛,휉‖_{W^{푛−1}_푝(ᴄ[ƶ])} + ‖푓‖_{W^푛_푝(ᴄ[ƶ])},

where the right-hand side can be bounded by c ‖푓‖_{W^푛_푝(B푚ℎ[ƶ])}. And for the second term we can just proceed as in the case 휇 < 푛, because in particular ‖b^푚_{ℓ푖}‖_{W^푛_푝(ᴄ[ƶ])} is still well-defined as we are in the situation 푛 < 푚. K
As a direct consequence of this lemma, the approximation power of the operator Sℎ is clearly localised. So if we take all cells that are active on U푎ℎ(Ϻ) for some 푎 > 0 and Ϻ without boundary, then because of the finite, globally bounded overlap of the B푚ℎ[ƶ] for all relevant cell bases ƶ of cells in Cℎ(U푎ℎ(Ϻ)), we conclude after summing over all cells for sufficiently small ℎ

‖퐹 − Sℎ퐹‖_{W^휇_푝(U푎ℎ)} ≤ ∑_{ᴄ[ƶ]∈Cℎ(U푎ℎ)} ‖퐹 − Sℎ퐹‖_{W^휇_푝(ᴄ[ƶ])} ≤ c ℎ^{푚−휇} ∑_{ᴄ[ƶ]∈Cℎ(U푎ℎ)} ‖퐹‖_{W^푚_푝(B푚ℎ[ƶ])} ≤ c ℎ^{푚−휇} ‖퐹‖_{W^푚_푝(U푏ℎ)}

with suitable 푏 > 푎 + 푑(푚+1). Consequently, we obtain that Sℎ is an approximation operator that is local relative to Ϻ, and we can transfer the results of Theorem 3.3:
3.18 Corollary Let, in the general setting of Theorem 3.3, Ϻ ∈ 필^푘_cp(ℝ푑) and let {Sℎ}_{0<ℎ<ℎ0} be a family of local spline quasi-projection operators on S^푚_ℎ(U(Ϻ)) reproducing P푚. Let 0 < 푛 ≤ 푚 and 0 ≤ 휇 ≤ 푛 − ĸ be such that 휇 < 푚 − ĸ. Then we have for S^ɴ_ℎ ∶= ͲϺSℎEN and any 푓 ∈ W^푛_푝(Ϻ) the relation

‖푓 − S^ɴ_ℎ푓‖_{W^휇_푝(Ϻ)} ≤ c ℎ^{푛−휇} ‖푓‖_{W^푛_푝(Ϻ)}.
3.19 Remark: (1) If Ϻ has a boundary, then the same holds at least for S^ɴ_ℎ ∶= RϺͲϺ̂SℎENE^Ϻ̂_Ϻ, where E^Ϻ̂_Ϻ is the continuous extension operator mapping W^푛_푝(Ϻ) to W^푛_푝(Ϻ̂) for some Ϻ̂ ∈ 필^푘_cp(ℝ푑) without boundary with Ϻ ⋐ Ϻ̂, which we have required to exist. The impact of this result for Ϻ with boundary is again of rather theoretical interest, but still useful for estimating the theoretically achievable approximation order in that case.
(2) Despite the notorious problem of approximation in arbitrary domains with splines, a suitable practicable definition of the approximation operator also in the case with boundary is not beyond reach, as arguments in [79] imply.
3.20 Remark: In the case 푝 = ∞, which we neglected in the whole thesis, the above result is also valid, as can be found in [82], exemplarily discussed for TP-splines. There, it is also stated that actually for all 1 ≤ 휇 < 푚 − 1 and any 푓 ∈ W^푚_∞(Ϻ) it holds

‖푓 − S^ɴ_ℎ푓‖_{W^휇_∞(Ϻ)} ≤ c ℎ^{푚−휇} ‖푓‖_{W^푚_∞(Ϻ)}.

We will soon see that for extensions based on a linear foliation we can also achieve 1 ≤ 휇 < 푚, even in the case 푝 < ∞.
In addition to this, we also obtain a result on the continuity of local quasi-projection operators not only on Lebesgue but also on higher and fractional order Sobolev spaces. On the one hand, we can thereby deduce suitable interpolating quasi-projection operators, and on the other hand it gives us a handle to deduce

‖푓 − Sℎ푓‖_{H휚} ≤ c ‖푓‖_{H휚}

for the limiting case 푓 ∈ H휚.
3.21 Theorem Let {Sℎ}_{0<ℎ<ℎ0} be a family of local spline quasi-projection operators on S^푚_ℎ(ℝ푑) as introduced above. Then Sℎ is a bounded operator on W^휇_푝(ℝ푑) whenever 0 ≤ 휇 ≤ 푚 − 1 (휇 ∈ ℕ0) and on H휚(ℝ푑) whenever 0 ≤ 휚 ≤ 푚 − 1. The same relation holds on any Ω ∈ 핃핚핡푑 for S^ʊ_ℎ defined by S^ʊ_ℎ = RΩSℎEΩ and W^휇_푝(Ω) resp. H휚(Ω).
Proof: Due to the finite, globally bounded overlap of any collection of balls B푚ℎ[ƶ] as in Lemma 3.17, the last statement of that lemma and the triangle inequality, the claim is clear for any W^휇_푝(ℝ푑) with 0 ≤ 휇 ≤ 푚 − 1. The relation for noninteger spaces on ℝ푑 then follows by interpolation of operator norms via the interpolation property of Prop. 2.30: We choose 푚1 = 푛1 = ⌊휚⌋, 푚2 = 푛2 = ⌈휚⌉ and 0 < 휃 < 1 such that 휚 = 휃푚1 + (1 − 휃)푚2 and insert into Prop. 2.30.
In all cases, the relation for Ω ∈ 핃핚핡푑 is a consequence of the respective result for ℝ푑, applied to the bounded extension from Ω to ℝ푑 if the function is not initially known on all of ℝ푑. K

3.22 Remark: Note in particular that due to the boundedness of S^ʊ_ℎ in any W^푛_푝 with 푛 < 푚, we have for any 푓 ∈ W^푛_푝(Ω)

‖푓 − S^ʊ_ℎ푓‖_{W^푛_푝(Ω)} ≤ c ‖푓‖_{W^푛_푝(Ω)}.
This gives us a key for handling functions of limited regularity with splines of higher orders. In particular, we have that approximations of functions in H2 are bounded by a constant factor times the initial function norm whenever 푚 > 2, so in literally any situation where considering second derivatives of the splines makes sense.
One further consequence of this result is that we can enhance the validity of Theorem 3.8 for TP-splines, and thus Cor. 3.15, in the following way:
3.23 Corollary Let {Sℎ}_{0<ℎ<ℎ0} be a family of local spline quasi-projection operators for S^푚_ℎ(ℝ푑) reproducing P푚. Consider H푟(ℝ푑) for arbitrary 0 ≤ 푟 ≤ 푚. Then we have for 0 < ℎ < ℎ0 and 휚 ≤ min(⌊푟⌋, 푚 − 1) that

‖퐹 − Sℎ퐹‖_{H휚(ℝ푑)} ≤ c ℎ^{푟−휚} ‖퐹‖_{H푟(ℝ푑)}.
Proof: By the last theorem, we have in particular that for any 0 ≤ 휇 ≤ min(푛, 푚 − 1) and arbitrary fixed 푛 ≤ 푚

‖퐹 − Sℎ퐹‖_{H휇(ℝ푑)} ≤ c ℎ^{푛−휇} ‖퐹‖_{H푛(ℝ푑)}.

We can easily deduce that for any real 0 ≤ 휚 ≤ min(푛, 푚 − 1)

‖퐹 − Sℎ퐹‖_{H휚(ℝ푑)} ≤ c ℎ^{푛−휚} ‖퐹‖_{H푛(ℝ푑)}.   (3.23.1)
This comes by interpolation as applied in the proof of 3.8. Now we have to apply the interpolation property from Prop. 2.30 once again in a second step. So we first choose some real, noninteger 푟 < 푚. We can choose 휚 ≤ 푚1 < 푚2 ≤ 푚 and 0 < 휃 < 1 such that 푟 = 휃푚1 + (1 − 휃)푚2, e.g. 푚1 = ⌊푟⌋ and 푚2 = ⌈푟⌉. If we now apply the interpolation property with ϱ1 = ϱ2 = 휚, we obtain by virtue of (3.23.1) for the operator Id − Sℎ that

‖퐹 − Sℎ퐹‖_{H휚(ℝ푑)} ≤ c ℎ^{휃(푚1−휚)+(1−휃)(푚2−휚)} ‖퐹‖_{H푟(ℝ푑)} = c ℎ^{푟−휚} ‖퐹‖_{H푟(ℝ푑)}.
This is already the desired result. K
3.24 Remark: Note that this corollary is not equivalent to the first statement in 3.8 for TP-splines. There, by the regularity of splines, the supremum of admissible choices for 푟 was 푚 − 1/2, while any 휚 < 푟 could be chosen. In contrast to this, we can have any 푟 ≤ 푚 here, but must have 휚 ≤ min(⌊푟⌋, 푚 − 1).
3.3.3 Improved Approximation Results
In Theorem 3.3 and its TP-spline version in Corollary 3.18, the range for 휇 is rather restricted when it comes to higher codimensions. Consequently, we would hope for an improvement. Of course, this improvement does not come for free: While any of the previous results would remain valid if we replaced EN by some other E픽 for an arbitrary Ϻ-foliation 픽, in the following this foliation will need to be linear. Still, our normal foliation and extension can remain the role model, as these are obviously linear.
Before we come to the actual statement and proof, we will have to make some additional auxiliary statements within a couple of lemmas:
3.25 Lemma Let φ: Ω → Ω_φ be a bidirectionally C-bounded smooth diffeomorphism. Then there are a_1, a_2, b_1, b_2 > 0 such that we have for any y ∈ Ω, x = φ(y) and any r_0 > r > 0 the relations

$$φ(B_{a_1 r}(y)) ⊆ B_r(x) ⊆ φ(B_{a_2 r}(y)), \qquad φ^{-1}(B_{b_1 r}(x)) ⊆ B_r(y) ⊆ φ^{-1}(B_{b_2 r}(x)).$$

The same relations hold (with possibly other constants) if one replaces the Euclidean norm ball B(⋅) by the maximum norm ball B[⋅].

Proof: Due to equivalence of norms, it suffices to give the respective result just for the Euclidean norm balls. There we have for arbitrary z ∈ B_r(x), by the mean value theorem for some convex combination ξ of x and z, that

$$\|φ^{-1}(z) − φ^{-1}(x)\|_2 = \|Dφ^{-1}(ξ)(z − x)\|_2 \le \max_{w ∈ Ω_φ} \|Dφ^{-1}(w)\|_2 \cdot \|z − x\|_2,$$

which is globally bounded from above by c‖x − z‖_2 for φ being bidirectionally C-bounded. In the same way, we obtain the other relations. ∎
Using this, we can now give the following result on containment of images and preimages of balls in parameter spaces:
3.26 Lemma Let Ϻ ∈ 필_cp^k(ℝ^d) and let U_ԑ(Ϻ) have the closest point property. Let both be parameterised according to inverse atlases 픸_Ϻ ∈ ℿ(Ϻ) and 핌_Ϻ ∈ ℿ_N^{ex}(Ϻ), respectively. Let further δ > 0 be given. Then we can choose 0 < a < b < ∞, some h_0 > 0 and the inverse atlases 픸_Ϻ of Ϻ and 핌_Ϻ of U_ԑ(Ϻ) in such a way that it holds for any 0 < h < h_0 and any x ∈ U_{ah}(Ϻ) that

• the closure of the ball B_{hδ}[x] is contained in U_{hb}(Ϻ),
• there is (Ψ, Ω) ∈ 핌_Ϻ such that Ψ^{-1}(B_{hδ}[x]) ⋐ Ω.

Proof: The first statement is fairly obvious: We simply have to demand that h_0 is so small that we can choose b > a + dδ and that U_{bh_0}(Ϻ) has the closest point property. The second statement requires slightly more consideration: By our choice of (ψ_i, ω_i)_{i∈I} ∈ ℿ(Ϻ) and compactness, we see that there is some fixed ϱ > 0 such that not only the collection (ψ_i, ω_i)_{i∈I} parameterises Ϻ, but also (ψ_i, ω̊_i)_{i∈I} for suitable ω̊_i ⊆ {z ∈ ω_i : ‖z − ∂ω_i‖_∞ > ϱ}. As we can always rely on cylinders or balls for the parameter spaces, we can therein obviously choose Lipschitz sets again. Then for an arbitrary x ∈ U_{ah}(Ϻ) there is some suitable (ψ_j, ω̊_j) such that x ∈ Ψ(ω̊_j × B^{ĸ}_{ah}(0)). Thereby we can deduce, by the previous Lemma 3.25, that with h_0 sufficiently small it holds that

$$Ψ^{-1}(B_{hδ}[x]) ⊆ B_{hδ\tilde a}[Ψ^{-1}(x)] ⊆ ω_j × B^{ĸ}_{bh}(0)$$

for some fixed ã > 0, which gives the desired relation. ∎
3.27 Lemma Let P be a polynomial of (tensorial or total) order at most m ∈ ℕ. Let Ω ⊆ ℝ^d be compact and 0 < a < b. Then there is a constant c > 0 that depends only on m, a, b, q such that it holds for any r > 0 and x ∈ Ω

$$\int_{B_{br}[x]} (P(z − x))^q \, dz \le c \int_{B_{ar}[x]} (P(z − x))^q \, dz.$$
Proof: We suppose the polynomial to be in Taylor form with center x and can thus restrict ourselves to x = 0 now; all other positions then come by translation. We first consider the case of P being a monomial, so P_α(z) := z^α with |α_j| < m for all j = 1, ..., d. Now we transform B_{br}[0] and B_{ar}[0] to the unit square and obtain

$$\int_{B_{br}[0]} (P_α(z))^q \, dz = \int_{B_1[0]} (P_α(rb\, z))^q (br)^d \, dz = b^{d+q|α|} r^{d+q|α|} \int_{B_1[0]} (P_α(z))^q \, dz,$$

$$\int_{B_{ar}[0]} (P_α(z))^q \, dz = \int_{B_1[0]} (P_α(ra\, z))^q (ar)^d \, dz = a^{d+q|α|} r^{d+q|α|} \int_{B_1[0]} (P_α(z))^q \, dz.$$
Hence we conclude

$$\|P_α\|_{L_q(B_{ar}[0])} \le \|P_α\|_{L_q(B_{br}[0])} \le (b/a)^{d(m+1)}\, \|P_α\|_{L_q(B_{ar}[0])},$$

and choose the desired constant as c_{a,b} = (b/a)^{d(m+1)}. This constant will then suffice for any monomial. To deal with the general case, we write

$$P(z) = \sum_{\substack{α ∈ ℕ_0^d \\ |α_j| < m}} c_α z^α.$$
By equivalence of norms in finite dimensional spaces, we have for any such polynomial

$$\sum_{\substack{α ∈ ℕ_0^d \\ |α_j| < m}} |c_α| \cdot \|P_α\|_{L_q(B_a[0])} \le c\, \|P\|_{L_q(B_a[0])}. \tag{3.27.1}$$
We wish to achieve, for a constant independent of r > 0, that

$$\sum_{\substack{α ∈ ℕ_0^d \\ |α_j| < m}} |c_α| \cdot \|P_α\|_{L_q(B_{ar}[0])} \le c\, \|P\|_{L_q(B_{ar}[0])}, \tag{3.27.2}$$

as this would allow us to conclude by the triangle inequality

$$\|P\|_{L_q(B_{br}[0])} \le \sum_{\substack{α ∈ ℕ_0^d \\ |α_j| < m}} |c_α| \cdot \|P_α\|_{L_q(B_{br}[0])} \le c_{a,b} \sum_{\substack{α ∈ ℕ_0^d \\ |α_j| < m}} |c_α| \cdot \|P_α\|_{L_q(B_{ar}[0])} \le c\, \|P\|_{L_q(B_{ar}[0])}.$$
By the transformation law we have for any polynomial P and any r > 0

$$\|P(\cdot)\|_{L_q(B_{ra}[0])} = r^{d/q}\, \|P(r\,\cdot)\|_{L_q(B_a[0])}.$$
As the factor r^{d/q} would appear on both sides and thus cancel out in (3.27.2), it suffices for (3.27.2) to prove that it holds

$$\sum_{\substack{α ∈ ℕ_0^d \\ |α_j| < m}} |c_α| \cdot \|P_α(r\,\cdot)\|_{L_q(B_a[0])} \le c\, \|P(r\,\cdot)\|_{L_q(B_a[0])}.$$
If we now take a look at c_α P_α(r⋅), we see that

$$c_α P_α(r\,\cdot) = c_α r^{|α|} P_α(\cdot).$$
Consequently, we can assign new coefficients c_α(r) := c_α r^{|α|} and reduce the inequality to

$$\sum_{α} |c_α(r)| \cdot \|P_α(\cdot)\|_{L_q(B_a[0])} \le c\, \Big\| \sum_{α} c_α(r) P_α(\cdot) \Big\|_{L_q(B_a[0])}.$$
But with these new coefficients, this is precisely (3.27.1), which we know is true for any choice of coefficients with a constant independent of these. ∎
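The scaling identity at the heart of this proof can be checked exactly in the one-dimensional case. The following sketch (our own illustration, not part of the thesis; all names are ours) uses exact rational arithmetic to verify that the integral of a monomial power over B_{br}[0] equals (br)^{d+q|α|} times the unit-ball integral for d = 1, so that the ratio of the b- and a-ball integrals is (b/a)^{d+q|α|}, independent of r:

```python
from fractions import Fraction

def mono_power_integral(radius, p):
    # Exact value of ∫_{-radius}^{radius} z^p dz for even p
    return 2 * Fraction(radius) ** (p + 1) / (p + 1)

alpha, q = 2, 3              # monomial exponent and the L_q power, so p = q*alpha
a, b, r = 1, 4, Fraction(5, 7)
p = alpha * q

big = mono_power_integral(b * r, p)      # integral over B_{br}[0]
small = mono_power_integral(a * r, p)    # integral over B_{ar}[0]
unit = mono_power_integral(1, p)         # integral over B_1[0]

# d = 1 here, so the scaling exponent d + q|alpha| becomes 1 + p
assert big == Fraction(b * r) ** (1 + p) * unit   # scaling to the unit ball
assert big / small == Fraction(b, a) ** (1 + p)   # r cancels in the ratio
```

The second assertion is exactly the reason the constant c_{a,b} in the lemma can be chosen independently of r.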
3.28 Lemma Let P_f be the polynomial that coincides with S_h f, for fixed f ∈ W_q^m(ℝ^d), on some arbitrary cell ᴄ_h[ƶ] = B_{h/2}[ζ]. Then we have for any 0 ≤ μ < m, any fixed b ≥ ԑ > 0 and sufficiently small 0 < h < h_0 the relation

$$\|f − P_f\|_{W_q^{μ}(B_{bh}[ζ])} \le c\, h^{m−μ}\, \|f\|_{W_q^m(B_{mbh}[ζ])}.$$
Proof: We see, by the Taylor remainder estimate for T_f, the averaged Taylor polynomial of (tensorial or total) order m to f in ζ, and by the previous result with a = 1/2, that

$$\|f − P_f\|_{W_q^{μ}(B_{bh}[ζ])} \le \|f − T_f\|_{W_q^{μ}(B_{bh}[ζ])} + \|P_f − T_f\|_{W_q^{μ}(B_{bh}[ζ])} \le c\, h^{m−μ}\, \|f\|_{W_q^m(B_{mbh}[ζ])} + c\, \|P_f − T_f\|_{W_q^{μ}(B_{h/2}[ζ])}.$$

The second summand can now be dealt with due to quasi-projection properties as in the proof of Lemma 3.17 to obtain

$$\|P_f − T_f\|_{W_q^{μ}(B_{h/2}[ζ])} = \|S_h f − S_h T_f\|_{W_q^{μ}(B_{h/2}[ζ])} \le c\, h^{m−μ}\, \|f\|_{W_q^m(B_{mh}[ζ])} \le c\, h^{m−μ}\, \|f\|_{W_q^m(B_{mbh}[ζ])}. \qquad ∎$$
Now we have assembled everything we need to improve the results of Theorem 3.3 in case of TP-splines and a linear foliation:
3.29 Theorem Let Ϻ ∈ 필_cp^k(ℝ^d). Let {S_h}_{0<h<h_0} be a family of local spline quasi-projection operators on S_h^m(U(Ϻ)) reproducing P^m. Let 1 < q < ∞, 0 < n ≤ m, and let 0 ≤ μ ≤ n − 1 be such that μ < m − 1. Define S_h^ɴ := Ͳ_Ϻ S_h E_N. Then we have for any f ∈ W_q^n(Ϻ) the relation

$$\|f − S_h^{ɴ} f\|_{W_q^{μ}(Ϻ)} \le c\, h^{n−μ}\, \|f\|_{W_q^n(Ϻ)}.$$
Proof: In the proof, we are heading for an application of the previous lemma, so we have to check its requirements and applicability. Let {ᴄ_h(ƶ)}_{ƶ∈Ƶ_{a,h}} = {ᴄ_h(ζ − h/2 · 𝟙)}⟨2⟩ be the set of cells of length h that make up C_h(U_{ah}(Ϻ)). The points ζ with ζ − h/2 · 𝟙 ∈ Ƶ_{a,h} ⊆ hℤ^d make up the centers of the cells, so that each cell has the form B_{h/2}[ζ]. Without restriction, we demand a < 1/(2d) in Lemma 3.27. Let 푆_h := S_h F⃡ for F⃡ = E_N f on these cells, and let 푠_h be its trace on Ϻ. Let ᴄ_h be an arbitrary cell, seen as an axis-aligned cube B_{h/2}[ζ] with center ζ. Then (푆_h)|_{ᴄ_h} = P_{ᴄ_h} for some polynomial P_{ᴄ_h} of order m, and we denote by p_{ᴄ_h} = Ͳ_Ϻ P_{ᴄ_h} the trace on all of Ϻ of P_{ᴄ_h} considered as a polynomial on ℝ^d. Because any cell B_{h/2}[ζ] is compactly contained in B_h[ζ], we have that

$$\|f − 푠_h\|_{W_q^{μ}(Ϻ)} \le c \sum_{ζ ∈ Ƶ_{a,h}} \|f − p_{ᴄ_h}\|_{W_q^{μ}(Ϻ ∩ B_h[ζ])}.$$
We have by Lemmas 3.25 and 3.26 that for sufficiently small h > 0, to any B_h[ζ] there is an appropriate pair (Ψ, Ω) = (Ψ(ᴄ_h), Ω(ᴄ_h)) such that the preimage of B_h[ζ] is compactly contained in Ω, and we restrict ourselves to this Ω and Ψ for the further treatment of W_q^μ(Ϻ ∩ B_h[ζ]). We are now going to bound ‖f − p_{ᴄ_h}‖_{W_q^μ(Ϻ ∩ B_h[ζ])} suitably: By Lemmas 3.25 and 3.26, we even have for ξ = Ψ^{-1}(ζ) and some constant c_1 > 0, valid over the whole inverse atlas as long as h is sufficiently small, that

$$Ψ^{-1}(B_h[ζ]) ⊆ B_{c_1 h}[ξ] ⋐ Ω.$$
Now we abbreviate P = P_{ᴄ_h}, p = p_{ᴄ_h} and P⃡ = E_N p. We are then going to prove that, for b̃ independent of f, P, ζ,

$$\|f − p\|_{W_q^{μ}(B_h[ζ] ∩ Ϻ)} \le c\, h^{n−μ−ĸ/q}\, \|F⃡\|_{W_q^n(B_{b̃h}[ζ])}.$$

⟨2⟩ 𝟙 is the all-one vector in ℝ^d.
Because we have Ψ^{-1}(B_h[ζ]) ⊆ B_{c_1 h}[ξ] for some fixed c_1, we have that

$$\|f − p\|^q_{W_q^{μ}(B_h[ζ] ∩ Ϻ)} \le c \sum_{|α| \le μ} \int_{ω ∩ B_{c_1 h}[ξ]} (∂^{α}(f ∘ ψ − p ∘ ψ))^q.$$
Therein, we have used implicitly that if ω ∩ B_{c_1 h}[ξ] ⊆ ω, then for the collection {(ψ_{ζ,j}, ω_{ζ,j})}_{j=1}^{ℓ} ⊆ 픸_Ϻ that satisfies ψ_{ζ,j}(ω_{ζ,j}) ∩ B_h[ζ] ≠ ∅ the additional relation

$$\sum_{1 \le j \le ℓ} \sum_{|η| \le μ} \int_{ψ_{ζ,j}^{-1}(B_h[ζ] ∩ Ϻ)} (∂^{η}(f ∘ ψ_{ζ,j} − p ∘ ψ_{ζ,j}))^q \le c \sum_{|α| \le μ} \int_{ω ∩ B_{c_1 h}[ξ]} (∂^{α}(f ∘ ψ − p ∘ ψ))^q$$

holds for a constant independent of f; but we can require this by construction of our inverse atlas. Now we take an arbitrary multi-index α = (α_k, α_ĸ) with |α| ≤ μ < m − 1, where α_k consists of the first k and α_ĸ of the last ĸ entries of α, and define consequently ∂^α = ∂_x^{α_k} ∂_z^{α_ĸ}. Using the equivalence of norms defined on ω and on Ω for constant extension by E_N due to Theorem 2.26, we obtain
$$\int_{ω ∩ B_{c_1 h}[ξ]} (∂^{α}(f ∘ ψ − p ∘ ψ))^q \le c\, h^{−ĸ} \int_{B_{c_1 h}[ξ]} (∂^{α}(F⃡ ∘ Ψ − P⃡ ∘ Ψ))^q \le c\, h^{−ĸ} \int_{B_{c_1 h}[ξ]} (∂^{α}(F⃡ ∘ Ψ − P ∘ Ψ))^q + c\, h^{−ĸ} \int_{B_{c_1 h}[ξ]} (∂^{α}(P⃡ ∘ Ψ − P ∘ Ψ))^q. \tag{3.29.1}$$
We start with the first summand, where we have, with norm equivalence under diffeomorphisms, the C-boundedness of Ψ as applied before, Lemma 3.25 for a suitable c_2 > 0 and the approximation assumptions on P due to Lemma 3.28:

$$\int_{B_{c_1 h}[ξ]} (∂^{α}(F⃡ ∘ Ψ − P ∘ Ψ))^q \le c \int_{Ψ(B_{c_1 h}[ξ])} \sum_{|β| \le |α|} (∂^{β}(F⃡ − P))^q \le c \int_{B_{c_2 h}[ζ]} \sum_{|β| \le |α|} (∂^{β}(F⃡ − P))^q \le c\, h^{q(n−|α|)}\, \|F⃡\|^q_{W_q^n(B_{mc_2 h}[ζ])}. \tag{3.29.2}$$
Therein we ensure that h < h_0 is so small that B_{mc_2 h}[ζ] ⊆ Ψ(Ω), possibly restricting h_0 further. Now we have arrived at our goal with the first summand of (3.29.1), as with b̃ ≥ mc_2 clearly

$$\|F⃡\|^q_{W_q^n(B_{mc_2 h}[ζ])} \le \|F⃡\|^q_{W_q^n(B_{b̃h}[ζ])}.$$

Turning to the second summand, we use B^{ω}_{c_3 h}[ξ] := B_{c_3 h}[Π_ω(ξ)] ∩ ω for suitable c_3 ≥ c_1 such that B_{c_1 h}[ξ] ⊆ B_{c_3 h}[Π_ω(ξ)]. Then we distinguish the two cases |α_ĸ| = 0 and |α_ĸ| > 0 again, as we did in the proof of Theorem 3.3: In case |α| = |α_k| we observe that, because the normal foliation is linear, the restriction of a polynomial of fixed maximal degree to a leaf is a polynomial of fixed maximal degree there, and thus we can apply the version of Friedrichs' Inequality from Theorem 2.38: Because the parameterisation for the normal extension is linear in the last ĸ variables for any fixed x ∈ ω, polynomial degree is retained in these variables. Then for R := P ∘ Ψ − P⃡ ∘ Ψ we obtain the relation

$$\int_{B_{c_1 h}[ξ]} (∂_x^{α_k} R(x,z))^q \, dz\, dx \le \int_{B_{c_3 h}[Π_ω(ξ)]} (∂_x^{α_k} R(x,z))^q \, dz\, dx = \int_{B^{ω}_{c_3 h}[ξ]} \int_{B^{ĸ}_{c_3 h}[0]} (∂_x^{α_k} R(x,z))^q \, dz\, dx \le c\, h^q \sum_{|β|=1} \|∂_x^{α_k} ∂_z^{β} R\|^q_{L_q(B_{c_3 h}[ξ])}. \tag{3.29.3}$$
Thereby, we have transformed the case |α| = |α_k| into the case |α + β| = |α_k| + 1, with enhancement by some additional z-derivative ∂_z^β, and we maintain |α + β| < m, |α + β| ≤ n. Since we have also gained h^q, we can handle this case right along with the other case and just consider α with |α_ĸ| > 0 now: There holds ∂^{α_ĸ}(P⃡ ∘ Ψ) = ∂^{α_ĸ}(F⃡ ∘ Ψ) = 0, so we obtain

$$∂_x^{α_k} ∂_z^{α_ĸ}(P ∘ Ψ − P⃡ ∘ Ψ) = ∂_x^{α_k} ∂_z^{α_ĸ}(P ∘ Ψ − F⃡ ∘ Ψ).$$
Hence, we have, with the norm equivalences of Theorem 2.26 in particular, a suitable constant c_4 ≥ c_2 such that Ψ(B_{c_3 h}[ξ]) ⊆ B_{c_4 h}[ζ] due to Lemma 3.25, and the approximation result from Lemma 3.28:

$$\|∂^{α}(P ∘ Ψ − F⃡ ∘ Ψ)\|_{L_q(B_{c_3 h}[ξ])} \le c \sum_{|β| \le |α|} \|∂^{β}(P − F⃡)\|_{L_q(B_{c_4 h}[ζ])} \le c\, h^{n−|α|}\, \|F⃡\|_{W_q^n(B_{mc_4 h}[ζ])}. \tag{3.29.4}$$
Possibly choosing h_0 smaller and b̃ greater, we thereby obtain for the second summand of (3.29.1) also the relation

$$\|∂^{α}(P ∘ Ψ − P⃡ ∘ Ψ)\|^q_{L_q(B_{c_1 h}[ξ])} \le c\, \|∂^{α}(P ∘ Ψ − F⃡ ∘ Ψ)\|^q_{L_q(B_{c_3 h}[ξ])} \le c\, h^{q(n−|α|)}\, \|F⃡\|^q_{W_q^n(B_{b̃h}[ζ])}. \tag{3.29.5}$$
Now we have to sum over all multi-indices α and obtain

$$\|f − p\|_{W_q^{μ}(B_h[ζ] ∩ Ϻ)} \le c\, h^{n−μ−ĸ/q}\, \|F⃡\|_{W_q^n(B_{b̃h}[ζ])}.$$
It remains to sum over all cell centers ζ. To this end, we have to choose b̃ large enough to work for each cell center ζ in at least one extended parameter space. This is no problem if h_0 is small enough. We further have to take care to choose some b > 0 large enough such that any B_{b̃h}[ζ] is contained in U_{bh}(Ϻ) independently of h. And we have to choose h_0 small enough to achieve that U_{bh}(Ϻ) has the closest point property. Then, because b̃ was fixed over a fixed extended parameter space and the number of these is finite, we can choose b̃ maximal among them to become independent of the extended parameter spaces. And because clearly any z ∈ U_{bh}(Ϻ) can be in at most n_0 balls B_{b̃h}[ζ] for some fixed n_0, we deduce

$$\sum_{ζ ∈ Ƶ_{a,h}} \|f − p\|_{W_q^{μ}(B_h[ζ] ∩ Ϻ)} \le c\, h^{n−μ−ĸ/q}\, \|F⃡\|_{W_q^n(U_{bh}(Ϻ))} \le c\, h^{n−μ}\, \|f\|_{W_q^n(Ϻ)}. \qquad ∎$$
3.30 Remark: Again, if Ϻ ∈ 필_bd^k(ℝ^d) has a boundary, then the same holds at least for S_h^ɴ := R_Ϻ Ͳ_{Ϻ̂} S_h E_N E_Ϻ^{Ϻ̂}, where E_Ϻ^{Ϻ̂} is the continuous extension operator that maps W_q^n(Ϻ) to W_q^n(Ϻ̂) for the Ϻ̂ ∈ 필_cp^k(ℝ^d) such that Ϻ ⋐ Ϻ̂.
3.3.4 Approximation in Fractional Orders and under fixed Interpolation Constraints
Both of the results for approximation by S_h^ɴ we have presented so far allow for a generalisation to fractional Slobodeckij Hilbert spaces on ESMs. But in contrast to the Euclidean setting, we have to deal with the chart-wise definition of these spaces, and so the argumentation becomes more involved; the key ingredient is to determine two inverse atlases such that one is a restriction of the other. Two such inverse atlases are not hard to create in the case of an ESM without boundary considered here: By compactness of the ESM, we can simply take sufficiently large compactly contained subsets of the parameter spaces of a given inverse atlas, restrict the parameterisations, and still maintain an inverse atlas with all relevant properties. Then we can prove the following result:
3.31 Corollary
1. Let, in the situation of Corollary 3.18, 0 < r ≤ m and 0 ≤ ϱ ≤ ⌊r − ĸ⌋ be reals such that ⌈ϱ⌉ < m − ĸ. Then any f ∈ H^r(Ϻ) satisfies the relation

$$\|f − S_h^{ɴ} f\|_{H^{\varrho}(Ϻ)} \le c\, h^{r−\varrho}\, \|f\|_{H^r(Ϻ)}.$$

This relation remains valid if one replaces the extension operator E_N by any other foliation-based extension operator E_픽.

2. Let, in the situation of Theorem 3.29, 0 < r ≤ m and 0 ≤ ϱ ≤ ⌊r − 1⌋ be reals such that ⌈ϱ⌉ < m − 1. Then any f ∈ H^r(Ϻ) satisfies the relation

$$\|f − S_h^{ɴ} f\|_{H^{\varrho}(Ϻ)} \le c\, h^{r−\varrho}\, \|f\|_{H^r(Ϻ)}.$$

This relation remains valid if one replaces the extension operator E_N by any other extension operator E_핍 based on a linear foliation 핍.
Proof: With a given inverse atlas (ψ_i, ω_i)_{i∈I} ∈ ℿ(Ϻ), we know by definition of ℿ(Ϻ) that there is another inverse atlas (ψ_i^⋆, ω_i^⋆)_{i∈I} such that for any i ∈ I

$$ω_i ⋐ ω_i^{⋆} \qquad \text{and} \qquad ψ_i = (ψ_i^{⋆})|_{ω_i}.$$
Shrinking each ω_i just a little, so that still ω_i ⋐ ω_i^⋆, we can further demand that actually (ψ_i^⋆, ω_i^⋆)_{i∈I} ∈ ℿ(Ϻ) as well. Then we will frequently apply the interpolation property from Proposition 2.30 for the operator Id − S_h^ɴ, for different choices of the spaces and norms involved. Now we fix one (ψ, ω) and the corresponding (ψ^⋆, ω^⋆). We first have to deduce that, in the situations of Corollary 3.18 or Theorem 3.29, for the respective choices of n, μ,

$$\|(f − S_h^{ɴ} f) ∘ ψ\|_{H^{μ}(ω)} \le c\, h^{n−μ}\, \|f ∘ ψ^{⋆}\|_{H^n(ω^{⋆})}. \tag{3.31.1}$$

To see this, we note that S_h^ɴ is local in the following sense: For sufficiently small h we have that ‖(g − f) ∘ ψ^⋆‖_{H^n(ω^⋆)} = 0 implies ‖(S_h^ɴ(g − f)) ∘ ψ‖_{H^μ(ω)} = 0 for any g ∈ H^n(Ϻ). In particular, this holds for g = f^⋆ := E_{ω^⋆}^{Ϻ} f as obtained by the continuous manifold extension E_{ω^⋆}^{Ϻ}: H^n(ω^⋆) → H^n(Ϻ) of Theorem 2.36. And we have further that

$$\|f^{⋆}\|_{H^n(Ϻ)} \le c\, \|f ∘ ψ^{⋆}\|_{H^n(ω^{⋆})}.$$

This gives us (3.31.1) directly. Then we apply the interpolation property. First, we choose suitable integers n_1 = ⌊ϱ⌋, n_2 = ⌈ϱ⌉, to which (3.31.1) applies if we set μ = n_i.
This is guaranteed if, respectively, either n_2 ≤ n − ĸ and n_2 < m − ĸ (Corollary 3.18) or n_2 ≤ n − 1 and n_2 < m − 1 (Theorem 3.29). Then we choose 0 < θ < 1 such that ϱ = θn_1 + (1 − θ)n_2. We obtain for m_1 = m_2 = n in the interpolation property for the operator Id − S_h^ɴ the relation

$$\|(f − S_h^{ɴ} f) ∘ ψ\|_{H^{\varrho}(ω)} \le c\, h^{θ(n−n_1)+(1−θ)(n−n_2)}\, \|f ∘ ψ^{⋆}\|_{H^n(ω^{⋆})} \le c\, h^{n−\varrho}\, \|f ∘ ψ^{⋆}\|_{H^n(ω^{⋆})}. \tag{3.31.2}$$
The presence of the operation "∘ ψ" therein does no harm, as we can always demand that an arbitrary g ∈ H^n(ω^⋆) has the form g = g̃ ∘ ψ^⋆⟨3⟩. Now we apply the interpolation property again, for the choices ϱ_1 = ϱ_2 = ϱ and m_1 = ⌊r⌋, m_2 = ⌈r⌉. By hypothesis, both choices satisfy the requirements for n, and so we have for i = 1, 2

$$\|(f − S_h^{ɴ} f) ∘ ψ\|_{H^{\varrho}(ω)} \le c\, h^{m_i−\varrho}\, \|f ∘ ψ^{⋆}\|_{H^{m_i}(ω^{⋆})},$$

and clearly r = θm_1 + (1 − θ)m_2 for suitable 0 < θ < 1. We can do so by virtue of (3.31.2). Then we obtain the relation

$$\|(f − S_h^{ɴ} f) ∘ ψ\|_{H^{\varrho}(ω)} \le c\, h^{θ(m_1−\varrho)+(1−θ)(m_2−\varrho)}\, \|f ∘ ψ^{⋆}\|_{H^r(ω^{⋆})} = c\, h^{r−\varrho}\, \|f ∘ ψ^{⋆}\|_{H^r(ω^{⋆})}.$$

⟨3⟩ Therein, we can choose g̃ = R_{ω^⋆} Λ_{(Ψ^⋆)^{-1}} R_{Ψ^⋆(Ω^⋆)} E^⋆ g for E^⋆ = E_{ℝ^k}^{ℝ^d} E_{ω^⋆}^{ℝ^k}, extended parameter space Ω^⋆ corresponding to ω^⋆, and Λ_{(Ψ^⋆)^{-1}} defined by Λ_{(Ψ^⋆)^{-1}} G = G ∘ (Ψ^⋆)^{-1}. Then we have for an arbitrary x ∈ ω^⋆ that g̃(ψ^⋆(x)) = (E^⋆ g)(x) = g(x) whenever g is smooth. As all operators that appeared were bounded, the general result then comes directly via density.
To deduce the overall result, we have to sum over the two inverse atlases on both sides, which we can easily accomplish. And we have to see that, as one directly verifies, the two Sobolev norms on Ϻ subordinate to our two inverse atlases are equivalent. ∎
3.32 Remark: Again, the case of an ESM with boundary can, at least theoretically, be deduced directly by the extension operator E_Ϻ^{Ϻ̂}: H^r(Ϻ) → H^r(Ϻ̂) for a compact submanifold Ϻ̂ such that Ϻ ⋐ Ϻ̂. Thereby, the same approximation orders as in the corollary are achievable, at least theoretically.
Now we take into account the results on the boundedness of operators that we have proven in particular in Theorem 3.21. If we do that, we can also deduce results for approximation orders under fixed interpolation constraints. To be more precise, we obtain the following result on suitably constructed Ξ-interpolating quasi-projections, which we shall later apply in the approximation of energy functional minima:
3.33 Corollary Let E_Ϻ = E_Ϻ^{ℝ^d}: H^ϱ(Ϻ) → H^{ϱ+ĸ/2}(ℝ^d) be the universal bounded extension operator, and define the interpolating approximation operator as

$$S_h^{Ξ,ɴ} := Ͳ_Ϻ (S_h E_N + I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N).$$
Then this operator retains the approximation orders of S_h^ɴ according to Cor. 3.31 under the respective conditions, as long as in addition to the respective requirements it holds that k/2 < ϱ and ϱ + ĸ/2 ≤ m − 1.
Proof: If we consider the approximation error, then for admissible ϱ by the triangle inequality

$$\|f − S_h^{Ξ,ɴ} f\|_{H^{\varrho}(Ϻ)} \le \|f − S_h^{ɴ} f\|_{H^{\varrho}(Ϻ)} + \|Ͳ_Ϻ I_h^{Ξ} E_Ϻ (f − S_h^{ɴ} f)\|_{H^{\varrho}(Ϻ)}.$$

So it suffices to prove continuity of Ͳ_Ϻ I_h^{Ξ} E_Ϻ, which is directly implied by its construction. ∎
3.34 Remark: We use this last result only for deducing "achievable orders" — again, it has only little practical value due to the presence of E_Ϻ. But in that respect, it is indeed very useful. And as usual, it generalises to the case of ESMs with boundary by continuous (universal) extension from the ESM with boundary to the ESM without boundary that contains it by our assumptions.
3.3.5 Approximating Normal Derivatives
As an enhancement of the previous results, we will now be interested in the approximation power of the operator S_h^ɴ when it comes to normal derivatives: These will soon turn out to tell us how large the deviation of tangent Euclidean derivatives from their intrinsic counterparts is — the tangential derivatives we introduce in chapter four. In the upcoming relations, we will always demand that the derivative D_N or gradient ∇_N is taken along the respective normal frame of the closest point if we are not on the ESM itself. And the index "N" indicates projection onto the normal space according to the specific, locally smooth map N that assigns an (orthonormal) normal frame to each element of the ESM.
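The key structural fact exploited throughout this subsection is that the constant extension E_N f is constant along normals, so its normal derivative vanishes. A minimal numerical sketch of this (our own illustration for the unit circle in ℝ², codimension ĸ = 1; all names are ours):

```python
import math

def closest_point(x, y):
    # Closest-point projection Pi onto the unit circle in R^2
    r = math.hypot(x, y)
    return x / r, y / r

def F(x, y):
    # Constant (normal) extension E_N f of f(theta) = sin(2*theta)
    px, py = closest_point(x, y)
    return math.sin(2 * math.atan2(py, px))

def normal_derivative(x, y, eps=1e-6):
    # Central finite difference of F along the unit normal nu = (x, y)/|(x, y)|
    r = math.hypot(x, y)
    nx, ny = x / r, y / r
    return (F(x + eps * nx, y + eps * ny) - F(x - eps * nx, y - eps * ny)) / (2 * eps)

t = 1.234
dn = normal_derivative(math.cos(t), math.sin(t))
assert abs(dn) < 1e-9  # D_nu (E_N f) = 0: moving along the normal keeps the closest point fixed
```

Moving along the normal does not change the closest point, so the two function values in the difference quotient coincide, exactly as used in the proofs below in the form D_ν F⃡ = 0.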
The basic idea is now the same as in deriving the corresponding approximation results. We begin by stating them for integer orders and deduce the fractional ones, similarly to Corollary 3.31, in the aftermath, which we complete by considering the case with interpolation constraints. We give proofs in correspondence to both Corollary 3.18 and Theorem 3.29, as the proof for the first will generalise easily to other approximation methods and other orthogonal foliations, while the latter is more specific to splines and the normal foliation.
3.35 Corollary Let, in the situation of Cor. 3.18, in particular 1 + ĸ ≤ n and 1 + ĸ < m hold for n ≤ m ∈ ℕ defined there. Then the normal derivatives satisfy

$$\big\| \|∇_N(S_h E_N f)\|_2 \big\|_{L_2(Ϻ)} \le c\, h^{n−1}\, \|f\|_{H^n(Ϻ)}.$$
Proof: In order to prove this result, we will essentially reduce things so far that dealing with ‖∇_N S_h‖_2 becomes a special case of the proof of Theorem 3.3. That proof started from a pair of parameterisation and parameter space (ψ, ω) according to the inverse atlas to obtain (3.3.1). To obtain a similar relation, we have to look at our usual map N = (ν^1, ..., ν^ĸ), giving us a normal frame on the image of ω that is smooth and C-bounded on ω, of which we know it exists. We then decompose ‖∇_N S_h‖_2 as follows into a finite sum of normal directional derivatives: It holds in any ψ(x) for x ∈ ω that

$$\|∇_N S_h\|_2^2 = \sum_{ℓ=1}^{ĸ} |D_{ν^ℓ} S_h|^2 \le c \Big( \sum_{ℓ=1}^{ĸ} |D_{ν^ℓ} S_h| \Big)^2 = c\, \|∇_N S_h\|_1^2,$$

and consequently we have, with the triangle inequality in the last step,

$$\big\| \|∇_N S_h\|_2 \big\|_{L_2(ψ(ω))} \le c\, \big\| \|(∇_N S_h) ∘ ψ\|_1 \big\|_{L_2(ω)} \le c \sum_{i=1}^{ĸ} \|(D_{ν^i} S_h) ∘ ψ\|_{L_2(ω)}. \tag{3.35.1}$$
Then we choose an arbitrary D_ν = D_{ν^j} and define Ω_{ah} as in the proof of Theorem 3.3. We can thus obtain, as in (3.3.1), that because D_ν F⃡ = 0 it holds

$$\begin{aligned} \|(D_ν S_h) ∘ ψ\|_{L_2(ω)} &\le c\, h^{−ĸ/2}\, \|E_N(D_ν S_h) ∘ Ψ\|_{L_2(Ω_{ah})} = c\, h^{−ĸ/2}\, \|E_N(D_ν S_h) ∘ Ψ − (D_ν F⃡) ∘ Ψ\|_{L_2(Ω_{ah})} \\ &\le c\, h^{−ĸ/2}\, \|(D_ν S_h) ∘ Ψ − (D_ν F⃡) ∘ Ψ\|_{L_2(Ω_{ah})} + c\, h^{−ĸ/2}\, \|(D_ν S_h) ∘ Ψ − E_N(D_ν S_h) ∘ Ψ\|_{L_2(Ω_{ah})}. \end{aligned}$$
Now we have to notice that on Ω_{ah}, by definition of N and its component column ν = ν^j, it holds that (D_ν S_h) ∘ Ψ = ∂_j(S_h ∘ Ψ) for some standard coordinate direction j. This can be deduced directly from the definition of Ψ. The first summand can then be bounded by using the evident relation |D_ν(S_h − F⃡)| ≤ ‖∇(S_h − F⃡)‖_2 in the form

$$\|(D_ν(S_h − F⃡)) ∘ Ψ\|_{L_2(Ω_{ah})} \le \|(S_h − F⃡) ∘ Ψ\|_{H^1(Ω_{ah})} \le c\, \|S_h − F⃡\|_{H^1(U_{ah})} \le c\, h^{n−1}\, \|F⃡\|_{H^n(U_{bh})}.$$
For the second summand, we are by definition in the case |α_ĸ| = 0 of the respective proof, and thus proceed similarly via Friedrichs' inequality to obtain for R_h := (D_ν S_h) ∘ Ψ − (E_N(D_ν S_h)) ∘ Ψ the relation

$$\|R_h\|^p_{L_2(Ω_{ah})} \le c \sum_{ℓ=1}^{ĸ} h^{p·ℓ} \sum_{|β|=ℓ} \|∂_z^{β} R_h\|^p_{L_p(Ω_{ah})}. \tag{3.35.2}$$
Now we notice that it holds (E_N(D_ν S_h))(Ψ(x, z)) = (E_N(D_ν S_h))(Ψ(x, 0)), whereby in particular ∂_z^β((E_N(D_ν S_h)) ∘ Ψ) = 0 = ∂_z^β ∂_j(F⃡ ∘ Ψ). Consequently we can deduce by virtue of Friedrichs' inequality that it holds

$$\|R_h\|^p_{L_2(Ω_{ah})} \le c \sum_{ℓ=1}^{ĸ} h^{p·ℓ} \sum_{|β|=ℓ} \|∂_z^{β} ∂_j(S_h ∘ Ψ − F⃡ ∘ Ψ)\|^p_{L_p(Ω_{ah})} \le c \sum_{ℓ=1}^{ĸ} h^{p·ℓ} \sum_{|η|=ℓ+1} \|∂_z^{η}(S_h ∘ Ψ − F⃡ ∘ Ψ)\|^p_{L_p(Ω_{ah})}.$$
Because we have required that ĸ + 1 < m and ĸ + 1 ≤ n, we can proceed as before in (3.3.4) and obtain, via the usual norm equivalences, for any of the shifted multi-indices η in the last sum that

$$\|∂_z^{η} R_h\|^p_{L_2(Ω_{ah})} \le \|∂_z^{η}(S_h ∘ Ψ − F⃡ ∘ Ψ)\|^p_{L_2(Ω_{ah})} \le c\, h^{(n−μ−|η|)p+ĸ}\, \|f\|^2_{H^n(Ϻ)},$$
whereby the desired result is obtained by insertion and summation. Note in particular that ∇_N S_h is given by a finite linear combination of normal directional derivatives on each pair (ψ, ω) according to the inverse atlas, and that the overall result is invariant under orthogonal transformations within the normal space. ∎
3.36 Corollary In the situation of Theorem 3.29, as long as in particular 2 ≤ n and 2 < m, the normal derivatives satisfy

$$\big\| \|∇_N(S_h E_N f)\|_2 \big\|_{L_2(Ϻ)} \le c\, h^{n−1}\, \|f\|_{H^n(Ϻ)}.$$
Proof: Again, the proof is very similar to that of 3.29: We have to reuse the decomposition (3.35.1) and replace, as in the last proof, 푠_h in the proof of 3.35 by (D_ν S_h)|_Ϻ, and consequently p in the proof of 3.35 by D_ν P, for an arbitrary D_ν = D_{ν^i}, in the respective relations. Then we obtain, similarly to (3.29.1), that

$$\int_{ω ∩ B_{c_1 h}[ξ]} ((D_ν P) ∘ ψ)^2 \le c\, h^{−ĸ} \int_{B_{c_1 h}[ξ]} ((D_ν F⃡) ∘ Ψ − (D_ν P) ∘ Ψ)^2 + c\, h^{−ĸ} \int_{B_{c_1 h}[ξ]} ((D_ν P) ∘ Ψ − (E_N(D_ν P)) ∘ Ψ)^2.$$
For the first summand, we conclude, similarly to (3.29.2), that

$$\int_{B_{c_1 h}[ξ]} ((D_ν F⃡) ∘ Ψ − (D_ν P) ∘ Ψ)^2 \le c\, h^{2(n−1)}\, \|F⃡\|^2_{H^n(B_{mc_2 h}[ζ])}.$$

Therein, we use, as in the previous proof, that for an arbitrary G it holds that (D_{ν^j} G) ∘ Ψ = ∂_j(G ∘ Ψ). For the second summand, we can also proceed as before and apply Friedrichs' Inequality for leafwise finite dimensional polynomials: The directional derivative along a normal is again a polynomial. So we obtain, also in this case similarly to (3.29.3), that for R_h := (D_ν P) ∘ Ψ − (E_N(D_ν P)) ∘ Ψ

$$\int_{B_{c_1 h}[ξ]} (R_h)^2 \le c\, h^2 \sum_{|β|=1} \|∂_z^{β} R_h\|^2_{L_2(B_{c_3 h}[ξ])}.$$
Now we repeat the observations made in the proof of Corollary 3.35 after deducing (3.35.2). Thereby we obtain again, for j as implied by ν = ν^j,

$$\sum_{|β|=1} \|∂_z^{β} R_h\|^2_{L_2(B_{c_3 h}[ξ])} = \sum_{|β|=1} \|∂_z^{β} ∂_j(P ∘ Ψ)\|^2_{L_2(B_{c_3 h}[ξ])} \le \sum_{|η|=2} \|∂_z^{η}(F⃡ ∘ Ψ − P ∘ Ψ)\|^2_{L_2(B_{c_3 h}[ξ])}.$$
Once again, this gives us, as in (3.29.4),

$$\int_{B_{c_1 h}[ξ]} ((D_ν P) ∘ Ψ − (E_N(D_ν P)) ∘ Ψ)^2 \le c\, h^{2n−2}\, \|F⃡\|^2_{H^n(B_{mc_4 h}[ζ])}.$$

Now we can proceed precisely as in the proof of 3.29. ∎
3.37 Corollary In the situation of Cor. 3.31, as long as in particular ϱ = 1 is a valid choice, the normal derivatives satisfy the relation

$$\big\| \|∇_N(S_h E_N f)\|_2 \big\|_{L_2(Ϻ)} \le c\, h^{r−1}\, \|f\|_{H^r(Ϻ)}.$$
Proof: As in the proof of Cor. 3.31, we hope to apply the interpolation property. Thus we choose again the two inverse atlases from the proof of that corollary. For parameterisation ψ^⋆ and parameter space ω^⋆ according to the inverse atlas, we can choose N^⋆ smooth on ω^⋆, and we restrict ourselves once again to an arbitrary column ν = ν^j. Then the linear map f ↦ Ͳ_Ϻ D_ν(S_h E_N f) is, by virtue of the last two corollaries, bounded from H^{m_i}(ω^⋆) to L_2(ω), and it holds that

$$\|(D_ν(S_h E_N f)) ∘ ψ\|_{L_2(ω)} \le c\, h^{m_i−1}\, \|f ∘ ψ^{⋆}\|_{H^{m_i}(ω^{⋆})}$$

for any m_i, i = 1, 2 that are valid choices for n, in particular for m_1 = ⌊r⌋ and m_2 = ⌈r⌉. With a suitable choice of 0 < θ < 1 such that r = θm_1 + (1 − θ)m_2, we then conclude by the interpolation property

$$\|(D_ν(S_h E_N f)) ∘ ψ\|_{L_2(ω)} \le c\, h^{r−1}\, \|f ∘ ψ^{⋆}\|_{H^r(ω^{⋆})}.$$
Thereby we can deduce as before that

$$\big\| \|(∇_N(S_h E_N f)) ∘ ψ\|_2 \big\|_{L_2(ω)} \le c\, h^{r−1}\, \|f ∘ ψ^{⋆}\|_{H^r(ω^{⋆})}.$$

Again, finite summation yields the desired result. ∎
Deducing a result for the case with interpolation constraints requires another auxiliary result, stated in the following lemma:
3.38 Lemma Let G ∈ H^r(U(Ϻ)) for r ≥ 2. Then for any ԑ > 0 with ĸ/2 + ԑ ≤ r − 1

$$\big\| \|∇_N G\|_2 \big\|_{L_2(Ϻ)} \le c\, \|G\|_{H^{1+ĸ/2+ԑ}(U(Ϻ))}.$$
Proof: We choose an inverse atlas (ψ_i, ω_i)_{i∈I} for Ϻ and a corresponding extended inverse atlas (Ψ_i, Ω_i)_{i∈I} for U(Ϻ). Then we select again arbitrary but corresponding (ψ, ω), (Ψ, Ω) according to these, omitting the index i from now on for this specific choice. By construction of Ψ and Ω, we can demand that on Ψ(Ω) the map x ↦ N(Π_Ϻ(x)) is well-defined, smooth and C-bounded. We again call ν^1, ..., ν^ĸ the columns of the orthonormal normal frame N and choose, as before, an arbitrary ν = ν^j. First we demand G to be smooth and state that for the directional derivative D_ν G of G along ν it holds that |D_ν G|^2 ≤ c ‖∇G‖_2^2, both on ψ(ω) and on Ψ(Ω). Moreover, we even have for any σ ≥ 0 such that G ∈ H^{1+σ}(U(Ϻ)) also

$$\|D_ν G\|_{H^{σ}(Ψ(Ω))} \le c\, \|G\|_{H^{1+σ}(Ψ(Ω))} \le c\, \|G\|_{H^{1+σ}(U(Ϻ))}.$$
This is obvious for σ = 0. For σ = n ∈ ℕ we can obtain this by the product rule and the fact that N can be presumed to be C-bounded on Ψ(Ω). Induction and the triangle inequality then give the required result. Consequently, we can also deduce (via density) that the map G ↦ D_ν G is in fact a bounded linear map from H^m(Ψ(Ω)) to H^{m−1}(Ψ(Ω)) for all m ∈ ℕ. Now we apply the interpolation property. So let ϱ be fractional and choose 0 < θ < 1 such that ϱ = θ⌊ϱ⌋ + (1 − θ)⌈ϱ⌉. Then an easy calculation shows that

$$\varrho − 1 = θ\lfloor\varrho\rfloor + (1 − θ)\lceil\varrho\rceil − 1 = θ\lfloor\varrho − 1\rfloor + (1 − θ)\lceil\varrho − 1\rceil.$$
Thus, by the interpolation property from Proposition 2.30, the map is also bounded from H^ϱ(Ψ(Ω)) to H^{ϱ−1}(Ψ(Ω)). This gives us the fractional case. To deduce the claimed relation, we have first to see that

$$\big\| \|∇_N G\|_2 \big\|^2_{L_2(ψ(ω))} = \sum_{ℓ=1}^{ĸ} \int_{ψ(ω)} |D_{ν^ℓ(x)} G|^2 \, dx.$$
Then obviously ‖D_ν G‖_{L_2(ψ(ω))} ≤ ‖D_ν G‖_{H^ԑ(ψ(ω))} for any ԑ > 0. By the chart trace theorem, so Theorem 2.33, it holds in particular that

$$\|D_ν G\|_{H^{ԑ}(ψ(ω))} \le c\, \|D_ν G\|_{H^{ԑ+ĸ/2}(Ψ(Ω))},$$

and by our previous findings this is bounded by c ‖G‖_{H^{1+ԑ+ĸ/2}(U(Ϻ))}. To deduce the ultimate relation, we just have to sum over the finite inverse atlas. ∎
3.39 Corollary Let, in the situation of the last corollaries, Ξ ⊆ Ϻ be a finite set of points. Then it holds with the operator S_h^{Ξ,ɴ} instead of S_h^ɴ and with

$$S_h^{Ξ,ɴ,ᴜ} := S_h E_N + I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N,$$

its result before application of Ͳ_Ϻ, that for any small ԑ > 0

$$\big\| \|∇_N(S_h^{Ξ,ɴ,ᴜ} f)\|_2 \big\|_{L_2(Ϻ)} \le c\, h^{r−\max(1,k/2)−ԑ}\, \|f\|_{H^r(Ϻ)}.$$
Proof: We start with the triangle inequality to obtain

$$\big\| \|∇_N(S_h^{Ξ,ɴ,ᴜ} f)\|_2 \big\|_{L_2(Ϻ)} \le \big\| \|∇_N(S_h E_N f)\|_2 \big\|_{L_2(Ϻ)} + \big\| \|∇_N((I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N) f)\|_2 \big\|_{L_2(Ϻ)}.$$
The first summand is then bounded by the respective property of S_h^ɴ. For the second summand, we apply Lemma 3.38 and obtain for some fixed U(Ϻ) that

$$\big\| \|∇_N((I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N) f)\|_2 \big\|_{L_2(Ϻ)} \le c\, \|(I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N) f\|_{H^{1+ĸ/2+ԑ}(U(Ϻ))}.$$
We use the continuity of I_h^Ξ and conclude, with ĸ_ԑ := ĸ/2 + ԑ,

$$\begin{aligned} \|(I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N) f\|_{H^{1+ĸ_ԑ}(U(Ϻ))} &\le \|(I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N) f\|_{H^{\max(1,k/2)+ĸ_ԑ}(U(Ϻ))} \\ &\le c\, \|(E_Ϻ − E_Ϻ Ͳ_Ϻ S_h E_N) f\|_{H^{\max(1,k/2)+ĸ_ԑ}(U(Ϻ))} \\ &\le c\, \|(E_Ϻ − E_Ϻ Ͳ_Ϻ S_h E_N) f\|_{H^{\max(1,k/2)+ĸ_ԑ}(ℝ^d)} \\ &\le c\, \|(\mathrm{Id} − Ͳ_Ϻ S_h E_N) f\|_{H^{\max(1,k/2)+ԑ}(Ϻ)} \\ &\le c\, h^{r−\max(1,k/2)−ԑ}\, \|f\|_{H^r(Ϻ)}. \qquad ∎ \end{aligned}$$
3.40 Remark: Like for the previous statements on approximation by constant extension and restriction, the case of an ESM with boundary can, at least theoretically, be deduced directly by the extension operator E_Ϻ^{Ϻ̂}: H^r(Ϻ) → H^r(Ϻ̂) for a compact submanifold Ϻ̂ ∈ 필_cp^k(ℝ^d) such that Ϻ ⋐ Ϻ̂. Thereby, the same approximation orders are achievable, at least theoretically.
3.41 Remark: It is also important to have approximation results just for the approximation of the normal derivatives in the ambient space: They help us to determine that an additional stabilisation penalty on the approximation in the ambient space, which will occur later, does not affect the approximation order. In case we have no interpolation constraints, it is directly clear that for any tubular neighbourhood U_{ah}(Ϻ) that is sufficiently narrow we have the relation

$$\big\| \|∇_N(S_h E_N f)\|_2 \big\|_{L_2(U_{ah}(Ϻ))} \le \|S_h F⃡ − F⃡\|_{H^1(U_{ah}(Ϻ))} \le \|S_h F⃡ − F⃡\|_{H^1(U_{ah_0}(Ϻ))} \le c\, h^{r−1}\, \|f\|_{H^r(Ϻ)}.$$

The last inequality therein comes by application of Appendix Theorem 9.17, which gives the relation ‖F⃡‖_{H^r(U(Ϻ))} ≤ c ‖f‖_{H^r(Ϻ)}. In fact, we could even expect higher orders, but that will play no significant role for us. In the case with a finite set of constraints, we have the relation

$$\big\| \|∇_N(S_h^{Ξ,ɴ,ᴜ} f)\|_2 \big\|_{L_2(U_{ah}(Ϻ))} \le c\, h^{r−\max(1,k/2)}\, \|f\|_{H^r(Ϻ)}.$$

This is obtained via the intermediate steps

$$\begin{aligned} \big\| \|∇_N(S_h^{Ξ,ɴ,ᴜ} f)\|_2 \big\|_{L_2(U_{ah})} &= \big\| \|∇_N(E_N f − S_h^{Ξ,ɴ,ᴜ} f)\|_2 \big\|_{L_2(U_{ah})} \le c\, \|E_N f − S_h^{Ξ,ɴ,ᴜ} f\|_{H^1(U_{ah})} \\ &\le c\, \|S_h F⃡ − F⃡\|_{H^1(U_{ah})} + c\, \|(I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N) f\|_{H^1(U_{ah})} \\ &\le c\, \|S_h F⃡ − F⃡\|_{H^1(U_{ah_0})} + c\, \|(I_h^{Ξ} E_Ϻ − I_h^{Ξ} E_Ϻ Ͳ_Ϻ S_h E_N) f\|_{H^1(U_{ah_0})} \\ &\le c\, \|S_h F⃡ − F⃡\|_{H^1(U_{ah_0})} + c\, \|I_h^{Ξ} E_Ϻ (\mathrm{Id} − Ͳ_Ϻ S_h E_N) f\|_{H^{\max(1,d/2)}(U_{ah_0})} \\ &\le c\, \|S_h F⃡ − F⃡\|_{H^1(U_{ah_0})} + c\, \|(\mathrm{Id} − S_h^{ɴ}) f\|_{H^{\max(1,d/2)−ĸ/2}(Ϻ)} \\ &\le c\, (h^{r−1} + h^{r−\max(1−ĸ/2,\,k/2)})\, \|f\|_{H^r(Ϻ)} \le c\, h^{r−\max(1,k/2)}\, \|f\|_{H^r(Ϻ)}. \end{aligned}$$
3.3.6 Practical Examples
The approximation operator S_h^ɴ gives rise to a concept recently developed ([74, 75, 76, 82]) under the term Ambient B-Spline Method, or more generally Ambient Approximation Method, for codimension one, and by our theory the approach generalises naturally to higher codimensions. The process of the method is briefly described as follows:
3.42 Algorithm — Ambient B-Spline Method —
1. For h small, determine all the cells C_h(Ϻ) active on Ϻ.〈4〉
2. Compute the approximation of F = f ∘ Π_Ϻ for C_h(Ϻ).〈5〉
3. Store the coefficients just for the basis functions active on Ϻ to determine a solution S_Ϻ.
4. Evaluate S_Ϻ in points on Ϻ as necessary.
Initially, this algorithm was investigated for codimension ĸ = 1 alone ([74], [82]), and convergence rates were likewise investigated only for codimension one: [82] provides practical results that verify the theoretical orders for several types of surfaces in ℝ³. So in our examples we put our focus exemplarily on ĸ = 2 and thus on a
〈4〉This may require additional cells, as required by the applied quasi-projection method. Usually one will need at least all cells active on some U_δh(Ϻ) for fixed δ > 0.
〈5〉Another suitable projection Π_픽 based on a foliation 픽 is conceivable as well.
Figure 3.1: Upper row: plots of the functions f₁ (left), f₂ (mid), f₃ (right). Lower row: corresponding experimental rates of convergence for quadcubic splines: — root mean square error for 160² equally spaced gridded points, - - - reference h⁴, ⋅⋅⋅⋅⋅ maximal error.
surface embedded in ℝ⁴ that is not embedded in ℝ³. To be more precise, we give examples in the case of the so-called Clifford torus 핋_clf ⊆ ℝ⁴, a surface which is not embedded into ℝ³ and given as
$$핋_{clf} = \Bigl( \tfrac{1}{\sqrt{2}}\, 핊^1 \Bigr) \times \Bigl( \tfrac{1}{\sqrt{2}}\, 핊^1 \Bigr),$$

whereby one sees that it has a globally defined normal frame and can be parameterised by parameterisations of the circles. Consequently, it is isometrically isomorphic to [0, π√2] × [0, π√2] if one identifies opposite edges, and thus we can visualise functions on the Clifford torus in the plane; for technical reasons, as implied by the standard parameterisation of the circle, we did this on [0, 2π] × [0, 2π].
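These claims are easy to verify numerically. The following sketch (our own illustration, using the standard parameterisation γ(u, v) = (cos u, sin u, cos v, sin v)/√2) checks that the Clifford torus lies on the unit 3-sphere and that ν¹ = γ and ν² = (cos u, sin u, −cos v, −sin v)/√2 form a globally defined orthonormal normal frame:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.uniform(0, 2 * np.pi, size=(2, 200))
s2 = np.sqrt(2)

# points on the Clifford torus, tangent directions, and the two normal fields
p  = np.stack([np.cos(u), np.sin(u),  np.cos(v),  np.sin(v)]) / s2
n2 = np.stack([np.cos(u), np.sin(u), -np.cos(v), -np.sin(v)]) / s2
t1 = np.stack([-np.sin(u), np.cos(u), np.zeros_like(u), np.zeros_like(u)])
t2 = np.stack([np.zeros_like(v), np.zeros_like(v), -np.sin(v), np.cos(v)])

dev_sphere = np.abs((p**2).sum(0) - 1).max()     # the torus lies on the 3-sphere
dev_frame = max(np.abs((p * n2).sum(0)).max(),   # nu^1 = p and nu^2 are mutually
                np.abs((t1 * p).sum(0)).max(),   # orthogonal and both are normal
                np.abs((t2 * p).sum(0)).max(),   # to the tangent directions t1, t2
                np.abs((t1 * n2).sum(0)).max(),
                np.abs((t2 * n2).sum(0)).max())
print(dev_sphere, dev_frame)
```

Since |∂γ/∂u| = |∂γ/∂v| = 1/√2 is constant, each parameter circle has length 2π/√2 = π√2, which is exactly the isometry to [0, π√2] × [0, π√2] stated above.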
We present convergence behaviour for three functions f₁, f₂, f₃ on 핋_clf that are defined as restrictions of the ℝ⁴ functions F₁, F₂, F₃ given as
$$\begin{aligned}
F_1(w,x,y,z) :={}& e^{wz} w \Bigl( \tfrac34\, e^{-\frac{(9x+w-2)^2+(9y+z-2)^2}{4}} + \tfrac34\, e^{-\frac{(9x+1)^2}{49}-\frac{9y+1}{10}} \Bigr)\\
&+ e^{wz} w \Bigl( \tfrac12\, e^{-\frac{(9x+w-7)^2+(9y+z-3)^2}{4}} - \tfrac15\, e^{-(9x+w-4)^2-(9y+z-7)^2} \Bigr)\\
F_2(w,x,y,z) :={}& e^{wz} w(y+z) \Bigl( \tfrac34\, e^{-\frac{(9x+6w-2)^2+(9y+6z-2)^2}{4}} + \tfrac34\, e^{-\frac{(9x+1)^2}{49}-\frac{9y+1}{10}} \Bigr)\\
&+ e^{wz} w(y+z) \Bigl( \tfrac12\, e^{-\frac{(9x+w-7)^2+(9y+z-3)^2}{4}} - \tfrac15\, e^{-(9x+w-4)^2-(9y+z-7)^2} \Bigr)\\
F_3(w,x,y,z) :={}& \tfrac34\,(y+z)\sin(wz)\cos(w)\, e^{-\frac{(9x+6w-2)^2+(9y+6z-2)^2}{4}} + \tfrac34\,(y+z)\sin(wz)\cos(w)\, e^{-\frac{(9x+1)^2}{49}-\frac{9y+1}{10}}\\
&+ \tfrac12\,(y+z)\sin(wz)\cos(w)\, e^{-\frac{(9x+w-7)^2+(9y+z-3)^2}{4}} - \tfrac15\,(y+z)\sin(wz)\cos(w)\, e^{-(9x+w-4)^2-(9y+z-7)^2}.
\end{aligned}$$
Our approximations are computed by cubic TP-splines on ℝ⁴, so "quadcubics", with cell widths h = 0.2, 0.1, 0.05, 0.025, and they confirm precisely the predicted h⁴ convergence.
3.43 Remark: In our tests, we use the second variant for quasi-projection, the one performed in parallel on all cells in the support, as proposed in 3.14. We found that it has no significant negative impact on the expected rate of convergence if just the average over all cells active on a small tubular neighbourhood of Ϻ was taken, and not necessarily all cells in the supports of all B-splines active on that neighbourhood. Thereby, the computational cost can be reduced by a significant constant factor.
We also tested an approximation based on extension along a linear foliation that is not the normal foliation: the foliation presented in Example 2.20. This has the significant advantage that we can circumvent the problem that the tubular neighbourhood may be very small if we require the closest point property. We illustrate this with the "unit balls" of the norms ‖⋅‖₆₄, ‖⋅‖₂₄, ‖⋅‖₆, so the zero surfaces of the maps
$$(x,y,z) \mapsto x^{64}+y^{64}+z^{64}-1, \qquad (x,y,z) \mapsto x^{24}+y^{24}+z^{24}-1, \qquad (x,y,z) \mapsto x^{6}+y^{6}+z^{6}-1.$$
In particular the first two would demand a neighbourhood so narrow that we can hardly afford it in practice, while the foliation 핍_핊 along the normalised position vectors and the corresponding extension E_핍 are effectively well defined on all of ℝ^d \ {0}.
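A minimal sketch of this ray extension for the surface x⁶ + y⁶ + z⁶ = 1 (our own notation, not from the text): the leaf through p ≠ 0 is the ray from the origin, the projection scales p onto the surface along that ray, and the extension is constant on each ray, regardless of how far p is from the surface:

```python
import numpy as np

def proj_ray(p):
    # project p onto {x^6 + y^6 + z^6 = 1} along the ray through the origin
    return p / (p**6).sum(axis=-1, keepdims=True)**(1/6)

def extend(f, p):
    # extension along the foliation by rays: constant on each ray
    return f(proj_ray(p))

f = lambda q: q[..., 0] * q[..., 1] + q[..., 2]   # some function given on the surface

rng = np.random.default_rng(1)
p = rng.normal(size=(1000, 3)) * 10               # points far away from the surface
q = proj_ray(p)
residual = np.abs((q**6).sum(axis=1) - 1).max()   # q really lies on the surface
agree = np.abs(extend(f, q) - f(q)).max()         # the extension restricts to f
print(residual, agree)
```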
We tested with the restrictions of the functions
$$F_4(x,y,z) := \tfrac14 \Bigl( e^{-\frac{(9x-2)^2+(9y-2)^2}{4}} + e^{-\frac{(9x+1)^2}{49}-\frac{9y+1}{10}} \Bigr) + \tfrac14 \Bigl( 3\, e^{-\frac{(9x-7)^2+(9y-3)^2}{4}} - e^{-(9x-4)^2-(9y-7)^2} \Bigr),$$
$$F_5(x,y,z) := \log\bigl( x + y^3 + z^5 + 7 \bigr)^{1/4}.$$
The functions on the respective surfaces and the corresponding convergence plots are presented in Fig. 3.2. We can see that the expected convergence is present.
Figure 3.2: First and third row: plots of the restrictions of F₄ and F₅ to the respective surfaces. Second and fourth row: corresponding experimental rates of convergence for tricubic splines: — root mean square error for ≈ 100 000 roughly uniformly spaced points, - - - reference h⁴, ⋅⋅⋅⋅⋅ maximal error.
We can also see that the approximation results are satisfactory for choices of h far greater than any choice of h that would be possible for the normal foliation 픽_N and normal extension E_N.
Finally, we also tested the approximation behaviour for functions of limited regularity. To accomplish this, we relied on an idea proposed and applied e.g. in [49]: the authors used suitable interpolants by Matérn kernels to data sites on Ϻ to obtain functions that are in H^{β−ε}(Ϻ) for arbitrary choice of β > k/2 and any 0 < ε < 1/2. We did the same for the choices β ∈ {1.5, 2.0, 2.5, 3.0}. Note in particular that the case β = 1.5 can only be handled by our fractional corollary to Theorem 3.29, but not by the same corollary to the spline version of Theorem 3.3.
We performed tests for functions on 핊² and on the Clifford torus; these simple surfaces were chosen because we wanted to avoid effects of the geometry for large h. The functions employed in the tests are obtained as Matérn kernel interpolants for
푒1 ↦ 1.1, 푒2 ↦ 1.2, 푒3 ↦ 1.3, −푒1 ↦ 1.4, −푒2 ↦ 1.5, −푒3 ↦ 1.6 in case of 핊2, and for the Clifford torus as interpolants that satisfy
$$\frac{e_1+e_3}{\sqrt2} \mapsto 1.1, \quad \frac{-e_1+e_3}{\sqrt2} \mapsto 1.2, \quad \frac{-e_2+e_4}{\sqrt2} \mapsto 1.4, \quad \frac{-e_2-e_4}{\sqrt2} \mapsto 1.8,$$
$$\frac{-e_1-e_3}{\sqrt2} \mapsto 1.3, \quad \frac{e_1-e_3}{\sqrt2} \mapsto 1.5, \quad \frac{e_2+e_4}{\sqrt2} \mapsto 1.7, \quad \frac{e_2-e_4}{\sqrt2} \mapsto 1.9.$$
Again, we find the expected rates of convergence verified in Fig. 3.3.
Figure 3.3: Convergence plots for approximation of functions of limited regularity on 핊² (left) and the Clifford torus (right). References are depicted dashed, practical results solid and thicker: f ∈ H^{β−ε}(Ϻ) and reference ≈ h^β for β = 1.5 (blue), β = 2.0 (red), β = 2.5 (green), β = 3.0 (purple).
Chapter 4
Tangential Calculus, Function Spaces and Functionals
Handling differential calculus in an ESM with an inverse atlas is in theory always possible, but it has significant drawbacks: For example, derivatives depend on the chosen parameterisations and have to be made invariant with quite some ef- fort. Furthermore, whenever it comes to practical applications, the blending of chartwise solutions can pose significant difficulties, and it will directly affect the solution and its properties. Therefore, a purely intrinsic differential calculus is highly desirable. It should, of course, lead to equivalent norms etc. when talking about Sobolev spaces, and it should reduce to the standard calculus if the manifold considered is some open set in ℝ푘. To introduce such calculus and to investigate its properties is the prime objective of this chapter.
Specifically, we are first going to present an intrinsic tangential calculus for C²-functions. Next, we will use this to give an intrinsic characterisation of second order Hilbert space norms on ESMs based on that tangential calculus. After this, we will investigate when a finite set of points Ξ ⊆ Ϻ ensures that the only function f : Ϻ → ℝ with f(ξ) = 0 for all ξ ∈ Ξ whose tangential counterpart of the Hessian vanishes identically is the zero function on Ϻ. Thereby, we transfer the question of polynomial unisolvency to ESMs. In particular, we will find that for a wide range of ESMs only constant functions have identically vanishing Hessians. Afterwards, we will employ these results in problems of extrapolation and energy minimisation for a finite set of points, and use the general framework we introduce thereby to handle also more general energy minimisation problems and functionals, and the solution of certain intrinsic partial differential equations.
Finally, we will introduce the concept of equivalently E_N-extrinsic functionals, or E_N-extrinsic functionals in short, as functionals where tangential and Euclidean calculus are somehow mutually exchangeable, and each of our examples will turn out to be such an E_N-extrinsic functional. Moreover, we will see that the Euclidean way to express these functionals gives us an approximate extrinsic access to this kind of intrinsic functionals.
4.1 Tangential Derivatives, Gradients and Hessians
In our setting of ESMs, we have to build two bridges to obtain a suitable tangential calculus in terms of Euclidean calculus in the ambient space: one extending functions from the ESM into the ambient space, and one that makes the calculus model in the ambient space independent of that very extension. The first bridge is already built by our knowledge about foliations of the ambient space and the extensions based on them, particularly the normal foliation and normal extension. The second bridge is now achieved straightforwardly: we first take an arbitrary but suitably smooth extension F of a function f ∈ C²(Ϻ) into a tubular neighbourhood U(Ϻ) and take its differential. Of course, this differential does not have a reasonable meaning within Ϻ itself: simple calculus shows that if F : ℝ² → ℝ with F(x, y) = x is restricted to the unit circle 핊¹, then the result is far from being linear as a function on 핊¹. Instead, we rather have that F|_핊¹ is t ↦ cos(t) for some
푡 ∈ [0, 2π[. But as soon as we project the result onto the tangent space T푥Ϻ at each 푥 ∈ Ϻ, we get precisely what we would expect from a gradient, just w.r.t. the ESM — in case of our example, the tangent of (푥, 푦) = (cos 푡, sin 푡) is (− sin 푡, cos 푡) and the Euclidean Jacobian of 퐹 is (1, 0), so we obtain precisely the expected 푡 ↦ − sin 푡 as the intrinsic differential.
4.1.1 Definition and Basic Properties
With the initial example in mind, we will now express the intrinsic derivative as the projection of the derivative onto the tangent space — first with a rather abstract definition, but we will soon be able to give a more convenient expression for the concept〈1〉:
4.1 Definition For any Ϻ ∈ 필^k_bd(ℝ^d) and differentiable function f : Ϻ → ℝ, extended to a neighbourhood U(Ϻ) as F : U(Ϻ) → ℝ with f = F|_Ϻ, we define the tangential derivative operator D_Ϻ via the Euclidean derivative operator D as
〈1〉In [36], the same operator is introduced for codimension one. There, it is also proven for the case of codimension one that this operator coincides with the definition for Riemannian derivatives in terms of parameterisations and parameter spaces, and the proof uses no features of the codimension, so it generalises naturally to higher codimension. We omit the details here, as this way of expressing the tangential derivative plays no relevant role for us. For a short explanation of the more general Riemannian setting, where the concepts introduced here appear with label ”Riemannian“ instead of ”tangential“, see Appendix 9.3.
$$D_Ϻ f(x) = \Pi_{T_x^t Ϻ}\, DF(x),$$
where T_x^t Ϻ is the cotangent space and Π_V is the orthogonal projection onto a vector subspace V of ℝ^d. Accordingly, we define the tangential gradient as
$$\nabla_Ϻ f(x) := \Pi_{T_x Ϻ}\, \nabla F(x).$$
4.2 Remark: (1) Obviously, this definition coincides with the classical Euclidean directional derivative of the extension for any tangent direction by definition. By this observation one can directly conclude that it is independent of the extension. (2) For the tangential gradient (and correspondingly for the differential), one can in particular use the equivalent formulation
$$\nabla_Ϻ f(x) := \nabla F(x) - \Pi_{N_x Ϻ}\, \nabla F(x).$$
This is interesting in practice whenever the codimension ĸ is smaller than the dimension k, as the projection onto the (co-)normal space is easier to calculate in this case. Since that is the usual situation we are going to face in the future (because our prime objective will be surfaces in ℝ³), it is also the formulation we will use most often. Additionally, it has certain useful implications that shall be investigated further in the course of this section, and it is more convenient in proving them.
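Both the extension independence noted in (1) and the formula in (2) can be checked numerically on 핊² (our own sketch, not part of the text): for f = restriction of (x, y, z) ↦ x we compare two different extensions, F(p) = p₁ and F̃(p) = p₁‖p‖², which agree on the sphere, and subtract the normal component with ν(x) = x:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # points on the unit sphere

# gradients of two different smooth extensions of f(x) = x_1
gradF1 = np.tile([1.0, 0.0, 0.0], (len(X), 1))     # grad of F(p) = p_1
gradF2 = (np.linalg.norm(X, axis=1, keepdims=True)**2 * [1, 0, 0]
          + 2 * X[:, :1] * X)                      # grad of F~(p) = p_1 |p|^2

def tangential(grad, X):
    # grad_M f = grad F - <nu, grad F> nu, with nu(x) = x on the sphere
    return grad - (grad * X).sum(axis=1, keepdims=True) * X

g1, g2 = tangential(gradF1, X), tangential(gradF2, X)
dev_ext = np.abs(g1 - g2).max()                    # independence of the extension
dev_tan = np.abs((g1 * X).sum(axis=1)).max()       # result is tangent to the sphere
print(dev_ext, dev_tan)
```

Both deviations vanish up to rounding: the two extensions give the same tangential gradient e₁ − x₁x, which is orthogonal to the normal ν(x) = x.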
The following technical lemma will now give us a more practical form of this tan- gential gradient for our future treatment:
4.3 Lemma For any pointwise orthonormal frame N(x) = (ν¹(x), ..., ν^ĸ(x)) of the normal space N_x Ϻ at x ∈ Ϻ, the tangential gradient can be expressed in the form
$$\nabla_Ϻ f(x) = \nabla F(x) - \sum_{i=1}^{ĸ} (D_{ν^i} F)(x)\, ν^i(x).$$
Therein, D_{ν^i} F is the directional derivative in the direction of ν^i(x).
Proof: We write the projection from our definition in the standard version for given orthonormal frame:
$$\Pi_{N_x Ϻ}(\cdot) = \sum_{i=1}^{ĸ} \bigl\langle ν^i(x), \cdot \bigr\rangle\, ν^i(x).$$
The projection is well-known to be independent of the choice of the frame, and we can apply the resulting vector to ∇퐹(푥) and conclude
$$\nabla_Ϻ f(x) = \nabla F(x) - \sum_{i=1}^{ĸ} \bigl\langle ν^i(x), \nabla F(x) \bigr\rangle\, ν^i(x) = \nabla F(x) - \sum_{i=1}^{ĸ} (D_{ν^i} F)(x)\, ν^i(x).$$
K
4.4 Remark: By Lemma 2.16 we know that there is a locally smooth map N that assigns a normal frame N(푥) = (ν1(푥), ..., νĸ(푥)) to any 푥 ∈ Ϻ, and this map is in particular also smooth as a function in the ambient space. Using this, the notation of tangential gradient can be simplified further, as we have thereby locally
∇Ϻ푓(푥) = ∇퐹(푥) − N(푥) ⋅ ∇N퐹(푥).
Therein, we understand ∇_N F(x) as ∇_N F(x) = (D_{ν¹}F(x), ..., D_{ν^ĸ}F(x))^t. Things become more involved in the second order case: the concept of tangential derivative generalises naturally to higher orders by concatenated application. Most important to us will be the tangential derivative of second order, which gives us a tangential Hessian〈2〉:
4.5 Definition Let Ϻ ∈ 필^k_bd(ℝ^d) and let f : Ϻ → ℝ be a twice continuously differentiable function, extended to a twice continuously differentiable F : U(Ϻ) → ℝ such that f = F on Ϻ. Then we define for x ∈ Ϻ the tangential Hessian matrix as
$$H_f^Ϻ(x) := D_Ϻ \nabla_Ϻ f(x)$$
and the corresponding pointwise bilinear form H_f^Ϻ(x)(⋅, ⋅) on ℝ^d × ℝ^d by
$$H_f^Ϻ(x)(v, w) = v^t\, H_f^Ϻ(x)\, w.$$
4.6 Remark: Note that the tangential Hessian is no longer just the restriction of the Euclidean Hessian of F to the tangent space, in contrast to the gradient: in our initial example, the second order differential of F(x, y) = x vanishes everywhere, but clearly the second order differential of its restriction to the unit circle is −cos t for some t. In fact, H_f^Ϻ(x)(⋅, ⋅) of a C²-function F is in general not even a symmetric bilinear form on all of ℝ^d (cf. [29, Subs. 8.5.3]).
We now sum up a collection of useful properties of the tangential Hessian. Most importantly, we will be interested in the properties of H_f^Ϻ(x) as a pointwise bilinear map on the respective tangent space T_x Ϻ. In particular, we find that for the
〈2〉We obtain this by tangential differentiation of a tangential gradient, or more generally speaking by covariant differentiation of a Riemannian gradient, cf. [7, Sect. 5.4] for a description of the concept in the setting of submanifolds of manifolds. Note in particular that the respective operations reduce to the Euclidean ones in the manifold ℝ푑! For a short explanation of the more general Riemannian setting, see Appendix 9.3.
66 normal extension〈3〉, the tangential Hessian coincides with the Euclidean Hessian as a bilinear map on T푥Ϻ at any 푥 ∈ Ϻ.
4.7 Theorem Let Ϻ ∈ 필^k_bd(ℝ^d), let F ∈ C²(U(Ϻ), ℝ) and let f = F|_Ϻ. Then there are constants c₁, c₂ > 0 depending on Ϻ, but independent of F or f, such that for any x ∈ Ϻ
$$\|\nabla_Ϻ f(x) - \nabla F(x)\|_2 \le c_1 \max_{ν ∈ N_x Ϻ,\ \|ν\|_2 = 1} |D_ν F(x)|$$

and for any v, w ∈ T_x Ϻ
$$\bigl|H_f^Ϻ(x)(v,w) - H_F(x)(v,w)\bigr| \le c_2 \max\bigl\{ |D_ν F(x)| \cdot \|v\|_2 \cdot \|w\|_2 \;:\; ν ∈ N_x Ϻ,\ \|ν\|_2 = 1 \bigr\}.$$
Proof: The first result is a direct consequence of Lemma 4.3, so we only work on the second here: We conclude immediately from the definition of the tangential Hessian and Remark 4.4 that locally
$$H_f^Ϻ(v, w) = v^t H_f w - v^t\, D(N \cdot \nabla_N F)\, w. \tag{4.7.1}$$
Then we demand without restriction a coordinate system where the first 푘 coordi- nates correspond to tangent and the last ĸ coordinates correspond to normal basis vectors. Now we take a closer look at 퐷(N ⋅ ∇N퐹) and decompose it as
$$D\Bigl( \sum_{i=1}^{ĸ} (D_{ν^i} F)\, ν^i \Bigr) = \sum_{i=1}^{ĸ} D\bigl( (D_{ν^i} F)\, ν^i \bigr).$$
We now restrict ourselves to one coordinate ℓ for 1 ≤ ℓ ≤ d and look closer at just ∂_ℓ((D_{ν^i}F) ν^i) for arbitrary ν^i. As each (D_{ν^i}F) ν^i still has multiple entries, we restrict ourselves further to just one (D_{ν^i}F) N_ij. There we find by the product rule
$$\partial_ℓ \bigl( (D_{ν^i} F)\, N_{ij} \bigr) = \bigl( \partial_ℓ (D_{ν^i} F) \bigr)\, N_{ij} + (D_{ν^i} F)\, (\partial_ℓ N_{ij}).$$
Reinserting this into ∂_ℓ((D_{ν^i} F) ν^i), we obtain
$$\partial_ℓ \bigl( (D_{ν^i} F)\, ν^i \bigr) = \bigl( \partial_ℓ (D_{ν^i} F) \bigr)\, ν^i + (D_{ν^i} F)\, (\partial_ℓ ν^i).$$
Then we get for the overall 퐷(N ⋅ ∇N퐹) that
$$D(N \cdot \nabla_N F) = \sum_{i=1}^{ĸ} ν^i \cdot D(D_{ν^i} F) + \sum_{i=1}^{ĸ} (D_{ν^i} F)\, (Dν^i).$$
Multiplication of this by an arbitrary cotangent v^t from the left and by an arbitrary tangent w from the right will annihilate the first sum. With this in mind, we reinsert into (4.7.1) and obtain

〈3〉Actually, this holds for any orthogonal extension.
ĸ Ϻ t t 푖 ⎜⎛ 푖 ⎟⎞ H푓(푥)(푣, 푤) = 푣 퐻푓(푥)푤 − 푣 ∑ (Dν 퐹) (푥) (퐷ν )(푥) 푤. ⎝푖=1 ⎠
Since N can be presumed to be C-bounded by Lemma 2.16, and hence the bilinear form induced by 퐷ν푖(푥) is bounded for any 푖 = 1, ..., ĸ, we can deduce that
$$\Bigl| v^t\, D\Bigl( N \cdot \frac{\partial F}{\partial N} \Bigr)\, w \Bigr| \le \max_{1 \le i \le ĸ} |D_{ν^i} F(x)| \cdot \sum_{i=1}^{ĸ} \bigl| v^t\, Dν^i(x)\, w \bigr| \le c \max_{ν ∈ N_x Ϻ,\ \|ν\|_2 = 1} |D_ν F(x)| \cdot \|v\|_2 \cdot \|w\|_2.$$
Hence, we obtain the desired result. K
This theorem gives us an approximate handling of second order tangential derivatives just as approximate Euclidean derivatives whenever the directional derivative in normal directions is small. In particular, the correspondence between tangential and Euclidean derivative is exact whenever the normal derivative vanishes, which is the case if the extension was obtained by the normal extension operator. This fact is so important to us that we repeat it here as a statement of its own〈4〉:

4.8 Corollary As a bilinear map on the tangent space, the tangential Hessian bilinear map H_f^Ϻ(x)(⋅, ⋅) : T_x Ϻ × T_x Ϻ → ℝ does coincide with the standard Hessian bilinear map H_F⃡(x)(⋅, ⋅) : ℝ^d × ℝ^d → ℝ of the normal extension F⃡ of f, so
$$H_f^Ϻ(x)(v, w) = H_{\overset{\leftrightarrow}{F}}(x)(v, w) \qquad \forall\, v, w ∈ T_x Ϻ.$$
With these newly defined first and second order tangential derivatives, we can also ask for further operators built on them, and we shall give at least one example here: the Laplace-Beltrami operator Δ_Ϻ. It comes easily now, and we directly give a suitable definition (cf. e.g. [57, pp. 359], [83, p. 28]):
4.9 Definition Let Ϻ ∈ 필^k_bd(ℝ^d). Then the Laplace-Beltrami operator Δ_Ϻ is defined for arbitrary f ∈ C²(Ϻ) as the trace of the Hessian, so for H_f^Ϻ(x) with entries հ_ij^f(x) for i, j = 1, ..., d it holds
$$Δ_Ϻ f(x) = \sum_{i=1}^{d} հ_{ii}^f(x).$$
4.10 Remark: We can define a tangential divergence operator div_Ϻ for a tangent vector field v : Ϻ → TϺ with v(x) = (ᴠ₁(x), ..., ᴠ_d(x))^t as

$$\mathrm{div}_Ϻ\, v = \sum_{i=1}^{d} (D_Ϻ ᴠ_i)_i.$$

〈4〉The statement of this corollary is also given in [29, Subs. 8.5.3] for the case of codimension one, but in a more "differential geometric style".
Then we obtain the tangential Laplacian equivalently as the divergence of the gradient field.
In case of an extension based on an orthogonal foliation, we directly obtain the following important simplification of that definition as a consequence of Corollary 4.8 and the fact that the trace of a bilinear form is invariant under orthogonal basis transformations〈5〉:
4.11 Corollary Let Ϻ ∈ 필^k_bd(ℝ^d). For any x ∈ Ϻ it holds with the normal extension operator E_N that

$$Δ_Ϻ f(x) = Δ(E_N f)(x),$$

where Δ is the standard Laplacian operator in the ambient space U(Ϻ). For an arbitrary tangent frame T(x) = (τ¹(x), ..., τ^k(x)) it holds

$$Δ_Ϻ f(x) = Δ_T(E_N f)(x) := \sum_{i=1}^{k} D_{τ^i}^2 (E_N f)(x).$$
For an arbitrary smooth extension F of f it still holds

$$|Δ_Ϻ f(x) - Δ_T(F)(x)| \le c \max_{ν ∈ N_x Ϻ,\ \|ν\|_2 = 1} |D_ν F(x)|.$$
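Corollary 4.11 can be checked on the circle 핊¹ with a small finite-difference experiment (our own sketch): for f(cos t, sin t) = cos t the normal extension is E_N f(p) = p₁/‖p‖, and its ambient Laplacian evaluated at points of 핊¹ should equal Δ_핊¹ f = −cos t:

```python
import numpy as np

F = lambda x, y: x / np.hypot(x, y)        # normal extension of f = cos(t) off S^1

def ambient_laplacian(F, x, y, h=1e-4):
    # standard five-point finite-difference Laplacian in the ambient plane
    return (F(x+h, y) + F(x-h, y) + F(x, y+h) + F(x, y-h) - 4*F(x, y)) / h**2

t = np.linspace(0, 2*np.pi, 50, endpoint=False)
lap = ambient_laplacian(F, np.cos(t), np.sin(t))
err = np.abs(lap - (-np.cos(t))).max()     # Delta_M f = -cos(t) on the circle
print(err)
```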
With tangential versions of the gradient and the Laplacian, we can further state a tangential version of Green’s theorem (cf. [18, Th. 17], [83, Cor. 46, p. 382]):
4.12 Proposition — Tangential Green's Theorem —
Let F ∈ C²(U(Ϻ)) and f = F|_Ϻ. If Ϻ has no boundary, then it holds

$$\int_Ϻ f\, Δ_Ϻ f = -\int_Ϻ \langle ∇_Ϻ f, ∇_Ϻ f \rangle,$$

and if Ϻ has boundary Γ ≠ ⌀, then it holds

$$\int_Ϻ f\, Δ_Ϻ f = -\int_Ϻ \langle ∇_Ϻ f, ∇_Ϻ f \rangle + \int_Γ f\, D_ν f,$$

where D_ν f is the directional derivative in the direction of the relative outer normal ν of Γ relative to Ϻ.
4.13 Remark: In Appendix 9.3, a short introduction into the more general Riemannian setting is given. There we also comment briefly on the relation between our definitions and further common ways to express these terms, in particular in terms of parameterisations.

〈5〉Again, this result was already used and presented in [36] for codimension one, but without the general context of Hessians we provided here.
4.1.2 Intrinsic Characterisation of Sobolev Spaces
Now we have a closer look at the tangential Hessian itself and deduce the important property of invariance under rotation of the specific tangent frame. This will give us the key to derive an intrinsic definition of Sobolev norms:
4.14 Lemma 1. Let g : Ω ⊆ ℝ^d → ℝ be twice continuously differentiable and x ∈ Ω. Then the quantity

$$\sum_{i,j=1}^{d} \Bigl| \frac{∂^2 g}{∂υ^i ∂υ^j}(x) \Bigr|^2$$

is independent of the specific choice of an orthonormal frame υ¹, ..., υ^d of ℝ^d.

2. If F⃡ = E_N f is the normal extension of some function f ∈ C²(Ϻ, ℝ) for an ESM Ϻ into an ambient neighbourhood U(Ϻ), x ∈ Ϻ and τ¹(x), ..., τ^k(x) is an arbitrary orthonormal frame of T_x Ϻ, then we have

$$\sum_{i,j=1}^{k} \bigl| H_f^Ϻ(τ^i, τ^j) \bigr|^2 = \sum_{i,j=1}^{k} \bigl| H_{\overset{\leftrightarrow}{F}}(τ^i, τ^j) \bigr|^2.$$

Again, this relation is independent of the specific choice of the tangent frame and invariant under rotation (in the tangent space) of that basis on either side.

3. If υ¹, ..., υ^k is another basis of the tangent space, then we have for suitable constants c₁, c₂ depending on the basis that

$$c_1 \sum_{i,j=1}^{k} \bigl| H_f^Ϻ(υ^i, υ^j) \bigr|^2 \le \sum_{i,j=1}^{k} \bigl| H_f^Ϻ(τ^i, τ^j) \bigr|^2 \le c_2 \sum_{i,j=1}^{k} \bigl| H_f^Ϻ(υ^i, υ^j) \bigr|^2.$$

If the basis frame Y(x) = (υ¹, ..., υ^k) is continuous in a neighbourhood of x ∈ Ϻ, then the constants depend continuously on x ∈ Ϻ. If Y ∘ ψ is C-bounded in the parameter space ω of the inverse atlas of Ϻ and its spectrum is bounded from below by ε > 0, then all constants are globally bounded on ψ(ω).
Proof: 1. Obvious, since it is the squared Frobenius matrix norm ‖⋅‖_Ϝ, which in turn is a rewritten Euclidean norm on ℝ^{d²}.

2. and 3. can be deduced from a common start: Let T(x) = (τ¹(x), ..., τ^k(x)) be an orthonormal frame of the tangent space and Y(x) = (υ¹(x), ..., υ^k(x)) be another, not necessarily orthonormal, frame. By standard linear algebra, there is Λ_{Y,T} ∈ ℝ^{k×k} such that T = Y Λ_{Y,T}, and in the same way, there is Λ_{T,Y} such that Y = T Λ_{T,Y}. Both Λ_{Y,T} and Λ_{T,Y} depend locally continuously on x ∈ Ϻ. Now it holds with the submultiplicativity of the Frobenius matrix norm and Corollary 4.8 that

$$\sum_{i,j=1}^{k} \bigl| H_f^Ϻ(τ^i, τ^j) \bigr|^2 = \bigl\| T^t H_{\overset{\leftrightarrow}{F}} T \bigr\|_Ϝ^2 = \bigl\| Λ_{Y,T}^t\, Y^t H_{\overset{\leftrightarrow}{F}} Y\, Λ_{Y,T} \bigr\|_Ϝ^2 \le \|Λ_{Y,T}\|_Ϝ^2\, \bigl\| Y^t H_{\overset{\leftrightarrow}{F}} Y \bigr\|_Ϝ^2\, \|Λ_{Y,T}\|_Ϝ^2 = \|Λ_{Y,T}\|_Ϝ^4 \sum_{i,j=1}^{k} \bigl| H_f^Ϻ(υ^i, υ^j) \bigr|^2.$$

In the same way one obtains

$$\sum_{i,j=1}^{k} \bigl| H_f^Ϻ(υ^i, υ^j) \bigr|^2 \le \|Λ_{T,Y}\|_Ϝ^4 \sum_{i,j=1}^{k} \bigl| H_f^Ϻ(τ^i, τ^j) \bigr|^2,$$

whereby one directly deduces the third claim. To deduce the second, we can assume that Y is now also orthonormal. In that case, both Λ_{Y,T} and Λ_{T,Y} are orthogonal matrices, as one easily verifies. Thereby the second claim is also fairly obvious, as the Frobenius norm remains unaffected by orthogonal transforms on either side.

We shall now give a short argument for Λ_{Y,T} being orthogonal: We take the normal frame N and enhance both tangent frames to global frames, (T, N) and (Y, N). Filling up Λ_{Y,T} by zeros to the right and below, and by a ĸ × ĸ unit matrix in the lower right, we can deduce

$$I_d = (T, N)(T, N)^t = (Y, N) \begin{pmatrix} Λ_{Y,T} & O_{k×ĸ} \\ O_{ĸ×k} & I_ĸ \end{pmatrix} (T, N)^t,$$

which gives in turn

$$(Y, N)^t (T, N) = \begin{pmatrix} Λ_{Y,T} & O_{k×ĸ} \\ O_{ĸ×k} & I_ĸ \end{pmatrix},$$

and as the left-hand side is orthogonal, so is the right. K
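The two facts the proof leans on, submultiplicativity of the Frobenius norm and its invariance under orthogonal transforms on either side, are easy to confirm numerically (our own check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(5, 5)); H = H + H.T            # symmetric "Hessian"
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))        # random orthogonal matrix
L = rng.normal(size=(5, 5))                         # arbitrary basis transform

fro = np.linalg.norm                                # defaults to 'fro' for matrices
inv_dev = abs(fro(Q.T @ H @ Q) - fro(H))            # orthogonal invariance
submult = fro(L.T @ H @ L) <= fro(L)**2 * fro(H) + 1e-12   # submultiplicativity
print(inv_dev, submult)
```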
The next theorem gives us an intrinsic characterisation of the space H2(Ϻ) inde- pendent of any inverse atlas, but equivalent to the definition we have previously introduced. But before we come to that we make one more important definition:
4.15 Definition Let Ϻ ∈ 필^k_bd(ℝ^d) and f ∈ H²(Ϻ). Then we define the Hessian energy Є_H(f, g) by〈6〉

$$Є_H(f, g) := \int_Ϻ є_H(f, g), \qquad \text{where in turn} \qquad є_H(f, g) := \sum_{i,j=1}^{k} H_f^Ϻ(τ^i, τ^j) \cdot H_g^Ϻ(τ^i, τ^j)$$
〈6〉The letters Є and є are a script- or crescent-shaped variant of epsilon that is used here to avoid any confusion with our extension operators. It was particularly common in Byzantine scripts, and still is in Coptic.
71 for (τ1, ..., τ푘) = (τ1(푥), ..., τ푘(푥)) an arbitrary orthonormal frame of the tangent space T푥Ϻ at any 푥 ∈ Ϻ.
4.16 Theorem The chart based definitions of the norms ‖f‖_{H¹(Ϻ)}, ‖f‖_{H²(Ϻ)} from Definition 2.24 for any finite C-bounded Lipschitz inverse atlas 픸_Ϻ ∈ ℿ(Ϻ) are equivalent to the following definitions of norms:

$$\|f\|_{H_T^1(Ϻ)}^2 := \int_Ϻ |f|^2 + \int_Ϻ \|∇_Ϻ f\|_2^2, \qquad \|f\|_{H_T^2(Ϻ)}^2 := \|f\|_{H_T^1(Ϻ)}^2 + Є_H(f, f).$$
Proof: Since we have a finite inverse atlas 픸_Ϻ, we restrict ourselves here to one pair (ψ, ω) from 픸_Ϻ, and we understand the norms ‖f‖_{H¹_T(Ϻ)} and ‖f‖_{H²_T(Ϻ)} in the Sobolev case as metric closures of the smooth functions. Then by the transformation law

$$c_1 \|f ∘ ψ\|_{L_2(ω)}^2 \le \int_{ψ(ω)} |f|^2 \le c_2 \|f ∘ ψ\|_{L_2(ω)}^2.$$

By considering additionally the chain rule and the fact that Dψ maps into a basis of the tangent space and this map is C-bounded〈7〉, we can further deduce that

$$c_1 \int_ω |∇(f ∘ ψ)|^2 \le \int_{ψ(ω)} |∇_Ϻ f|^2 \le c_2 \int_ω |∇(f ∘ ψ)|^2.$$
This comes as follows: The change of basis invoked by the transition from the old basis τ¹, ..., τ^k, ν¹, ..., ν^ĸ to the new basis ∂ψ/∂x₁, ..., ∂ψ/∂x_k, ν¹, ..., ν^ĸ (and vice versa) has C-bounded entries by definition of ψ, and the absolute values of its spectrum are bounded within some interval [a, b] for 0 < a ≤ b < ∞; thus all constants are globally bounded. So we need to take care of the second order terms only. There, we have for arbitrary coordinate directions x_i, x_j with respect to ω by the chain rule
$$\frac{∂^2(\overset{\leftrightarrow}{F} ∘ ψ)}{∂x_i ∂x_j} = \Bigl( \frac{∂ψ_1}{∂x_i}, ..., \frac{∂ψ_d}{∂x_i} \Bigr) \cdot H_{\overset{\leftrightarrow}{F}}(ψ(x)) \cdot \Bigl( \frac{∂ψ_1}{∂x_j}, ..., \frac{∂ψ_d}{∂x_j} \Bigr)^{\!t} + \sum_{ℓ=1}^{d} ∂_ℓ \overset{\leftrightarrow}{F}\, \frac{∂^2 ψ_ℓ}{∂x_i ∂x_j}.$$
By definition, the set

$$\Bigl\{ ∂_i ψ := \Bigl( \frac{∂ψ_1}{∂x_i}, ..., \frac{∂ψ_d}{∂x_i} \Bigr)^{\!t} \Bigr\}_{i=1}^{k}$$

forms a pointwise basis of the tangent space, and the corresponding basis transform has a bounded spectrum due to C-boundedness by conception of 픸_Ϻ. So with the third statement of the last lemma we can deduce that
〈7〉 By conception of 픸Ϻ, and this is also a consequence of appendix Theorem 9.6.
$$\Bigl( \frac{∂^2(\overset{\leftrightarrow}{F} ∘ ψ)}{∂x_i ∂x_j} \Bigr)^{\!2} \le \Bigl( \bigl| (∂_iψ)^t \cdot H_{\overset{\leftrightarrow}{F}}(ψ(x)) \cdot (∂_jψ) \bigr| + \Bigl| \sum_{ℓ=1}^{d} ∂_ℓ \overset{\leftrightarrow}{F}\, \frac{∂^2 ψ_ℓ}{∂x_i ∂x_j} \Bigr| \Bigr)^{\!2} \le c\, \bigl| (∂_iψ)^t \cdot H_{\overset{\leftrightarrow}{F}}(ψ(x)) \cdot (∂_jψ) \bigr|^2 + c\, \Bigl| \sum_{ℓ=1}^{d} ∂_ℓ \overset{\leftrightarrow}{F}\, \frac{∂^2 ψ_ℓ}{∂x_i ∂x_j} \Bigr|^2 \le c\, є_H(f, f) + c\, \langle ∇_Ϻ f, ∇_Ϻ f \rangle,$$
where we used further that ∇_Ϻ f = ∇F⃡. Thereby we can deduce that
$$\int_ω \sum_{i,j=1}^{k} \Bigl( \frac{∂^2(\overset{\leftrightarrow}{F} ∘ ψ)}{∂x_i ∂x_j} \Bigr)^{\!2} \le c\, \|f\|_{H_T^2(Ϻ)}^2,$$

which gives one part of the required inequality. For the other, we argue by contradiction: we assume that we have a sequence (f_n)_{n∈ℕ} of functions with corresponding F⃡_n = E_N f_n such that ‖f_n‖_{H²_T(Ϻ)} ≥ 1 but
$$\|f_n\|_{H^2(Ϻ)} < \frac{1}{n} \qquad \forall\, n ∈ ℕ.$$
Without restriction, we demand these functions to be smooth. Then we can directly deduce from the known equivalence of ‖f‖_{H¹(Ϻ)} and ‖f‖_{H¹_T(Ϻ)}, and from Sobolev embeddings, that also

$$\|f_n\|_{H_T^1(Ϻ)} < c\,\frac{1}{n},$$

so again it boils down to the second order term only. There we see now that in particular

$$\int_ω \Bigl( \frac{∂^2(\overset{\leftrightarrow}{F}_n ∘ ψ)}{∂x_i ∂x_j} \Bigr)^{\!2} < \frac{1}{n}.$$
On the other hand, we can deduce that with $∂_{ij}ψ := \bigl( \frac{∂^2 ψ_1}{∂x_i ∂x_j}, ..., \frac{∂^2 ψ_d}{∂x_i ∂x_j} \bigr)^t$ we have by the chain rule

$$\begin{aligned}
\Bigl( \frac{∂^2(\overset{\leftrightarrow}{F}_n ∘ ψ)}{∂x_i ∂x_j} \Bigr)^{\!2} &= \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) + \langle ∂_{ij}ψ, ∇_Ϻ f_n \rangle \bigr)^2 \\
&= \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr)^2 + \langle ∂_{ij}ψ, ∇_Ϻ f_n \rangle^2 + 2 \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr) \cdot \langle ∂_{ij}ψ, ∇_Ϻ f_n \rangle \\
&\ge \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr)^2 - 2 \bigl| \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr) \cdot \langle ∂_{ij}ψ, ∇_Ϻ f_n \rangle \bigr|.
\end{aligned}$$
Integrating over both sides after summing over all i, j gives

$$\frac{1}{n} > \int_ω \sum_{i,j=1}^{k} \Bigl( \frac{∂^2(\overset{\leftrightarrow}{F}_n ∘ ψ)}{∂x_i ∂x_j} \Bigr)^{\!2} \ge \int_ω \sum_{i,j=1}^{k} \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr)^2 - 2 \int_ω \sum_{i,j=1}^{k} \bigl| \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr) \cdot \langle ∂_{ij}ψ, ∇_Ϻ f_n \rangle \bigr|.$$
On the other hand, Hölder's inequality gives that

$$\begin{aligned}
\sum_{i,j=1}^{k} \int_ω \bigl| \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr) \cdot \langle ∂_{ij}ψ, ∇_Ϻ f_n \rangle \bigr|
&\le c \sum_{i,j=1}^{k} \Bigl( \int_ω \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr)^2 \Bigr)^{\!1/2} \cdot \Bigl( \int_ω \langle ∂_{ij}ψ, ∇_Ϻ f_n \rangle^2 \Bigr)^{\!1/2} \\
&\le c \sum_{i,j=1}^{k} \Bigl( \int_ω \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr)^2 \Bigr)^{\!1/2} \cdot \Bigl( \int_ω \langle ∂_{ij}ψ, ∂_{ij}ψ \rangle \langle ∇_Ϻ f_n, ∇_Ϻ f_n \rangle \Bigr)^{\!1/2} \\
&\le c\, \|f_n\|_{H^2} \|f_n\|_{H^1} \le c \cdot \frac{1}{n}.
\end{aligned}$$
Inserting this gives
$$\frac{1}{n} > \int_ω \sum_{i,j=1}^{k} \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr)^2 - c \cdot \frac{1}{n},$$

and so we deduce that
$$\int_ω \sum_{i,j=1}^{k} \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr)^2 \le c \cdot \frac{1}{n}.$$
But this produces a contradiction, as we can bound
$$\int_ω \sum_{i,j=1}^{k} \bigl( (τ^i)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (τ^j) \bigr)^2 \le c \int_ω \sum_{i,j=1}^{k} \bigl( (∂_iψ)^t H_{\overset{\leftrightarrow}{F}_n}(ψ(x)) (∂_jψ) \bigr)^2,$$

and thus we would have
$$1 \le \|f_n\|_{H_T^2(Ϻ)} < c\,\frac{1}{n} \qquad \forall\, n ∈ ℕ.$$
K
4.2 Unisolvency in Tangential Calculus
Since a second order tangential derivative is now available, we can ask ourselves if there are also functions in H²(Ϻ) that have optimal second order energy under suitable constraints, in particular interpolation constraints on some finite set Ξ ⊆ Ϻ. That is, whether there is a function f* ∈ H²(Ϻ) that interpolates given function values at the points of a finite set Ξ ⊆ Ϻ such that f* minimises

$$Є_H(f, f) = \int_Ϻ \sum_{i,j=1}^{k} \bigl| H_f^Ϻ(τ^i, τ^j) \bigr|^2.$$
In ℝ푑, it is well-known that this is possible under some mild conditions on Ξ: As long as Ξ is unisolvent for the linear polynomials P2(ℝ푑), so there is no linear polynomial other than the zero function that vanishes in all points, the minimum exists, and it is even explicitly known (cf. [105]). And the check for unisolvency is not too hard, either: Ξ is unisolvent for P2(ℝ푑) if and only if Ξ contains at least 푑 + 1 points that are not part of a common affine subspace of ℝ푑 (cf. [105]).
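The Euclidean unisolvency check mentioned here is a simple rank condition, which we sketch below (our own illustration): a linear polynomial a₀ + ⟨a, x⟩ vanishing on Ξ corresponds to a null vector of the matrix with rows (1, ξ), so Ξ is unisolvent for the linear polynomials iff that matrix has full column rank d + 1:

```python
import numpy as np

def unisolvent_linear(Xi):
    # Xi: (n, d) array of points; a linear polynomial a0 + <a, x> that vanishes
    # on all of Xi corresponds to a null vector of the (n, d+1) matrix [1 | Xi]
    A = np.hstack([np.ones((len(Xi), 1)), Xi])
    return np.linalg.matrix_rank(A) == Xi.shape[1] + 1

plane   = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])  # all in z = 0
simplex = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(unisolvent_linear(plane), unisolvent_linear(simplex))        # False True
```

The first point set lies in the common affine subspace z = 0 (the polynomial z vanishes there), the second does not.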
We hope that a similar concept will also hold on more general ESMs. So we ask ourselves what amount and distribution of points Ξ is necessary to determine that the kernel of the second order tangential derivative operator is trivial if we demand 푓(ξ) = 0 for all ξ ∈ Ξ.
4.17 Remark: In the course of this section, we treat only the case of C²-functions on Ϻ. However, this generalises directly to the distributional case: To deduce this, it suffices to verify that any function in the kernel of Є_H(f, f) actually has to be twice continuously differentiable. To this end, we first conclude that whenever the quadratic form Є_H(f, f) vanishes for a function f ∈ H²(Ϻ), then necessarily Δ_Ϻ f will vanish as well. This implies that a function from the kernel is necessarily a harmonic function on Ϻ. And one deduces from the regularity theory of elliptic PDEs (cf. [83, Th. 67, p. 281]) that any of these functions has to be C^∞ in case of a smooth Ϻ!
A suitable condition for points on an ESM Ϻ is easily determined if Ϻ ∈ 필^k_cp(ℝ^d), so Ϻ is compact and thus has no boundary. It comes by application of the tangential version of Green's theorem:
4.18 Theorem Let Ϻ ∈ 필^k_cp(ℝ^d). Then any function f ∈ C²(Ϻ) whose tangential Hessian vanishes identically on Ϻ is constant, and the same holds true if Δ_Ϻ f vanishes identically.
Proof: With all intrinsic second derivatives vanishing, so does in particular the Laplace-Beltrami operator applied to 푓. So we are in the second situation and have there that
ΔϺ푓(푥) = 0 ∀푥 ∈ Ϻ.
Then we employ Green's theorem to obtain
0 = ∫_Ϻ 푓 ΔϺ푓 = −∫_Ϻ ⟨∇Ϻ푓, ∇Ϻ푓⟩ ≤ 0.
Consequently ∇Ϻ푓 = 0 everywhere, and therefore 푓 must be constant. K
So whenever Ϻ is compact, a single point suffices to reduce the kernel to the zero function. However, when Ϻ has a boundary, the conditions have to be more elaborate: even a domain in ℝ^푑 provides a counterexample. Of course, we could employ boundary conditions, for example Neumann conditions, to achieve the same result just as we did above. On the other hand, why should one use boundary conditions if there is no need for them? So we would like to have a comparable result by simply considering a finite discrete set of points like in the Euclidean case. This hope and the correspondence to the common Euclidean case justify the following definition:
4.19 Definition For Ϻ ∈ 필^푘_bd(ℝ^푑), a set Ξ ⊆ Ϻ of discrete points is said to be D^ℓ_Ϻ-unisolvent if the only function from the kernel of the ℓth order tangential derivative operator that vanishes at all points of Ξ is the constant zero function.
4.20 Remark: Obviously, any superset of some D^ℓ_Ϻ-unisolvent set is D^ℓ_Ϻ-unisolvent itself.
A suitable condition for unisolvency in the case with boundary is easily determined for curves: There is an obvious one-to-one correspondence between the tangential derivative of order ℓ of a smooth curve ɣ (seen as an ESM in its own right) and the derivative of order ℓ on the parameter interval for an arc length parameterisation γ of the curve ɣ: they effectively coincide, so
D^ℓ_ɣ푓(푥) = D^ℓ(푓 ∘ γ)(푥̃), 푥 = γ(푥̃), 푥̃ ∈ 퐼
for the arc length parameterisation γ ∶ 퐼 → ℝ^푑. So for any curve, two points will suffice for the case ℓ = 2, and for a closed curve, even one point will do the job.
4.21 Conclusion On an ESM ɣ that is an open curve, any set of at least ℓ points is D^ℓ_Ϻ-unisolvent.
4.2.1 Unisolvency and Curvature
Surprisingly, there is a comparably simple characterisation of arbitrary ESMs of dimension 푘 ≥ 2 that admit no functions with vanishing Hessian other than the constant functions, even if they have nonempty boundary. If in particular 푘 = 2, then it turns out that only a surface whose Gaussian curvature〈8〉 vanishes identically can ever carry nonconstant functions with vanishing Hessian. That is, any surface where this kernel is nontrivial must necessarily be a developable surface. We will deduce this condition now by a sequence of suitable arguments.
But before we come to that, we need a handful of appropriate definitions, stated here in some specialised version but based on the general Riemannian setting. They can be found for example in [9, Ch. 1], [34, Ch. 4] and [51, Ch. II and III], on which we shall rely here:
4.22 Definition Let Ϻ ∈ 필^푘_bd(ℝ^푑). Let ℾ(Ϻ) be the set of all smooth tangent vector fields on Ϻ.
1. Let ռ ∈ ℾ(Ϻ) and write ռ(푦) = (ռ_1(푦), ..., ռ_푑(푦))^t. Then we define the (tangential) covariant derivative ∇_v ռ(푦) of ռ in 푦 along a vector field v ∈ ℾ(Ϻ) by
∇_v ռ(푦) = ((DϺ ռ_1(푦))v(푦), ..., (DϺ ռ_푑(푦))v(푦))^t.
2. We call ռ ∈ ℾ(Ϻ) a parallel vector field if ∇_v ռ(푦) = 0 for any 푦 ∈ Ϻ and any v ∈ ℾ(Ϻ).
3. We call ռ ∈ ℾ(Ϻ) an integrable vector field if each curve integral exists and does not depend on the actual path from its start to its end.
4.23 Example: The gradient field of any function with vanishing Hessian is, by the definition of parallel vector fields, such a parallel vector field: If the tangential Hessian vanishes, then in particular every covariant derivative of the gradient along any vector field vanishes. Further, any such field is integrable according to the above definition.
We begin now with a couple of subsequent lemmata that provide us with useful facts about covariant derivatives and parallel vector fields:
4.24 Lemma Let Ϻ ∈ 필^푘_bd(ℝ^푑), let 푦 ∈ Ϻ and let ռ, v, w, x ∈ ℾ(Ϻ). Then the following rules hold locally around 푦:
1. ∇_{ηx+w} ռ = η∇_x ռ + ∇_w ռ for any η ∈ C^∞(Ϻ).
2. ∇_w(훼ռ + v) = 훼∇_w ռ + ∇_w v for any 훼 ∈ ℝ.
3. D_w⟨ռ, v⟩ = ⟨∇_w ռ, v⟩ + ⟨ռ, ∇_w v⟩, where D_w 푓 ∶= (DϺ푓)w for any 푓 ∈ C^∞(Ϻ).
If w, x are globally linearly independent, all relations hold globally as well.
〈8〉We recall that in ℝ³ the Gaussian curvature К(푥) of a surface Ϻ at a point 푥 ∈ Ϻ is the product of the principal curvatures к_1(푥) and к_2(푥). These are in turn the maximal and the minimal normal curvature at 푥 taken over all geodesics through 푥.
Proof: The first two are direct consequences of the definition. The third is then also easily seen: By definition, the tangential derivative satisfies the product rule in the form
DϺ(푓 ⋅ 푔) = 푔DϺ푓 + 푓DϺ푔.
Then we suppose that ռ = (ռ_1, ..., ռ_푑)^t and v = (ᴠ_1, ..., ᴠ_푑)^t and choose an arbitrary 푖 ∈ {1, ..., 푑}. We obtain
DϺ(ռ_푖ᴠ_푖) = ᴠ_푖DϺռ_푖 + ռ_푖DϺᴠ_푖,
and conclude by summation that
DϺ⟨ռ, v⟩ = ∑_{푖=1}^{푑} ᴠ_푖DϺռ_푖 + ∑_{푖=1}^{푑} ռ_푖DϺᴠ_푖.
Multiplication of both sums by w yields
D_w⟨ռ, v⟩ = ∑_{푖=1}^{푑} ᴠ_푖(DϺռ_푖)w + ∑_{푖=1}^{푑} ռ_푖(DϺᴠ_푖)w = ⟨∇_wռ, v⟩ + ⟨ռ, ∇_wv⟩. K
4.25 Lemma Let ռ, v ∈ ℾ(Ϻ) be parallel vector fields. Then ⟨ռ, v⟩ is constant. In particular, ⟨ռ, ռ⟩ and thus ||ռ||_2 are constant.
Proof: By the product rule for covariant derivatives presented in the last lemma, it holds locally around an arbitrary 푦 ∈ Ϻ for any x ∈ ℾ(Ϻ) that does not vanish at 푦 that
D_x⟨ռ(푦), v(푦)⟩ = ⟨∇_xռ(푦), v(푦)⟩ + ⟨ռ(푦), ∇_xv(푦)⟩ = 0 + 0 = 0.
Consequently, ⟨ռ(푦), v(푦)⟩ must be constant in a neighbourhood of 푦, and thus globally constant because Ϻ was (always) supposed to be connected. K
4.26 Lemma Let ռ^1, ..., ռ^ℓ be ℓ ≤ 푘 linearly independent parallel tangent vector fields on Ϻ. Then there is a positively oriented pointwise orthonormal frame for the ℓ-section σ^ℓ_Ϻ defined as σ^ℓ_Ϻ(푥) ∶= spɑn(ռ^1(푥), ..., ռ^ℓ(푥)) ⊆ T_푥Ϻ. In particular, any system of linearly independent parallel vector fields consists of at most 푘 vector fields. All such maximal systems yield the same section.
Proof: Without restriction, we can assume ∣∣ռ^푖∣∣ = 1, 푖 = 1, ..., ℓ. Then we can apply Gram-Schmidt orthonormalisation in each point, whereby we obtain new vector fields v^1 = ռ^1 and for 휆 = 2, ..., ℓ
v^휆(푥) = ռ^휆(푥) − ∑_{푖=1}^{휆−1} ⟨ռ^휆(푥), v^푖(푥)⟩ v^푖(푥).
By induction, each v^푖 is a linear combination of ռ^1, ..., ռ^푖 whose coefficients are built from the inner products ⟨ռ^휇, ռ^휈⟩, and because of Lemma 4.25, any such coefficient is constant over Ϻ. Moreover, ⟨v^휆, v^휆⟩ expands into a sum of products of these constant inner products and is thus again constant over Ϻ, so the fields can be normalised by constant factors. Hence parallelity of the new vector fields is retained due to the ℝ-linearity of covariant derivatives. Finally, if we had another section ρ^ℓ_Ϻ spanned by fields w^1, ..., w^ℓ such that at a point 푥_0 we have σ^ℓ_Ϻ(푥_0) ≠ ρ^ℓ_Ϻ(푥_0), then there is a field w^0 with w^0(푥_0) linearly independent of v^1(푥_0), ..., v^ℓ(푥_0). As it is also parallel and vanishes nowhere, it has constant angle to any v^푖. Thus w^0 is linearly independent of these fields everywhere and ℓ is not maximal. K
This result tells us that we have a vector space structure on the parallel vector fields. Recalling that the gradient field of a function with identically vanishing tangential Hessian is parallel, we obtain the following result that gives us a con- struction method for functions with vanishing Hessian.
4.27 Theorem Let Ϻ ∈ 필^푘_sd(ℝ^푑) and let 푓 ∈ C²(Ϻ) be a nonconstant function with ЄH(푓, 푓) = 0. Let {ռ^1, ..., ռ^ℓ} be a maximal linearly independent set of integrable orthonormal parallel tangent vector fields. Then ∇Ϻ푓 has a unique representation of the form
∇Ϻ푓 = ∑_{푖=1}^{ℓ} 훼_푖 ռ^푖
with real coefficients 훼_1, ..., 훼_ℓ.
This result tells us in turn that the kernel of ЄH(푓, 푓) is a finite dimensional space. We can even deduce its dimension, which has to be ℓ + 1. And we can explicitly construct any function in the kernel if the gradient and a single function value at a point ξ_0 are given: If 휉 ∈ Ϻ is another point and ɣ is a curve in Ϻ that contains ξ_0, 휉, with arc length 퐿 and arc length parameterisation γ ∶ [0, 퐿] → Ϻ such that γ(0) = ξ_0 and γ(퐿) = 휉, then one obtains
푓(휉) = 푓(ξ_0) + ∫_0^퐿 ⟨∇Ϻ푓(γ(푡)), γ′(푡)⟩ 푑푡 = 푓(ξ_0) + ∑_{푖=1}^{ℓ} 훼_푖 ∫_0^퐿 ⟨ռ^푖(γ(푡)), γ′(푡)⟩ 푑푡.
Due to the integrability of the "basis fields" ռ^1, ..., ռ^ℓ, this is independent of the actual choice of ɣ and thus provides a well-defined expression that defines 푓.
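The reconstruction formula can be tested numerically. The following sketch (names and the flat test case are ours) evaluates the line integral along discretised paths; in the flat case ℓ = 2 the parallel fields are the constant coordinate fields, and path independence means two different paths to the same endpoint must return the same value:

```python
import numpy as np

def reconstruct(f0, alphas, fields, gamma):
    """Numerically evaluate
        f(xi) = f(xi0) + sum_i alpha_i * int <a^i(gamma(t)), gamma'(t)> dt
    along a discretised path gamma (an (n, 2)-array of points from xi0 to xi),
    using a midpoint rule on the chords."""
    val = f0
    seg = np.diff(gamma, axis=0)              # chord vectors ~ gamma'(t) dt
    mid = 0.5 * (gamma[:-1] + gamma[1:])      # midpoints of the chords
    for a, al in zip(fields, alphas):
        val += al * sum(np.dot(a(m), s) for m, s in zip(mid, seg))
    return val

# Flat example: on a domain in R^2 the parallel fields are the constant
# coordinate fields; f(x, y) = 1 + 2x + 3y, reconstructed from f(0, 0) = 1.
fields = [lambda m: np.array([1.0, 0.0]), lambda m: np.array([0.0, 1.0])]
t = np.linspace(0.0, 1.0, 2001)
straight = np.stack([t, t], axis=1)       # straight path from (0,0) to (1,1)
detour   = np.stack([t, t**3], axis=1)    # curved path to the same endpoint
v1 = reconstruct(1.0, [2.0, 3.0], fields, straight)
v2 = reconstruct(1.0, [2.0, 3.0], fields, detour)
print(round(v1, 6), round(v2, 6))   # both 6.0 = f(1, 1)
```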
4.28 Conclusion Let Ϻ ∈ 필^푘_sd(ℝ^푑) and let {ռ^1, ..., ռ^ℓ} ⊆ ℾ(Ϻ) be a maximal linearly independent system of integrable orthonormal parallel vector fields. Then any function 푓 ∈ C²(Ϻ) whose tangential Hessian vanishes on Ϻ is determined up to a constant by integration of the vector fields {ռ^1, ..., ռ^ℓ}.
The question is now what values we can expect for ℓ. As one would expect, the answer depends on the actual shape of Ϻ: All values between 0 and 푘 are possible, and for each choice there are representatives; one simply has to choose Ϻ = 핊^{푘−ℓ} × 퐼^ℓ for an arbitrary open interval 퐼 and ℓ = 0, ..., 푘.
The actual value can be hard to determine for an arbitrary Ϻ, but we shall see soon that for a wide range of ESMs, and in particular for all surfaces whose Gaussian curvature does not vanish identically, there are no such parallel fields. To accomplish this, we need some further definitions. Namely, we require some concepts of curvature for an ESM that give us in particular the usual Gaussian curvature of standard differential geometry (cf. [9, Ch. 1] and [34, Sect. 4.2/4.3]):
4.29 Definition Let Ϻ ∈ 필^푘_bd(ℝ^푑). For three vector fields ռ, v, w ∈ ℾ(Ϻ) we define the (tangential) curvature tensor Я^Ϻ_푥(ռ, v)w of ռ, v and w at 푥 by
Я^Ϻ_푥(ռ, v)w ∶= ∇_ռ∇_v w(푥) − ∇_v∇_ռ w(푥) − ∇_{∇_ռv} w(푥) + ∇_{∇_vռ} w(푥).
If in particular ռ, v are also linearly independent at 푥, then we define the (tangential) sectional curvature К^Ϻ_푥(ռ, v) at 푥 along the section σ²_Ϻ = spɑn(ռ, v) by
К^Ϻ_푥(ռ, v) ∶= ⟨Я^Ϻ_푥(ռ, v)v, ռ⟩ / (⟨ռ, ռ⟩⟨v, v⟩ − ⟨ռ, v⟩²).
4.30 Remark: The sectional curvature can locally be seen as the Gaussian curvature of a surface whose tangent plane is spanned by ռ and v: Taking two geodesics ɣ_ռ, ɣ_v through a point 푦 ∈ Ϻ with tangent directions ռ(푦), v(푦), this local surface is obtained as the image of the plane spanned by the preimages of these geodesics under the exponential map〈9〉 for 푦, so exp_{푦,Ϻ}(푢ռ + 푣v) for 푢, 푣 ∈ ]−ԑ, ԑ[.
We now give two useful properties of the curvature tensor from [51, Prop. 3.5]:
4.31 Lemma Let 푦 ∈ Ϻ and let ռ, v, w, x ∈ ℾ(Ϻ) such that ռ and v are linearly independent near 푦. Then the curvature tensor satisfies locally around 푦 the relations
Я^Ϻ_푦(ռ, v)x = −Я^Ϻ_푦(v, ռ)x,   ⟨Я^Ϻ_푦(ռ, v)w, x⟩ = −⟨Я^Ϻ_푦(ռ, v)x, w⟩.
〈9〉Cf. Sect. 9.1.1 for definition and properties of the exponential map.
Moreover, Я^Ϻ_푦(ռ, v)x vanishes at 푦 whenever the vector field x is parallel on Ϻ. Consequently, the curvature tensor vanishes identically for such parallel x.
Proof: The first two properties are proven in [51, Prop. 3.5], and the first one is also fairly obvious. The claim on parallel fields then follows by inserting the definition of parallelity. K
Now we can finally deduce the desired relation for the kernel of the tangential Hessian. It turns out to be a direct consequence of the following lemma:
4.32 Lemma Let Ϻ ∈ 필^푘_sd(ℝ^푑) and let 푓 ∈ C²(Ϻ) be a nonconstant function such that its tangential Hessian vanishes identically. Let ռ ∈ ℾ(Ϻ) be a vector field that is linearly independent of the gradient field at 푥 ∈ Ϻ. Then the sectional curvature К^Ϻ_푥(∇Ϻ푓, ռ) vanishes at 푥.
Proof: As the tangential gradient ∇Ϻ푓 must have constant norm over all of Ϻ and 푓 is nonconstant, ∇Ϻ푓 does not vanish anywhere. Hence it is a smooth, nonvanishing integrable vector field on Ϻ that is parallel as well. But by Lemma 4.31, the curvature tensor Я^Ϻ_푥(ռ, v)∇Ϻ푓 must vanish identically at any 푥 ∈ Ϻ for any vector fields ռ, v ∈ ℾ(Ϻ) that are linearly independent near 푥. This gives in particular that the sectional curvature must vanish at any 푥 for any section that contains ∇Ϻ푓. K
This last result is now the direct justification of the following condition:
4.33 Theorem If Ϻ ∈ 필^푘_sd(ℝ^푑) contains a point 휉 such that for any two vector fields ռ, v ∈ ℾ(Ϻ) that are linearly independent at 휉 the sectional curvature does not vanish at 휉, then the tangential Hessian vanishes only for constant functions and any nonempty Ξ ⊆ Ϻ is D²_Ϻ-unisolvent.
If in particular 푘 = 2 and the Gaussian curvature of Ϻ does not vanish identically, then any nonempty Ξ ⊆ Ϻ is D²_Ϻ-unisolvent.
Proof: The general statement is obvious by the previous lemmas, and the particular statement for surfaces follows because for a surface there is only one sectional curvature at each point, as any section coincides with the whole tangent plane. And that sectional curvature coincides with the Gaussian curvature of the surface (cf. [73, pp. 145/146]). K
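For surfaces given as graphs z = 푔(푥, 푦), the hypothesis of the theorem can be checked pointwise via the classical formula К = (푔_푥푥푔_푦푦 − 푔_푥푦²)/(1 + 푔_푥² + 푔_푦²)². A minimal sketch (the function name and the two test surfaces are ours):

```python
def gauss_curvature_graph(g_x, g_y, g_xx, g_xy, g_yy):
    """Gaussian curvature of the graph surface z = g(x, y), evaluated from
    the partial derivatives of g at one point:
        K = (g_xx*g_yy - g_xy^2) / (1 + g_x^2 + g_y^2)^2 ."""
    return (g_xx * g_yy - g_xy**2) / (1.0 + g_x**2 + g_y**2) ** 2

# Paraboloid z = x^2 + y^2 at the point above (1, 0): K > 0, so by
# Theorem 4.33 a single point already makes Xi D^2-unisolvent there.
K_par = gauss_curvature_graph(2.0, 0.0, 2.0, 0.0, 2.0)
# Parabolic cylinder z = x^2 at the same base point: K = 0 everywhere,
# a developable surface, so the theorem gives no conclusion.
K_cyl = gauss_curvature_graph(2.0, 0.0, 2.0, 0.0, 0.0)
print(K_par > 0, K_cyl == 0)   # True True
```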
4.2.2 Unisolvency and Isometry
The last theorem implies that in the surface case, nonconstant functions can have identically vanishing Hessian only if the Gaussian curvature of the surface vanishes identically, too. Surfaces whose Gaussian curvature vanishes identically are called developable surfaces, and they can locally be obtained by "rolling up" a portion of the plane in an isometric way. Following [34, Ch. 1, Def. 2.2], a diffeomorphism φ ∶ ω → Ϻ with ω ⊆ ℝ^푘 and φ(ω) ⊆ Ϻ is called an isometry between ω and φ(ω) if for any 푣, 푤 ∈ ℝ^푘 and any 푥 ∈ ω it holds that
⟨푣, 푤⟩ = ⟨Dφ(푥)푣, Dφ(푥)푤⟩.
Correspondingly, a local diffeomorphism is called a local isometry if the respective relation holds locally.
4.34 Remark: This implies in particular that the linear map Dφ(푥) ∶ ℝ^푘 → T_{φ(푥)}Ϻ is an orthogonal linear map.
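The orthogonality of Dφ(푥) is easy to verify numerically for a concrete isometry, for example unrolling a plane strip onto a cylinder (the parameterisation and names are our illustrative choice, not from the text):

```python
import numpy as np

def dphi(u, v, r=1.0):
    """Jacobian of the cylinder parameterisation
        phi(u, v) = (r*cos(u/r), r*sin(u/r), v),
    which maps a plane strip isometrically onto a cylinder of radius r."""
    return np.array([[-np.sin(u / r), 0.0],
                     [ np.cos(u / r), 0.0],
                     [ 0.0,           1.0]])

# <Dphi v, Dphi w> = <v, w> for arbitrary v, w: the columns of Dphi(x)
# form an orthonormal basis of the tangent plane, so Dphi(x) is an
# orthogonal linear map R^2 -> T_phi(x)M as stated in Remark 4.34.
rng = np.random.default_rng(0)
J = dphi(0.7, 0.3)
v, w = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose(np.dot(J @ v, J @ w), np.dot(v, w)))   # True
```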
We deduce now that if a function 푓 ∶ Ϻ → ℝ for Ϻ ∈ 필^푘_bd(ℝ^푑) has identically vanishing Hessian and φ ∶ ω → Ϻ is an isometry, then 푓 ∘ φ must be a linear polynomial on ω. This becomes apparent if we consider the following lemma. It states a relation in terms of tangential differential calculus that is one of the key relations of Riemannian geometry in the general case.
4.35 Lemma Let Ϻ ∈ 필^푘_bd(ℝ^푑) and let φ ∶ ω → Ϻ be a smooth isometry between ω ⊆ ℝ^푘 and φ(ω) ⊆ Ϻ. Then φ preserves tangential derivatives and covariant derivatives. In particular, it preserves parallelism.
Proof: First we consider an arbitrary smooth function 푔 ∶ Ϻ → ℝ. By the chain rule it is clear that
D(푔 ∘ φ)(푥) = D푔(φ(푥)) ⋅ Dφ(푥),
and by definition, Dφ(푥) maps vectors in ℝ^푘 into the tangent space T_{φ(푥)}Ϻ for any 푥 ∈ ω. As Dφ is an orthogonal linear map, it gives us a locally smooth orthonormal frame of the tangent space. Then the very definition of tangential derivatives implies
D(푔 ∘ φ)(푥) = DϺ푔(φ(푥)).
Because the tangential covariant derivative is defined by component-wise application of tangential differentiation, the remaining claims are direct consequences of this relation. K
However, vanishing Gaussian curvature is only a necessary condition, not a sufficient one. Still, any value ℓ ∈ {0, 1, 2} for the number of basis fields can and does actually occur: ℓ = 2 is obviously represented by a domain in ℝ², and by the last lemma this is retained if we deform that domain isometrically. The case ℓ = 1 is represented by a cylinder, and the case ℓ = 0 by a truncated cone or a flat torus (with or without holes). To see this, we could use the techniques we already have at hand at this point, but the necessary arguments would be very specific to the respective situations. So instead, we present a more general characterisation of unisolvency in terms of (Riemannian) developing and covering maps. These two special types of locally isometric maps from and to an ESM are introduced in the following definition.
4.36 Definition Let Ϻ ∈ 필^푘_bd(ℝ^푑) and let ω ∈ 핃핚핡^∗_푘 be connected.
1. A smooth, surjective map ϙ ∶ ω → Ϻ is called a covering (map)〈10〉 of Ϻ if for any 푥 ∈ Ϻ there is a neighbourhood Θ(푥) ⊆ Ϻ such that
ϙ^{-1}(Θ(푥)) = ⋃_{푗∈퐽(푥)} Θ^ω_푗(푥),
where the sets {Θ^ω_푗(푥)} are disjoint open subsets of ω such that ϙ|_{Θ^ω_푗(푥)} ∶ Θ^ω_푗(푥) → Θ(푥) is a diffeomorphism. It is called a Riemannian covering (map) of Ϻ if each ϙ|_{Θ^ω_푗(푥)} ∶ Θ^ω_푗(푥) → Θ(푥) is an isometry.
2. A smooth map ᴨ ∶ Ϻ → ℝ^푘 is called a (global) developing (map) of Ϻ if for any 푥 ∈ Ϻ there is a neighbourhood Θ(푥) ⊆ Ϻ of 푥 such that ᴨ|_{Θ(푥)} is a diffeomorphism of Θ(푥) onto ᴨ(Θ(푥)). It is further called a (global) Riemannian developing (map) of Ϻ if each ᴨ|_{Θ(푥)} is an isometry.
If we have a Riemannian covering or developing map, then we can deduce necessary and sufficient conditions for unisolvency. We start with the case of a Riemannian developing:
4.37 Theorem Let Ϻ ∈ 필^푘_bd(ℝ^푑) admit a Riemannian developing ᴨ ∶ Ϻ → ℝ^푘 and let Ξ ⊆ Ϻ be finite. Then the set Ξ is D²_Ϻ-unisolvent if and only if the set ᴨ(Ξ) ⊆ ℝ^푘 is P2(ℝ^푘)-unisolvent.
If in the same setting Ξ is not unisolvent, then in particular ᴨ(Ξ) is part of an affine subspace V_0 + ү_0 of ℝ^푘. The normal space V_0^⊥ of this subspace determines the integrable, nonvanishing parallel gradient fields of any function that vanishes on Ξ in the following way: Any such gradient field has the form
(Dᴨ(푥))^t푣
for a constant nonvanishing vector 푣 ∈ V_0^⊥, and any maximal system of linearly independent gradient fields of that form has dim(V_0^⊥) elements.
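As an illustration of this theorem (our own example, not from the text): a cylinder admits the Riemannian developing (푥, 푦, 푧) ↦ (푟 · angle, 푧) away from the cut, so D²-unisolvency of Ξ on the cylinder can be decided by the planar rank test applied to the developed points:

```python
import numpy as np

def develop_cylinder(p, r=1.0):
    """A local Riemannian developing of the cylinder x^2 + y^2 = r^2 into
    the plane: (x, y, z) -> (r * angle, z) (valid away from the cut)."""
    x, y, z = p
    return np.array([r * np.arctan2(y, x), z])

def unisolvent_after_developing(points, r=1.0):
    """Theorem 4.37: Xi on the cylinder is D^2-unisolvent iff its developed
    image is unisolvent for the linear polynomials in R^2, i.e. iff the
    matrix [1 | x] built from the developed points has full rank 3."""
    P = np.array([develop_cylinder(p, r) for p in points])
    V = np.hstack([np.ones((len(P), 1)), P])
    return np.linalg.matrix_rank(V) == 3

# Three points on a helix line of the cylinder develop to collinear points
# in the plane, so they are not unisolvent; spreading them breaks this.
bad  = [(1.0, 0.0, 0.0),
        (np.cos(1.0), np.sin(1.0), 1.0),
        (np.cos(2.0), np.sin(2.0), 2.0)]
good = [(1.0, 0.0, 0.0),
        (np.cos(1.0), np.sin(1.0), 0.0),
        (1.0, 0.0, 1.0)]
print(unisolvent_after_developing(bad), unisolvent_after_developing(good))   # False True
```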
Proof: Let 푓 be a smooth function whose tangential Hessian vanishes identically on Ϻ. Then the function 푓 ∘ ᴨ^{-1} has to be a linear polynomial on any domain ᴨ(Θ(푥)): By Lemma 4.35 we can deduce that in particular tangential and covariant
〈10〉The letter "Ϙ, ϙ" is the Greek letter "Koppa/Qoppa", representing the Latin "Q". It fell out of use in Attic Greek during the classical era, remaining only as a numeral sign.
derivatives are preserved locally by local isometries, and these reduce to standard derivatives in any ᴨ(Θ(푥)); so the standard (Euclidean) Hessian of 푓 ∘ ᴨ^{-1} must vanish there, and thus 푓 ∘ ᴨ^{-1} is a linear polynomial on any such set ᴨ(Θ(푥)). Now we notice that on any two ᴨ(Θ(푦)), ᴨ(Θ(푧)) with nonempty intersection these polynomials have to coincide. Consequently, they coincide globally, and sufficiency follows directly.
To prove necessity, we provide a suitable integrable parallel vector field if the stated requirement is violated. If the points of ᴨ(Ξ) are not P2(ℝ^푘)-unisolvent, then by [105] they lie in an affine subspace V_0 + ү_0 of ℝ^푘. So we can choose at least one fixed unit normal to this subspace, say 푣^1 ∈ 핊^{푘−1}. Then we define a function 푝_1(푧) ∶= 푧^t푣^1 for 푧 ∈ ω, which is a linear polynomial on ω. If we now define 푓_1(푥) ∶= 푝_1(ᴨ(푥)), then, as isometries preserve the tangential and covariant derivative, 푓_1 must have vanishing Hessian. Repeating this process for further mutually orthogonal vectors 푣^2, ..., 푣^ℓ ∈ 핊^{푘−1} from the normal space V_0^⊥ determines a set of functions that are obviously linearly independent, and the set of corresponding fields is maximal by construction. K
The important advantage of this characterisation is that we do not need a global isometry; a local but surjective isometry is sufficient. This can provide a considerable advantage, as Fig. 4.1 shows: On the one hand, the surface depicted there from above and from behind is not globally isometric to any domain in ℝ². On the other hand, as it has curvature only in one direction, like a cylinder, it can be mapped globally onto a domain in the Euclidean plane such that the map is locally isometric. Thus it suffices to consider the image of Ξ under this map to deduce unisolvency.
Unfortunately, we cannot apply the same idea directly if instead of a developing we have a covering of Ϻ. In that case, we would have to insert ϙ^{-1} into the linear