GEOMETRIC VALUATIONS
CORDELIA E. CSAR, RYAN K. JOHNSON, AND RONALD Z. LAMBERTY
Work begun on June 12, 2006. Ended on July 28, 2006.
Introduction
This paper is the result of a National Science Foundation-funded Research Experience for Undergraduates (REU) at the University of Notre Dame during the summer of 2006. The REU was directed by Professor Frank Connolly, and the research project was supervised by Professor Liviu Nicolaescu. The topic of our independent research project for this REU was geometric probability. Consequently, the first half of this paper is a study of the book Introduction to Geometric Probability by Daniel Klain and Gian-Carlo Rota. While closely following the text, we have attempted to clarify and streamline the presentation and ideas contained therein. In particular, we highlight and emphasize the key role played by the Radon transform in the classification of valuations. In the second part we take a closer look at the special case of valuations on polyhedra.

Our primary focus in this project is a type of function called a “valuation”. A valuation associates a number to each “reasonable” subset of Rn so that the inclusion-exclusion principle is satisfied. Examples of such functions include the Euclidean volume and the Euler characteristic. Since the objects we are most interested in lie in a Euclidean space, and since we are interested in properties which are independent of the location of the objects in space, we are motivated to study “invariant” valuations on certain subsets of Rn. The goal of the first half of this paper, therefore, is to characterize all such valuations for certain natural collections of subsets of Rn. This undertaking turned out to be quite complex. We must first spend time introducing the abstract machinery of valuations (Chapter 1), and then apply this machinery to the simpler case of pixelations (Chapter 2). Chapter 3 then sets up the language of polyconvex sets, and explains how to use the Radon transform to generate many new examples of valuations. These new valuations have probabilistic interpretations.
In Chapter 4 we finally complete the characterization of invariant valuations on polyconvex sets. Namely, we show that all such valuations are obtainable by the Radon transform technique from a single valuation, the Euler characteristic. We celebrate this achievement in Chapter 5 by exploring applications of the theory. With the valuations completely characterized, we turn our attention toward special polyconvex sets: polyhedra, that is, finite unions of convex polyhedra. These polyhedra can be triangulated, and in Chapter 6 we investigate the combinatorial features of a triangulation, or simplicial complex. In Chapter 7 we prove a global version of the inclusion-exclusion principle for simplicial complexes, known as the Möbius inversion formula. Armed with this result, we then explain how to compute the valuations of a polyhedron using data coming from a triangulation. In Chapter 8 we use the technique of integration with respect to the Euler characteristic to produce combinatorial versions of Morse theory and of the Gauss–Bonnet formula. In the end, we arrive at an explicit formula expressing the Euler characteristic of a polyhedron in terms of measurements taken at the vertices. These measurements can be interpreted either as curvatures, or as certain averages of Morse indices.
The preceding list says little about why one would be interested in these topics in the first place. Consider a coffee cup and a donut. Now, a topologist would tell you that these two items are “more or less the same.” But that’s ridiculous! How many times have you eaten a coffee cup? Seriously, even aside from the fact that they are made of different materials, you have to admit that there is a geometric difference between a coffee cup and a donut. But what is it? More generally, consider the shapes around you. What is it that distinguishes them from each other, geometrically? We know that certain functions, such as the Euler characteristic and the volume, tell part of the story. These functions share a number of extremely useful properties, such as the inclusion-exclusion principle, invariance, and “continuity”. This motivates us to consider all such functions. But what are all such functions? In order to apply these tools to the study of polyconvex sets, we must first understand the tools at our disposal. Our attempts, described in the previous paragraphs, result in a full characterization of valuations on polyconvex sets, and even lead us to a number of useful and interesting formulæ for computing these numbers. We hope that our efforts to these ends adequately communicate this subject’s richness, which has been revealed to us by our research advisor Liviu Nicolaescu. We would like to thank him for his enthusiasm in working with us. We would also like to thank Professor Connolly for his dedication in directing the Notre Dame REU, and the National Science Foundation for supporting undergraduate research.
CONTENTS
Introduction
1. Valuations on lattices of sets
§1.1. Valuations
§1.2. Extending Valuations
2. Valuations on pixelations
§2.1. Pixelations
§2.2. Extending Valuations from Par to Pix
§2.3. Continuous Invariant Valuations on Pix
§2.4. Classifying the Continuous Invariant Valuations on Pix
3. Valuations on polyconvex sets
§3.1. Convex and Polyconvex Sets
§3.2. Groemer’s Extension Theorem
§3.3. The Euler Characteristic
§3.4. Linear Grassmannians
§3.5. Affine Grassmannians
§3.6. The Radon Transform
§3.7. The Cauchy Projection Formula
4. The Characterization Theorem
§4.1. The Characterization of Simple Valuations
§4.2. The Volume Theorem
§4.3. Intrinsic Valuations
5. Applications
§5.1. The Tube Formula
§5.2. Universal Normalization and Crofton’s Formulæ
§5.3. The Mean Projection Formula Revisited
§5.4. Product Formulæ
§5.5. Computing Intrinsic Volumes
6. Simplicial complexes and polytopes
§6.1. Combinatorial Simplicial Complexes
§6.2. The Nerve of a Family of Sets
§6.3. Geometric Realizations of Combinatorial Simplicial Complexes
7. The Möbius inversion formula
§7.1. A Linear Algebra Interlude
§7.2. The Möbius Function of a Simplicial Complex
§7.3. The Local Euler Characteristic
8. Morse theory on polytopes in R3
§8.1. Linear Morse Functions on Polytopes
§8.2. The Morse Index
§8.3. Combinatorial Curvature
References
1. Valuations on lattices of sets
§1.1. Valuations.

Definition 1.1. (a) For every set S we denote by P(S) the collection of subsets of S, and by Map(S, Z) the set of functions f : S → Z. The indicator (or characteristic) function of a subset A ⊂ S is the function I_A ∈ Map(S, Z) defined by

I_A(s) = 1 if s ∈ A, and I_A(s) = 0 if s ∉ A.

(b) If S is a set, then an S-lattice (or a lattice of sets) is a collection L ⊂ P(S) such that ∅ ∈ L and A, B ∈ L ⟹ A ∩ B, A ∪ B ∈ L.

(c) If L is an S-lattice, then a subset G ⊂ L is called generating if ∅ ∈ G, if A, B ∈ G ⟹ A ∩ B ∈ G, and if every A ∈ L is a finite union of sets in G. ⊓⊔
Definition 1.2. Let G be an Abelian group and S a set.

(a) A G-valuation on an S-lattice L is a function µ : L → G satisfying the following conditions:
(a1) µ(∅) = 0;
(a2) µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B) for all A, B ∈ L. (Inclusion-Exclusion)

(b) If G is a generating set of the S-lattice L, then a G-valuation on G is a function µ : G → G satisfying the following conditions:
(b1) µ(∅) = 0;
(b2) µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B), for every A, B ∈ G such that A ∪ B ∈ G. ⊓⊔
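As a sanity check, the conditions of Definition 1.2 can be tested numerically. The following Python sketch (our own illustration; the helper names `indicator`, `mu`, and `powerset` are ours, not from the text) verifies (a1) and (a2) for the cardinality valuation A ↦ #A on the full lattice P(S) of a finite set S.

```python
# Check Definition 1.2 for the cardinality valuation mu(A) = #A
# on the lattice P(S) of all subsets of a finite set S.
from itertools import chain, combinations

S = {1, 2, 3, 4}

def indicator(A):
    """Return the indicator function I_A : S -> {0, 1} of Definition 1.1."""
    return lambda s: 1 if s in A else 0

def mu(A):
    """The cardinality valuation."""
    return len(A)

def powerset(S):
    """All subsets of S, as a list of sets."""
    s = list(S)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# (a1): the valuation vanishes on the empty set.
assert mu(set()) == 0

# (a2): inclusion-exclusion holds for every pair A, B in P(S).
for A in powerset(S):
    for B in powerset(S):
        assert mu(A | B) == mu(A) + mu(B) - mu(A & B)
```

The same loop can be rerun with any other candidate valuation in place of `mu` to test whether it satisfies (a2).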
The inclusion-exclusion identity in Definition 1.2 implies the generalized inclusion-exclusion identity
µ(A_1 ∪ A_2 ∪ ⋯ ∪ A_n) = Σ_i µ(A_i) − Σ_{i<j} µ(A_i ∩ A_j) + Σ_{i<j<k} µ(A_i ∩ A_j ∩ A_k) − ⋯ (1.1)

Example 1.3. (a) The map I_• : P(S) → Map(S, Z) given by P(S) ∋ A ↦ I_A ∈ Map(S, Z) is a valuation. This follows from the fact that the indicator functions satisfy the following identities:

I_{A∩B} = I_A I_B, (1.2a)

I_{A∪B} = I_A + I_B − I_{A∩B} = I_A + I_B − I_A I_B = 1 − (1 − I_A)(1 − I_B). (1.2b)

(b) Suppose S is a finite set. Then the cardinality map # : P(S) → Z, A ↦ #A := the cardinality of A, is a valuation.

(c) Suppose S = R2 and L consists of the measurable bounded subsets of the Euclidean plane. The map which associates to each A ∈ L its Euclidean area, area(A), is a real-valued valuation. Note that the lattice point count map λ : L → Z, A ↦ #(A ∩ Z2), is a Z-valuation.

(d) Let S and L be as above. Then the Euler characteristic defines a valuation χ : L → Z, A ↦ χ(A). ⊓⊔

If (G, +) is an Abelian group and R is a commutative ring with 1, then we denote by Hom_Z(G, R) the set of group morphisms G → R, i.e. the set of maps ϕ : G → R such that ϕ(g_1 + g_2) = ϕ(g_1) + ϕ(g_2) for all g_1, g_2 ∈ G. We will refer to the maps in Hom_Z(G, R) as Z-linear maps from G to R.

Suppose L is an S-lattice. We denote by S(L) the (additive) subgroup of Map(S, Z) generated by the functions I_A, A ∈ L. We will refer to the functions in S(L) as L-simple functions, or simple functions if the lattice L is clear from the context.

Definition 1.4. Suppose L is an S-lattice and G is an Abelian group. A G-valued integral on L is a Z-linear map ∫ : S(L) → G, S(L) ∋ f ↦ ∫ f ∈ G. ⊓⊔

Observe that any G-valued integral on an S-lattice L defines a valuation µ : L → G by setting µ(A) := ∫ I_A. The inclusion-exclusion formula for µ follows from (1.2a) and (1.2b). We say that µ is the valuation induced by the integral; conversely, when a valuation arises in this way we say that the valuation induces an integral. In general, a generating set of a lattice has a much simpler structure, and it is possible to construct many valuations on it.
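The generalized inclusion-exclusion identity (1.1) is easy to confirm experimentally. Below is a small Python check (our own code, not from the text) that evaluates both sides of (1.1) for the cardinality valuation of Example 1.3(b) on a few randomly chosen finite sets.

```python
# Verify identity (1.1) for the cardinality valuation mu(A) = #A:
# mu(A_1 u ... u A_n) equals the alternating sum of mu over all
# nonempty intersections of the A_i.
from itertools import combinations
from functools import reduce
import random

random.seed(0)
population = range(20)
sets = [set(random.sample(population, 8)) for _ in range(4)]  # A_1, ..., A_4

def mu(A):
    """Cardinality valuation."""
    return len(A)

# Left-hand side of (1.1).
union = set().union(*sets)
lhs = mu(union)

# Right-hand side of (1.1): sum over nonempty index subsets, with sign
# (-1)^(r+1) for an intersection of r of the sets.
rhs = 0
n = len(sets)
for r in range(1, n + 1):
    for idx in combinations(range(n), r):
        inter = reduce(set.intersection, (sets[i] for i in idx))
        rhs += (-1) ** (r + 1) * mu(inter)

assert lhs == rhs
```

Replacing `mu` with, say, a weighted count gives the same equality, illustrating that (1.1) follows formally from the pairwise identity (a2).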
A natural question arises: is it possible to extend to the entire lattice a valuation defined on a generating set? The next result describes necessary and sufficient conditions for this to happen.

§1.2. Extending Valuations.

Theorem 1.5 (Groemer’s Integral Theorem). Let G be a generating set for a lattice L and let µ : G → H be a valuation on G, where H is an Abelian group. The following statements are equivalent.

(1) µ extends uniquely to a valuation on L.
(2) µ satisfies the inclusion-exclusion identities

µ(B_1 ∪ B_2 ∪ ⋯ ∪ B_n) = Σ_i µ(B_i) − Σ_{i<j} µ(B_i ∩ B_j) + ⋯

for all B_1, …, B_n ∈ G such that B_1 ∪ ⋯ ∪ B_n ∈ G.
(3) µ induces an H-valued integral on the space S(L) of L-simple functions.

Proof.

• (1) ⟹ (2). Note that the second statement is not trivial, because B_1 ∪ ⋯ ∪ B_{n−1} is not necessarily in G. Suppose the valuation µ extends uniquely to a valuation on L. Then µ satisfies the inclusion-exclusion identity µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B) for all A, B ∈ L. Since B_1 ∪ ⋯ ∪ B_{n−1} ∈ L even if it is not in G, we can apply the inclusion-exclusion identity repeatedly to obtain the result.

• (2) ⟹ (3). We wish to construct a linear map ∫ : S(L) → H. To do this, we first note that by (1.2b) any function f ∈ S(L) can be written as

f = Σ_{i=1}^m α_i I_{K_i},

where K_i ∈ G and α_i ∈ Z. We thus define an integral as follows:

∫ (Σ_{i=1}^m α_i I_{K_i}) dµ := Σ_{i=1}^m α_i µ(K_i).

This map might not be well defined, since f could be represented in different ways as a linear combination of indicator functions of generating sets. We thus need to show that the above map is independent of such a representation. We argue by contradiction, and assume that f has two distinct representations f = Σ γ_i I_{A_i} = Σ β_i I_{B_i}, yet Σ γ_i µ(A_i) ≠ Σ β_i µ(B_i). Subtracting these equations and renaming the terms appropriately, we are left with the situation

Σ_{i=1}^m α_i I_{K_i} = 0 and Σ_{i=1}^m α_i µ(K_i) ≠ 0. (1.3)

Now we label the sets and their intersections

L_1 = K_1, …, L_m = K_m, L_{m+1} = K_1 ∩ K_2, L_{m+2} = K_1 ∩ K_3, …
We note that all the Li’s are in G, since the Ki’s and their intersections are in G. We then rewrite (1.3) in terms of the Li’s as p p aiILi =0 and aiµ(Li) =06 . (1.4) i=1 i=1 X X Now take q maximal such that p p aiILi =0 and aiµ(Li) =06 . (1.5) i=q i=q X X Note that 1 ≤ q Lq ⊂ Li. i=q+1 [ p Indeed, if x ∈ Lq r j=q+1 Lj then S p ILi (x)=0 ∀i =6 q and aq = aiILi (x)=0. i=q X This is impossible since aq =06 . Hence p p Lq = Lq ∩ Li = (Lq ∩ Li). i=q+1 ! i=q+1 [ [ Let us write Lq ∩ Li = Lji . Then, since i > q and by construction Li ⊂ Lj =⇒ j < i, we have that ji > q. Thus: p p Lq = (Lq ∩ Li)= Lji . i=q+1 i=q+1 [ [ Then we have n n 0 =6 aiµ(Li)= aqµ(Lq)+ aiµ(Li) i=q i=q+1 X X p n p = aqµ Lji + aiµ(Li)= biµLi, i=q+1 ! i=q+1 i=q+1 [ X X where the last equality is attained by applying the assumed inclusion/exclusion principle to the union and regrouping the terms. We now repeat exactly the same process with the expression involving the indicator func- tion. Then, p p p a I = a I p + a I = b I i Li q Si=q+1(Lq ∩Li) i Li i Li i=q i=q+1 i=q+1 X X X 10 Csar-Johnson-Lamberty Thus, p biILi =0 i=q+1 X However, this contradicts the maximality of q. Hence, the integral map is well defined, and we are done. • (3) =⇒ (1). Suppose µ defines an integral on the space of L-simple functions. Then for A ∈ G, IAdµ = µ(A). This motivates us to define an extension µ˜ of µ to L by R µ˜(A) := IAdµ = µ(A), A ∈ L. Z This definition is certainly unambiguous and its restriction to G is just µ, so we need only check that it is a valuation. Let A, B ∈ L. Then, µ˜(A ∪ B)= IA∪Bdµ = IA + IB − IA∩Bdµ Z Z = IAdµ + IBdµ − IA∩Bdµ =µ ˜(A)+˜µ(B) − µ˜(A ∩ B). Z Z Z Thus, µ˜ is an extension of µ to L. Moreover, it is unique. Suppose ν is another extension of µ. Then, given any A ∈ L we can write A = K1 ∪ . . . ∪ Kr for Ki ∈ G. Since µ˜ and ν are both valuations, both satisfy the generalized inclusion exclusion principle. Furthermore, since both are extensions of µ, both agree on G µ˜(A)=˜µ(K1 ∪ . . . 
∪ K_r) = Σ_{i=1}^r µ(K_i) − Σ_{i<j} µ(K_i ∩ K_j) + Σ_{i<j<k} µ(K_i ∩ K_j ∩ K_k) − ⋯ = ν(K_1 ∪ ⋯ ∪ K_r) = ν(A),

so that µ̃ = ν. ⊓⊔

2. Valuations on pixelations

§2.1. Pixelations. We now study valuations on a small class of easy-to-understand subsets of Rn. We will then use this knowledge to study valuations on much more general subsets of Rn. One of our main aims is to classify all the “nice” valuations on such subsets. It will turn out that the ideas and results of this section are analogous to results discussed later.

Definition 2.1. (a) An affine k-plane in Rn is the translate of a k-dimensional vector subspace of Rn. A hyperplane in Rn is an affine (n − 1)-plane.

(b) An affine transformation of Rn is a bijection T : Rn → Rn such that

T(λu + (1 − λ)v) = λT(u) + (1 − λ)T(v) for all λ ∈ R and u, v ∈ Rn.

The set Aff(Rn) of affine transformations of Rn is a group with respect to the composition of maps. ⊓⊔

Definition 2.2. An (orthogonal) parallelotope in Rn is a compact set P of the form

P = [a_1, b_1] × ⋯ × [a_n, b_n], a_i ≤ b_i, i = 1, …, n. ⊓⊔

We will often refer to an orthogonal parallelotope as a parallelotope, or even just a box.

Remark 2.3. It is entirely possible for any number of the intervals defining an orthogonal parallelotope to have length zero. Finally, note that the intersection of two parallelotopes is again a parallelotope. ⊓⊔

We denote by Par(n) the collection of parallelotopes in Rn, and we define a pixelation to be a finite union of parallelotopes. Observe that the collection Pix(n) of pixelations in Rn is a lattice, i.e. it is stable under finite intersections and unions.

Definition 2.4. (a) A pixelation P ∈ Pix(n) is said to have dimension n (or full dimension) if P is not contained in a finite union of hyperplanes.

(b) A pixelation P ∈ Pix(n) is said to have dimension k (k ≤ n) if P is contained in a finite union of k-planes but not in a finite union of (k − 1)-planes. ⊓⊔

The top part of Figure 1 depicts possible boxes in R2, while the bottom part depicts a possible pixelation in R2.

§2.2. Extending Valuations from Par to Pix.
By definition, Par(n) is a generating set of Pix(n). Consequently, we would like to know if we can extend valuations from Par(n) to Pix(n). The following theorem shows that we can do so whenever the valuations map into a commutative ring with 1.

Theorem 2.5. Let R be a commutative ring with 1. Then any valuation µ : Par(n) → R extends uniquely to a valuation on Pix(n).

Proof. Due to Groemer’s Integral Theorem, all we need to show is that µ gives rise to an integral on the space of functions generated by the indicator functions of boxes. Thus it suffices to show that

Σ_{i=1}^m α_i I_{P_i} = 0 ⟹ Σ_{i=1}^m α_i µ(P_i) = 0,

where the P_i’s are boxes.

FIGURE 1. Planar pixelations.

We proceed by induction on the dimension n. If the dimension is zero, the space has only one point, and the above claim is true. Now suppose that the theorem holds in dimension n − 1. For the sake of contradiction, we suppose the theorem does not hold in dimension n. That is, suppose there exist distinct boxes P_1, …, P_m such that

Σ_{i=1}^m α_i I_{P_i} = 0 and Σ_{i=1}^m α_i µ(P_i) = r ≠ 0. (2.1)

Let k be the number of the boxes P_i of full dimension. Take k to be minimal over all such counterexamples. We distinguish three cases.

Case 1. k = 0. Since none of the boxes is of full dimension, each is contained in a hyperplane. Of all the relations of type (2.1), we choose one for which the boxes involved are contained in the smallest possible number ℓ of hyperplanes. Assume first that ℓ = 1. Then all the P_i are contained in a single hyperplane. By the induction hypothesis the integral is then well defined, so we have a contradiction. Thus we can assume that ℓ > 1. So there exist ℓ hyperplanes orthogonal to the coordinate axes, H_1, …, H_ℓ, such that each P_i is contained in one of them. Without loss of generality, we may renumber the indices so that P_1 ⊂ H_1. The restriction to H_1 of the first sum in (2.1) is zero, so that

Σ_{i=1}^m α_i I_{P_i ∩ H_1} = 0.
But P_i ∩ H_1 is a subset of the hyperplane H_1, and we can apply the induction hypothesis to conclude that

Σ_{i=1}^m α_i µ(P_i ∩ H_1) = 0.

Subtracting the above two equations from (2.1), we see that

Σ_{i=1}^m α_i (I_{P_i} − I_{P_i ∩ H_1}) = 0 and Σ_{i=1}^m α_i (µ(P_i) − µ(P_i ∩ H_1)) = r ≠ 0.

The above sums have the same form as (2.1), but the boxes P_j ⊂ H_1 disappear, since P_j = P_j ∩ H_1 for such boxes. Thus we obtain new equalities of the type (2.1), but the boxes involved are contained in the fewer hyperplanes H_2, …, H_ℓ, contradicting the minimality of ℓ.

Case 2. k = 1. We may assume the top-dimensional box is P_1. Then P_2 ∪ ⋯ ∪ P_m is contained in a finite union of hyperplanes H_1, …, H_ν perpendicular to the coordinate axes. Observe that (H_1 ∪ ⋯ ∪ H_ν) ∩ P_1 ⊊ P_1. Indeed,

vol((H_1 ∪ ⋯ ∪ H_ν) ∩ P_1) ≤ Σ_{j=1}^ν vol(H_j ∩ P_1) = 0 < vol(P_1),

so that there exists x_0 ∈ P_1 ∖ (H_1 ∪ ⋯ ∪ H_ν). Evaluating the identity Σ_i α_i I_{P_i} = 0 at the point x_0 found above, we deduce α_1 = 0, which contradicts the minimality of k.

Case 3. k > 1. We may assume that the top-dimensional boxes are P_1, …, P_k. Choose a hyperplane H such that P_1 ∩ H is a facet of P_1, i.e. a face of P_1 of highest dimension that is not all of P_1. H has two associated closed half-spaces H^+ and H^−; H^+ is singled out by the requirement P_1 ⊂ H^+. Recall that

Σ_{i=1}^m α_i I_{P_i} = 0.

Restricting to H^+ we deduce

Σ_{i=1}^m α_i I_{P_i ∩ H^+} = 0.

Likewise,

Σ_{i=1}^m α_i I_{P_i ∩ H^−} = 0 and Σ_{i=1}^m α_i I_{P_i ∩ H} = 0. (2.2)

Note that P_i = (P_i ∩ H^+) ∪ (P_i ∩ H^−) and (P_i ∩ H^+) ∩ (P_i ∩ H^−) = P_i ∩ H. Then, since µ is a valuation, it obeys the inclusion-exclusion rule, so

Σ_{i=1}^m α_i µ(P_i) = Σ_{i=1}^m α_i µ(P_i ∩ H^+) + Σ_{i=1}^m α_i µ(P_i ∩ H^−) − Σ_{i=1}^m α_i µ(P_i ∩ H). (2.3)

Since the sets P_i ∩ H lie in a space of dimension n − 1, and Σ_{i=1}^m α_i I_{P_i ∩ H} = 0, we deduce from the induction assumption that

Σ_{i=1}^m α_i µ(P_i ∩ H) = 0.

On the other hand, P_1 ∩ H^− = P_1 ∩ H, so P_1 ∩ H^− has dimension n − 1.
We now have a new collection of boxes P_i ∩ H^−, i = 1, …, m, of which at most k − 1 are top dimensional, and which satisfy

Σ_i α_i I_{P_i ∩ H^−} = 0.

The minimality of k now implies

Σ_{i=1}^m α_i µ(P_i ∩ H^−) = 0.

Therefore, the equality (2.3) implies

Σ_{i=1}^m α_i µ(P_i) = Σ_{i=1}^m α_i µ(P_i ∩ H^+) = r. (2.4)

P_1 has dimension n, so there exist 2n hyperplanes H_1, …, H_{2n} such that

P_1 = ⋂_{i=1}^{2n} H_i^+.

Replacing P_i with P_i ∩ H_1^+ and iterating the above argument, we get

Σ_{i=1}^m α_i µ(P_i ∩ H_1^+ ∩ H_2^+ ∩ ⋯ ∩ H_{2n}^+) = Σ_{i=1}^m α_i µ(P_i ∩ P_1) = r (2.5)

and

Σ_{i=1}^m α_i I_{P_i ∩ P_1} = 0. (2.6)

We repeat this argument with the remaining top-dimensional boxes P_2, …, P_k, and if we set P_0 := P_1 ∩ ⋯ ∩ P_k and A := α_1 + ⋯ + α_k, we conclude that

Σ_{i=1}^m α_i I_{P_i ∩ P_0} = A I_{P_0} + Σ_{i>k} α_i I_{P_i ∩ P_0} = 0, (2.7a)

Σ_{i=1}^m α_i µ(P_i ∩ P_0) = A µ(P_0) + Σ_{i>k} α_i µ(P_i ∩ P_0) = r. (2.7b)

In the above sums, at most one of the boxes is top dimensional, which contradicts the minimality of k > 1. ⊓⊔

§2.3. Continuous Invariant Valuations on Pix.

Notation 2.6. We shall denote by T̃_n the subgroup of Aff(Rn) generated by the translations and the permutations of coordinates in Rn. ⊓⊔

Definition 2.7. A valuation µ on Pix(n) is called invariant if

µ(gP) = µ(P) for all P ∈ Pix(n) and g ∈ T̃_n,

and is called translation invariant if the same condition holds for all translations g. ⊓⊔

We aim to find all the invariant valuations on Pix(n). To avoid unnecessary complications, we will impose a further condition on the valuations: continuity. Our valuations are functions on Pix(n), which is a collection of compact subsets of Rn. So, in order for continuity to make any sense, we need some concept of open sets for this collection of compact sets. A good way of achieving this goal is to make it into a metric space by defining a reasonable notion of distance.

Definition 2.8. (a) Let A ⊂ Rn and x ∈ Rn.
The distance from x to A is the nonnegative real number d(x, A) defined by

d(x, A) = inf_{a∈A} d(x, a),

where d(x, a) is the Euclidean distance from x to a.

(b) Let K and E be subsets of Rn. Then the Hausdorff distance δ(K, E) is defined by

δ(K, E) = max{ sup_{a∈K} d(a, E), sup_{b∈E} d(b, K) }.

(c) A sequence of compact sets K_n in Rn converges to a set K if δ(K_n, K) → 0 as n → ∞. If this is the case, then we write K_n → K. ⊓⊔

Remark 2.9. If K and E are compact, then δ(K, E) = 0 if and only if K = E. That is, the Hausdorff distance is positive definite on the set of compact sets in Rn. ⊓⊔

Let B_n be the unit ball in Rn. For K ⊂ Rn and ε > 0, set

K + εB_n := {x + εu | x ∈ K and u ∈ B_n}.

The following lemma (whose proof is clear) gives a hands-on understanding of how the Hausdorff distance behaves.

Lemma 2.10. Let K and E be compact subsets of Rn. Then

δ(K, E) ≤ ε ⟺ K ⊂ E + εB_n and E ⊂ K + εB_n. ⊓⊔

The above result implies that the Hausdorff distance is actually a metric. The next result summarizes this.

Proposition 2.11. The collection of compact subsets of Rn together with the Hausdorff distance forms a metric space. ⊓⊔

Definition 2.12. Let µ : Pix(n) → R be a valuation on Pix(n). Then µ is called (box) continuous if µ is continuous on boxes, that is, for any sequence of boxes P_i converging in the Hausdorff metric to a box P we have µ(P_i) → µ(P). ⊓⊔

We want to classify the continuous invariant valuations on Pix(n). We start by considering the problem in R1. An element of Pix(1) is a finite union of closed intervals. For A ∈ Pix(1), set

µ^1_0(A) = the number of connected components of A,
µ^1_1(A) = the length of A.

Both are continuous invariant valuations on Pix(1). It is clear that they are invariant under T̃_1, which is, in this case, the group of translations. It is clear that both are continuous.

Proposition 2.13. Every continuous invariant valuation µ : Pix(1) → R is a linear combination of µ^1_0 and µ^1_1.

Proof.
Let c = µ(A), where A is a singleton set {x}, x ∈ R. Now let µ′ = µ − cµ^1_0. By construction µ′ vanishes on points. Now define a continuous function f : [0, ∞) → R by f(x) = µ′([0, x]). Note that µ′ is invariant because µ and µ^1_0 are invariant. Then, if A is a closed interval of length x, µ′(A) = f(x), since we can simply translate A to the origin. Now observe that

f(x + y) = µ′([0, x + y]) = µ′([0, x] ∪ [x, x + y])
= µ′([0, x]) + µ′([x, x + y]) − µ′({x})
= µ′([0, x]) + µ′([x, x + y]) = f(x) + f(y).

Since f is continuous and additive, we deduce that there exists a constant r such that f(x) = rx for all x ≥ 0. Therefore µ′ = rµ^1_1, and our assertion follows from the equality

µ = µ′ + cµ^1_0 = rµ^1_1 + cµ^1_0. ⊓⊔

We now move on to Rn. Let µ_n(P) be the volume of a pixelation P of dimension n.

Definition 2.14. The k-th elementary symmetric polynomial in the variables x_1, …, x_n is the polynomial e_k(x_1, …, x_n) such that e_0 = 1 and

e_k(x_1, …, x_n) = Σ_{1 ≤ i_1 < ⋯ < i_k ≤ n} x_{i_1} ⋯ x_{i_k}, 1 ≤ k ≤ n. ⊓⊔

The elementary symmetric polynomials satisfy the generating identity

Π_{j=1}^n (1 + t x_j) = Σ_{k=0}^n e_k(x_1, …, x_n) t^k. (2.8)

Theorem 2.15. There exist continuous invariant valuations µ^n_0, µ^n_1, …, µ^n_n on Pix(n) such that

µ^n_k([0, x_1] × ⋯ × [0, x_n]) = e_k(x_1, …, x_n), 0 ≤ k ≤ n.

Proof. Let µ^1_0, µ^1_1 : Pix(1) → R be the valuations described above. We set µ^1_t = µ^1_0 + tµ^1_1, where t is a variable. Then

µ^n_t = µ^1_t × µ^1_t × ⋯ × µ^1_t : Par(n) → R[t]

is an invariant valuation on parallelotopes with values in the ring of polynomials with real coefficients. By Groemer’s Extension Theorem (Theorem 2.5), µ^n_t extends to a valuation on Pix(n), which must also be invariant. Using (2.8) we deduce

µ^n_t([0, x_1] × ⋯ × [0, x_n]) = Π_{j=1}^n (1 + t x_j) = Σ_{k=0}^n e_k(x_1, …, x_n) t^k.

For any parallelotope P we can write

µ^n_t(P) = Σ_{k=0}^n µ^n_k(P) t^k.

The coefficients µ^n_k(P) define continuous invariant valuations on Par(n) which extend to continuous invariant valuations on Pix(n) such that

µ^n_t(S) = Σ_{k=0}^n µ^n_k(S) t^k for all S ∈ Pix(n), and µ^n_k([0, x_1] × ⋯ × [0, x_n]) = e_k(x_1, …, x_n). ⊓⊔

Theorem 2.16. The valuations µ_i on Pix(n) are normalized independently of the dimension n, i.e. µ^m_i(P) = µ^n_i(P) for all P ∈ Pix(n) and all m ≥ n.

Proof. This follows from the preceding theorem and the definition of an elementary symmetric function.
If P ∈ Par(n) and we consider the same P as an element of Par(k), where k > n, then P remains a cartesian product of the same intervals, except that there are some additional intervals of length 0, which do not affect µ_i. ⊓⊔

Since the valuation µ_k(P) is independent of the ambient space, µ_k is called the k-th intrinsic volume. The valuation µ_0 is the Euler characteristic: µ_0(Q) = 1 for all nonempty boxes Q.

Theorem 2.17. Let H_1 and H_2 be complementary orthogonal subspaces of Rn spanned by subsets of the given coordinate system, of dimensions h and n − h respectively. Let P_i be a parallelotope in H_i and let P = P_1 × P_2. Then

µ_i(P_1 × P_2) = Σ_{r+s=i} µ_r(P_1) µ_s(P_2). (2.9)

The identity is then also valid when P_1 and P_2 are pixelations, since both sides of the above equality define valuations on Par(n) which extend uniquely to valuations on Pix(n).

Proof. Suppose P_1 has sides of length x_1, …, x_h and P_2 has sides of length y_1, …, y_{n−h}. Then

Σ_{r+s=i} µ_r(P_1) µ_s(P_2) = Σ_{r+s=i} ( Σ_{1 ≤ j_1 < ⋯ < j_r ≤ h} x_{j_1} ⋯ x_{j_r} ) ( Σ_{1 ≤ k_1 < ⋯ < k_s ≤ n−h} y_{k_1} ⋯ y_{k_s} ).

Let j_{r+1} = k_1 + h, …, j_i = j_{r+s} = k_s + h, and let x_{h+1} = y_1, …, x_n = y_{n−h}, simply relabelling. Then

Σ_{r+s=i} µ_r(P_1) µ_s(P_2) = Σ_{r+s=i} Σ_{1 ≤ j_1 < ⋯ < j_i ≤ n, j_r ≤ h < j_{r+1}} x_{j_1} ⋯ x_{j_i} = e_i(x_1, …, x_n) = µ_i(P). ⊓⊔

§2.4. Classifying the Continuous Invariant Valuations on Pix. At this point we are very close to a full description of the continuous invariant valuations on Pix(n).

Definition 2.18. A valuation µ on Pix(n) is said to be simple if µ(P) = 0 for all P of dimension less than n. ⊓⊔

Theorem 2.19 (Volume Theorem for Pix(n)). Let µ be a translation invariant, simple valuation defined on Par(n), and suppose that µ is either continuous or monotone. Then there exists c ∈ R such that µ(P) = cµ_n(P) for all P ∈ Pix(n); that is, µ is equal to the volume, up to a constant factor.

Proof. Let [0, 1]^n denote the unit cube in Rn and let c = µ([0, 1]^n). Then µ([0, 1/k]^n) = c/k^n for every positive integer k. Therefore µ(C) = cµ_n(C) for every box C of rational dimensions with sides parallel to the coordinate axes, since such a C can be built from [0, 1/k]^n cubes for some k.
Since µ is either continuous or monotone and Q is dense in R, we get µ(C) = cµ_n(C) for boxes C with arbitrary real dimensions, since we can find a sequence of rational boxes C_n converging to C. Then, by inclusion-exclusion, µ(P) = cµ_n(P) for all P ∈ Pix(n) (since the identity holds for parallelotopes, it extends to pixelations). ⊓⊔

Theorem 2.20. The valuations µ_0, µ_1, …, µ_n form a basis for the vector space of all continuous invariant valuations defined on Pix(n).

Proof. Let µ be a continuous invariant valuation on Pix(n). Denote by x_1, …, x_n the standard Euclidean coordinates on Rn, and let H_j denote the hyperplane defined by the equation x_j = 0. The restriction of µ to H_j is an invariant valuation on pixelations in H_j. Proceeding by induction (taking n = 1 as a base case, which was proven in Proposition 2.13), assume

µ(A) = Σ_{i=0}^{n−1} c_i µ_i(A) for all A ∈ Pix(n) such that A ⊆ H_j. (2.10)

The c_i are the same for all H_j, since µ_0, …, µ_{n−1} are invariant under permutations of the coordinates. Then µ − Σ_{i=0}^{n−1} c_i µ_i vanishes on all lower-dimensional pixelations in Pix(n), since any such pixelation lies in a hyperplane parallel to one of the H_j’s (and the µ_i’s are translation invariant). Then, by Theorem 2.19, we deduce

µ − Σ_{i=0}^{n−1} c_i µ_i = c_n µ_n,

which proves our claim. ⊓⊔

If we can find a continuous invariant valuation on Pix(n), then we know that it is a linear combination of the µ_i’s. However, we would like a better description, if at all possible. The following corollary yields one.

Definition 2.21. A valuation µ is said to be homogeneous of degree k ≥ 0 if µ(αP) = α^k µ(P) for all P ∈ Pix(n) and all α ≥ 0. ⊓⊔

Corollary 2.22. Let µ be a continuous invariant valuation defined on Pix(n) that is homogeneous of degree k for some 0 ≤ k ≤ n. Then there exists c ∈ R such that µ(P) = cµ_k(P) for all P ∈ Pix(n).

Proof. There exist c_0, …, c_n ∈ R such that µ = Σ_{i=0}^n c_i µ_i.
If P = [0, 1]^n, then for all α > 0,

µ(αP) = Σ_{i=0}^n c_i µ_i(αP) = Σ_{i=0}^n c_i α^i µ_i(P).

Meanwhile,

µ(αP) = α^k µ(P) = α^k Σ_{i=0}^n c_i µ_i(P),

so

Σ_{i=0}^n c_i α^i µ_i(P) = α^k Σ_{i=0}^n c_i µ_i(P) for all α > 0.

Since µ_i(P) = e_i(1, …, 1) is the binomial coefficient C(n, i) ≠ 0, comparing coefficients of the powers of α gives c_i = 0 for i ≠ k. ⊓⊔

3. Valuations on polyconvex sets

Now that we understand continuous invariant valuations on a very specific collection of subsets of Rn, we recognize the limitations of this viewpoint. Notably, we have said nothing of valuations on such exciting shapes as triangles and disks. To include these, we dramatically expand our collection of subsets and again try to classify the continuous invariant valuations. This effort will turn out to be a far greater undertaking.

§3.1. Convex and Polyconvex Sets.

Definition 3.1. (a) A set K ⊂ Rn is convex if for any two points x and y in K, the line segment between x and y lies in K. We denote by K^n the set of all compact convex subsets of Rn.

(b) A polyconvex set is a finite union of compact convex sets. We denote by Polycon(n) the set of all polyconvex sets in Rn. ⊓⊔

Example 3.2. A single point is a compact convex set. If

T := {(x, y) ∈ R2 | x, y ≥ 0, x + y ≤ 1},

then T is a filled-in triangle in R2, so that T ∈ K^2. Also, ∂T is a polyconvex set, but not a convex set. ⊓⊔

One of the most important properties of convex sets is the separation property.

Proposition 3.3. Suppose C ⊂ Rn is a closed convex set and x ∈ Rn ∖ C. Then there exists a hyperplane which separates x from C, i.e. x and C lie in different half-spaces determined by the hyperplane.

Proof. We outline only the main geometric ideas of the construction of such a hyperplane. Let d := dist(x, C). Then we can find a unique point y ∈ C such that d = |y − x|. The hyperplane perpendicular to the segment [x, y] and intersecting it in the middle will do the trick. ⊓⊔

Definition 3.4.
If A is a polyconvex set in Rn, then we say that A is of dimension n (or has full dimension) if A is not contained in a finite union of hyperplanes. Otherwise, we say that A has lower dimension. ⊓⊔

Remark 3.5. Polycon(n) is a distributive lattice under union and intersection. Furthermore, K^n is a generating set of Polycon(n). ⊓⊔

We must now explore some tools we can use to understand these compact, convex sets. The most critical of these is the support function. Let ⟨−, −⟩ denote the standard inner product on Rn and |−| the associated norm. We set S^{n−1} := {u ∈ Rn : |u| = 1}.

Definition 3.6. Let K ∈ K^n be nonempty. Then its support function h_K : S^{n−1} → R is given by

h_K(u) := max_{x∈K} ⟨u, x⟩. ⊓⊔

Example 3.7. If K = {x} is just a single point, then h_K(u) = ⟨u, x⟩ for all u ∈ S^{n−1}. ⊓⊔

Remark 3.8. (a) We can characterize support functions in terms of functions on Rn as follows. Let h̃ : Rn → R be such that h̃(tu) = th̃(u) for all u ∈ S^{n−1} and t ≥ 0, and let h be the restriction of h̃ to S^{n−1}. Then h is the support function of a compact convex set in Rn if and only if

h̃(x + y) ≤ h̃(x) + h̃(y) for all x, y ∈ Rn.

(b) Consider the hyperplane H(K, u) = {x ∈ Rn | ⟨x, u⟩ = h_K(u)} and the closed half-space H(K, u)^− = {x ∈ Rn | ⟨x, u⟩ ≤ h_K(u)}. Then it is easy to see that H(K, u) is “tangent” to ∂K and that K lies wholly in H(K, u)^− for every u ∈ S^{n−1}. The separation property described in Proposition 3.3 implies

K = ⋂_{u∈S^{n−1}} H(K, u)^−.

In other words, K is uniquely determined by its support function. ⊓⊔

Definition 3.9. Let K, L ∈ K^n. Then we define

K + L := {x + y | x ∈ K, y ∈ L}

and call K + L the Minkowski sum of K and L. ⊓⊔

Remark 3.10. We want to point out that for every K, L ∈ K^n we have

K ⊂ L ⟺ h_K(u) ≤ h_L(u) for all u ∈ S^{n−1},

and

h_{K+L}(u) = max_{x∈K, y∈L} ⟨x + y, u⟩ = max_{x∈K, y∈L} (⟨x, u⟩ + ⟨y, u⟩)
= max_{x∈K} ⟨x, u⟩ + max_{y∈L} ⟨y, u⟩ = h_K(u) + h_L(u). ⊓⊔

Remark 3.11.
Recall that for compact sets $K$ and $L$ in $\mathbb{R}^n$, the Hausdorff metric satisfies $\delta(K, L) \leq \epsilon$ if and only if $K \subset L + \epsilon B$ and $L \subset K + \epsilon B$. In light of this fact and the preceding comments, one can show that
$$\delta(K, L) = \sup_{u \in S^{n-1}} |h_K(u) - h_L(u)|.$$
That is, the Hausdorff metric on compact convex subsets of $\mathbb{R}^n$ is given by the uniform metric on the set of support functions of compact convex sets. ⊓⊔

Now that we have some tools to understand these compact convex sets, we will soon wish to consider valuations on them. But we don't want just any valuations. We would like our valuations to be tied to the shape of the convex set alone, rather than to where the convex set sits in space. So we would like some sort of invariance under certain types of transformations. Furthermore, to make things nicer, we would like to restrict our attention to those valuations which are in some sense continuous. We now formalize these notions.

It is easiest to begin with continuity. Recall that the Hausdorff distance turns the set of compact subsets of $\mathbb{R}^n$ into a metric space. Since $\mathcal{K}^n$ is a subset of these, continuity is a well-defined concept for elements of $\mathcal{K}^n$. (We restrict our attention to these elements and not all of $\mathrm{Polycon}(n)$ since dimensionality problems arise when considering limits of polyconvex sets.)

Definition 3.12. A valuation $\mu : \mathrm{Polycon}(n) \to \mathbb{R}$ is said to be convex continuous (or simply continuous where no confusion is possible) if $\mu(A_n) \to \mu(A)$ whenever $A_n, A$ are compact convex sets and $A_n \to A$. ⊓⊔

Notation 3.13. Let $E_n$ be the Euclidean group of $\mathbb{R}^n$, which is the subgroup of affine transformations of $\mathbb{R}^n$ generated by translations and rotations. For any $g \in E_n$ there exist $T \in SO(n)$ and $v \in \mathbb{R}^n$ such that
$$g(x) = T(x) + v, \quad \forall x \in \mathbb{R}^n.$$
The elements of $E_n$ are also known as rigid motions.

Definition 3.14. Let $\mu : \mathrm{Polycon}(n) \to \mathbb{R}$ be a valuation. Then $\mu$ is said to be rigid motion invariant (or invariant when no confusion is possible) if $\mu(A) = \mu(gA)$ for all $A \in \mathrm{Polycon}(n)$ and $g \in E_n$.
If the same holds only for translations $g$, then $\mu$ is said to be translation invariant. ⊓⊔

Our aim is to understand the set of convex-continuous invariant valuations on $\mathrm{Polycon}(n)$, which we will denote by $\mathrm{Val}(n)$.

Note that for every $m \leq n$ we have an inclusion $\mathrm{Polycon}(m) \subset \mathrm{Polycon}(n)$ given by the natural inclusion $\mathbb{R}^m \hookrightarrow \mathbb{R}^n$. In particular, any continuous invariant valuation $\mu : \mathrm{Polycon}(n) \to \mathbb{R}$ induces by restriction a valuation on $\mathrm{Polycon}(m)$. In this way we obtain for every $m \leq n$ a restriction map
$$S_{m,n} : \mathrm{Val}(n) \to \mathrm{Val}(m)$$
such that $S_{n,n}$ is the identity map and for every $k \leq m \leq n$ we have $S_{k,n} = S_{k,m} \circ S_{m,n}$, i.e. the triangle of restriction maps between $\mathrm{Val}(n)$, $\mathrm{Val}(m)$ and $\mathrm{Val}(k)$ commutes.

Definition 3.15. An intrinsic valuation is a sequence of convex-continuous, invariant valuations $\mu^n \in \mathrm{Val}(n)$ such that for every $m \leq n$ we have
$$\mu^m = S_{m,n}\mu^n. ⊓⊔$$

Remark 3.16. (a) To put the above definition in some perspective we need to recall a classical notion. A projective sequence of Abelian groups is a sequence of Abelian groups $(G_n)_{n \geq 1}$ together with a family of group morphisms $S_{m,n} : G_n \to G_m$, $m \leq n$, satisfying
$$S_{n,n} = \mathbb{1}_{G_n}, \quad S_{k,n} = S_{k,m} \circ S_{m,n}, \quad \forall k \leq m \leq n.$$
The projective limit of a projective sequence $\{G_n; S_{m,n}\}$ is the subgroup
$$\mathrm{limproj}_n\, G_n \subset \prod_n G_n$$
consisting of sequences $(g_n)_{n \geq 1}$ satisfying $g_n \in G_n$ and $g_m = S_{m,n} g_n$ for all $m \leq n$. The sequence $(\mathrm{Val}(n))$ together with the maps $S_{m,n}$ defines a projective sequence, and we set
$$\mathrm{Val}(\infty) := \mathrm{limproj}_n\, \mathrm{Val}(n) \subset \prod_{n \geq 0} \mathrm{Val}(n).$$
An intrinsic valuation is then an element of $\mathrm{Val}(\infty)$. Similarly, if we denote by $\mathrm{Val}^{\mathrm{Pix}}(n)$ the space of continuous, invariant valuations on $\mathrm{Pix}(n)$, then we again obtain a projective sequence of vector spaces, and an element of the corresponding projective limit is an intrinsic valuation in the sense defined in the previous section.
(b) Observe that since $\mathrm{Pix}(n) \subset \mathrm{Polycon}(n)$, every valuation on $\mathrm{Polycon}(n)$ restricts to a valuation on $\mathrm{Pix}(n)$, and we obtain a natural linear map
$$\Phi_n : \mathrm{Val}(n) \to \mathrm{Val}^{\mathrm{Pix}}(n).$$
A priori this linear map need be neither injective nor surjective.
However, in a later section we will show that this map is a linear isomorphism. ⊓⊔

§3.2. Groemer's Extension Theorem. We now show that any convex-continuous valuation on $\mathcal{K}^n$ can be extended to $\mathrm{Polycon}(n)$. Thus, we can confine our studies to continuous valuations on compact convex sets.

Theorem 3.17. A convex-continuous valuation $\mu$ on $\mathcal{K}^n$ can be extended (uniquely) to $\mathrm{Polycon}(n)$. Moreover, if $\mu$ is also invariant, then so is its extension.

Proof. Suppose that $\mu$ is a convex-continuous valuation on $\mathcal{K}^n$. In light of Groemer's integral theorem, we need only show that the integral defined by $\mu$ on the space of indicator functions is well defined. We proceed by induction on dimension. In dimension zero this proposition is trivial, and in dimension one $\mathcal{K}^1$ coincides with $\mathrm{Par}(1)$ and $\mathrm{Polycon}(1)$ coincides with $\mathrm{Pix}(1)$; hence we have already handled this dimension as well. So, suppose the theorem holds in dimension $n - 1$.

Suppose, for the sake of contradiction, that the integral defined by $\mu$ is not well defined. Using the same technique as in Theorem 2.5, Groemer's extension theorem for parallelotopes, suppose that there exist $K_1, \dots, K_m \in \mathcal{K}^n$ such that
$$\sum_{i=1}^m \alpha_i I_{K_i} = 0 \qquad (3.1)$$
while
$$\sum_{i=1}^m \alpha_i \mu(K_i) = r \neq 0. \qquad (3.2)$$
Take $m$ to be the least positive integer for which relations (3.1) and (3.2) hold.

Choose a hyperplane $H$ with associated closed half-spaces $H^+$ and $H^-$ such that $K_1 \subset \mathrm{Int}(H^+)$. Recall that $I_{A \cap B} = I_A I_B$. Thus, in light of equation (3.1), we can multiply and get
$$\sum_{i=1}^m \alpha_i I_{K_i \cap H^+} = 0,$$
as well as
$$\sum_{i=1}^m \alpha_i I_{K_i \cap H^-} = 0 \quad \text{and} \quad \sum_{i=1}^m \alpha_i I_{K_i \cap H} = 0.$$
Now note that $K_i = (K_i \cap H^+) \cup (K_i \cap H^-)$ and that $H^+ \cap H^- = H$. Thus, since $\mu$ is a valuation, we may apply this decomposition and see that
$$\sum_{i=1}^m \alpha_i \mu(K_i) = \sum_{i=1}^m \alpha_i \mu(K_i \cap H^+) + \sum_{i=1}^m \alpha_i \mu(K_i \cap H^-) - \sum_{i=1}^m \alpha_i \mu(K_i \cap H).$$
Since each $K_i \cap H$ lies inside $H$, a space of dimension $n - 1$, we deduce from the induction assumption that
$$\sum_{i=1}^m \alpha_i \mu(K_i \cap H) = 0.$$
Moreover, since we took $K_1 \subset \mathrm{Int}(H^+)$, we have $I_{K_1 \cap H^-} = 0$, and the sum
$$\sum_{i=1}^m \alpha_i \mu(K_i \cap H^-) = \sum_{i=2}^m \alpha_i \mu(K_i \cap H^-)$$
must be zero by the minimality of $m$. Thus, from (3.2), we have
$$0 \neq r = \sum_{i=1}^m \alpha_i \mu(K_i) = \sum_{i=1}^m \alpha_i \mu(K_i \cap H^+).$$
Thus we have replaced $K_i$ by $K_i \cap H^+$ in (3.1) and (3.2). We now repeat this process. By taking a countable dense subset, choose a sequence of hyperplanes $H_1, H_2, \dots$ such that $K_1 \subset \mathrm{Int}(H_i^+)$ and
$$K_1 = \bigcap_i H_i^+.$$
Iterating the preceding argument, we have that
$$\sum_{i=1}^m \alpha_i \mu(K_i \cap H_1^+ \cap \dots \cap H_q^+) = r \neq 0$$
for all $q \geq 1$. Since $\mu$ is continuous, we can take the limit as $q \to \infty$, giving
$$\sum_{i=1}^m \alpha_i \mu(K_i \cap K_1) = r \neq 0,$$
while applying the same type of argument to (3.1) yields
$$\sum_{i=1}^m \alpha_i I_{K_i \cap K_1} = 0.$$
Thus, we find ourselves in exactly the same position we were in with equations (3.1) and (3.2); therefore, we may repeat the entire argument for $K_2, \dots, K_m$, giving
$$\sum_{i=1}^m \alpha_i \mu(K_i \cap K_1 \cap \dots \cap K_m) = \sum_{i=1}^m \alpha_i \mu(K_1 \cap \dots \cap K_m) = \Bigl(\sum_{i=1}^m \alpha_i\Bigr) \mu(K_1 \cap \dots \cap K_m) = r \neq 0 \qquad (3.3a)$$
and
$$\sum_{i=1}^m \alpha_i I_{K_1 \cap \dots \cap K_m} = \Bigl(\sum_{i=1}^m \alpha_i\Bigr) I_{K_1 \cap \dots \cap K_m} = 0. \qquad (3.4)$$
The equalities (3.3a) and (3.4) contradict each other. The first implies that $\alpha_1 + \dots + \alpha_m \neq 0$ and $K_1 \cap \dots \cap K_m \neq \emptyset$, while the second implies that $\alpha_1 + \dots + \alpha_m = 0$ or $I_{K_1 \cap \dots \cap K_m} = 0$. Thus, the integral must be well defined, so there exists a unique extension of $\mu$ to $\mathrm{Polycon}(n)$. ⊓⊔

§3.3. The Euler Characteristic.

Theorem 3.18. (a) There exists an intrinsic valuation $\mu_0 = (\mu_0^n)_{n \geq 0}$ uniquely determined by
$$\mu_0^n(C) = 1, \quad \forall\, \text{nonempty } C \in \mathcal{K}^n.$$
(b) Suppose $C \in \mathrm{Polycon}(n)$ and $u \in S^{n-1}$. Define $\ell_u : \mathbb{R}^n \to \mathbb{R}$ by $\ell_u(x) = \langle u, x \rangle$ and set
$$H_t := \ell_u^{-1}(t), \quad C_t := C \cap H_t, \quad F_{u,C}(t) := \mu_0(C_t).$$
Then $F_{u,C}(t)$ is a simple function, i.e. it is a linear combination with integer coefficients of characteristic functions $I_S$, $S \in \mathcal{K}^1$. Moreover,
$$\mu_0(C) = \int I_C\, d\mu_0 = \int F_{u,C}(t)\, d\mu_0(t) = \sum_t \bigl(F_{u,C}(t) - F_{u,C}(t+0)\bigr). \qquad \text{(Fubini)}$$

Proof. Part (a) follows immediately from Theorem 3.17.
To establish the second part we use Theorem 3.17 again. For every $C \in \mathrm{Polycon}(n)$ define
$$\lambda_u(C) := \int F_{u,C}(t)\, d\mu_0(t).$$
Observe that for every $t \in \mathbb{R}$ and every $C_1, C_2 \in \mathrm{Polycon}(n)$ we have
$$F_{u, C_1 \cup C_2}(t) = \mu_0\bigl(H_t \cap (C_1 \cup C_2)\bigr) = \mu_0\bigl((H_t \cap C_1) \cup (H_t \cap C_2)\bigr)$$
$$= \mu_0(H_t \cap C_1) + \mu_0(H_t \cap C_2) - \mu_0\bigl((H_t \cap C_1) \cap (H_t \cap C_2)\bigr) = F_{u,C_1}(t) + F_{u,C_2}(t) - F_{u, C_1 \cap C_2}(t).$$
Observe that if $C \in \mathcal{K}^n$ then $F_{u,C}$ is the characteristic function of a compact interval in $\mathbb{R}^1$. This shows that $F_{u,C}$ is a simple function for every $C \in \mathrm{Polycon}(n)$. Moreover,
$$\int F_{u, C_1 \cup C_2}\, d\mu_0 = \int F_{u,C_1}\, d\mu_0 + \int F_{u,C_2}\, d\mu_0 - \int F_{u, C_1 \cap C_2}\, d\mu_0,$$
so that the correspondence $\mathrm{Polycon}(n) \ni C \mapsto \lambda_u(C)$ is a valuation such that $\lambda_u(C) = 1$ for every nonempty $C \in \mathcal{K}^n$. From part (a) we deduce
$$\int F_{u,C}\, d\mu_0 = \lambda_u(C) = \mu_0(C).$$
We only have to prove that for every simple function $h(t)$ on $\mathbb{R}^1$ we have
$$\int h(t)\, d\mu_0(t) = L_1(h) := \sum_t \bigl(h(t) - h(t+0)\bigr).$$
To achieve this, observe that $L_1(h)$ is linear and convex continuous in $h$, and thus defines an integral on the space of simple functions. Moreover, if $h$ is the characteristic function of a compact interval then $L_1(h) = 1 = \int h\, d\mu_0$. Thus by Theorem 3.17 we deduce $L_1 = \int \cdot\, d\mu_0$. ⊓⊔

Remark 3.19. Denote by $\mathcal{S}(\mathbb{R}^n)$ the Abelian subgroup of $\mathrm{Map}(\mathbb{R}^n, \mathbb{Z})$ generated by indicator functions of compact convex subsets of $\mathbb{R}^n$. We will refer to the elements of $\mathcal{S}(\mathbb{R}^n)$ as simple functions. Thus any $f \in \mathcal{S}(\mathbb{R}^n)$ can be written (non-uniquely) as a sum
$$f = \sum_i \alpha_i I_{C_i}, \quad \alpha_i \in \mathbb{Z},\ C_i \in \mathcal{K}^n.$$
The Euler characteristic then defines an integral
$$\int d\mu_0 : \mathcal{S}(\mathbb{R}^n) \to \mathbb{Z}, \quad \int \sum_i \alpha_i I_{C_i}\, d\mu_0 = \sum_i \alpha_i.$$
The convolution of two simple functions $f, g$ is the function $f * g$ defined by
$$f * g(x) := \int \bar f_x(y) \cdot g(y)\, d\mu_0(y),$$
where $\bar f_x(y) := f(x - y)$. Observe that if $A, B \in \mathcal{K}^n$ and $A + B$ is their Minkowski sum, then
$$I_A * I_B = I_{A+B},$$
so that $f * g$ is a simple function for any $f, g \in \mathcal{S}(\mathbb{R}^n)$. Moreover, $f * g = g * f$ for all $f, g \in \mathcal{S}(\mathbb{R}^n)$. Finally, $f * I_{\{0\}} = f$ for all $f \in \mathcal{S}(\mathbb{R}^n)$, so that $\bigl(\mathcal{S}(\mathbb{R}^n), +, *\bigr)$ is a commutative ring with 1. ⊓⊔

Definition 3.20.
A convex polyhedron is an intersection of a finite collection of closed half-spaces. A convex polytope is a compact convex polyhedron. A polytope is a finite union of convex polytopes. The polytopes form a distributive sublattice of $\mathrm{Polycon}(n)$. The dimension of a convex polyhedron $P$ is the dimension of the affine subspace $\mathrm{Aff}(P)$ generated by $P$. We denote by $\mathrm{relint}(P)$ the relative interior of $P$, that is, the interior of $P$ in the topology of $\mathrm{Aff}(P)$.

Remark 3.21. Given a convex polytope $P$, the boundary $\partial P$ is also a polytope. Therefore, $\mu_0(\partial P)$ is defined.

Theorem 3.22. If $P \subset \mathbb{R}^n$ is a convex polytope of dimension $n > 0$, then $\mu_0(\partial P) = 1 - (-1)^n$.

Proof. Let $u \in S^{n-1}$ be a unit vector and define $\ell_u : \mathbb{R}^n \to \mathbb{R}$ as before. Using the previous notation, note that $H_t \cap \partial P = \partial(H_t \cap P)$ if $t$ is not a boundary point of the interval $\ell_u(P) \subset \mathbb{R}$. Let $F : \mathbb{R} \to \mathbb{R}$ be defined by $F(t) = \mu_0(H_t \cap \partial P)$.

We proceed by induction. For $n = 1$, we have $\mu_0(\partial P) = 2 = 1 - (-1)$, since $\partial P$ consists of two distinct points ($P$ being an interval). For $n > 1$, it follows from the induction hypothesis that
$$\mu_0(H_t \cap \partial P) = \mu_0\bigl(\partial(H_t \cap P)\bigr) = 1 - (-1)^{n-1} \qquad (*)$$
if $t \in \ell_u(P)$ is not a boundary point of the interval $\ell_u(P)$. If $t \in \partial \ell_u(P)$, we have
$$\mu_0(H_t \cap \partial P) = 1, \qquad (**)$$
since $H_t \cap P$ is then a face of $P$ and is thus in $\mathcal{K}^n$. Finally,
$$\mu_0(H_t \cap \partial P) = 0 \qquad (***)$$
when $H_t \cap \partial P = \emptyset$.

We can now compute
$$\int F(t)\, d\mu_0(t) = \sum_t \bigl(F(t) - F(t+0)\bigr),$$
where the summands vanish except at the two points $a$ and $b$ ($a < b$) with $[a, b] = \ell_u(P)$. Then
$$\sum_t \bigl(F(t) - F(t+0)\bigr) = F(a) - F(a+0) + F(b) - F(b+0).$$
Now observe that $F(b+0) = 0$ by $(***)$, $F(b) = F(a) = 1$ by $(**)$, and $F(a+0) = 1 - (-1)^{n-1}$ by $(*)$. Then
$$\int F(t)\, d\mu_0(t) = 1 - \bigl(1 - (-1)^{n-1}\bigr) + 1 - 0 = 1 + (-1)^{n-1} = 1 - (-1)^n. ⊓⊔$$

Theorem 3.23. Let $P$ be a compact convex polytope of dimension $k$ in $\mathbb{R}^n$. Then
$$\mu_0(\mathrm{relint}(P)) = (-1)^k. \qquad (3.5)$$

Proof. Since $\mu_0$ is normalized independently of $n$, we can consider $P$ inside the $k$-dimensional plane of $\mathbb{R}^n$ in which it is contained.
Then $\mathrm{relint}(P) = P \setminus \partial P$, so
$$\mu_0(\mathrm{relint}(P)) = \mu_0(P) - \mu_0(\partial P) = 1 - \bigl(1 - (-1)^k\bigr) = (-1)^k. ⊓⊔$$

Definition 3.24. A system of faces of a polytope $P$ is a family $\mathcal{F}$ of convex polytopes such that the following hold.
(a) $\bigcup_{Q \in \mathcal{F}} \mathrm{relint}(Q) = P$.
(b) If $Q, Q' \in \mathcal{F}$ and $Q \neq Q'$, then $\mathrm{relint}(Q) \cap \mathrm{relint}(Q') = \emptyset$. ⊓⊔

Theorem 3.25 (Euler-Schläfli-Poincaré). Let $\mathcal{F}$ be a system of faces of the polytope $P$, and let $f_i$ be the number of elements of $\mathcal{F}$ of dimension $i$. Then
$$\mu_0(P) = f_0 - f_1 + f_2 - \cdots.$$

Proof. We have
$$I_P = \sum_{Q \in \mathcal{F}} I_{\mathrm{relint}(Q)},$$
so that
$$\mu_0(P) = \sum_{Q \in \mathcal{F}} \mu_0(\mathrm{relint}(Q)) = \sum_{Q \in \mathcal{F}} (-1)^{\dim Q} = \sum_{k \geq 0} (-1)^k f_k. ⊓⊔$$

§3.4. Linear Grassmannians. We denote by $\mathrm{Gr}(n, k)$ the set of all $k$-dimensional subspaces of $\mathbb{R}^n$. Observe that the orthogonal group $O(n)$ acts transitively on $\mathrm{Gr}(n, k)$, and the stabilizer of a $k$-dimensional coordinate plane can be identified with the Cartesian product $O(k) \times O(n-k)$. Thus $\mathrm{Gr}(n, k)$ can also be identified with the space of left cosets $O(n)/\bigl(O(k) \times O(n-k)\bigr)$.

Example 3.26. $\mathrm{Gr}(n, 1)$ is the set of lines through the origin in $\mathbb{R}^n$, a.k.a. the projective space $\mathbb{RP}^{n-1}$. Denote by $S^{n-1}$ the unit sphere in $\mathbb{R}^n$. We have a natural map
$$\ell : S^{n-1} \to \mathrm{Gr}(n, 1), \quad S^{n-1} \ni x \mapsto \ell(x) = \text{the line through } 0 \text{ and } x.$$
The map $\ell$ is 2-to-1 since $\ell(x) = \ell(-x)$. ⊓⊔

$\mathrm{Gr}(n, k)$ can also be viewed as a subset of the vector space of symmetric linear operators $\mathbb{R}^n \to \mathbb{R}^n$ via the map
$$\mathrm{Gr}(n, k) \ni V \mapsto P_V = \text{the orthogonal projection onto } V.$$
As such it is closed and bounded, and thus compact.

To proceed further we need some notation and a classical fact. Denote by $\omega_n$ the volume of the unit ball in $\mathbb{R}^n$ and by $\sigma_{n-1}$ the $(n-1)$-dimensional "surface area" of $S^{n-1}$. Then $\sigma_{n-1} = n\omega_n$ and
$$\omega_n = \frac{\pi^{n/2}}{\Gamma(n/2 + 1)} = \begin{cases} \dfrac{\pi^k}{k!}, & n = 2k,\\[2mm] \dfrac{2^{2k+1}\pi^k k!}{(2k+1)!}, & n = 2k+1,\end{cases}$$
where $\Gamma(x)$ is the gamma function. We list below the values of $\omega_n$ for small $n$:
$$\omega_0 = 1, \quad \omega_1 = 2, \quad \omega_2 = \pi, \quad \omega_3 = \frac{4\pi}{3}, \quad \omega_4 = \frac{\pi^2}{2}.$$

Theorem 3.27. For every positive constant $c$ there exists a unique measure $\mu = \mu_c$ on $\mathrm{Gr}(n, k)$ which is invariant, i.e.
$$\mu(g \cdot S) = \mu(S) \quad \text{for every } g \in O(n) \text{ and every open subset } S \subset \mathrm{Gr}(n, k),$$
and which has finite total volume $\mu(\mathrm{Gr}(n, k)) = c$. ⊓⊔

For a proof we refer to [San].

Example 3.28. Here is how to construct such a measure on $\mathrm{Gr}(n, 1)$. Given an open set $U \subset \mathrm{Gr}(n, 1)$ we obtain a subset
$$\tilde U = \ell^{-1}(U) \subset S^{n-1};$$
$\tilde U$ consists of two diametrically opposed subsets of the unit sphere. Define
$$\mu(U) = \frac{1}{2}\,\mathrm{area}(\tilde U).$$
Observe that for this measure
$$\mu(\mathrm{Gr}(n, 1)) = \frac{1}{2}\,\mathrm{area}(S^{n-1}) = \frac{\sigma_{n-1}}{2} = \frac{n\omega_n}{2}. ⊓⊔$$

Observe that a constant multiple of an invariant measure is an invariant measure. In particular, $\mu_c = c \cdot \mu_1$. Define
$$[n] := \frac{n\omega_n}{2\omega_{n-1}}, \quad [n]! := [1] \cdot [2] \cdots [n] = \frac{n!\,\omega_n}{2^n}, \quad \begin{bmatrix} n \\ k \end{bmatrix} := \frac{[n]!}{[k]!\,[n-k]!},$$
and denote by $\nu_k^n$ the invariant measure on $\mathrm{Gr}(n, k)$ such that
$$\nu_k^n(\mathrm{Gr}(n, k)) = \begin{bmatrix} n \\ k \end{bmatrix}.$$

§3.5. Affine Grassmannians. We denote by $\mathrm{Graff}(n, k)$ the space of $k$-dimensional affine subspaces (planes) of $\mathbb{R}^n$. For every affine $k$-plane $V$ we denote by $\Pi(V)$ the linear subspace parallel to $V$, and by $V^\perp$ the orthogonal complement of $\Pi(V)$. We obtain in this fashion a surjection
$$\Pi : \mathrm{Graff}(n, k) \to \mathrm{Gr}(n, k), \quad V \mapsto \Pi(V).$$
Observe that an affine $k$-plane $V$ is uniquely determined by $V^\perp$ and the point $p = V^\perp \cap V$. The fiber of the map $\Pi$ over a point $L \in \mathrm{Gr}(n, k)$ is the set $\Pi^{-1}(L)$ consisting of all affine $k$-planes parallel to $L$. This set can be canonically identified with $L^\perp$ via the map
$$\Pi^{-1}(L) \ni V \mapsto V \cap L^\perp \in L^\perp.$$
The map $\Pi : \mathrm{Graff}(n, k) \to \mathrm{Gr}(n, k)$ is an example of a vector bundle.

We now describe how to integrate functions $f : \mathrm{Graff}(n, k) \to \mathbb{R}$. Define
$$\int_{\mathrm{Graff}(n,k)} f(V)\, d\lambda_k^n := \int_{\mathrm{Gr}(n,k)} \Bigl(\int_{L^\perp} f(L + p)\, dL^\perp p\Bigr)\, d\nu_k^n,$$
where $dL^\perp p$ denotes the Euclidean volume element on $L^\perp$.

Example 3.29. Let
$$f : \mathrm{Graff}(3, 1) \to \mathbb{R}, \quad f(V) := \begin{cases} 1, & \mathrm{dist}(V, 0) \leq 1,\\ 0, & \mathrm{dist}(V, 0) > 1.\end{cases}$$

FIGURE 2. The orthogonal projection of an element in Graff(3, 2).
Observe that $f$ is none other than the indicator function of the set $\mathrm{Graff}(3, 1; B^3)$ of affine lines in $\mathbb{R}^3$ which intersect $B^3$, the closed unit ball centered at the origin. Then
$$\int_{\mathrm{Graff}(3,1)} f(V)\, d\lambda_1^3 = \int_{\mathrm{Gr}(3,1)} \underbrace{\Bigl(\int_{L^\perp} f(L + p)\, dp\Bigr)}_{= \omega_2}\, d\nu_1^3 = [3]\,\omega_2 = \frac{3\omega_3}{2\omega_2}\,\omega_2 = \frac{3\omega_3}{2} = 2\pi.$$
In particular,
$$\lambda_1^3\bigl(\mathrm{Graff}(3, 1; B^3)\bigr) = 2\pi = \frac{\sigma_2}{2} = \frac{1}{2} \times \text{surface area of } B^3. ⊓⊔$$

§3.6. The Radon Transform. At this point, we further our goal of understanding $\mathrm{Val}(n)$ by constructing some of its elements. Recall that $\mathcal{K}^n$ is the collection of compact convex subsets of $\mathbb{R}^n$ and $\mathrm{Polycon}(n)$ is the collection of polyconvex subsets of $\mathbb{R}^n$. We denote by $\mathrm{Val}(n)$ the vector space of convex-continuous, rigid motion invariant valuations $\mu : \mathrm{Polycon}(n) \to \mathbb{R}$. Via the embedding $\mathbb{R}^k \hookrightarrow \mathbb{R}^n$, $k \leq n$, we can regard $\mathrm{Polycon}(k)$ as a subcollection of $\mathrm{Polycon}(n)$, and thus we obtain a natural map
$$S_{k,n} : \mathrm{Val}(n) \to \mathrm{Val}(k).$$
We denote by $\mathrm{Val}^w(n)$ the vector subspace of $\mathrm{Val}(n)$ consisting of valuations $\mu$ homogeneous of degree (or weight) $w$, i.e.
$$\mu(tP) = t^w \mu(P), \quad \forall t > 0,\ P \in \mathrm{Polycon}(n).$$
For every $k \leq n$ and every $\mu \in \mathrm{Val}(k)$ we define the Radon transform of $\mu$ to be
$$R_{n,k}\mu : \mathrm{Polycon}(n) \to \mathbb{R}, \quad (R_{n,k}\mu)(P) := \int_{\mathrm{Graff}(n,k)} \mu(P \cap V)\, d\lambda_k^n.$$

Proposition 3.30. If $\mu \in \mathrm{Val}(k)$, then $R_{n,k}\mu$ is a convex-continuous, invariant valuation on $\mathrm{Polycon}(n)$.

Proof. For every $\mu \in \mathrm{Val}(k)$, $V \in \mathrm{Graff}(n, k)$ and any $S_1, S_2 \in \mathrm{Polycon}(n)$ we have
$$V \cap (S_1 \cup S_2) = (V \cap S_1) \cup (V \cap S_2), \quad (V \cap S_1) \cap (V \cap S_2) = V \cap (S_1 \cap S_2),$$
so that
$$\mu\bigl(V \cap (S_1 \cup S_2)\bigr) = \mu(V \cap S_1) + \mu(V \cap S_2) - \mu\bigl(V \cap (S_1 \cap S_2)\bigr).$$
Integrating the above equality with respect to $V \in \mathrm{Graff}(n, k)$ we deduce that
$$(R_{n,k}\mu)(S_1 \cup S_2) = (R_{n,k}\mu)(S_1) + (R_{n,k}\mu)(S_2) - (R_{n,k}\mu)(S_1 \cap S_2),$$
so that $R_{n,k}\mu$ is a valuation on $\mathrm{Polycon}(n)$. The invariance of $R_{n,k}\mu$ follows from the invariance of $\mu$ and of the measure $\lambda_k^n$ on $\mathrm{Graff}(n, k)$.

Observe that if $(C_\nu)$ is a sequence of compact convex sets in $\mathbb{R}^n$ such that $C_\nu \to C \in \mathcal{K}^n$, then
$$\lim_{\nu \to \infty} \mu(C_\nu \cap V) = \mu(C \cap V), \quad \forall V \in \mathrm{Graff}(n, k).$$
We want to show that
$$\lim_{\nu \to \infty} \int_{\mathrm{Graff}(n,k)} \mu(C_\nu \cap V)\, d\lambda_k^n(V) = \int_{\mathrm{Graff}(n,k)} \mu(C \cap V)\, d\lambda_k^n(V)$$
by invoking the Dominated Convergence Theorem. We will produce an integrable function $f : \mathrm{Graff}(n, k) \to \mathbb{R}$ such that
$$|\mu(C_\nu \cap V)| \leq f(V), \quad \forall \nu > 0,\ \forall V \in \mathrm{Graff}(n, k). \qquad (3.6)$$
To this aim, observe first that since $C_\nu \to C$ there exists $R > 0$ such that all the sets $C_\nu$ and the set $C$ are contained in the ball $B_n(R)$ of radius $R$ centered at the origin. Define
$$\mathrm{Graff}(n, k; R) := \bigl\{V \in \mathrm{Graff}(n, k) \mid V \cap B_n(R) \neq \emptyset\bigr\}.$$
$\mathrm{Graff}(n, k; R)$ is a compact subset of $\mathrm{Graff}(n, k)$ and thus has finite volume. Now define
$$N := \{0\} \cup \{1/\nu \mid \nu \in \mathbb{Z}_{>0}\} \subset [0, 1]$$
and
$$F : N \times \mathrm{Graff}(n, k; R) \to \mathbb{R}, \quad F(r, V) := \mu(C_{1/r} \cap V),$$
where for uniformity we set $C_{1/0} := C$. Observe that $N \times \mathrm{Graff}(n, k; R)$ is a compact space and $F$ is continuous. Set
$$M := \sup\bigl\{|F(r, V)| \mid (r, V) \in N \times \mathrm{Graff}(n, k; R)\bigr\}.$$
Then $M < \infty$. The function
$$f : \mathrm{Graff}(n, k) \to \mathbb{R}, \quad f(V) := \begin{cases} M, & V \in \mathrm{Graff}(n, k; R),\\ 0, & V \notin \mathrm{Graff}(n, k; R),\end{cases}$$
satisfies the requirements of (3.6). ⊓⊔

The resulting map $R_{n,k} : \mathrm{Val}(k) \to \mathrm{Val}(n)$ is linear and is called the Radon transform.

Proposition 3.31. If $\mu \in \mathrm{Val}(k)$ is homogeneous of weight $w$, then $R_{k+j,k}\mu$ is homogeneous of weight $w + j$, that is,
$$R_{k+j,k}\bigl(\mathrm{Val}^w(k)\bigr) \subset \mathrm{Val}^{w+j}(k+j).$$

Proof. If $C \in \mathrm{Polycon}(k+j)$ and $t$ is a positive real number, then
$$(R_{k+j,k}\mu)(tC) = \int_{\mathrm{Gr}(k+j,k)} \int_{L^\perp} \mu\bigl(tC \cap (L + p)\bigr)\, dL^\perp p\, d\nu_k^{k+j}(L).$$
We use the equality $tC \cap (L + p) = t\bigl(C \cap (L + t^{-1}p)\bigr)$ to get
$$(R_{k+j,k}\mu)(tC) = t^w \int_{\mathrm{Gr}(k+j,k)} \int_{L^\perp} \mu\bigl(C \cap (L + t^{-1}p)\bigr)\, dL^\perp p\, d\nu_k^{k+j}(L).$$
We make the change of variables $q = t^{-1}p$ in the inner integral, so that $dL^\perp p = t^j\, dL^\perp q$ and
$$(R_{k+j,k}\mu)(tC) = t^{w+j} \int_{\mathrm{Gr}(k+j,k)} \int_{L^\perp} \mu\bigl(C \cap (L + q)\bigr)\, dL^\perp q\, d\nu_k^{k+j}(L) = t^{w+j}\,(R_{k+j,k}\mu)(C). ⊓⊔$$

So far we know only two valuations in $\mathrm{Val}(n)$: the Euler characteristic and the Euclidean volume $\mathrm{vol}_n$. We can now apply the Radon transform to these valuations and hopefully obtain new ones.

Example 3.32. Let $\mathrm{vol}_k \in \mathrm{Val}(k)$ denote the Euclidean $k$-dimensional volume in $\mathbb{R}^k$.
Then for every $n > k$ and every compact convex subset $C \subset \mathbb{R}^n$ we have
$$(R_{n,k}\mathrm{vol}_k)(C) = \int_{\mathrm{Graff}(n,k)} \mathrm{vol}_k(V \cap C)\, d\lambda_k^n(V) = \int_{\mathrm{Gr}(n,k)} \Bigl(\int_{L^\perp} \mathrm{vol}_k\bigl(C \cap (L + p)\bigr)\, dL^\perp p\Bigr)\, d\nu_k^n(L),$$
where $dL^\perp p$ denotes the Euclidean volume element on the $(n-k)$-dimensional space $L^\perp$. The usual Fubini theorem applied to the decomposition $\mathbb{R}^n = L^\perp \times L$ implies that
$$\mathrm{vol}_n(C) = \int_C d\mathbb{R}^n q = \int_{L^\perp} \mathrm{vol}_k\bigl(C \cap (L + p)\bigr)\, dL^\perp p.$$
Hence
$$(R_{n,k}\mathrm{vol}_k)(C) = \int_{\mathrm{Gr}(n,k)} \mathrm{vol}_n(C)\, d\nu_k^n(L) = \begin{bmatrix} n \\ k \end{bmatrix} \mathrm{vol}_n(C).$$
In other words,
$$R_{n,k}\mathrm{vol}_k = \begin{bmatrix} n \\ k \end{bmatrix} \mathrm{vol}_n.$$
Thus the Radon transform of the Euclidean volume produces the Euclidean volume in a higher-dimensional space, rescaled by a universal multiplicative constant. ⊓⊔

The above example is a bit disappointing, since we have not produced an essentially new type of valuation by applying the Radon transform to the Euclidean volume. The situation is dramatically different when we play the same game with the Euler characteristic.

We are now ready to create new elements of $\mathrm{Val}(n)$. For any positive integers $k \leq n$ define
$$\mu_k^n := R_{n,n-k}\mu_0 \in \mathrm{Val}(n),$$
where $\mu_0 \in \mathrm{Val}(n-k)$ denotes the Euler characteristic. Observe that if $C$ is a compact convex subset of $\mathbb{R}^{k+j}$, then for any $V \in \mathrm{Graff}(k+j, k)$ we have
$$\mu_0(V \cap C) = \begin{cases} 1, & C \cap V \neq \emptyset,\\ 0, & C \cap V = \emptyset.\end{cases}$$
Thus the function $\mathrm{Graff}(k+j, k) \ni V \mapsto \mu_0(V \cap C)$ is the indicator function of the set
$$\mathrm{Graff}(C, k) := \bigl\{V \in \mathrm{Graff}(k+j, k) \mid V \cap C \neq \emptyset\bigr\}.$$
We conclude that
$$\mu_j^{k+j}(C) = (R_{k+j,k}\mu_0)(C) = \int_{\mathrm{Graff}(k+j,k)} I_{\mathrm{Graff}(C,k)}\, d\lambda_k^{k+j} = \lambda_k^{k+j}\bigl(\mathrm{Graff}(C, k)\bigr).$$
Thus the quantity $\mu_j^{k+j}(C)$ "counts" how many $k$-planes intersect $C$. From this interpretation we obtain immediately the following result.

Theorem 3.33 (Sylvester). Suppose $K \subseteq L \subset \mathbb{R}^{k+j}$ are two compact convex subsets. Then the conditional probability that a $k$-plane which meets $L$ also intersects $K$ is equal to $\mu_j^{k+j}(K)\big/\mu_j^{k+j}(L)$. ⊓⊔

We can give another interpretation of $\mu_j^n$. Observe that if $C \in \mathcal{K}^n$ then
$$\mu_j^n(C) = \int_{\mathrm{Graff}(n,n-j)} \mu_0(C \cap V)\, d\lambda_{n-j}^n(V) = \int_{\mathrm{Gr}(n,n-j)} \Bigl(\int_{L^\perp} \mu_0\bigl(C \cap (L + p)\bigr)\, dL^\perp p\Bigr)\, d\nu_{n-j}^n(L).$$
Now observe that if $(C|L^\perp)$ denotes the orthogonal projection of $C$ onto $L^\perp$, then
$$\mu_0\bigl(C \cap (L + p)\bigr) \neq 0 \iff C \cap (L + p) \neq \emptyset \iff p \in (C|L^\perp).$$
An example of this fact in $\mathbb{R}^2$ can be seen in Figure 3. Hence
$$\int_{L^\perp} \mu_0\bigl(C \cap (L + p)\bigr)\, dL^\perp p = \mathrm{vol}_j(C|L^\perp),$$
and thus
$$\mu_j^n(C) = \int_{\mathrm{Gr}(n,n-j)} \mathrm{vol}_j(C|L^\perp)\, d\nu_{n-j}^n(L). \qquad (3.7)$$
Thus $\mu_j^n$ is the "average value" of the volumes of the projections of $C$ onto the $j$-dimensional subspaces of $\mathbb{R}^n$.

FIGURE 3. $C|L^\perp$ for $V$ in $\mathrm{Graff}(2, 1)$.

A priori, some or maybe all of the valuations $\mu_j^n \in \mathrm{Val}^j(n) \subset \mathrm{Val}(n)$, $0 \leq j \leq n$, could be trivial. Note, however, that $\mu_n^n$ is $\mathrm{vol}_n$. Furthermore, volume is intrinsic, so $\mu_n^n$ is in fact $\mu_n$. We show that in fact all of the $\mu_j^n$ are nontrivial.

Proposition 3.34. The valuations
$$\mu_0 = \mu_0^n,\ \mu_1^n,\ \dots,\ \mu_j^n,\ \dots,\ \mu_n^n \in \mathrm{Val}(n)$$
are linearly independent.

Proof. Let $B_n(r)$ denote the closed ball of radius $r$ centered at the origin of $\mathbb{R}^n$, and set $B_n = B_n(1)$. Since $\mu_j^n$ is homogeneous of degree $j$, we have
$$\mu_j^n\bigl(B_n(r)\bigr) = r^j \mu_j^n(B_n) \overset{(3.7)}{=} r^j \int_{\mathrm{Gr}(n,n-j)} \mathrm{vol}_j(B_n|L^\perp)\, d\nu_{n-j}^n(L) = r^j \int_{\mathrm{Gr}(n,n-j)} \mathrm{vol}_j(B_j)\, d\nu_{n-j}^n(L).$$
Hence
$$\mu_j^n\bigl(B_n(r)\bigr) = \omega_j r^j\, \nu_{n-j}^n\bigl(\mathrm{Gr}(n, n-j)\bigr) = \omega_j r^j \begin{bmatrix} n \\ n-j \end{bmatrix} = \omega_j r^j \begin{bmatrix} n \\ j \end{bmatrix}. \qquad (3.8)$$
Observe that the above equality is also valid for $j = 0$. Suppose that for some real constants $c_0, c_1, \dots, c_n$ we have
$$\sum_{j=0}^n c_j \mu_j^n = 0.$$
Then
$$\sum_{j=0}^n c_j \mu_j^n\bigl(B_n(r)\bigr) = 0, \quad \forall r > 0,$$
so that
$$\sum_{j=0}^n c_j \omega_j \begin{bmatrix} n \\ j \end{bmatrix} r^j = 0, \quad \forall r > 0.$$
This implies $c_j = 0$ for all $j$. ⊓⊔

The valuation $\mu_j^n$ is homogeneous of degree $j$, and it induces a continuous, invariant, homogeneous valuation of degree $j$ on the lattice $\mathrm{Pix}(n) \subset \mathrm{Polycon}(n)$. Observe that if $C_1 \subset C_2$ are two compact convex subsets, then
$$\mu_j^n(C_1) \leq \mu_j^n(C_2).$$
Since every $n$-dimensional parallelotope contains a ball of positive radius, we deduce from (3.8) that $\mu_j^n(P) \neq 0$ for any $n$-dimensional parallelotope $P$. Corollary 2.22 implies that there exists a constant $C = C_j^n$ such that
$$C_j^n \mu_j^n(P) = \mu_j(P), \quad \forall P \in \mathrm{Pix}(n).$$
(3.9)

We denote by $\widehat\mu_j^n$ the valuation
$$\widehat\mu_j^n := C_j^n \mu_j^n \in \mathrm{Val}^j(n). \qquad (3.10)$$
Since $\mu_n^n$ is $\mathrm{vol}_n$, as is $\mu_n$, we know that $C_n^n = 1$. Thus $\widehat\mu_n^n = \mu_n^n = \mu_n = \mathrm{vol}_n$. We will prove in later sections that the sequence $(\widehat\mu_j^n)$ defines an intrinsic valuation and that in fact all the constants $C_j^n$ are equal to 1.

Remark 3.35. Observe that
$$\mu_{n-1}^n(B_n) = \omega_{n-1}\begin{bmatrix} n \\ n-1 \end{bmatrix} = \omega_{n-1}[n] = \omega_{n-1}\,\frac{n\omega_n}{2\omega_{n-1}} = \frac{n\omega_n}{2} = \frac{\sigma_{n-1}}{2} = \frac{1}{2} \times \text{surface area of } B_n.$$
This is a special case of a more general fact, discussed in the next subsection, which will imply that the constants $C_{n-1}^n$ in (3.9) are equal to 1. ⊓⊔

§3.7. The Cauchy Projection Formula. We want to investigate in some detail the valuations $\mu_j^{j+1} \in \mathrm{Val}(j+1)$. Suppose $C \in \mathcal{K}^{j+1}$ and $\ell \in \mathrm{Gr}(j+1, 1)$. If we denote by $(C|\ell^\perp)$ the orthogonal projection of $C$ onto the hyperplane $\ell^\perp$, then we obtain from (3.7)
$$\mu_j^{j+1}(C) = \int_{\mathrm{Gr}(j+1,1)} \mu_j(C|\ell^\perp)\, d\nu_1^{j+1}(\ell), \quad \forall C \in \mathcal{K}^{j+1}. \qquad (3.11)$$
Loosely speaking, $\mu_j^{j+1}(C)$ is the average value of the "areas" of the shadows of $C$ on the hyperplanes of $\mathbb{R}^{j+1}$. To clarify the meaning of this average we need the following elementary fact.

Lemma 3.36. For any $v \in S^j \subset \mathbb{R}^{j+1}$,
$$\int_{S^j} |u \cdot v|\, du = 2\omega_j. \qquad (3.12)$$

Proof. We recall from elementary calculus that an integral can be treated as a Riemann sum, i.e.
$$\int_{S^j} |u \cdot v|\, du \approx \sum_i |u_i \cdot v|\, S(A_i),$$
where the sphere $S^j$ has been partitioned into many small pieces (here denoted $A_i$), each containing a point $u_i$ and possessing area $S(A_i)$.

Let $\tilde A_i$ denote the orthogonal projection of $A_i$ onto the hyperplane tangent to $S^j$ at the point $u_i$. Denote by $\tilde A_i|v^\perp$ the orthogonal projection of $\tilde A_i$ onto the hyperplane $v^\perp$. Since $A_i \subset S^j$, this projection lands entirely inside the unit $j$-dimensional disk lying inside $v^\perp$. Since $\tilde A_i$ lies in a hyperplane with unit normal $u_i$, its projected area is the product of its area and the cosine of the angle between the two hyperplanes, i.e. $S(\tilde A_i|v^\perp) = |u_i \cdot v|\, S(\tilde A_i)$.

For a fine enough partition of $S^j$ we have that $S(\tilde A_i) \approx S(A_i)$.
Clearly then $\sum_i |u_i \cdot v|\, S(\tilde A_i) \approx \sum_i |u_i \cdot v|\, S(A_i)$, and thus $S(\tilde A_i|v^\perp) \approx S(A_i|v^\perp)$. Therefore,
$$\int_{S^j} |u \cdot v|\, du \approx \sum_i S(\tilde A_i|v^\perp).$$
Because the collection of sets $\{\tilde A_i|v^\perp\}$ contains matching portions above and below the hyperplane $v^\perp$, covering the unit $j$-dimensional ball $B_j$ once from above and once from below, we see that
$$\int_{S^j} |u \cdot v|\, du \approx 2\omega_j.$$
All of the approximations in this proof become equalities in the limit as the mesh of the partition $(A_i)$ goes to zero. ⊓⊔

Theorem 3.37 (Cauchy's surface area formula). For every convex polytope $K \subset \mathbb{R}^{j+1}$ denote by $S(K)$ its surface area. Then
$$S(K) = \frac{1}{\omega_j} \int_{S^j} \mu_j(K|u^\perp)\, du.$$

Proof. Let $P$ have facets $P_1, \dots, P_m$ with corresponding unit normal vectors $v_1, \dots, v_m$ and surface areas $\alpha_1, \dots, \alpha_m$. For every unit vector $u$, the projection $(P_i|u^\perp)$ onto the $j$-dimensional subspace orthogonal to $u$ has $j$-dimensional volume
$$\mu_j(P_i|u^\perp) = \alpha_i |u \cdot v_i|.$$
For $u \in S^j$, the region $(P|u^\perp)$ is covered completely by the projections of the facets $P_i$:
$$(P|u^\perp) = \bigcup_i (P_i|u^\perp).$$
For almost every point $p$ in the projection $(P|u^\perp)$, the line containing $p$ and parallel to $u$ intersects the boundary of $P$ in two different points. (The set of points $p \in (P|u^\perp)$ for which this does not happen is negligible, i.e. it has $j$-dimensional volume 0.) Thus, in the above decomposition of $(P|u^\perp)$ as a union of projections of facets, almost every point of $(P|u^\perp)$ is contained in exactly two such projections. Therefore
$$\mu_j(P|u^\perp) = \frac{1}{2} \sum_i \mu_j(P_i|u^\perp),$$
so that
$$\int_{S^j} \mu_j(P|u^\perp)\, du = \sum_{i=1}^m \frac{\alpha_i}{2} \int_{S^j} |u \cdot v_i|\, du \overset{(3.12)}{=} \sum_{i=1}^m \alpha_i \omega_j = \omega_j S(P). ⊓⊔$$

Recall that for parallelotopes $P \in \mathrm{Par}(j+1)$,
$$\mu_j(P) = \frac{1}{2} S(P).$$
In this light, Cauchy's surface area formula becomes
$$\mu_j(P) = \frac{1}{2\omega_j} \int_{S^j} \mu_j(P|u^\perp)\, du, \quad \forall P \in \mathrm{Par}(j+1).$$
We want to rewrite this as an integral over the Grassmannian $\mathrm{Gr}(j+1, 1)$. Recall that we have a 2-to-1 map $\ell : S^j \to \mathrm{Gr}(j+1, 1)$.
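Lemma 3.36 is easy to sanity-check numerically. The sketch below is a Monte Carlo estimate for $j = 2$ (the sample size, seed, and tolerance are arbitrary choices, not part of the text): it samples $u$ uniformly on $S^2$ via normalized Gaussians and compares $\int_{S^2} |u \cdot v|\, du$ against $2\omega_2 = 2\pi$.

```python
import math
import random

random.seed(1)

# Monte Carlo check of Lemma 3.36 on S^2: the integral of |u.v| over S^2
# equals 2*omega_2 = 2*pi, independently of the unit vector v.
v = (0.0, 0.0, 1.0)          # by rotation invariance, v = e_3 suffices
N = 200000
acc = 0.0
for _ in range(N):
    # a uniform point on the sphere: normalize a standard Gaussian vector
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x*x + y*y + z*z)
    acc += abs(z / r)        # |u . v| with v = e_3

sigma2 = 4 * math.pi         # surface area of S^2
integral = sigma2 * acc / N  # (average of |u.v|) x (total area)
assert abs(integral - 2 * math.pi) < 0.05
```

The exact value of the spherical average of $|u \cdot v|$ on $S^2$ is $1/2$, so the estimate should hover near $4\pi \cdot \tfrac12 = 2\pi$.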
An invariant measure $\mu_c$ on $\mathrm{Gr}(j+1, 1)$ of total volume $c$ corresponds to an invariant measure $\tilde\mu_c$ on $S^j$ of total volume $2c$. If $\sigma$ denotes the invariant measure on $S^j$ defined by the area element on the sphere of radius 1, then
$$\sigma(S^j) = \sigma_j, \quad \tilde\mu_c = \frac{2c}{\sigma_j}\,\sigma.$$
Any function $f : \mathrm{Gr}(j+1, 1) \to \mathbb{R}$ defines a function $\ell^* f := f \circ \ell : S^j \to \mathbb{R}$ called the pullback of $f$ via $\ell$. Conversely, any even function $g$ on the sphere is the pullback of some function $\bar g$ on $\mathrm{Gr}(j+1, 1)$. Moreover,
$$\int_{\mathrm{Gr}(j+1,1)} f\, d\mu_c = \frac{1}{2} \int_{S^j} \ell^* f\, d\tilde\mu_c = \frac{c}{\sigma_j} \int_{S^j} \ell^* f\, d\sigma.$$
This shows that
$$\int_{S^j} \ell^* f\, d\sigma = \frac{\sigma_j}{c} \int_{\mathrm{Gr}(j+1,1)} f\, d\mu_c.$$
The measure $\nu_1^{j+1}$ has total volume $c = [j+1]$, so that
$$\mu_j(P) = \frac{1}{2\omega_j} \int_{S^j} \mu_j(P|u^\perp)\, du = \frac{\sigma_j}{2[j+1]\omega_j} \int_{\mathrm{Gr}(j+1,1)} \mu_j(P|L^\perp)\, d\nu_1^{j+1}(L).$$
Using (3.11) we deduce
$$\mu_j(P) = \frac{\sigma_j}{2[j+1]\omega_j}\, \mu_j^{j+1}(P)$$
for every parallelotope $P \subset \mathbb{R}^{j+1}$. Recalling that
$$\sigma_j = (j+1)\omega_{j+1}, \quad [j+1] = \frac{(j+1)\omega_{j+1}}{2\omega_j},$$
we conclude that
$$\frac{\sigma_j}{2[j+1]\omega_j} = 1,$$
so that finally
$$\mu_j(P) = \mu_j^{j+1}(P), \quad \forall P \in \mathrm{Par}(j+1).$$

4. The Characterization Theorem

We are now ready to completely characterize $\mathrm{Val}(n)$.

§4.1. The Characterization of Simple Valuations. Before we can state and prove the characterization theorem for simple valuations, we need to recall a few things. $E_n$ denotes the group of rigid motions of $\mathbb{R}^n$, i.e. the subgroup of affine transformations of $\mathbb{R}^n$ generated by translations and rotations. Two subsets $S_0, S_1 \subset \mathbb{R}^n$ are called congruent if there exists $\varphi \in E_n$ such that $\varphi(S_0) = S_1$. A valuation $\mu : \mathrm{Polycon}(n) \to \mathbb{R}$ is called simple if $\mu(S) = 0$ for every $S \in \mathrm{Polycon}(n)$ such that $\dim S < n$.

The Minkowski sum of two compact convex sets $K$ and $L$ is given by
$$K + L = \{x + y \mid x \in K,\ y \in L\}.$$
We call a zonotope a finite Minkowski sum of straight line segments, and we call a zonoid a convex set $Y$ that can be approximated in the Hausdorff metric by a convergent sequence of zonotopes. If a compact convex set is symmetric about the origin, then we call it a centered set.
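The support function of a zonotope can be computed directly from its generating segments: for $Z = \sum_i [-u_i, u_i]$, the additivity of support functions under Minkowski sums (Remark 3.10) gives $h_Z(x) = \sum_i |\langle u_i, x \rangle|$. The sketch below (illustrative only; the segment list `segs` is an arbitrary choice) checks this identity in the plane against brute-force maximization over the candidate vertices $\sum_i \varepsilon_i u_i$, $\varepsilon_i = \pm 1$:

```python
import math
from itertools import product

def h_zonotope(segments, x):
    # h_Z(x) = sum_i |<u_i, x>| for Z = [-u1,u1] + ... + [-um,um]
    return sum(abs(u[0]*x[0] + u[1]*x[1]) for u in segments)

def h_bruteforce(segments, x):
    # definition h_K(x) = max_{y in K} <x, y>; for the zonotope the maximum
    # is attained at a signed vertex sum_i eps_i * u_i
    best = -float("inf")
    for eps in product([-1, 1], repeat=len(segments)):
        y0 = sum(e * u[0] for e, u in zip(eps, segments))
        y1 = sum(e * u[1] for e, u in zip(eps, segments))
        best = max(best, x[0]*y0 + x[1]*y1)
    return best

segs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # three generating segments
for k in range(12):
    t = 2 * math.pi * k / 12
    x = (math.cos(t), math.sin(t))
    assert abs(h_zonotope(segs, x) - h_bruteforce(segs, x)) < 1e-12
```

The same additivity is what makes positive combinations (and limits) of the functions $|\langle u, \cdot \rangle|$ support functions of zonoids, which is the starting point of Lemma 4.1 below.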
We denote the set of centered sets in $\mathcal{K}^n$ by $\mathcal{K}_c^n$. The proof of the characterization theorem relies in a crucial way on the following technical result.

Lemma 4.1. Let $K \in \mathcal{K}_c^n$. Suppose that the support function of $K$ is smooth. Then there exist zonoids $Y_1, Y_2$ such that $K + Y_2 = Y_1$. ⊓⊔

Idea of proof. We begin by observing that the support function of the centered line segment in $\mathbb{R}^n$ with endpoints $u$ and $-u$ is given by
$$h_u : S^{n-1} \to \mathbb{R}, \quad h_u(x) := |\langle u, x \rangle|.$$
Thus, any function $g : S^{n-1} \to \mathbb{R}$ which is a linear combination of the $h_u$ with positive coefficients is the support function of a centered zonotope. In particular, any uniform limit of such linear combinations is the support function of a zonoid. For example, a function $g : S^{n-1} \to \mathbb{R}$ of the form
$$g(x) = \int_{S^{n-1}} |\langle u, x \rangle| f(u)\, dS_u,$$
with $f : S^{n-1} \to [0, \infty)$ a continuous, symmetric function, is the support function of a zonoid.

Let us denote by $C_{\mathrm{even}}(S^{n-1})$ the space of continuous, symmetric (even) functions $S^{n-1} \to \mathbb{R}$. We obtain a linear map
$$\mathcal{C} : C_{\mathrm{even}}(S^{n-1}) \to C_{\mathrm{even}}(S^{n-1}), \quad (\mathcal{C}f)(x) := \int_{S^{n-1}} |\langle u, x \rangle| f(u)\, dS_u,$$
called the cosine transform. We can thus rephrase the last remark by saying that the cosine transform of a nonnegative, continuous, even function on $S^{n-1}$ is the support function of a zonoid.

Note that if $f$ is continuous and even, then we can write it as the difference of two continuous, even, nonnegative functions:
$$f = f_+ - f_-, \quad f_+ = \frac{1}{2}(f + |f|), \quad f_- = \frac{1}{2}(|f| - f).$$
Thus, if $h$ is the support function of some $K \in \mathcal{K}_c^n$ such that $h = \mathcal{C}f$, then
$$h + \mathcal{C}f_- = \mathcal{C}f_+.$$
Now $\mathcal{C}f_-$ is the support function of a zonoid $Y_2$ and $\mathcal{C}f_+$ is the support function of a zonoid $Y_1$, and thus
$$h + \mathcal{C}f_- = \mathcal{C}f_+ \iff K + Y_2 = Y_1.$$
We deduce that any continuous function in the image of the cosine transform is the support function of a set $K \in \mathcal{K}_c^n$ satisfying the conclusions of the lemma.

The punch line is now that any smooth, even function $f : S^{n-1} \to \mathbb{R}$ is the cosine transform of a smooth, even function on the sphere.
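As a concrete instance of the cosine transform, take $n = 2$ and $f \equiv 1$ on $S^1$: by Lemma 3.36 (with $j = 1$), $\mathcal{C}f$ is the constant function $2\omega_1 = 4$, i.e. the restriction to $S^1$ of the support function of the disk of radius 4, a zonoid, as the discussion above predicts. A quick numerical check (midpoint quadrature; the grid size is an arbitrary choice):

```python
import math

# Cosine transform on S^1 of the constant function f = 1:
# (Cf)(x) = integral over S^1 of |<u, x>| du = 2*omega_1 = 4 for every x,
# by Lemma 3.36 with j = 1. By rotation invariance it suffices to take
# x = (1, 0), so the integrand is |cos t| on [0, 2*pi).
N = 200000
total = 0.0
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N      # midpoint rule
    total += abs(math.cos(t))
integral = total * (2 * math.pi / N)
assert abs(integral - 4.0) < 1e-4
```

This is the simplest illustration of the fact that nonnegative even functions are carried by $\mathcal{C}$ to support functions of zonoids (here, a disk).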
This existence statement is a nontrivial result, and its proof is based on a very detailed knowledge of the action of the cosine transform on the harmonic polynomials on $S^{n-1}$. For a proof along these lines we refer to [Gr, Chap. 3]. For an alternate approach, which reduces the problem to a similar statement about Fourier transforms, we refer to [Gon, Prop. 6.3]. ⊓⊔

Remark 4.2. The connection between the classification of valuations and the spectral properties of the cosine transform is, philosophically, the key reason why the classification turns out to be simple. The essence of the classification is invariant-theoretic, and it roughly says that invariance under the group of motions dramatically cuts down the number of possible choices. ⊓⊔

Theorem 4.3. Suppose that $\mu \in \mathrm{Val}(n)$ is a simple valuation. If $\mu$ is even, that is,
$$\mu(K) = \mu(-K), \quad \forall K \in \mathcal{K}^n,$$
then
$$\mu([0, 1]^n) = 0 \iff \mu \text{ is identically zero on } \mathcal{K}^n.$$

Proof. Clearly only the implication $\Rightarrow$ is nontrivial. To prove it we employ a simple strategy: by cleverly cutting and pasting and taking limits, we produce more and more sets $S \in \mathrm{Polycon}(n)$ such that $\mu(S) = 0$, until we get what we want. We argue by induction on the dimension $n$ of the ambient space. The result is true for $n = 1$ because in this case $\mathcal{K}^1 = \mathrm{Par}(1)$ and we can invoke the volume theorem for $\mathrm{Pix}(1)$, Theorem 2.19.

So, now suppose that $n > 1$ and the theorem holds for valuations on $\mathcal{K}^{n-1}$. Note that for every positive integer $k$ the cube $[0, 1]^n$ decomposes into $k^n$ cubes congruent to $[0, 1/k]^n$ and overlapping on sets of dimension $< n$. We deduce that
$$\mu\bigl([0, 1/k]^n\bigr) = 0.$$
Since any box whose edges have rational lengths decomposes into cubes congruent to $[0, 1/k]^n$ for some positive integer $k$, we deduce that $\mu$ vanishes on such boxes. By continuity, it vanishes on all boxes $P \in \mathrm{Par}(n)$. Using rotation invariance, we conclude that $\mu(C) = 0$ for all boxes $C$ with sides parallel to any other orthonormal frame.
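The first subdivision step can be written out in full. Cutting $[0,1]^n$ into the $k^n$ subcubes $Q_1, \dots, Q_{k^n}$, each a translate of $[0, 1/k]^n$, the inclusion-exclusion corrections involve only intersections of dimension $< n$, on which the simple valuation $\mu$ vanishes, so

```latex
0 = \mu\bigl([0,1]^n\bigr)
  = \sum_{i=1}^{k^n} \mu(Q_i)
  = k^n \, \mu\bigl([0,1/k]^n\bigr),
```

the last equality using translation invariance (each $Q_i$ is a translate of $[0,1/k]^n$). Hence $\mu\bigl([0,1/k]^n\bigr) = 0$.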
We now consider orthogonal cylinders with convex bases, i.e., sets congruent to convex sets of the form $C \times [0, r] \in \mathcal{K}^n$, $C \in \mathcal{K}^{n-1}$. For all real numbers $a < b$ define a valuation $\tau = \tau_{a,b}$ on $\mathcal{K}^{n-1}$ by
\[ \tau(K) = \mu(K \times [a,b]), \quad \text{for all } K \in \mathcal{K}^{n-1}. \]
Note that $[0,1]^{n-1} \times [a,b] \in \mathrm{Par}(n)$, so that
\[ \tau([0,1]^{n-1}) = \mu([0,1]^{n-1} \times [a,b]) = 0. \]
Then $\tau$ satisfies the inductive hypotheses, so $\tau = 0$. Hence $\mu$ vanishes on all orthogonal cylinders with convex bases.

Now suppose that $M$ is a prism, with congruent top and bottom faces parallel to the hyperplane $x_n = 0$, but whose cylindrical boundary is not orthogonal to $x_n = 0$ but rather meets it at some constant angle. (See the top of Figure 4.)

FIGURE 4. Cutting and pasting a prism.

We now cut $M$ into two pieces $M_1, M_2$ by a hyperplane orthogonal to the cylindrical boundary of $M$. Using translation invariance, slide $M_1$ and $M_2$ around and glue them together along the original top and bottom faces. (See Figure 4.) We then obtain a right cylinder $C$. Thus,
\[ \mu(M) = \mu(M_1) + \mu(M_2) = \mu(C) = 0. \]
Note that we must actually be careful with this cutting and pasting. It is possible that $M$ is too "fat", preventing us from slicing $M$ with a hyperplane orthogonal to the cylindrical boundary that separates the top face from the bottom face. If this problem occurs, we can easily remedy it by subdividing the top and bottom of $M$ into sufficiently small convex pieces. Using the simplicity of $\mu$, we can then consider each piece separately.

Now let $P$ be a convex polytope having facets $P_1, \ldots, P_m$, and corresponding outward unit normal vectors $u_1, \ldots, u_m$. Let $v \in \mathbb{R}^n$ and let $\bar v$ denote the straight line segment connecting the point $v$ to the origin. Without loss of generality, suppose that $P_1, \ldots, P_j$ are exactly those facets of $P$ such that $\langle u_i, v\rangle > 0$ for all $1 \le i \le j$. We can thus express the Minkowski sum $P + \bar v$ as
\[ P + \bar v = P \cup \Bigl( \bigcup_{i=1}^{j} (P_i + \bar v) \Bigr), \]
where each term in the above union is either disjoint from the others or intersects them in dimension less than $n$ (see Figure 5).

FIGURE 5. Smearing a convex polytope.

This simply results from the fact that $P + \bar v$ is the "smearing" of $P$ in the direction of $v$. Hence, since $\mu$ is simple,
\[ \mu(P + \bar v) = \mu(P) + \sum_{i=1}^{j} \mu(P_i + \bar v). \]
But now each term $P_i + \bar v$ is the smearing of the facet $P_i$ in the direction of $v$, making $P_i + \bar v$ a prism, so that
\[ \mu(P + \bar v) = \mu(P) \]
for all convex polytopes $P$ and line segments $\bar v$. By translation invariance $\bar v$ could be any segment in the space. We deduce iteratively that for all convex polytopes $P$ and all zonotopes $Z$ we have
\[ \mu(Z) = 0 \quad \text{and} \quad \mu(P + Z) = \mu(P). \]
By continuity,
\[ \mu(Y) = 0 \quad \text{and} \quad \mu(K + Y) = \mu(K), \]
for all $K \in \mathcal{K}^n$ and all zonoids $Y$.

Now suppose that $K \in \mathcal{K}^n_c$ has a smooth support function. Then by Lemma 4.1 there are zonoids $Y_1, Y_2$ such that $K + Y_2 = Y_1$. Thus,
\[ \mu(K) = \mu(K + Y_2) = \mu(Y_1) = 0. \]
Since any centered convex body can be approximated by a sequence of centered convex sets with smooth support functions, by continuity of $\mu$, $\mu$ is zero on all centered compact convex sets.

Now let $\Delta$ be an $n$-dimensional simplex with one vertex at the origin. Let $u_1, \ldots, u_n$ denote the other vertices of $\Delta$, and let $P$ be the parallelepiped spanned by the vectors $u_1, \ldots, u_n$. Let $v = u_1 + \cdots + u_n$. Let $\xi_1$ be the hyperplane passing through the points $u_1, \ldots, u_n$ and let $\xi_2$ be the hyperplane passing through the points $v - u_1, \ldots, v - u_n$. Finally, denote by $Q$ the set of all points of $P$ lying between the hyperplanes $\xi_1$ and $\xi_2$. We now write
\[ P = \Delta \cup Q \cup (-\Delta + v), \]
where each term in the union intersects any other in dimension less than $n$. $P$ and $Q$ are centered (up to the translation by $v/2$), so
\[ 0 = \mu(P) = \mu(\Delta) + \mu(Q) + \mu(-\Delta + v) = \mu(\Delta) + \mu(-\Delta). \]
Thus $\mu(\Delta) = -\mu(-\Delta)$. Yet by assumption $\mu(\Delta) = \mu(-\Delta)$. Thus $\mu(\Delta) = 0$.
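The smearing decomposition $P + \bar v = P \cup \bigl(\bigcup_i P_i + \bar v\bigr)$ can be checked concretely in the plane. The following is our own sketch (not from the text), applying the decomposition to $\mu = $ area rather than to the simple valuation of the proof: for the unit square $P = [0,1]^2$ and $v = (a, b)$ with $a, b > 0$, only the facets with outward normals $e_1$ and $e_2$ satisfy $\langle u_i, v\rangle > 0$, and the corresponding prisms $P_i + \bar v$ contribute areas $a$ and $b$.

```python
# Planar sanity check of the smearing decomposition with mu = area.
def shoelace(pts):
    # signed area of a simple polygon given counterclockwise vertices
    s = 0.0
    for i in range(len(pts)):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % len(pts)]
        s += x0 * y1 - x1 * y0
    return s / 2.0

a, b = 0.5, 0.3
# vertices of [0,1]^2 + (segment from the origin to (a, b)), counterclockwise
hexagon = [(0, 0), (1, 0), (1 + a, b), (1 + a, 1 + b), (a, 1 + b), (0, 1)]
print(shoelace(hexagon))   # 1.8 = area(P) + a + b
```

For the simple valuation $\mu$ of the proof the two prism terms vanish, which is exactly how one gets $\mu(P + \bar v) = \mu(P)$; for the area they contribute $\langle u_i, v\rangle \cdot \mathrm{length}(P_i)$, as the check confirms.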
Now, since any convex polytope can be expressed as a finite union of simplices, each of which intersects any other in dimension less than $n$, we have that $\mu(P) = 0$ for all convex polytopes $P$. Since convex polytopes are dense in $\mathcal{K}^n$, we conclude that $\mu$ is zero on $\mathcal{K}^n$, which is what we wanted. ⊓⊔

We now immediately get the following result.

Theorem 4.4. Suppose that $\mu$ is a continuous simple valuation on $\mathcal{K}^n$ that is translation and rotation invariant. Then there exists some $c \in \mathbb{R}$ such that
\[ \mu(K) + \mu(-K) = c\,\mu_n(K) \]
for all $K \in \mathcal{K}^n$.

Proof. For $K \in \mathcal{K}^n$, define
\[ \eta(K) = \mu(K) + \mu(-K) - 2\mu([0,1]^n)\,\mu_n(K). \]
Then $\eta$ satisfies the conditions of the previous theorem, so $\eta$ is zero on $\mathcal{K}^n$. Thus,
\[ \mu(K) + \mu(-K) = c\,\mu_n(K), \quad \text{where } c = 2\mu([0,1]^n). \]
⊓⊔

§4.2. The Volume Theorem.

Theorem 4.5 (Sah). Let $\Delta$ be an $n$-dimensional simplex. There exist convex polytopes $P_1, \ldots, P_m$ such that $\Delta = P_1 \cup \cdots \cup P_m$, where each term of the union intersects another in dimension at most $n-1$, and where each of the polytopes $P_i$ is symmetric under a reflection across a hyperplane, i.e., for each $P_i$ there exists a hyperplane such that $P_i$ is symmetric when reflected across it.

Proof. Let $x_0, \ldots, x_n$ be the vertices of $\Delta$ and let $\Delta_i$ be the facet of $\Delta$ opposite to $x_i$. Let $z$ be the center of the inscribed sphere of $\Delta$ and let $z_i$ be the foot of the perpendicular line from $z$ to the facet $\Delta_i$. For all $i < j$, let $A_{i,j}$ denote the convex hull of $z$, $z_i$, $z_j$ and the face $\Delta_i \cap \Delta_j$ (see Figure 6).

FIGURE 6. Cutting a simplex into symmetric polytopes.

Then
\[ \Delta = \bigcup_{0 \le i < j \le n} A_{i,j}, \]
and each $A_{i,j}$ is symmetric under the reflection across the hyperplane through $z$ and the face $\Delta_i \cap \Delta_j$ which interchanges $z_i$ and $z_j$. ⊓⊔

Theorem 4.6 (Volume Theorem for $\mathrm{Polycon}(n)$). Suppose that $\mu$ is a continuous, rigid motion invariant, simple valuation on $\mathcal{K}^n$. Then there exists $c \in \mathbb{R}$ such that $\mu(K) = c\,\mu_n(K)$ for all $K \in \mathcal{K}^n$. Note that we can extend continuous valuations on $\mathcal{K}^n$ to continuous valuations on $\mathrm{Polycon}(n)$, so the theorem also holds with $\mathcal{K}^n$ replaced by $\mathrm{Polycon}(n)$.

Proof.
Since $\mu$ is translation invariant and simple, by Theorem 4.4 there exists $a \in \mathbb{R}$ such that $\mu(K) + \mu(-K) = a\,\mu_n(K)$ for all $K \in \mathcal{K}^n$. Let $\Delta$ be an $n$-simplex in $\mathbb{R}^n$. Then
\[ \mu(\Delta) + \mu(-\Delta) = a\,\mu_n(\Delta). \]
Suppose $n$ is even, meaning the dimension of $\mathbb{R}^n$ is even. Then $\Delta$ and $-\Delta$ differ by a rotation: this is clearly true in $\mathbb{R}^2$, and in general we can rotate $\Delta$ to $-\Delta$ in each "orthogonal" plane of $\mathbb{R}^n$, so that $\Delta$ and $-\Delta$ differ by the composition of these rotations, i.e., a rotation. Then
\[ \mu(\Delta) = \mu(-\Delta) = \frac{a}{2}\,\mu_n(\Delta). \]
Now suppose that $n$ is odd. By Theorem 4.5, there exist polytopes $P_1, \ldots, P_m$ such that $\Delta = P_1 \cup \cdots \cup P_m$, with $\dim P_i \cap P_j \le n-1$, and each $P_i$ is symmetric under a reflection across a hyperplane. Thus, we can transform $P_i$ into $-P_i$ by a series of rotations followed by the reflection across that hyperplane. Then $\mu(P_i) = \mu(-P_i)$ and
\[ \mu(-\Delta) = \sum_{i=1}^{m} \mu(-P_i) = \sum_{i=1}^{m} \mu(P_i) = \mu(\Delta). \]
Then for all $n$-simplices $\Delta$, $\mu(\Delta) = \frac{a}{2}\,\mu_n(\Delta)$.

Let $c = \frac{a}{2}$ and suppose $P$ is a convex polytope in $\mathbb{R}^n$. Then $P$ is a finite union of simplices, i.e., $P = \Delta_1 \cup \cdots \cup \Delta_m$ such that $\dim \Delta_i \cap \Delta_j < n$. Then
\[ \mu(P) = \mu(\Delta_1) + \cdots + \mu(\Delta_m) = c\,\mu_n(\Delta_1) + \cdots + c\,\mu_n(\Delta_m) = c\,\mu_n(P). \]
The set of convex polytopes is dense in $\mathcal{K}^n$, so since $\mu$ is continuous, $\mu(K) = c\,\mu_n(K)$ for all $K \in \mathcal{K}^n$. ⊓⊔

§4.3. Intrinsic Valuations. We would like to show that for each $k \ge 0$ the valuations $\widehat\mu{}^n_k$ determine an intrinsic valuation, meaning that for all $N \ge n$ and $P \in \mathrm{Polycon}(n)$,
\[ \widehat\mu{}^n_k(P) = \widehat\mu{}^N_k(P). \]
For uniformity, we set
\[ \widehat\mu{}^n_k = 0, \quad \forall k > n. \]

Theorem 4.7. For every $k \ge 0$ the sequence
\[ \bigl(\widehat\mu{}^n_k\bigr)_{n \ge 0} \in \prod_{n \ge 0} \mathrm{Val}(n) \]
defines an intrinsic valuation, i.e., an element of
\[ \mathrm{Val}(\infty) = \operatorname{proj\,lim}_n \mathrm{Val}(n). \]

Proof. First of all let us fix $k$. Since there is no danger of confusion we will write $\widehat\mu{}^n$ instead of $\widehat\mu{}^n_k$. Consider the statement
\[ \widehat\mu{}^n(P) = \widehat\mu{}^\ell(P), \quad \forall P \in \mathrm{Polycon}(\ell). \tag{$P_{n,\ell}$} \]
Note that $\ell \le n$. We want to prove by induction over $n$ that the statement
\[ P_{n,\ell} \text{ holds for any } \ell \le n \tag{$S_n$} \]
is true for any $n$. The statement $S_0$ is trivially true.
We will prove that $S_0, \cdots, S_{n-1} \Longrightarrow S_n$. Thus, we know that $P_{m,\ell}$ is true for every $\ell \le m < n$, and we want to prove that
\[ P_{n,\ell} \text{ is true for any } \ell. \tag{$T_\ell$} \]
To prove $T_\ell$ we argue by induction on $\ell$ ($n$ is fixed). The result is obviously true for $\ell = 0, 1$, since in these cases $\mathrm{Pix} = \mathrm{Polycon}$. Thus, we assume that $P_{n,\ell}$ is true and we will show that $P_{n,\ell+1}$ is true. In other words, we know that
\[ \widehat\mu{}^n(P) = \widehat\mu{}^\ell(P), \quad \forall P \in \mathrm{Polycon}(\ell), \tag{4.1} \]
and we want to prove that
\[ \widehat\mu{}^n(P) = \widehat\mu{}^{\ell+1}(P), \quad \forall P \in \mathrm{Polycon}(\ell+1). \]
Clearly the last statement is trivially true if $\ell + 1 = n$, so we may assume $\ell + 1 < n$. We denote by $\nu$ the restriction of $\widehat\mu{}^n$ to $\mathrm{Polycon}(\ell+1)$. Then $\nu$ restricts to $\widehat\mu{}^\ell$ on $\mathrm{Polycon}(\ell)$, since $\widehat\mu{}^n$ restricts to $\widehat\mu{}^\ell$ by (4.1). Since $\ell + 1 < n$ we deduce from $S_{\ell+1}$ that $\widehat\mu{}^{\ell+1}$ restricts to $\widehat\mu{}^\ell$ on $\mathrm{Polycon}(\ell)$. Then $\nu - \widehat\mu{}^{\ell+1}$ vanishes on $\mathrm{Polycon}(\ell)$, meaning it is a continuous invariant simple valuation on $\mathrm{Polycon}(\ell+1)$. Theorem 4.6 implies that there exists $c \in \mathbb{R}$ such that $\nu - \widehat\mu{}^{\ell+1} = c\,\mu_{\ell+1}$ on $\mathrm{Polycon}(\ell+1)$. On the other hand, $\nu = \widehat\mu{}^{\ell+1}$ on $\mathrm{Pix}(\ell+1)$, meaning $c = 0$ and thus $\nu = \widehat\mu{}^{\ell+1}$. ⊓⊔

Based on this result, the superscript of $\widehat\mu{}^n_k$ does not matter. Therefore, we define
\[ \widehat\mu_k := \widehat\mu{}^n_k. \]
At this point, we are able to characterize the continuous, invariant valuations on $\mathrm{Polycon}(n)$, as we did for $\mathrm{Pix}(n)$ in Theorem 2.20.

Theorem 4.8 (Hadwiger's Characterization Theorem). The valuations $\widehat\mu_0, \widehat\mu_1, \ldots, \widehat\mu_n$ form a basis for the vector space $\mathrm{Val}(n)$.

Proof. Let $\mu \in \mathrm{Val}(n)$ and let $H$ be a hyperplane in $\mathbb{R}^n$. The restriction of $\mu$ to $H$ is then a continuous invariant valuation on $H$. Note that the choice of $H$ does not matter, since $\mu$ is rigid motion invariant and all hyperplanes can be obtained through rigid motions of a single hyperplane. Recall that $\mathrm{Polycon}(1) = \mathrm{Pix}(1)$, and by Theorem 2.20 the statement is true for $n = 1$. We take this as our base case and proceed by induction.
By the inductive hypothesis, for every polyconvex set $A$ in $H$ we may assume that
\[ \mu(A) = \sum_{i=0}^{n-1} c_i\,\widehat\mu_i(A). \]
Thus,
\[ \mu - \sum_{i=0}^{n-1} c_i\,\widehat\mu_i \]
is a simple valuation in $\mathrm{Val}(n)$. Then, by Theorem 4.6,
\[ \mu - \sum_{i=0}^{n-1} c_i\,\widehat\mu_i = c_n\,\widehat\mu_n. \]
We move the sum to the other side and obtain
\[ \mu = \sum_{i=0}^{n} c_i\,\widehat\mu_i. \]
⊓⊔

In a definition analogous to that on $\mathrm{Pix}(n)$, a valuation $\mu$ on $\mathrm{Polycon}(n)$ is homogeneous of degree $k$ if for all $\alpha \ge 0$ and all $K \in \mathrm{Polycon}(n)$,
\[ \mu(\alpha K) = \alpha^k \mu(K). \]

Corollary 4.9. Let $\mu \in \mathrm{Val}(n)$ be homogeneous of degree $k$. Then there exists $c \in \mathbb{R}$ such that $\mu(K) = c\,\widehat\mu_k(K)$ for all $K \in \mathrm{Polycon}(n)$.

Proof. By Theorem 4.8, we know that there exist $c_0, \cdots, c_n \in \mathbb{R}$ such that
\[ \mu = \sum_{i=0}^{n} c_i\,\widehat\mu_i. \]
Let $P = [0,1]^n$, the unit cube in $\mathbb{R}^n$, and fix $\alpha \ge 0$. Then
\[ \mu(\alpha P) = \sum_{i=0}^{n} c_i\,\widehat\mu_i(\alpha P) = \sum_{i=0}^{n} \alpha^i c_i\,\widehat\mu_i(P) = \sum_{i=0}^{n} c_i \binom{n}{i} \alpha^i, \]
since $\widehat\mu_i(P) = \mu_i(P) = \binom{n}{i}$, because for $P \in \mathrm{Par}(n)$, $\mu_i$ is the $i$-th elementary symmetric function of the edge lengths. At the same time,
\[ \mu(\alpha P) = \alpha^k \mu(P) = \sum_{i=0}^{n} c_i\,\widehat\mu_i(P)\,\alpha^k = \sum_{i=0}^{n} c_i \binom{n}{i} \alpha^k. \]
We compare the coefficients $c_i$ in the two sums and conclude that $c_i = 0$ for $i \ne k$. Thus $\mu = c_k\,\widehat\mu_k$. ⊓⊔

5. Applications

It is payoff time! Having thus proven Hadwiger's Characterization Theorem, we now seek to use it in extracting as much interesting geometric information as possible. Let us recall that
\[ \mu^n_k = R_{n,n-k}\,\mu^n_0 \quad \text{and} \quad \widehat\mu{}^n_k = C^n_k\,\mu^n_k, \]
where the normalization constants $C^n_k$ are chosen so that
\[ \widehat\mu{}^n_k([0,1]^n) = \mu_k([0,1]^n) = \binom{n}{k}. \]
Above, $\mu_k$ is the intrinsic valuation on the lattice of pixelations defined in Section 2. We have seen in the last section that the sequence $\{\widehat\mu{}^n_k\}_{n \ge k}$ defines an intrinsic valuation, and thus we will write $\widehat\mu_k$ instead of $\widehat\mu{}^n_k$.

§5.1. The Tube Formula. Let $K, L \in \mathcal{K}^n$, and let $\alpha \ge 0$. We have the Minkowski sum
\[ K + \alpha L = \{x + \alpha y \mid x \in K \text{ and } y \in L\}. \]

Proposition 5.1 (Smearing Formula). Let $\bar u$ denote the straight line segment connecting $u$ and the origin $o$. For $K \in \mathcal{K}^n$ and any unit vector $u \in \mathbb{R}^n$,
\[ \mu_n(K + \epsilon\bar u) = \mu_n(K) + \epsilon\,\mu_{n-1}(K|u^\perp). \]

Proof. Let $L = K + \epsilon\bar u$.
We will compute the volume of $L$ by integrating the lengths of one-dimensional slices of $L$ with lines $\ell_x$ parallel to $u$ passing through points $x \in u^\perp$, that is,
\[ \mu_n(L) = \int_{u^\perp} \mu_1(L \cap \ell_x)\, dx. \]
Since $\mu_1(L \cap \ell_x) = \mu_1(K \cap \ell_x) + \epsilon$ for all $x \in K|u^\perp$, and zero for $x \notin K|u^\perp$, we have that
\[ \mu_n(L) = \int_{u^\perp} \mu_1(L \cap \ell_x)\, dx = \int_{K|u^\perp} \bigl(\mu_1(K \cap \ell_x) + \epsilon\bigr)\, dx = \mu_n(K) + \epsilon\,\mu_{n-1}(K|u^\perp). \]
⊓⊔

Let $C_n$ denote the $n$-dimensional unit cube. Recall that for $0 \le i \le n$,
\[ \mu_i(C_n) = \binom{n}{i}. \]

Proposition 5.2. For $\epsilon \ge 0$,
\[ \mu_n(C_n + \epsilon B_n) = \sum_{i=0}^{n} \mu_i(C_n)\,\omega_{n-i}\,\epsilon^{n-i} = \sum_{i=0}^{n} \binom{n}{i}\,\omega_{n-i}\,\epsilon^{n-i} = \sum_{i=0}^{n} \omega_{n-i}\,\widehat\mu_i(C_n)\,\epsilon^{n-i}. \]

Proof. Let $u_1, \ldots, u_n$ denote the standard orthonormal basis for $\mathbb{R}^n$, and $\bar u_i$ the line segment connecting $u_i$ to the origin. By Proposition 5.1 we have
\[ \mu_n(B_n + r\bar u_1) = \omega_n + r\,\omega_{n-1} = \sum_{i=0}^{1} \binom{1}{i}\,\omega_{n-i}\,r^i \]
for all $n \ge 1$. Having proven the base case, we proceed by induction. Suppose that
\[ \mu_n(B_n + r\bar u_1 + \cdots + r\bar u_k) = \sum_{i=0}^{k} \binom{k}{i}\,\omega_{n-i}\,r^i \]
for some $1 \le k < n$. Then
\[ \mu_n(B_n + r\bar u_1 + \cdots + r\bar u_{k+1}) = \mu_n(B_n + r\bar u_1 + \cdots + r\bar u_k) + r\,\mu_{n-1}\bigl((B_n + r\bar u_1 + \cdots + r\bar u_k)|u_{k+1}^\perp\bigr) \]
\[ = \sum_{i=0}^{k} \binom{k}{i}\,\omega_{n-i}\,r^i + r\,\mu_{n-1}(B_{n-1} + r\bar u_1 + \cdots + r\bar u_k) = \sum_{i=0}^{k} \binom{k}{i}\,\omega_{n-i}\,r^i + \sum_{i=0}^{k} \binom{k}{i}\,\omega_{n-i-1}\,r^{i+1} \]
\[ = \sum_{i=0}^{k+1} \left[ \binom{k}{i} + \binom{k}{i-1} \right] \omega_{n-i}\,r^i = \sum_{i=0}^{k+1} \binom{k+1}{i}\,\omega_{n-i}\,r^i. \]
Thus, by induction we have that
\[ \mu_n(B_n + r\bar u_1 + \cdots + r\bar u_n) = \sum_{i=0}^{n} \binom{n}{i}\,\omega_{n-i}\,r^i. \]
Noting that $\mu_n$ is homogeneous of degree $n$, we have that
\[ \mu_n(B_n + r\bar u_1 + \cdots + r\bar u_n) = \mu_n(B_n + rC_n) = \mu_n\Bigl(r\Bigl(\tfrac{1}{r}B_n + C_n\Bigr)\Bigr) = r^n\,\mu_n\Bigl(\tfrac{1}{r}B_n + C_n\Bigr). \]
Letting $\epsilon = \tfrac{1}{r}$, the previous two equalities give us
\[ \mu_n(C_n + \epsilon B_n) = \epsilon^n \sum_{i=0}^{n} \binom{n}{i}\,\omega_{n-i}\,\Bigl(\tfrac{1}{\epsilon}\Bigr)^i = \sum_{i=0}^{n} \binom{n}{i}\,\omega_{n-i}\,\epsilon^{n-i}. \]
⊓⊔

We can now prove the following remarkable result, first proved by Jakob Steiner in dimensions $\le 3$ in the 19th century.

Theorem 5.3 (Tube Formula). For $K \in \mathcal{K}^n$ and $\epsilon \ge 0$,
\[ \mu_n(K + \epsilon B_n) = \sum_{i=0}^{n} \widehat\mu_i(K)\,\omega_{n-i}\,\epsilon^{n-i}. \tag{5.1} \]

Proof. Let $\eta(K) = \mu_n(K + B_n)$, for $K \in \mathcal{K}^n$.
Because $B_n \in \mathcal{K}^n$, for $K, L \in \mathcal{K}^n$ with $K \cup L$ convex we have that
\[ (K \cup L) + B_n = (K + B_n) \cup (L + B_n), \]
and clearly
\[ (K + B_n) \cap (L + B_n) = (K \cap L) + B_n, \]
so we have
\[ \eta(K \cup L) = \mu_n\bigl((K \cup L) + B_n\bigr) = \mu_n\bigl((K + B_n) \cup (L + B_n)\bigr) \]
\[ = \mu_n(K + B_n) + \mu_n(L + B_n) - \mu_n\bigl((K + B_n) \cap (L + B_n)\bigr) = \eta(K) + \eta(L) - \eta(K \cap L). \]
Thus $\eta$ is a valuation on $\mathcal{K}^n$. The continuity and invariance of $\mu_n$, and the symmetry of $B_n$ under Euclidean transformations, show that $\eta$ is a convex-continuous invariant valuation on $\mathcal{K}^n$, which according to the Extension Theorem 3.17 extends to a valuation in $\mathrm{Val}(n)$. By Hadwiger's Characterization Theorem we have that
\[ \eta(K) = \sum_{i=0}^{n} c_i\,\widehat\mu_i(K) \]
for all $K \in \mathcal{K}^n$. Therefore, for $\epsilon > 0$ we have that
\[ \mu_n(K + \epsilon B_n) = \epsilon^n\,\mu_n\Bigl(\tfrac{1}{\epsilon}K + B_n\Bigr) = \epsilon^n \sum_{i=0}^{n} c_i\,\widehat\mu_i(K)\,\frac{1}{\epsilon^i} = \sum_{i=0}^{n} c_i\,\widehat\mu_i(K)\,\epsilon^{n-i}. \]
In particular, if we let $K = C_n$ in the previous equation and compare with the results of the previous Proposition, we find that
\[ \sum_{i=0}^{n} c_i\,\widehat\mu_i(C_n)\,\epsilon^{n-i} = \sum_{i=0}^{n} \widehat\mu_i(C_n)\,\omega_{n-i}\,\epsilon^{n-i}, \]
i.e., $c_i = \omega_{n-i}$. ⊓⊔

Theorem 5.4 (The intrinsic volumes of the unit ball). For $0 \le i \le n$,
\[ \widehat\mu_i(B_n) = \binom{n}{i}\frac{\omega_n}{\omega_{n-i}}. \]

Proof. Applying the tube formula to $K = B_n$ we obtain
\[ \sum_{i=0}^{n} \widehat\mu_i(B_n)\,\omega_{n-i}\,\epsilon^{n-i} = \mu_n(B_n + \epsilon B_n) = \mu_n\bigl((1+\epsilon)B_n\bigr) = (1+\epsilon)^n\,\mu_n(B_n) = \sum_{i=0}^{n} \binom{n}{i}\,\omega_n\,\epsilon^{n-i} \]
for all $\epsilon > 0$. Comparing coefficients of powers of $\epsilon$, we uncover the desired result. ⊓⊔

§5.2. Universal Normalization and Crofton's Formulæ. It turns out that the intrinsic volumes of the unit ball are the final piece of the puzzle in understanding the constants $C^n_k$ in the formula $\widehat\mu{}^n_k = C^n_k\,\mu^n_k$, where
\[ \mu^n_k = R_{n,n-k}\,\mu^n_0 \quad \text{and} \quad \widehat\mu{}^n_k = \mu_k \text{ on } \mathrm{Pix}(n). \]
We have the following very pleasant surprise.

Theorem 5.5. For $0 \le k \le n$ and $K \in \mathcal{K}^n$,
\[ \widehat\mu{}^n_k(K) = \mu^n_k(K) = \mu_k(K). \]
In other words, no more hats!

Proof. From Theorem 5.4 we deduce
\[ C^n_k\,\mu^n_k(B_n) = \widehat\mu{}^n_k(B_n) = \binom{n}{k}\frac{\omega_n}{\omega_{n-k}}. \]
On the other hand, (3.8) shows that
\[ \mu^n_k(B_n) = \binom{n}{k}\frac{\omega_n}{\omega_{n-k}}. \]
Equating the two, we readily see that $C^n_k = 1$ for all $n, k \ge 0$.
In fact, since both $\widehat\mu_k$ and $\mu_k$ are intrinsic, we have shown that $\widehat\mu_k = \mu_k$. ⊓⊔

We have thus proved that the intrinsic volumes $\mu_k$ on $\mathrm{Polycon}(n)$ are exactly the Radon transforms $R_{n,n-k}\,\mu_0$, a result to which we feel obliged to "tip our hats". The following theorem is a generalization of the previous one.

Theorem 5.6 (Crofton's formula). For $0 \le i, j \le n$ and $K \in \mathrm{Polycon}(n)$,
\[ (R_{n,n-i}\,\mu_j)(K) = \binom{i+j}{j}\,\mu_{i+j}(K). \]

Proof. From the earlier section regarding Radon transforms, we already know that $R_{n,n-i}\,\mu_j \in \mathrm{Val}_{i+j}(n)$, the subspace of valuations homogeneous of degree $i+j$. By Corollary 4.9 there exists a $c \in \mathbb{R}$ such that $R_{n,n-i}\,\mu_j = c\,\mu_{i+j}$. To obtain this $c$ we simply evaluate $(R_{n,n-i}\,\mu_j)(B_n)$:
\[ (R_{n,n-i}\,\mu_j)(B_n) = \bigl(R_{n,n-i}(R_{n,n-j}\,\mu_0)\bigr)(B_n) \]
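The tube formula and the intrinsic volumes of the unit ball can be checked numerically. The following is our own sketch (not part of the text), using $\omega_n = \pi^{n/2}/\Gamma(\frac{n}{2}+1)$ and the formula of Theorem 5.4, and verifying the tube formula identity $(1+\epsilon)^n \omega_n = \sum_i \mu_i(B_n)\,\omega_{n-i}\,\epsilon^{n-i}$ for $K = B_n$.

```python
import math

def omega(n):
    # volume of the n-dimensional unit ball: pi^(n/2) / Gamma(n/2 + 1)
    return math.pi**(n / 2) / math.gamma(n / 2 + 1)

def intrinsic_volume_ball(i, n):
    # Theorem 5.4: mu_i(B_n) = binom(n, i) * omega_n / omega_{n-i}
    return math.comb(n, i) * omega(n) / omega(n - i)

# Tube formula for K = B_n:
#   vol(B_n + eps*B_n) = (1+eps)^n * omega_n
#                      = sum_i mu_i(B_n) * omega_{n-i} * eps^(n-i)
n, eps = 3, 0.7
lhs = (1 + eps)**n * omega(n)
rhs = sum(intrinsic_volume_ball(i, n) * omega(n - i) * eps**(n - i)
          for i in range(n + 1))
print(lhs, rhs)               # both ≈ 20.58
print(intrinsic_volume_ball(1, 3))   # 3 * omega_3 / omega_2 = 4.0
```

The second printed value, $\mu_1(B_3) = 3\,\omega_3/\omega_2 = 4$, is the familiar fact that the first intrinsic volume of the unit ball in $\mathbb{R}^3$ (proportional to its mean width) equals $4$.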