GEOMETRIC VALUATIONS

CORDELIA E. CSAR, RYAN K. JOHNSON, AND RONALD Z. LAMBERTY

Work begun on June 12, 2006. Ended on July 28, 2006.

Introduction

This paper is the result of a National Science Foundation-funded Research Experience for Undergraduates (REU) at the University of Notre Dame during the summer of 2006. The REU was directed by Professor Frank Connolly, and the research project was supervised by Professor Liviu Nicolaescu. The topic of our independent research project for this REU was geometric probability. Consequently, the first half of this paper is a study of the book Introduction to Geometric Probability by Daniel Klain and Gian-Carlo Rota. While closely following the text, we have attempted to clarify and streamline the presentation and ideas contained therein. In particular, we highlight and emphasize the key role played by the Radon transform in the classification of valuations. In the second part we take a closer look at the special case of valuations on polyhedra.

Our primary focus in this project is a type of function called a "valuation". A valuation associates a number to each "reasonable" subset of Rn so that the inclusion-exclusion principle is satisfied. Examples of such functions include the Euclidean volume and the Euler characteristic. Since the objects we are most interested in lie in a Euclidean space, and moreover we are interested in properties which are independent of the location of the objects in space, we are motivated to study "invariant" valuations on certain subsets of Rn. The goal of the first half of this paper, therefore, is to characterize all such valuations for certain natural collections of subsets of Rn.

This undertaking turned out to be quite complex. We must first spend time introducing the abstract machinery of valuations (Chapter 1), and then apply this machinery to the simpler case of pixelations (Chapter 2). Chapter 3 then sets up the language of polyconvex sets, and explains how to use the Radon transform to generate many new examples of valuations. These new valuations have probabilistic interpretations.
In Chapter 4 we finally complete the characterization of invariant valuations on polyconvex sets. Namely, we show that all such valuations are obtainable by the Radon transform technique from a single valuation, the Euler characteristic. We celebrate this achievement in Chapter 5 by exploring applications of the theory.

With the valuations completely characterized, we turn our attention toward special polyconvex sets: polyhedra, that is, finite unions of convex polytopes. These polyhedra can be triangulated, and in Chapter 6 we investigate the combinatorial features of a triangulation, or simplicial complex. In Chapter 7 we prove a global version of the inclusion-exclusion principle for simplicial complexes known as the Möbius inversion formula. Armed with this result, we then explain how to compute the valuations of a polyhedron using data coming from a triangulation. In Chapter 8 we use the technique of integration with respect to the Euler characteristic to produce combinatorial versions of Morse theory and the Gauss-Bonnet formula. In the end, we arrive at an explicit formula expressing the Euler characteristic of a polyhedron in terms of measurements taken at vertices. These measurements can be interpreted either as curvatures, or as certain averages of Morse indices.

The preceding list says little about why one would be interested in these topics in the first place. Consider a coffee cup and a donut. Now, a topologist would tell you that these two items are "more or less the same." But that's ridiculous! How many times have you eaten a coffee cup? Seriously, even aside from the fact that they are made of different materials, you have to admit that there is a geometric difference between a coffee cup and a donut. But what? More generally, consider the shapes around you. What is it that distinguishes them from each other, geometrically?

We know certain functions, such as the Euler characteristic and the volume, tell part of the story. These functions share a number of extremely useful properties such as the inclusion-exclusion principle, invariance, and "continuity". This motivates us to consider all such functions. But what are all such functions? In order to apply these tools to study polyconvex sets, we must first understand the tools at our disposal. Our attempts described in the previous paragraphs result in a full characterization of valuations on polyconvex sets, and even lead us to a number of useful and interesting formulae for computing these numbers.

We hope that our efforts to these ends adequately communicate this subject's richness, which has been revealed to us by our research advisor Liviu Nicolaescu. We would like to thank him for his enthusiasm in working with us. We would also like to thank Professor Connolly for his dedication in directing the Notre Dame REU and the National Science Foundation for supporting undergraduate research.

CONTENTS

Introduction
1. Valuations on lattices of sets
   §1.1. Valuations
   §1.2. Extending Valuations
2. Valuations on pixelations
   §2.1. Pixelations
   §2.2. Extending Valuations from Par to Pix
   §2.3. Continuous Invariant Valuations on Pix
   §2.4. Classifying the Continuous Invariant Valuations on Pix
3. Valuations on polyconvex sets
   §3.1. Convex and Polyconvex Sets
   §3.2. Groemer's Extension Theorem
   §3.3. The Euler Characteristic
   §3.4. Linear Grassmannians
   §3.5. Affine Grassmannians
   §3.6. The Radon Transform
   §3.7. The Cauchy Projection Formula
4. The Characterization Theorem
   §4.1. The Characterization of Simple Valuations
   §4.2. The Volume Theorem
   §4.3. Intrinsic Valuations
5. Applications
   §5.1. The Tube Formula
   §5.2. Universal Normalization and Crofton's Formulæ
   §5.3. The Mean Projection Formula Revisited
   §5.4. Product Formulæ
   §5.5. Computing Intrinsic Volumes
6. Simplicial complexes and polytopes
   §6.1. Combinatorial Simplicial Complexes
   §6.2. The Nerve of a Family of Sets
   §6.3. Geometric Realizations of Combinatorial Simplicial Complexes
7. The Möbius inversion formula
   §7.1. A Linear Algebra Interlude
   §7.2. The Möbius Function of a Simplicial Complex
   §7.3. The Local Euler Characteristic
8. Morse theory on polytopes in R3
   §8.1. Linear Morse Functions on Polytopes
   §8.2. The Morse Index
   §8.3. Combinatorial Curvature
References


1. Valuations on lattices of sets

§1.1. Valuations.

Definition 1.1. (a) For every set S we denote by P(S) the collection of subsets of S and by Map(S, Z) the set of functions f : S → Z. The indicator or characteristic function of a subset A ⊂ S is the function I_A ∈ Map(S, Z) defined by

I_A(s) = 1 if s ∈ A, and I_A(s) = 0 if s ∉ A.

(b) If S is a set, then an S-lattice (or a lattice of sets) is a collection L ⊂ P(S) such that ∅ ∈ L and A, B ∈ L ⇒ A ∩ B, A ∪ B ∈ L.
(c) If L is an S-lattice, then a subset G ⊂ L is called generating if ∅ ∈ G, A, B ∈ G ⇒ A ∩ B ∈ G, and every A ∈ L is a finite union of sets in G. ⊓⊔

Definition 1.2. Let G be an Abelian group and S a set.
(a) A G-valuation on an S-lattice L is a function µ : L → G satisfying the following conditions:
(a1) µ(∅) = 0;
(a2) µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B). (Inclusion-Exclusion)
(b) If G is a generating set of the S-lattice L, then a G-valuation on G is a function µ : G → G satisfying the following conditions:
(b1) µ(∅) = 0;
(b2) µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B), for every A, B ∈ G such that A ∪ B ∈ G. ⊓⊔
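As a quick sanity check of Definition 1.2, the cardinality map on a finite set satisfies both axioms. The following minimal Python sketch, with illustrative sets of our own choosing, verifies them directly.

```python
# A minimal sketch (sets chosen for illustration only): the cardinality map
# A -> #A on subsets of a finite set satisfies axioms (a1) and (a2) of
# Definition 1.2, i.e. it is a Z-valuation.

def mu(A):
    """Cardinality valuation."""
    return len(A)

A = {1, 2, 3, 5}
B = {2, 3, 4}

assert mu(set()) == 0                              # (a1): mu(empty set) = 0
assert mu(A | B) == mu(A) + mu(B) - mu(A & B)      # (a2): inclusion-exclusion
```

The same two checks pass for any pair of finite sets, since #(A ∪ B) = #A + #B − #(A ∩ B) is exactly inclusion-exclusion for counting.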

The inclusion-exclusion identity in Definition 1.2 implies the generalized inclusion-exclusion identity

µ(A1 ∪ A2 ∪ ⋯ ∪ An) = Σ_i µ(Ai) − Σ_{i<j} µ(Ai ∩ Aj) + Σ_{i<j<k} µ(Ai ∩ Aj ∩ Ak) − ⋯ (1.1)

Example 1.3. (a) For any set S, the map

I• : P(S) → Map(S, Z), P(S) ∋ A ↦ I_A ∈ Map(S, Z),

is a valuation. This follows from the fact that the indicator functions satisfy the following identities.

I_{A∩B} = I_A I_B, (1.2a)

I_{A∪B} = I_A + I_B − I_{A∩B} = I_A + I_B − I_A I_B = 1 − (1 − I_A)(1 − I_B). (1.2b)
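The identities (1.2a) and (1.2b) can be checked pointwise by brute force; the following short sketch does so for illustrative sets A, B of our own choosing.

```python
# Pointwise verification of the indicator identities (1.2a) and (1.2b)
# on a small ambient set S; the sets A and B are illustrative.

S = range(10)
A = {1, 2, 3}
B = {3, 4, 5}

def I(X):
    """Indicator function of the set X."""
    return lambda s: 1 if s in X else 0

for s in S:
    # (1.2a): I_{A∩B} = I_A · I_B
    assert I(A & B)(s) == I(A)(s) * I(B)(s)
    # (1.2b): I_{A∪B} = I_A + I_B − I_A·I_B = 1 − (1 − I_A)(1 − I_B)
    assert I(A | B)(s) == I(A)(s) + I(B)(s) - I(A)(s) * I(B)(s)
    assert I(A | B)(s) == 1 - (1 - I(A)(s)) * (1 - I(B)(s))
```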

(b) Suppose S is a finite set. Then the cardinality map

# : P(S) → Z, A ↦ #A := the cardinality of A,

is a valuation.
(c) Suppose S = R² and L consists of the measurable bounded subsets of the Euclidean plane. The map which associates to each A ∈ L its Euclidean area, area(A), is a real-valued valuation. Note that the lattice point count map

λ : L → Z, A ↦ #(A ∩ Z²)

is a Z-valuation.
(d) Let S and L be as above. Then the Euler characteristic defines a valuation

χ : L → Z, A ↦ χ(A). ⊓⊔

If (G, +) is an Abelian group and R is a commutative ring with 1 then we denote by HomZ(G, R) the set of group morphisms G → R, i.e. the set of maps ϕ : G → R such that

ϕ(g1 + g2)= ϕ(g1)+ ϕ(g2), ∀g1,g2 ∈ G.

We will refer to the maps in HomZ(G, R) as Z-linear maps from G to R. Suppose L is an S-lattice. We denote by S(L) the (additive) subgroup of Map(S, Z) generated by the functions IA, A ∈ L. We will refer to the functions in S(L) as L-simple functions, or simple functions if the lattice L is clear from the context.

Definition 1.4. Suppose L is an S-lattice, and G is an Abelian group. A G-valued integral on L is a Z-linear map

∫ : S(L) → G, S(L) ∋ f ↦ ∫ f ∈ G. ⊓⊔

Observe that any G-valued integral on an S-lattice L defines a valuation µ : L → G by setting

µ(A) := ∫ I_A.

The inclusion-exclusion formula for µ follows from (1.2a) and (1.2b). We say that µ is the valuation induced by the integral; conversely, when a valuation arises in this way we say that it extends to an integral. In general, a generating set of a lattice has a much simpler structure, and it is possible to construct many valuations on it. A natural question arises: is it possible to extend to the entire lattice a valuation defined on a generating set? The next result describes necessary and sufficient conditions under which this happens.
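The passage between valuations and integrals can be sketched concretely. Below, a simple function is represented as a list of (coefficient, set) pairs — a representation of our own devising — and the integral of Σ αᵢ I_{Kᵢ} is defined as Σ αᵢ µ(Kᵢ), using the cardinality valuation of Example 1.3(b).

```python
# Sketch of the integral induced by a valuation (Definition 1.4), using the
# cardinality valuation on finite sets. A simple function f = Σ αᵢ I_{Kᵢ} is
# stored as a list of (coefficient, set) pairs; the names are our own.

def integral(simple, mu):
    """∫ Σ αᵢ I_{Kᵢ} dµ := Σ αᵢ µ(Kᵢ)."""
    return sum(alpha * mu(K) for alpha, K in simple)

mu = len  # cardinality valuation
A, B = frozenset({1, 2}), frozenset({2, 3})

# I_{A∪B} = I_A + I_B − I_{A∩B}: two representations of the same function...
f1 = [(1, A | B)]
f2 = [(1, A), (1, B), (-1, A & B)]
# ...with the same integral, illustrating the well-definedness that
# Theorem 1.5 below establishes in general.
assert integral(f1, mu) == integral(f2, mu) == 3
```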

§1.2. Extending Valuations.

Theorem 1.5 (Groemer's Integral Theorem). Let G be a generating set for a lattice L and let µ : G → H be a valuation on G, where H is an Abelian group. The following statements are equivalent.
(1) µ extends uniquely to a valuation on L.
(2) µ satisfies the inclusion-exclusion identities

µ(B1 ∪ B2 ∪ ⋯ ∪ Bn) = Σ_i µ(Bi) − Σ_{i<j} µ(Bi ∩ Bj) + ⋯

whenever B1, ..., Bn ∈ G and B1 ∪ ⋯ ∪ Bn ∈ G.
(3) µ induces an H-valued integral on the group S(L) of L-simple functions.

Proof.

• (1) ⇒ (2). Note that the second statement is not trivial because B1 ∪ ⋯ ∪ Bn−1 is not necessarily in G. Suppose the valuation µ extends uniquely to a valuation on L. Then µ satisfies the inclusion-exclusion identity µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B) for all A, B ∈ L. Since B1 ∪ ⋯ ∪ Bn−1 ∈ L even if it is not in G, we can apply the inclusion-exclusion identity repeatedly to obtain the result.

• (2) ⇒ (3). We wish to construct a linear map ∫ : S(L) → H. To do this, we first note that by (1.2b) any function f in S(L) can be written as

f = Σ_{i=1}^m αi I_{Ki},

where Ki ∈ G and αi ∈ Z. We thus define an integral as follows:

∫ Σ_{i=1}^m αi I_{Ki} dµ := Σ_{i=1}^m αi µ(Ki).

This map might not be well defined, since f could be represented in different ways as a linear combination of indicator functions of generating sets. We thus need to show that the above map is independent of such a representation. We argue by contradiction and assume that f has two distinct representations

f = Σ_i γi I_{Ai} = Σ_i βi I_{Bi}, yet

Σ_i γi µ(Ai) ≠ Σ_i βi µ(Bi).

Thus, subtracting these equations and renaming the terms appropriately, we are left with the situation

Σ_{i=1}^m αi I_{Ki} = 0 and Σ_{i=1}^m αi µ(Ki) ≠ 0. (1.3)

Now we label the intersections

L1 = K1, ..., Lm = Km, Lm+1 = K1 ∩ K2, Lm+2 = K1 ∩ K3, ...,

ordered such that Li ⊂ Lj ⇒ j < i. This can be done because we have a finite number of sets. We note that all the Li's are in G, since the Ki's and their intersections are in G. We then rewrite (1.3) in terms of the Li's as

Σ_{i=1}^p ai I_{Li} = 0 and Σ_{i=1}^p ai µ(Li) ≠ 0. (1.4)

Now take q maximal such that

Σ_{i=q}^p ai I_{Li} = 0 and Σ_{i=q}^p ai µ(Li) ≠ 0. (1.5)

Note that 1 ≤ q ≤ p and that

Lq ⊂ ∪_{i=q+1}^p Li.

Indeed, if x ∈ Lq ∖ ∪_{j=q+1}^p Lj, then

I_{Li}(x) = 0 for all i ≠ q, and aq = Σ_{i=q}^p ai I_{Li}(x) = 0.

This is impossible since aq ≠ 0. Hence

Lq = Lq ∩ (∪_{i=q+1}^p Li) = ∪_{i=q+1}^p (Lq ∩ Li).

Let us write Lq ∩ Li = L_{ji}. Then, since i > q and by construction Li ⊂ Lj ⇒ j < i, we have that ji > q. Thus:

Lq = ∪_{i=q+1}^p (Lq ∩ Li) = ∪_{i=q+1}^p L_{ji}.

Then we have

0 ≠ Σ_{i=q}^p ai µ(Li) = aq µ(Lq) + Σ_{i=q+1}^p ai µ(Li)

= aq µ(∪_{i=q+1}^p L_{ji}) + Σ_{i=q+1}^p ai µ(Li) = Σ_{i=q+1}^p bi µ(Li),

where the last equality is obtained by applying the assumed inclusion-exclusion principle to the union and regrouping the terms. We now repeat exactly the same process with the expression involving the indicator functions. Then,

Σ_{i=q}^p ai I_{Li} = aq I_{∪_{i=q+1}^p (Lq ∩ Li)} + Σ_{i=q+1}^p ai I_{Li} = Σ_{i=q+1}^p bi I_{Li}.

Thus,

Σ_{i=q+1}^p bi I_{Li} = 0,

while Σ_{i=q+1}^p bi µ(Li) ≠ 0. However, this contradicts the maximality of q. Hence the integral map is well defined, and we are done.

• (3) ⇒ (1). Suppose µ defines an integral on the space of L-simple functions. Then for A ∈ G, ∫ I_A dµ = µ(A). This motivates us to define an extension µ̃ of µ to L by

µ̃(A) := ∫ I_A dµ, A ∈ L.

This definition is certainly unambiguous and its restriction to G is just µ, so we need only check that it is a valuation. Let A, B ∈ L. Then,

µ̃(A ∪ B) = ∫ I_{A∪B} dµ = ∫ (I_A + I_B − I_{A∩B}) dµ

= ∫ I_A dµ + ∫ I_B dµ − ∫ I_{A∩B} dµ = µ̃(A) + µ̃(B) − µ̃(A ∩ B).

Thus, µ̃ is an extension of µ to L. Moreover, it is unique. Suppose ν is another extension of µ. Then, given any A ∈ L we can write A = K1 ∪ ⋯ ∪ Kr for Ki ∈ G. Since µ̃ and ν are both valuations, both satisfy the generalized inclusion-exclusion principle. Furthermore, since both are extensions of µ, both agree on G:

µ̃(A) = µ̃(K1 ∪ ⋯ ∪ Kr)

= Σ_{i=1}^r µ(Ki) − Σ_{i<j} µ(Ki ∩ Kj) + Σ_{i<j<k} µ(Ki ∩ Kj ∩ Kk) − ⋯

= ν(K1 ∪ ⋯ ∪ Kr) = ν(A).

Hence µ̃ = ν, and the extension is unique. ⊓⊔

2. Valuations on pixelations

§2.1. Pixelations. We now study valuations on a small class of easy-to-understand subsets of Rn. We will then use this knowledge to study valuations on much more general subsets of Rn. One of our main aims is to classify all the "nice" valuations on such subsets. It will turn out that the ideas and results of this section are analogous to results discussed later.

Definition 2.1. (a) An affine k-plane in Rn is a translate of a k-dimensional vector subspace of Rn. A hyperplane in Rn is an affine (n − 1)-plane.
(b) An affine transformation of Rn is a bijection T : Rn → Rn such that

T(λu + (1 − λ)v) = λTu + (1 − λ)Tv, ∀λ ∈ R, u, v ∈ Rn.

The set Aff(Rn) of affine transformations of Rn is a group with respect to the composition of maps. ⊓⊔

Definition 2.2. An (orthogonal) parallelotope in Rn is a compact set P of the form

P = [a1, b1] × ⋯ × [an, bn], ai ≤ bi, i = 1, ..., n. ⊓⊔

We will often refer to an orthogonal parallelotope as a parallelotope or even just a box.

Remark 2.3. It is entirely possible for any number of the intervals defining an orthogonal parallelotope to have length zero. Finally, note that the intersection of two parallelotopes is again a parallelotope. ⊓⊔

We denote by Par(n) the collection of parallelotopes in Rn and we define a pixelation to be a finite union of parallelotopes. Observe that the collection Pix(n) of pixelations in Rn is a lattice, i.e. it is stable under finite intersections and unions. Definition 2.4. (a) A pixelation P ∈ Pix(n) is said to have dimension n (or full dimension) if P is not contained in a finite union of hyperplanes. (b) A pixelation P ∈ Pix(n) is said to have dimension k (k ≤ n) if P is contained in a finite union of k-planes but not in a finite union of (k − 1)-planes. ⊓⊔

The top part of Figure 1 depicts possible boxes in R2, while the bottom part depicts a possible pixelation in R2.

§2.2. Extending Valuations from Par to Pix. By definition, Par(n) is a generating set of Pix(n). Consequently, we would like to know if we can extend valuations from Par(n) to Pix(n). The following theorem shows that we can do so whenever the valuations map into a commutative ring with 1.

Theorem 2.5. Let R be a commutative ring with 1. Then any valuation µ : Par(n) → R extends uniquely to a valuation on Pix(n).

Proof. Due to Groemer's Integral Theorem, all we need to show is that µ gives rise to an integral on the space of functions generated by the indicator functions of boxes. Thus

FIGURE 1. Planar pixelations.

it suffices to show that

Σ_{i=1}^m αi I_{Pi} = 0 ⇒ Σ_{i=1}^m αi µ(Pi) = 0,

where the Pi's are boxes. We proceed by induction on the dimension n. If the dimension is zero, the space has only one point, and the above claim is true. Now suppose that the theorem holds in dimension n − 1. For the sake of contradiction, suppose the theorem does not hold in dimension n. That is, suppose there exist distinct boxes P1, ..., Pm such that

Σ_{i=1}^m αi I_{Pi} = 0 and Σ_{i=1}^m αi µ(Pi) = r ≠ 0. (2.1)

Let k be the number of the boxes Pi of full dimension. Take k to be minimal over all such counterexamples. We distinguish three cases.

Case 1. k = 0. Since none of the boxes is of full dimension, each is contained in a hyperplane. Of all the relations of type (2.1) we choose one for which the boxes involved are contained in the smallest possible number ℓ of hyperplanes. Assume first that ℓ = 1. Then all the Pi are contained in a single hyperplane. By the induction hypothesis the integral is well defined, so we have a contradiction. Thus, we can assume that ℓ > 1. So, there exist ℓ hyperplanes orthogonal to the coordinate axes, H1, ..., Hℓ, such that each Pi is contained in one of them. Without loss of generality, we may renumber the indices so that P1 ⊂ H1. The restriction to H1 of the first sum in (2.1) is zero, so that

Σ_{i=1}^m αi I_{Pi∩H1} = 0.

But Pi ∩ H1 is a subset of the hyperplane H1, and we can apply the induction hypothesis to conclude that

Σ_{i=1}^m αi µ(Pi ∩ H1) = 0.

Subtracting the above two equations from (2.1) we see that

Σ_{i=1}^m αi (I_{Pi} − I_{Pi∩H1}) = 0 and Σ_{i=1}^m αi (µ(Pi) − µ(Pi ∩ H1)) = r ≠ 0.

The above sums take the same form (2.1), but the boxes Pj ⊂ H1 disappear, since Pj = Pj ∩ H1. Thus we obtain new equalities of the type (2.1), but the boxes involved are contained in the fewer hyperplanes H2, ..., Hℓ, contradicting the minimality of ℓ.

Case 2. k = 1. We may assume the top dimensional box is P1. Then P2 ∪ ⋯ ∪ Pm is contained in a finite union of hyperplanes H1, ..., Hν perpendicular to the coordinate axes. Observe that

(H1 ∪ ⋯ ∪ Hν) ∩ P1 ⊊ P1.

Indeed,

vol((H1 ∪ ⋯ ∪ Hν) ∩ P1) ≤ Σ_{j=1}^ν vol(Hj ∩ P1) = 0 < vol(P1),

so that

∃ x0 ∈ P1 ∖ (H1 ∪ ⋯ ∪ Hν).

Using the identity Σ_i αi I_{Pi} = 0 at the point x0 found above, we deduce α1 = 0, which contradicts the minimality of k.

Case 3. k > 1. We can assume that the top dimensional boxes are P1, ..., Pk. Choose a hyperplane H such that P1 ∩ H is a facet of P1, i.e., a face of P1 of highest dimension that is not all of P1. H has two associated closed half-spaces H⁺ and H⁻, where H⁺ is singled out by the requirement P1 ⊂ H⁺. Recall that

Σ_{i=1}^m αi I_{Pi} = 0.

Restricting to H⁺ we deduce

Σ_{i=1}^m αi I_{Pi∩H⁺} = 0.

Likewise,

Σ_{i=1}^m αi I_{Pi∩H⁻} = 0 and Σ_{i=1}^m αi I_{Pi∩H} = 0. (2.2)

Note that Pi = (Pi ∩ H⁺) ∪ (Pi ∩ H⁻) and (Pi ∩ H⁺) ∩ (Pi ∩ H⁻) = Pi ∩ H. Then, since µ is a valuation, it obeys the inclusion-exclusion rule, so

Σ_{i=1}^m αi µ(Pi) = Σ_{i=1}^m αi µ(Pi ∩ H⁺) + Σ_{i=1}^m αi µ(Pi ∩ H⁻) − Σ_{i=1}^m αi µ(Pi ∩ H). (2.3)

Since the sets Pi ∩ H are in a space of dimension n − 1, and

Σ_{i=1}^m αi I_{Pi∩H} = 0,

we deduce from the induction assumption that

Σ_{i=1}^m αi µ(Pi ∩ H) = 0.

On the other hand, P1 ∩ H⁻ = P1 ∩ H, so P1 ∩ H⁻ has dimension n − 1. We now have a new collection of boxes Pi ∩ H⁻, i = 1, ..., m, of which at most k − 1 are top dimensional and which satisfy

Σ_i αi I_{Pi∩H⁻} = 0.

The minimality of k now implies

Σ_{i=1}^m αi µ(Pi ∩ H⁻) = 0.

Therefore, the equality (2.3) implies

Σ_{i=1}^m αi µ(Pi) = Σ_{i=1}^m αi µ(Pi ∩ H⁺) = r. (2.4)

P1 has dimension n. Then there exist 2n hyperplanes H1, ..., H2n such that

P1 = ∩_{i=1}^{2n} Hi⁺.

Replacing Pi with Pi ∩ H⁺ and iterating the above argument we get

Σ_{i=1}^m αi µ(Pi ∩ H1⁺ ∩ H2⁺ ∩ ⋯ ∩ H2n⁺) = Σ_{i=1}^m αi µ(Pi ∩ P1) = r (2.5)

and

Σ_{i=1}^m αi I_{Pi∩P1} = 0. (2.6)

We repeat this argument with the remaining top dimensional boxes P2, ..., Pk, and if we set

P0 := P1 ∩ ⋯ ∩ Pk, A := α1 + ⋯ + αk,

we conclude

Σ_{i=1}^m αi I_{Pi∩P0} = A I_{P0} + Σ_{i>k} αi I_{Pi∩P0} = 0, (2.7a)

Σ_{i=1}^m αi µ(Pi ∩ P0) = A µ(P0) + Σ_{i>k} αi µ(Pi ∩ P0) = r. (2.7b)

In the above sums, at most one of the boxes is top dimensional, which contradicts the minimality of k > 1. ⊓⊔

§2.3. Continuous Invariant Valuations on Pix.

Notation 2.6. We shall denote by T̃n the subgroup of Aff(Rn) generated by the translations and the permutations of coordinates in Rn. ⊓⊔

Definition 2.7. A valuation µ on Pix(n) is called invariant if

µ(gP) = µ(P), ∀P ∈ Pix(n), g ∈ T̃n,

and is called translation invariant if the same condition holds for all translations g. ⊓⊔

We aim to find all the invariant valuations on Pix(n). To avoid unnecessary complications, we will impose a further condition on our valuations: continuity. Our valuations are functions on Pix(n), which is a collection of compact subsets of Rn. So, in order for continuity to make any sense, we would like some concept of open sets for this collection of compact sets. A good way of achieving this goal is to make it into a metric space by defining a reasonable notion of distance.

Definition 2.8. (a) Let A ⊂ Rn and x ∈ Rn. The distance from x to A is the nonnegative real number d(x, A) defined by

d(x, A) = inf_{a∈A} d(x, a),

where d(x, a) is the Euclidean distance from x to a.
(b) Let K and E be subsets of Rn. Then the Hausdorff distance δ(K, E) is defined by:

δ(K, E) = max( sup_{a∈K} d(a, E), sup_{b∈E} d(b, K) ).

(c) A sequence of compact sets Kn in Rn converges to a set K if δ(Kn, K) → 0 as n → ∞. If this is the case, then we write Kn → K. ⊓⊔
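For finite point sets (which are compact), the suprema and infima in Definition 2.8 are maxima and minima, so the Hausdorff distance can be computed directly. A minimal sketch, with helper names of our own:

```python
# Hands-on sketch of Definition 2.8 for finite point sets in R^n.
# For finite (hence compact) sets, sup/inf become max/min.

def d(x, A):
    """Distance from the point x to the finite set A (part (a))."""
    return min(sum((xi - ai) ** 2 for xi, ai in zip(x, a)) ** 0.5 for a in A)

def hausdorff(K, E):
    """δ(K, E) = max( max_{a∈K} d(a, E), max_{b∈E} d(b, K) ) (part (b))."""
    return max(max(d(a, E) for a in K), max(d(b, K) for b in E))

K = [(0.0, 0.0), (1.0, 0.0)]
E = [(0.0, 0.0), (3.0, 0.0)]
# Every point of K is within distance 1 of E, but (3, 0) ∈ E is at
# distance 2 from K, so the two one-sided distances differ.
assert hausdorff(K, E) == 2.0
```

Note the asymmetry of the two inner terms: taking only sup_{a∈K} d(a, E) would give 1.0 here, which is why both directions are needed for δ to be a metric.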

Remark 2.9. If K and E are compact, then δ(K, E)=0 if and only if K = E. That is, the Hausdorff distance is positive definite on the set of compact sets in Rn. ⊓⊔

Let Bn be the unit ball in Rn. For K ⊂ Rn and ε > 0, set

K + εBn := {x + εu | x ∈ K and u ∈ Bn}.

The following lemma (whose proof is clear) gives a hands-on understanding of how the Hausdorff distance behaves.

Lemma 2.10. Let K and E be compact subsets of Rn. Then

δ(K, E) ≤ ε ⟺ K ⊂ E + εBn and E ⊂ K + εBn. ⊓⊔

The above result implies that the Hausdorff distance is actually a metric. The next result summarizes this.

Proposition 2.11. The collection of compact subsets of Rn together with the Hausdorff distance forms a metric space. ⊓⊔

Definition 2.12. Let µ : Pix(n) → R be a valuation on Pix(n). Then µ is called (box) continuous if µ is continuous on boxes, that is, for any sequence of boxes Pi converging in the Hausdorff metric to a box P we have

µ(Pi) → µ(P). ⊓⊔

We want to classify the continuous invariant valuations on Pix(n). We start by considering the problem in R1. An element of Pix(1) is a finite union of closed intervals. For A ∈ Pix(1), set

µ_0^1(A) = the number of connected components of A,
µ_1^1(A) = the length of A.

Both are continuous invariant valuations on Pix(1). It is clear that they are invariant under T̃1, which is, in this case, the group of translations. It is clear that both are continuous.

Proposition 2.13. Every continuous invariant valuation µ : Pix(1) → R is a linear combination of µ_0^1 and µ_1^1.

Proof. Let c = µ(A), where A is a singleton set {x}, x ∈ R. Now let µ′ = µ − cµ_0^1. µ′ vanishes on points by construction. Now, define a continuous function f : [0, ∞) → R by f(x) = µ′([0, x]). µ′ is invariant because µ and µ_0^1 are invariant. Then, if A is a closed interval of length x, µ′(A) = f(x), since we can simply translate A to the origin. Now observe that

f(x + y) = µ′([0, x + y]) = µ′([0, x] ∪ [x, x + y])
= µ′([0, x]) + µ′([x, x + y]) − µ′({x})
= µ′([0, x]) + µ′([x, x + y]) = f(x) + f(y).

Since f is continuous and additive, we deduce that there exists a constant r such that f(x) = rx for all x ≥ 0. Therefore, µ′ = rµ_1^1, and our assertion follows from the equality

µ = µ′ + cµ_0^1 = rµ_1^1 + cµ_0^1. ⊓⊔

We now move on to Rn. Let µn(P) denote the volume of a pixelation P of dimension n.

Definition 2.14. The k-th elementary symmetric polynomial in the variables x1, ..., xn is the polynomial ek(x1, ..., xn) such that e0 = 1 and

ek(x1, ..., xn) = Σ_{1≤i1<⋯<ik≤n} xi1 ⋯ xik, 1 ≤ k ≤ n. ⊓⊔
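Definition 2.14 can be implemented directly as a sum over increasing index tuples. The sketch below also checks, at t = 1, the generating identity ∏(1 + t x_j) = Σ_k e_k t^k used in the proof of Theorem 2.15.

```python
# Direct implementation of the k-th elementary symmetric polynomial
# of Definition 2.14, summing over increasing index tuples.

from itertools import combinations
from math import prod

def e(k, xs):
    """k-th elementary symmetric polynomial; e_0 = 1 by convention."""
    if k == 0:
        return 1
    return sum(prod(c) for c in combinations(xs, k))

xs = (2, 3, 5)
assert e(1, xs) == 2 + 3 + 5            # 10
assert e(2, xs) == 2*3 + 2*5 + 3*5      # 31
assert e(3, xs) == 2 * 3 * 5            # 30
# Generating identity ∏(1 + t x_j) = Σ_k e_k t^k, checked at t = 1:
assert prod(1 + x for x in xs) == sum(e(k, xs) for k in range(4))
```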

Theorem 2.15. For each 0 ≤ k ≤ n there exists a continuous invariant valuation µ_k^n on Pix(n) such that

µ_k^n([0, x1] × ⋯ × [0, xn]) = ek(x1, ..., xn).

Proof. Let µ_0^1, µ_1^1 : Pix(1) → R be the valuations described above. We set µ_t^1 = µ_0^1 + tµ_1^1, where t is a variable. Then

µ_t^n = µ_t^1 × µ_t^1 × ⋯ × µ_t^1 : Par(n) → R[t]

is an invariant valuation on parallelotopes with values in the ring of polynomials with real coefficients. By Groemer's Extension Theorem (2.5), µ_t^n extends to a valuation on Pix(n), which must also be invariant. Using (2.8) we deduce

µ_t^n([0, x1] × ⋯ × [0, xn]) = ∏_{j=1}^n (1 + t xj) = Σ_{k=0}^n ek(x1, ..., xn) t^k.

For any parallelotope P we can write

µ_t^n(P) = Σ_{k=0}^n µ_k^n(P) t^k.

The coefficients µ_k^n(P) define continuous invariant valuations on Par(n) which extend to continuous invariant valuations on Pix(n) such that

µ_t^n(S) = Σ_{k=0}^n µ_k^n(S) t^k, ∀S ∈ Pix(n), and µ_k^n([0, x1] × ⋯ × [0, xn]) = ek(x1, ..., xn). ⊓⊔

Theorem 2.16. The valuations µ_i on Pix(n) are normalized independently of the dimension n, i.e., µ_i^m(P) = µ_i^n(P) for all P ∈ Pix(n) and all m ≥ n.

Proof. This follows from the preceding theorem and the definition of an elementary symmetric polynomial. If P ∈ Par(n) and we consider the same P ∈ Par(k), where k > n, P remains a cartesian product of the same intervals, except there are some additional intervals of length 0, which do not affect µ_i. ⊓⊔

Since the valuation µ_k(P) is independent of the ambient space, µ_k is called the k-th intrinsic volume. µ_0 is the Euler characteristic: µ_0(Q) = 1 for every non-empty box Q.

Theorem 2.17. Let H1 and H2 be complementary orthogonal subspaces of Rn spanned by subsets of the given coordinate system, with dimensions h and n − h, respectively. Let P_i be a parallelotope in H_i and let P = P1 × P2. Then

µ_i(P1 × P2) = Σ_{r+s=i} µ_r(P1) µ_s(P2). (2.9)

The identity remains valid when P1 and P2 are pixelations, since both sides of the above equality define valuations on Par(n) which extend uniquely to valuations on Pix(n).

Proof. Suppose P1 has sides of length x1, ..., xh and P2 has sides of length y1, ..., y_{n−h}. Then,

Σ_{r+s=i} µ_r(P1) µ_s(P2) = Σ_{r+s=i} ( Σ_{1≤j1<⋯<jr≤h} xj1 ⋯ xjr ) ( Σ_{1≤k1<⋯<ks≤n−h} yk1 ⋯ yks ).

Let j_{r+1} = k1 + h, ..., j_i = j_{r+s} = ks + h, and let x_{h+1} = y1, ..., x_n = y_{n−h}, simply relabelling. Then,

Σ_{r+s=i} µ_r(P1) µ_s(P2) = Σ_{r+s=i} Σ_{1≤j1<⋯<ji≤n} xj1 ⋯ xjr xj_{r+1} ⋯ xji = e_i(x1, ..., xn) = µ_i(P). ⊓⊔
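Since µ_k of a box equals e_k of its side lengths (Theorem 2.15), the product formula (2.9) reduces to a convolution identity for elementary symmetric polynomials, which the following sketch checks numerically for illustrative boxes of our own choosing.

```python
# Numerical check of the product formula (2.9) for boxes, using
# µ_k(box) = e_k(side lengths) from Theorem 2.15.

from itertools import combinations
from math import prod

def e(k, xs):
    """k-th elementary symmetric polynomial; e_0 = 1."""
    return 1 if k == 0 else sum(prod(c) for c in combinations(xs, k))

P1 = (2, 3)        # side lengths of a box in H1 (h = 2)
P2 = (5, 7, 11)    # side lengths of a box in H2 (n - h = 3)

for i in range(6):
    lhs = e(i, P1 + P2)                                      # µ_i(P1 × P2)
    rhs = sum(e(r, P1) * e(i - r, P2) for r in range(i + 1)) # Σ µ_r µ_s
    assert lhs == rhs
```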

§2.4. Classifying the Continuous Invariant Valuations on Pix. At this point we are very close to a full description of the continuous invariant valuations on Pix(n). Definition 2.18. A valuation µ on Pix(n) is said to be simple if µ(P )=0 for all P of dimension less than n. ⊓⊔

Theorem 2.19 (Volume Theorem for Pix(n)). Let µ be a translation invariant, simple valuation defined on Par(n), and suppose that µ is either continuous or monotone. Then there exists c ∈ R such that µ(P) = cµ_n(P) for all P ∈ Pix(n); that is, µ is equal to the volume, up to a constant factor.

Proof. Let [0, 1]^n denote the unit cube in Rn and let c = µ([0, 1]^n). Then µ([0, 1/k]^n) = c/k^n for every integer k > 0. Therefore, µ(C) = cµ_n(C) for every box C with rational side lengths and sides parallel to the coordinate axes, since C can be built from [0, 1/k]^n cubes for some k. Since µ is either continuous or monotone and Q is dense in R, µ(C) = cµ_n(C) also holds for boxes C with arbitrary real side lengths, since we can find a sequence of rational boxes C_n converging to C. Then, by inclusion-exclusion, µ(P) = cµ_n(P) for all P ∈ Pix(n) (since the identity holds for parallelotopes, it extends to a valuation on pixelations). ⊓⊔

Theorem 2.20. The valuations µ0, µ1, ..., µn form a basis for the vector space of all continuous invariant valuations defined on Pix(n).

Proof. Let µ be a continuous invariant valuation on Pix(n). Denote by x1, ..., xn the standard Euclidean coordinates on Rn and let H_j denote the hyperplane defined by the equation x_j = 0. The restriction of µ to H_j is an invariant valuation on pixelations in H_j. Proceeding by induction (taking n = 1 as the base case, which was proven in Proposition 2.13), assume

µ(A) = Σ_{i=0}^{n−1} c_i µ_i(A), ∀A ∈ Pix(n) such that A ⊆ H_j. (2.10)

The c_i are the same for all H_j since µ_0, ..., µ_{n−1} are invariant under permutation. Then µ − Σ_{i=0}^{n−1} c_i µ_i vanishes on all lower dimensional pixelations in Pix(n), since any such pixelation lies in hyperplanes parallel to the H_j's (and the µ_i's are translation invariant). Then, by Theorem 2.19 we deduce

µ − Σ_{i=0}^{n−1} c_i µ_i = c_n µ_n,

which proves our claim. ⊓⊔

If we can find a continuous invariant valuation on Pix(n), then we know that it is a linear combination of the µ_i's. However, we would like a better description, if at all possible. The following corollary yields one.

Definition 2.21. A valuation µ is said to be homogeneous of degree k ≥ 0 if µ(αP) = α^k µ(P) for all P ∈ Pix(n) and all α ≥ 0. ⊓⊔

Corollary 2.22. Let µ be a continuous invariant valuation defined on Pix(n) that is homogeneous of degree k for some 0 ≤ k ≤ n. Then there exists c ∈ R such that µ(P) = cµ_k(P) for all P ∈ Pix(n).

Proof. There exist c_0, ..., c_n ∈ R such that µ = Σ_{i=0}^n c_i µ_i. If P = [0, 1]^n, then µ_i(P) = e_i(1, ..., 1) = C(n, i), the binomial coefficient, so for all α > 0,

µ(αP) = Σ_{i=0}^n c_i µ_i(αP) = Σ_{i=0}^n c_i α^i µ_i(P) = Σ_{i=0}^n c_i C(n, i) α^i.

Meanwhile,

µ(αP) = α^k µ(P) = α^k Σ_{i=0}^n c_i µ_i(P) = α^k Σ_{i=0}^n c_i C(n, i),

so

Σ_{i=0}^n c_i C(n, i) α^k = Σ_{i=0}^n c_i C(n, i) α^i,

meaning that c_i = 0 for i ≠ k. ⊓⊔
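On boxes, the homogeneity in Definition 2.21 is visible directly: scaling all side lengths by α multiplies e_k by α^k, so the intrinsic volume µ_k is homogeneous of degree k. A small numerical check (the box and scale factor are illustrative):

```python
# Homogeneity check behind Corollary 2.22 on boxes: since
# µ_k(box) = e_k(side lengths) and e_k(α·x1, ..., α·xn) = α^k · e_k(x1, ..., xn),
# the intrinsic volume µ_k is homogeneous of degree k.

from itertools import combinations
from math import prod, isclose

def e(k, xs):
    return 1 if k == 0 else sum(prod(c) for c in combinations(xs, k))

P = (2.0, 3.0, 5.0)   # side lengths of an illustrative box
alpha = 1.5

for k in range(4):
    scaled = tuple(alpha * x for x in P)
    assert isclose(e(k, scaled), alpha ** k * e(k, P))
```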

3. Valuations on polyconvex sets

Now that we understand continuous invariant valuations on a very specific collection of subsets of Rn, we recognize the limitations of this viewpoint. Notably, we have said nothing of valuations on exciting shapes such as triangles and disks. To include these, we dramatically expand our collection of subsets and again try to classify the continuous invariant valuations. This effort will turn out to be a far greater undertaking.

§3.1. Convex and Polyconvex Sets.

Definition 3.1. (a) K ⊂ Rn is convex if, for any two points x and y in K, the line segment between x and y lies in K. We denote by Kn the set of all compact convex subsets of Rn.
(b) A polyconvex set is a finite union of compact convex sets. We denote by Polycon(n) the set of all polyconvex sets in Rn. ⊓⊔

Example 3.2. A single point is a compact convex set. If

T := {(x, y) ∈ R² | x, y ≥ 0, x + y ≤ 1},

then T is a filled-in triangle in R², so that T ∈ K². Also, the boundary ∂T is a polyconvex set, but not a convex set. ⊓⊔

One of the most important properties of convex sets is the separation property.

Proposition 3.3. Suppose C ⊂ Rn is a closed convex set and x ∈ Rn ∖ C. Then there exists a hyperplane which separates x from C, i.e., x and C lie in different half-spaces determined by the hyperplane.

Proof. We outline only the main geometric ideas of the construction of such a hyperplane. Let d := dist(x, C). Then we can find a unique point y ∈ C such that d = |y − x|. The hyperplane perpendicular to the segment [x, y] and intersecting it in the middle will do the trick. ⊓⊔

Definition 3.4. If A is a polyconvex set in Rn, then we say that A is of dimension n or has full dimension if A is not contained in a finite union of hyperplanes. Otherwise, we say that A has lower dimension. ⊓⊔

Remark 3.5. Polycon(n) is a distributive lattice under union and intersection. Furthermore, Kn is a generating set of Polycon(n). ⊓⊔

We must now explore some tools we can use to understand these compact, convex sets. The most critical of these is the support function. Let ⟨−, −⟩ denote the standard inner product on Rn and by |−| the associated norm. We set S^{n−1} := {u ∈ Rn : |u| = 1}.

Definition 3.6. Let K ∈ K^n be nonempty. Its support function h_K : S^{n−1} → R is given by
h_K(u) := max_{x∈K} ⟨u, x⟩. ⊓⊔

Example 3.7. If K = {x} is just a single point, then h_K(u) = ⟨u, x⟩ for all u ∈ S^{n−1}. ⊓⊔

Remark 3.8. (a) We can characterize support functions in terms of functions on R^n as follows. Let h̃ : R^n → R be such that h̃(tu) = t h̃(u) for all u ∈ S^{n−1} and t ≥ 0, and let h be the restriction of h̃ to S^{n−1}. Then h is the support function of a compact convex set in R^n if and only if
h̃(x + y) ≤ h̃(x) + h̃(y) for all x, y ∈ R^n.
(b) Consider the hyperplane H(K, u) = {x ∈ R^n | ⟨x, u⟩ = h_K(u)} and the closed half-space H(K, u)^− = {x ∈ R^n | ⟨x, u⟩ ≤ h_K(u)}. Then it is easy to see that H(K, u) is "tangent" to ∂K and that K lies wholly in H(K, u)^− for all u ∈ S^{n−1}. The separation property described in Proposition 3.3 implies
K = ⋂_{u ∈ S^{n−1}} H(K, u)^−.
In other words, K is uniquely determined by its support function. ⊓⊔

Definition 3.9. Let K, L be in K^n. Then we define
K + L := {x + y | x ∈ K, y ∈ L}
and call K + L the Minkowski sum of K and L. ⊓⊔

Remark 3.10. We want to point out that for every K, L ∈ K^n we have
K ⊂ L ⟺ h_K(u) ≤ h_L(u), ∀u ∈ S^{n−1},
and
h_{K+L}(u) = max_{x∈K, y∈L} ⟨x + y, u⟩ = max_{x∈K, y∈L} (⟨x, u⟩ + ⟨y, u⟩) = max_{x∈K} ⟨x, u⟩ + max_{y∈L} ⟨y, u⟩ = h_K(u) + h_L(u). ⊓⊔

Remark 3.11. Recall that for compact sets K and L in R^n, the Hausdorff metric satisfies δ(K, L) ≤ ε if and only if K ⊂ L + εB and L ⊂ K + εB. In light of this fact and the preceding comments, one can show that
δ(K, L) = sup_{u ∈ S^{n−1}} |h_K(u) − h_L(u)|.
That is, the Hausdorff metric on compact convex subsets of R^n is given by the uniform metric on the set of support functions of compact convex sets. ⊓⊔
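For polytopes given by finite vertex sets, the support function is a finite maximum, which makes Remarks 3.10 and 3.11 easy to check numerically. A small sketch (the sets and helper names are ours): Minkowski additivity of support functions, and the Hausdorff distance of a square from its translate recovered as a sup of support-function differences over sampled directions.

```python
# Support functions of polytopes given as finite point sets (the hull is
# implicit): h_K(u) = max over vertices of <u, x>. We check Remark 3.10
# (Minkowski additivity) and approximate the sup in Remark 3.11 by sampling
# directions on S^1.
import math

def support(K, u):
    return max(u[0] * x + u[1] * y for (x, y) in K)

def minkowski_sum(K, L):
    return [(x1 + x2, y1 + y2) for (x1, y1) in K for (x2, y2) in L]

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
segment = [(0, 0), (2, 0)]
dirs = [(math.cos(2 * math.pi * k / 360), math.sin(2 * math.pi * k / 360))
        for k in range(360)]

# h_{K+L} = h_K + h_L in every direction:
S = minkowski_sum(square, segment)
assert all(abs(support(S, u) - support(square, u) - support(segment, u)) < 1e-12
           for u in dirs)

# delta(K, K shifted by (3, 0)) should be 3; the sampled sup recovers it,
# attained at the direction u = (1, 0):
shifted = [(x + 3, y) for (x, y) in square]
delta = max(abs(support(square, u) - support(shifted, u)) for u in dirs)
assert abs(delta - 3.0) < 1e-9
```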

Now that we have some tools to understand these compact convex sets, we will soon wish to consider valuations on them. But we don't want just any valuations. We would like our valuations to be tied to the shape of the convex set alone, rather than to where the convex set sits in space. So, we would like some sort of invariance under certain types of transformations. Furthermore, to make things nicer we would like to restrict our attention to those valuations which are somehow continuous. We formalize these notions.

It is easiest to begin with continuity. Recall that the Hausdorff distance turns the set of compact subsets of R^n into a metric space. Since K^n is a subset of these, continuity is a well-defined concept for elements of K^n. (We restrict our attention to these elements and not all of Polycon(n) since dimensionality problems arise when considering limits of polyconvex sets.)

Definition 3.12. A valuation µ : Polycon(n) → R is said to be convex continuous (or simply continuous where no confusion is possible) if
µ(A_ν) → µ(A)
whenever A_ν, A are compact convex sets and A_ν → A. ⊓⊔

Notation 3.13. Let E_n be the Euclidean group of R^n, which is the subgroup of affine transformations of R^n generated by translations and rotations. For any g ∈ E_n there exist T ∈ SO(n) and v ∈ R^n such that
g(x) = T(x) + v, ∀x ∈ R^n.

The elements of E_n are also known as rigid motions.

Definition 3.14. Let µ : Polycon(n) → R be a valuation. Then µ is said to be rigid motion invariant (or invariant when no confusion is possible) if µ(A) = µ(gA) for all A ∈ Polycon(n) and g ∈ E_n. If the same holds only for translations g, then µ is said to be translation invariant. ⊓⊔

Our aim is to understand the set of convex-continuous invariant valuations on Polycon(n), which we will denote by Val(n). Note that for every m ≤ n we have an inclusion Polycon(m) ⊂ Polycon(n) given by the natural inclusion R^m ↪ R^n. In particular, any continuous invariant valuation µ : Polycon(n) → R induces by restriction a valuation on Polycon(m). In this way we obtain for every m ≤ n a restriction map

Sm,n : Val(n) → Val(m)

such that Sn,n is the identity map and for every k ≤ m ≤ n we have Sk,n = Sk,m ◦ Sm,n, i.e. the diagram below commutes.

             S_{m,n}
    Val(n) ----------> Val(m)
         \               |
  S_{k,n} \              |  S_{k,m}
           \             v
            ------->  Val(k)

Definition 3.15. An intrinsic valuation is a sequence of convex-continuous, invariant valuations µ^n ∈ Val(n) such that for every m ≤ n we have
µ^m = S_{m,n} µ^n. ⊓⊔

Remark 3.16. (a) To put the above definition in some perspective we need to recall a classical notion. A projective sequence of Abelian groups is a sequence of Abelian groups (G_n)_{n≥1} together with a family of group morphisms
S_{m,n} : G_n → G_m,  m ≤ n,
satisfying
S_{n,n} = 1_{G_n},  S_{k,n} = S_{k,m} ∘ S_{m,n}, ∀k ≤ m ≤ n.
The projective limit of a projective sequence {G_n ; S_{m,n}} is the subgroup
limproj_n G_n ⊂ ∏_n G_n
consisting of sequences (g_n)_{n≥1} satisfying
g_n ∈ G_n,  g_m = S_{m,n} g_n, ∀m ≤ n.
The sequence (Val(n)) together with the maps S_{m,n} defines a projective sequence, and we set
Val(∞) := limproj_n Val(n) ⊂ ∏_{n≥0} Val(n).
An intrinsic valuation is then an element of Val(∞). Similarly, if we denote by Val_Pix(n) the space of continuous, invariant valuations on Pix(n), then we obtain again a projective sequence of vector spaces, and an element in the corresponding projective limit is an intrinsic valuation in the sense defined in the previous section.
(b) Observe that since Pix(n) ⊂ Polycon(n), restriction gives a natural map
Φ_n : Val(n) → Val_Pix(n).
A priori this linear map need be neither injective nor surjective. However, in a later section we will show that this map is a linear isomorphism. ⊓⊔

§3.2. Groemer's Extension Theorem. We now show that any convex-continuous valuation on K^n can be extended to Polycon(n). Thus, we can confine our studies to continuous valuations on compact convex sets.

Theorem 3.17. A convex-continuous valuation µ on K^n can be extended (uniquely) to Polycon(n). Moreover, if µ is also invariant, then so is its extension.

Proof. Suppose that µ is a convex-continuous valuation on K^n. In light of Groemer's integral theorem, we need only show that the integral defined by µ on the space of indicator functions is well defined.

We proceed by induction on dimension. In dimension zero, this proposition is trivial, and in dimension one, K^1 is the same as Par(1) and Polycon(1) is the same as Pix(1). Hence,

we have already done this dimension as well. So, suppose the theorem holds for dimension n − 1.

Suppose for the sake of contradiction that the integral defined by µ is not well defined. Using the same technique as in Theorem 2.5, Groemer's Extension Theorem for parallelotopes, suppose that there exist K_1, ..., K_m ∈ K^n and scalars α_1, ..., α_m such that
∑_{i=1}^m α_i I_{K_i} = 0    (3.1)
while
∑_{i=1}^m α_i µ(K_i) = r ≠ 0.    (3.2)
Take m to be the least positive integer for which such K_i and α_i exist.

Choose a hyperplane H with associated closed half-spaces H^+ and H^− such that K_1 ⊂ Int(H^+). Recall that I_{A∩B} = I_A · I_B. Thus, in light of equation (3.1), we can multiply and get
∑_{i=1}^m α_i I_{K_i ∩ H^+} = 0
as well as
∑_{i=1}^m α_i I_{K_i ∩ H^−} = 0  and  ∑_{i=1}^m α_i I_{K_i ∩ H} = 0.
Now note that K_i = (K_i ∩ H^+) ∪ (K_i ∩ H^−) and that H^+ ∩ H^− = H. Thus, since µ is a valuation, we may apply this decomposition and see that
∑_{i=1}^m α_i µ(K_i) = ∑_{i=1}^m α_i µ(K_i ∩ H^+) + ∑_{i=1}^m α_i µ(K_i ∩ H^−) − ∑_{i=1}^m α_i µ(K_i ∩ H).
Since each K_i ∩ H lies inside H, a space of dimension n − 1, we deduce from the induction assumption that
∑_{i=1}^m α_i µ(K_i ∩ H) = 0.
Moreover, since we took K_1 ⊂ Int(H^+), we have
I_{K_1 ∩ H^−} = 0,
and the sum
∑_{i=1}^m α_i µ(K_i ∩ H^−) = ∑_{i=2}^m α_i µ(K_i ∩ H^−)
must be zero due to the minimality of m. Thus, from (3.2), we have
0 ≠ r = ∑_{i=1}^m α_i µ(K_i) = ∑_{i=1}^m α_i µ(K_i ∩ H^+).

Thus we have replaced K_i by K_i ∩ H^+ in (3.1) and (3.2). We now repeat this process. By taking a countable dense subset, choose a sequence of hyperplanes H_1, H_2, ... such that K_1 ⊂ Int(H_i^+) and
K_1 = ⋂_i H_i^+.
Thus, iterating the preceding argument, we have that
∑_{i=1}^m α_i µ(K_i ∩ H_1^+ ∩ ··· ∩ H_q^+) = r ≠ 0
for all q ≥ 1. Since µ is continuous, we can take the limit as q → ∞, giving
∑_{i=1}^m α_i µ(K_i ∩ K_1) = r ≠ 0,
while applying the same type of argument to (3.1) yields
∑_{i=1}^m α_i I_{K_i ∩ K_1} = 0.
Thus, we find ourselves in exactly the same position we were in with equations (3.1) and (3.2); therefore, we may repeat the entire argument for K_2, ..., K_m, giving
∑_{i=1}^m α_i µ(K_i ∩ K_1 ∩ ··· ∩ K_m) = ∑_{i=1}^m α_i µ(K_1 ∩ ··· ∩ K_m) = (∑_{i=1}^m α_i) · µ(K_1 ∩ ··· ∩ K_m) = r ≠ 0    (3.3a)
and
∑_{i=1}^m α_i I_{K_1 ∩ ··· ∩ K_m} = (∑_{i=1}^m α_i) I_{K_1 ∩ ··· ∩ K_m} = 0.    (3.4)
The equalities (3.3a) and (3.4) contradict each other. The first implies that α_1 + ··· + α_m ≠ 0 and K_1 ∩ ··· ∩ K_m ≠ ∅, while the second implies that α_1 + ··· + α_m = 0 or I_{K_1 ∩ ··· ∩ K_m} = 0.

Thus, the integral must be well defined, so there exists a unique extension of µ to Polycon(n). ⊓⊔

§3.3. The Euler Characteristic.

Theorem 3.18. (a) There exists an intrinsic valuation µ_0 = (µ_0^n)_{n≥0} uniquely determined by
µ_0^n(C) = 1, ∀C ∈ K^n.
(b) Suppose C ∈ Polycon(n) and u ∈ S^{n−1}. Define ℓ_u : R^n → R by ℓ_u(x) = ⟨u, x⟩ and set
H_t := ℓ_u^{−1}(t),  C_t := C ∩ H_t,  F_{u,C}(t) := µ_0(C_t).

Then F_{u,C}(t) is a simple function, i.e. it is a linear combination, with integer coefficients, of characteristic functions I_S, S ∈ K^1. Moreover

µ_0(C) = ∫ I_C dµ_0 = ∫ F_{u,C}(t) dµ_0(t) = ∑_t (F_{u,C}(t) − F_{u,C}(t + 0)).    (Fubini)

Proof. Part (a) follows immediately from Theorem 3.17. To establish the second part we again use Theorem 3.17. For every C ∈ Polycon(n) define

λ_u(C) := ∫ F_{u,C}(t) dµ_0(t).
Observe that for every t ∈ R and every C_1, C_2 ∈ Polycon(n) we have
F_{u,C_1∪C_2}(t) = µ_0(H_t ∩ (C_1 ∪ C_2)) = µ_0((H_t ∩ C_1) ∪ (H_t ∩ C_2))
= µ_0(H_t ∩ C_1) + µ_0(H_t ∩ C_2) − µ_0((H_t ∩ C_1) ∩ (H_t ∩ C_2))
= F_{u,C_1}(t) + F_{u,C_2}(t) − F_{u,C_1∩C_2}(t).
Observe that if C ∈ K^n then F_{u,C} is the characteristic function of a compact interval ⊂ R^1. This shows that F_{u,C} is a simple function for every C ∈ Polycon(n). Moreover
∫ F_{u,C_1∪C_2} dµ_0 = ∫ F_{u,C_1} dµ_0 + ∫ F_{u,C_2} dµ_0 − ∫ F_{u,C_1∩C_2} dµ_0,
so that the correspondence
Polycon(n) ∋ C ↦ λ_u(C)
is a valuation such that λ_u(C) = 1, ∀C ∈ K^n. From part (a) we deduce
∫ F_{u,C} dµ_0 = λ_u(C) = µ_0(C).
We only have to prove that for every simple function h(t) on R^1 we have
∫ h(t) dµ_0(t) = L_1(h) := ∑_t (h(t) − h(t + 0)).
To achieve this, observe that L_1(h) is linear and convex continuous in h and thus defines an integral on the space of simple functions. Moreover, if h is the characteristic function of a compact interval then L_1(h) = 1. Thus by Theorem 3.17 we deduce
L_1 = ∫ dµ_0. ⊓⊔
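The one-dimensional Euler integral L_1 is directly computable. A minimal sketch (representation and names are ours): a simple function h on R^1 is stored as a list of integer-weighted compact intervals, and ∫ h dµ_0 is the sum of the downward jumps h(t) − h(t + 0), which can only occur at interval endpoints.

```python
# Sketch of the one-dimensional Euler integral of Theorem 3.18: a simple
# function h is a Z-combination of indicators of compact intervals, and
# ∫ h dµ0 = Σ_t (h(t) − h(t+0)), the sum of the jumps from the right.

def h_value(intervals, t):
    """h(t) = Σ α_i · I_{[a_i, b_i]}(t); intervals is a list of (α, a, b)."""
    return sum(alpha for (alpha, a, b) in intervals if a <= t <= b)

def euler_integral(intervals, eps=1e-9):
    """Σ_t (h(t) − h(t + 0)); jumps can only occur at interval endpoints."""
    pts = sorted({a for (_, a, b) in intervals} | {b for (_, a, b) in intervals})
    return sum(h_value(intervals, t) - h_value(intervals, t + eps) for t in pts)

# ∫ I_{[a,b]} dµ0 = 1 for a single compact interval (its Euler characteristic):
assert euler_integral([(1, 0.0, 2.0)]) == 1
# [0,2] ∪ [3,5] has Euler characteristic 2, and I_{[0,2]∪[3,5]} = I_{[0,2]} + I_{[3,5]}:
assert euler_integral([(1, 0.0, 2.0), (1, 3.0, 5.0)]) == 2
# Inclusion-exclusion: I_{[0,2]} + I_{[1,3]} − I_{[1,2]} = I_{[0,3]}, integral 1:
assert euler_integral([(1, 0.0, 2.0), (1, 1.0, 3.0), (-1, 1.0, 2.0)]) == 1
```

The small eps stands in for the right limit h(t + 0); it is safe here because the interval endpoints are separated by more than eps.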

Remark 3.19. Denote by S(R^n) the Abelian subgroup of Map(R^n, Z) generated by indicator functions of compact convex subsets of R^n. We will refer to the elements of S(R^n) as simple functions. Thus any f ∈ S(R^n) can be written, non-uniquely, as a sum
f = ∑_i α_i I_{C_i},  α_i ∈ Z, C_i ∈ K^n.

The Euler characteristic then defines an integral
∫ dµ_0 : S(R^n) → Z,  ∫ (∑_i α_i I_{C_i}) dµ_0 = ∑_i α_i.
The convolution of two simple functions f, g is the function f ∗ g defined by
(f ∗ g)(x) := ∫ f̄_x(y) · g(y) dµ_0(y),
where f̄_x(y) := f(x − y). Observe that if A, B ∈ K^n and A + B is their Minkowski sum then
I_A ∗ I_B = I_{A+B},
so that f ∗ g is a simple function for any f, g ∈ S(R^n). Moreover,
f ∗ g = g ∗ f, ∀f, g ∈ S(R^n).
Finally, f ∗ I_{{0}} = f for all f ∈ S(R^n), so that (S(R^n), +, ∗) is a commutative ring with 1. ⊓⊔

Definition 3.20. A convex polyhedron is an intersection of a finite collection of closed half-spaces. A convex polytope is a compact convex polyhedron. A polytope is a finite union of convex polytopes. The polytopes form a distributive sublattice of Polycon(n). The dimension of a convex polyhedron P is the dimension of the affine subspace Aff(P) generated by P. We denote by relint(P) the relative interior of P, that is, the interior of P relative to the topology of Aff(P). ⊓⊔

Remark 3.21. Given a convex polytope P, the boundary ∂P is also a polytope. Therefore, µ_0(∂P) is defined. ⊓⊔

Theorem 3.22. If P ⊂ R^n is a convex polytope of dimension n > 0, then µ_0(∂P) = 1 − (−1)^n.

Proof. Let u ∈ S^{n−1} be a unit vector and define ℓ_u : R^n → R as before. Using the previous notation, note that H_t ∩ ∂P = ∂(H_t ∩ P) if t is not a boundary point of the interval ℓ_u(P) ⊂ R. Let F : R → R be defined by
F(t) = µ_0(H_t ∩ ∂P).

We proceed by induction. For n = 1, we have µ_0(∂P) = 2 = 1 − (−1) since ∂P consists of two distinct points (since P is an interval).

For n > 1, it follows from the induction hypothesis that
µ_0(H_t ∩ ∂P) = µ_0(∂(H_t ∩ P)) = 1 − (−1)^{n−1}    (*)
if t ∈ ℓ_u(P) is not a boundary point of the interval ℓ_u(P). If t ∈ ∂ℓ_u(P), we have
µ_0(H_t ∩ ∂P) = 1    (**)
since H_t ∩ P is then a face of P and is thus in K^n. Finally,
µ_0(H_t ∩ ∂P) = 0    (***)
when H_t ∩ ∂P = ∅.

We can now compute
∫ F(t) dµ_0(t) = ∑_t (F(t) − F(t + 0)),
whose summands vanish except at the two points a and b (a < b), where [a, b] = ℓ_u(P). Then
∑_t (F(t) − F(t + 0)) = F(a) − F(a + 0) + F(b) − F(b + 0).
Now observe that F(b + 0) = 0 by (***), F(b) = F(a) = 1 by (**), and F(a + 0) = 1 − (−1)^{n−1} by (*). Then
∫ F(t) dµ_0(t) = 1 − (1 − (−1)^{n−1}) + 1 = 1 + (−1)^{n−1} = 1 − (−1)^n.
Since µ_0(∂P) = ∫ F(t) dµ_0(t) by Theorem 3.18(b), this completes the induction. ⊓⊔

Theorem 3.23. Let P be a compact convex polytope of dimension k in R^n. Then
µ_0(relint(P)) = (−1)^k.    (3.5)

Proof. Since µ_0 is normalized independently of n, we can consider P in the k-dimensional plane in R^n in which it is contained. Then relint(P) = P \ ∂P, so
µ_0(relint(P)) = µ_0(P) − µ_0(∂P) = (−1)^k. ⊓⊔

Definition 3.24. A system of faces of a polytope P is a family F of convex polytopes such that the following hold.
(a) ⋃_{Q∈F} relint(Q) = P.
(b) If Q, Q′ ∈ F and Q ≠ Q′, then relint(Q) ∩ relint(Q′) = ∅. ⊓⊔

Theorem 3.25 (Euler-Schläfli-Poincaré). Let F be a system of faces of the polytope P, and let f_i be the number of elements of F of dimension i. Then
µ_0(P) = f_0 − f_1 + f_2 − ··· .

Proof. We have
I_P = ∑_{Q∈F} I_{relint(Q)},
so that
µ_0(P) = ∑_{Q∈F} µ_0(relint(Q)) = ∑_{Q∈F} (−1)^{dim Q} = ∑_{k≥0} (−1)^k f_k. ⊓⊔
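Theorems 3.22 and 3.25 can be checked on a concrete polytope. A sketch for the unit cube in R^3 (the face-counting scheme is ours): faces of the cube correspond to freezing a subset of the coordinates at 0 or 1, a face of dimension d freezing 3 − d of them, giving f_0 = 8, f_1 = 12, f_2 = 6, f_3 = 1.

```python
# Theorems 3.22 and 3.25 checked on the solid unit cube in R^3 and its
# boundary, using the standard face system of the cube.
from itertools import combinations, product

verts = list(product([0, 1], repeat=3))

# A face of dimension d freezes 3 − d coordinates, each at 0 or 1:
faces_by_dim = {d: 0 for d in range(4)}
for frozen in range(4):
    d = 3 - frozen
    for axes in combinations(range(3), frozen):
        faces_by_dim[d] += 2 ** frozen

assert faces_by_dim[0] == len(verts)                     # 8 vertices
assert [faces_by_dim[d] for d in range(4)] == [8, 12, 6, 1]

# Theorem 3.25: µ0(P) = f0 − f1 + f2 − f3 = 1 for the convex polytope P:
assert 8 - 12 + 6 - 1 == 1
# Theorem 3.22: the boundary ∂P uses only the proper faces,
# µ0(∂P) = f0 − f1 + f2 = 2 = 1 − (−1)^3:
assert 8 - 12 + 6 == 1 - (-1) ** 3
```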

§3.4. Linear Grassmannians. We denote by Gr(n, k) the set of all k-dimensional subspaces in R^n. Observe that the orthogonal group O(n) acts transitively on Gr(n, k), and the stabilizer of a k-dimensional coordinate plane can be identified with the Cartesian product O(k) × O(n − k). Thus Gr(n, k) can also be identified with the space of left cosets O(n)/(O(k) × O(n − k)).

Example 3.26. Gr(n, 1) is the set of lines through the origin in R^n, a.k.a. the projective space RP^{n−1}. Denote by S^{n−1} the unit sphere in R^n. We have a natural map
ℓ : S^{n−1} → Gr(n, 1),  S^{n−1} ∋ x ↦ ℓ(x) = the line through 0 and x.
The map ℓ is 2-to-1 since ℓ(x) = ℓ(−x). ⊓⊔

Gr(n, k) can also be viewed as a subset of the vector space of symmetric linear operators R^n → R^n via the map
Gr(n, k) ∋ V ↦ P_V = the orthogonal projection onto V.
As such it is closed and bounded, and thus compact.

To proceed further we need some notation and a classical fact. Denote by ω_n the volume of the unit ball in R^n and by σ_{n−1} the (n − 1)-dimensional "surface area" of S^{n−1}. Then
σ_{n−1} = n ω_n
and
ω_n = π^{n/2} / Γ(n/2 + 1) = { π^k / k!  if n = 2k;  2^{2k+1} π^k k! / (2k + 1)!  if n = 2k + 1 },
where Γ(x) is the gamma function. We list below the values of ω_n for small n.

  n  | 0 | 1 | 2 |  3   |  4
 ω_n | 1 | 2 | π | 4π/3 | π²/2
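The Γ-function formula and the even/odd closed forms agree, and both reproduce the table. A quick numerical check (function names are ours):

```python
# The unit-ball volumes ω_n from the Γ-function formula, checked against the
# closed forms for even and odd n, the small-n table, and σ_{n−1} = n·ω_n.
import math

def omega(n):
    """Volume of the unit ball in R^n: ω_n = π^(n/2) / Γ(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def omega_even(k):   # n = 2k
    return math.pi ** k / math.factorial(k)

def omega_odd(k):    # n = 2k + 1
    return (2 ** (2 * k + 1) * math.pi ** k * math.factorial(k)
            / math.factorial(2 * k + 1))

for k in range(6):
    assert math.isclose(omega(2 * k), omega_even(k))
    assert math.isclose(omega(2 * k + 1), omega_odd(k))

table = [1, 2, math.pi, 4 * math.pi / 3, math.pi ** 2 / 2]
assert all(math.isclose(omega(n), v) for n, v in enumerate(table))
# Surface area of S^{n−1}: σ_{n−1} = n·ω_n, e.g. σ_2 = 3·ω_3 = 4π:
assert math.isclose(3 * omega(3), 4 * math.pi)
```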

Theorem 3.27. For every positive constant c there exists a unique measure µ = µ_c on Gr(n, k) which is invariant, i.e.
µ(g · S) = µ(S), ∀g ∈ O(n) and every open subset S ⊂ Gr(n, k),
and has finite total volume µ(Gr(n, k)) = c. ⊓⊔

For a proof we refer to [San].

Example 3.28. Here is how we construct such a measure on Gr(n, 1). Given an open set U ⊂ Gr(n, 1) we obtain a subset
Ũ = ℓ^{−1}(U) ⊂ S^{n−1}.
Ũ consists of two diametrically opposed subsets of the unit sphere. Define
µ(U) = (1/2) area(Ũ).
Observe that for this measure
µ(Gr(n, 1)) = (1/2) area(S^{n−1}) = σ_{n−1}/2 = nω_n/2. ⊓⊔

Observe that a constant multiple of an invariant measure is an invariant measure. In particular,
µ_c = c · µ_1.
Define
[n] := nω_n/(2ω_{n−1}),  [n]! := [1]·[2]···[n] = (n!/2^n) ω_n,
and the flag coefficient
[n choose k] := [n]!/([k]! [n−k]!),
and denote by ν_k^n the invariant measure on Gr(n, k) such that
ν_k^n(Gr(n, k)) = [n choose k].
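The brackets and flag coefficients are straightforward to compute from the ω_n. A sketch (function names are ours) with two sanity checks: the closed form [n]! = (n!/2^n)·ω_n, and the value ω_1·[3 choose 1] = 4, which formula (3.8) below assigns to µ_1^3(B_3).

```python
# Flag coefficients built from the ball volumes ω_n, in the notation of §3.4.
import math

def omega(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def bracket(n):
    """[n] = n·ω_n / (2·ω_{n−1})."""
    return n * omega(n) / (2 * omega(n - 1))

def bracket_factorial(n):
    """[n]! = [1]·[2]···[n]."""
    out = 1.0
    for j in range(1, n + 1):
        out *= bracket(j)
    return out

def flag(n, k):
    """The flag coefficient [n]! / ([k]!·[n−k]!)."""
    return bracket_factorial(n) / (bracket_factorial(k) * bracket_factorial(n - k))

# [n]! = (n!/2^n)·ω_n, checked for n = 4:
assert math.isclose(bracket_factorial(4), math.factorial(4) * omega(4) / 2 ** 4)
# [3]! = π, [1]! = 1, [2]! = π/2, so [3 choose 1] = 2:
assert math.isclose(flag(3, 1), 2.0)
# ω_1·[3 choose 1] = 4, the value of µ_1^3 on the unit ball, cf. (3.8):
assert math.isclose(omega(1) * flag(3, 1), 4.0)
```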

§3.5. Affine Grassmannians. We denote by Graff(n, k) the space of k-dimensional affine subspaces (planes) of R^n. For every affine k-plane V we denote by Π(V) the linear subspace parallel to V, and by V^⊥ the orthogonal complement of Π(V). We obtain in this fashion a surjection
Π : Graff(n, k) → Gr(n, k),  V ↦ Π(V).
Observe that an affine k-plane V is uniquely determined by V^⊥ and the point p = V^⊥ ∩ V. The fiber of the map Π over a point L ∈ Gr(n, k) is the set Π^{−1}(L) consisting of all affine k-planes parallel to L. This set can be canonically identified with L^⊥ via the map
Π^{−1}(L) ∋ V ↦ V ∩ L^⊥ ∈ L^⊥.
The map Π : Graff(n, k) → Gr(n, k) is an example of a vector bundle.

We now describe how to integrate functions f : Graff(n, k) → R. Define
∫_{Graff(n,k)} f(V) dλ_k^n := ∫_{Gr(n,k)} ( ∫_{L^⊥} f(L + p) d_{L^⊥}p ) dν_k^n,
where d_{L^⊥}p denotes the Euclidean volume element on L^⊥.

Example 3.29. Let
f : Graff(3, 1) → R,  f(L) := { 1 if dist(L, 0) ≤ 1;  0 if dist(L, 0) > 1 }.

[FIGURE 2. The orthogonal projection of an element in Graff(3, 2): an affine plane V, the parallel linear plane Π(V), its orthogonal complement V^⊥, and the point p = V ∩ V^⊥.]

Observe that f is none other than the indicator function of the set Graff(3, 1; B_3) of affine lines in R^3 which intersect B_3, the closed unit ball centered at the origin. Then
∫_{Graff(3,1)} f(V) dλ_1^3 = ∫_{Gr(3,1)} ( ∫_{L^⊥} f(L + p) dp ) dν_1^3 = ∫_{Gr(3,1)} ω_2 dν_1^3 = [3]·ω_2,
since the inner integral is the area ω_2 of the unit disk in L^⊥. Hence
∫_{Graff(3,1)} f(V) dλ_1^3 = (3ω_3/(2ω_2))·ω_2 = 3ω_3/2 = 2π.
In particular
λ_1^3(Graff(3, 1; B_3)) = 2π = σ_2/2 = (1/2) × surface area of B_3. ⊓⊔

§3.6. The Radon Transform. At this point, we further our goal of understanding Val(n) by constructing some of its elements. Recall that K^n is the collection of compact convex subsets of R^n and Polycon(n) is the collection of polyconvex subsets of R^n. We denote by Val(n) the vector space of convex-continuous, rigid motion invariant valuations µ : Polycon(n) → R. Via the embedding R^k ↪ R^n, k ≤ n, we can regard Polycon(k) as a subcollection of Polycon(n), and thus we obtain a natural map
S_{k,n} : Val(n) → Val(k).

We denote by Val^w(n) the vector subspace of Val(n) consisting of valuations µ homogeneous of degree (or weight) w, i.e.
µ(tP) = t^w µ(P), ∀t > 0, P ∈ Polycon(n).
For every k ≤ n and every µ ∈ Val(k) we define the Radon transform of µ to be
R_{n,k}µ : Polycon(n) → R,  (R_{n,k}µ)(P) := ∫_{Graff(n,k)} µ(P ∩ V) dλ_k^n.

Proposition 3.30. If µ ∈ Val(k) then R_{n,k}µ is a convex-continuous, invariant valuation on Polycon(n).

Proof. For every µ ∈ Val(k), V ∈ Graff(n, k) and any S_1, S_2 ∈ Polycon(n) we have
V ∩ (S_1 ∪ S_2) = (V ∩ S_1) ∪ (V ∩ S_2),  (V ∩ S_1) ∩ (V ∩ S_2) = V ∩ (S_1 ∩ S_2),
so that
µ(V ∩ (S_1 ∪ S_2)) = µ(V ∩ S_1) + µ(V ∩ S_2) − µ(V ∩ (S_1 ∩ S_2)).
Integrating the above equality with respect to V ∈ Graff(n, k) we deduce that
(R_{n,k}µ)(S_1 ∪ S_2) = (R_{n,k}µ)(S_1) + (R_{n,k}µ)(S_2) − (R_{n,k}µ)(S_1 ∩ S_2),
so that R_{n,k}µ is a valuation on Polycon(n). The invariance of R_{n,k}µ follows from the invariance of µ and of the measure λ_k^n on Graff(n, k).

Observe that if C_ν is a sequence of compact convex sets in R^n such that C_ν → C ∈ K^n, then
lim_{ν→∞} µ(C_ν ∩ V) = µ(C ∩ V), ∀V ∈ Graff(n, k).
We want to show that
lim_{ν→∞} ∫_{Graff(n,k)} µ(C_ν ∩ V) dλ_k^n(V) = ∫_{Graff(n,k)} µ(C ∩ V) dλ_k^n(V)
by invoking the Dominated Convergence Theorem. We will produce an integrable function f : Graff(n, k) → R such that
µ(C_ν ∩ V) ≤ f(V), ∀ν > 0, ∀V ∈ Graff(n, k).    (3.6)

To this aim, observe first that since C_ν → C there exists R > 0 such that all the sets C_ν and the set C are contained in the ball B_n(R) of radius R centered at the origin. Define
Graff(n, k; R) := { V ∈ Graff(n, k) | V ∩ B_n(R) ≠ ∅ }.
Graff(n, k; R) is a compact subset of Graff(n, k) and thus has finite volume. Now define
N := {0} ∪ {1/ν | ν ∈ Z_{>0}} ⊂ [0, 1]
and
F : N × Graff(n, k; R) → R,  F(r, V) := µ(C_{1/r} ∩ V),
where for uniformity we set C =: C_{1/0}. Observe that N × Graff(n, k; R) is a compact space and F is continuous. Set
M := sup { F(r, V) | (r, V) ∈ N × Graff(n, k; R) }.

Then M < ∞. The function
f : Graff(n, k) → R,  f(V) := { M if V ∈ Graff(n, k; R);  0 if V ∉ Graff(n, k; R) }
satisfies the requirements of (3.6). ⊓⊔

The resulting map R_{n,k} : Val(k) → Val(n) is linear and is called the Radon transform.

Proposition 3.31. If µ ∈ Val(k) is homogeneous of weight w, then R_{k+j,k}µ is homogeneous of weight w + j, that is,
R_{k+j,k}(Val^w(k)) ⊂ Val^{w+j}(k + j).

Proof. If C ∈ Polycon(k + j) and t is a positive real number, then
(R_{k+j,k}µ)(tC) = ∫_{Gr(k+j,k)} ( ∫_{L^⊥} µ(tC ∩ (L + p)) d_{L^⊥}p ) dν_k^{k+j}.
We use the equality
tC ∩ (L + p) = t·(C ∩ (L + t^{−1}p))
to get
(R_{k+j,k}µ)(tC) = t^w ∫_{Gr(k+j,k)} ( ∫_{L^⊥} µ(C ∩ (L + t^{−1}p)) d_{L^⊥}p ) dν_k^{k+j}.
We make the change of variables q = t^{−1}p in the inner integral, so that d_{L^⊥}p = t^j d_{L^⊥}q, and
(R_{k+j,k}µ)(tC) = t^{w+j} ∫_{Gr(k+j,k)} ( ∫_{L^⊥} µ(C ∩ (L + q)) d_{L^⊥}q ) dν_k^{k+j} = t^{w+j} (R_{k+j,k}µ)(C). ⊓⊔

So far we know only two valuations in Val(n): the Euler characteristic µ_0 and the Euclidean volume vol_n. We can now apply the Radon transform to these valuations and hopefully obtain new ones.

Example 3.32. Let vol_k ∈ Val(k) denote the Euclidean k-dimensional volume in R^k. Then for every n > k and every compact convex subset C ⊂ R^n we have
(R_{n,k} vol_k)(C) = ∫_{Graff(n,k)} vol_k(V ∩ C) dλ_k^n(V)
= ∫_{Gr(n,k)} ( ∫_{L^⊥} vol_k(C ∩ (L + p)) d_{L^⊥}p ) dν_k^n(L),
where d_{L^⊥}p denotes the Euclidean volume element on the (n − k)-dimensional space L^⊥. The usual Fubini theorem applied to the decomposition R^n = L^⊥ × L implies that
vol_n(C) = ∫_C d_{R^n}q = ∫_{L^⊥} vol_k(C ∩ (L + p)) d_{L^⊥}p.
Hence
(R_{n,k} vol_k)(C) = ∫_{Gr(n,k)} vol_n(C) dν_k^n(L) = [n choose k] vol_n(C).

In other words,
R_{n,k} vol_k = [n choose k] vol_n.
Thus the Radon transform of the Euclidean volume produces the Euclidean volume in a higher-dimensional space, rescaled by a universal multiplicative constant. ⊓⊔

The above example is a bit disappointing, since we have not produced an essentially new type of valuation by applying the Radon transform to the Euclidean volume. The situation is dramatically different when we play the same game with the Euler characteristic.

We are now ready to create elements of Val(n). For positive integers k ≤ n define
µ_k^n := R_{n,n−k} µ_0 ∈ Val(n),
where µ_0 ∈ Val(n − k) denotes the Euler characteristic. Observe that if C is a compact convex subset of R^{k+j}, then for any V ∈ Graff(k + j, k) we have
µ_0(V ∩ C) = { 1 if C ∩ V ≠ ∅;  0 if C ∩ V = ∅ }.
Thus the function Graff(k + j, k) ∋ V ↦ µ_0(V ∩ C) is the indicator function of the set
Graff(C, k) := { V ∈ Graff(k + j, k) ; V ∩ C ≠ ∅ }.
We conclude that
µ_j^{k+j}(C) = (R_{k+j,k} µ_0)(C) = ∫_{Graff(k+j,k)} I_{Graff(C,k)} dλ_k^{k+j} = λ_k^{k+j}(Graff(C, k)).
Thus the quantity µ_j^{k+j}(C) "counts" how many k-planes intersect C. From this interpretation we obtain immediately the following result.

Theorem 3.33 (Sylvester). Suppose K ⊆ L ⊂ R^{k+j} are two compact convex subsets. Then the conditional probability that a k-plane which meets L also meets K is equal to
µ_j^{k+j}(K) / µ_j^{k+j}(L). ⊓⊔
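Sylvester's theorem can be simulated for the simplest case: lines in the plane meeting two concentric disks K ⊆ L of radii r ≤ R. The invariant measure on lines in R^2 is, up to normalization, a uniform direction times a uniform signed offset from the origin, and a line meets a disk of radius s exactly when its offset is at most s, so the conditional probability should be r/R. A Monte Carlo sketch (names and parametrization are ours; by rotational symmetry the direction can be ignored):

```python
# Monte Carlo sketch of Theorem 3.33 (Sylvester) for concentric disks in R^2.
# Lines meeting the big disk L of radius R are sampled by a uniform signed
# offset p in [−R, R]; such a line also meets the disk of radius r iff |p| <= r.
import random

def meets_disk(offset, radius):
    return abs(offset) <= radius

def sylvester_estimate(r, R, trials=200_000, seed=0):
    rng = random.Random(seed)
    hits_L = hits_K = 0
    for _ in range(trials):
        p = rng.uniform(-R, R)      # signed distance of the line from 0
        if meets_disk(p, R):
            hits_L += 1
            if meets_disk(p, r):
                hits_K += 1
    return hits_K / hits_L

est = sylvester_estimate(0.3, 1.0)
assert abs(est - 0.3) < 0.02        # theoretical value r/R = 0.3
```

In the language of the theorem, the estimate approximates µ_1^2(K)/µ_1^2(L), and by the Cauchy formula of §3.7 this ratio is also the ratio of the perimeters of the two disks.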

We can give another interpretation of µ_j^n. Observe that if C ∈ K^n then
µ_j^n(C) = ∫_{Graff(n,n−j)} µ_0(C ∩ V) dλ_{n−j}^n(V) = ∫_{Gr(n,n−j)} ( ∫_{L^⊥} µ_0(C ∩ (L + p)) d_{L^⊥}p ) dν_{n−j}^n(L).
Now observe that if (C|L^⊥) denotes the orthogonal projection of C onto L^⊥, then
µ_0(C ∩ (L + p)) ≠ 0 ⟺ C ∩ (L + p) ≠ ∅ ⟺ p ∈ (C|L^⊥).
An example of this fact in R^2 can be seen in Figure 3. Hence
∫_{L^⊥} µ_0(C ∩ (L + p)) d_{L^⊥}p = vol_j(C|L^⊥),
and thus
µ_j^n(C) = ∫_{Gr(n,n−j)} vol_j(C|L^⊥) dν_{n−j}^n(L).    (3.7)
Thus µ_j^n is the "average value" of the volumes of the projections of C onto the j-dimensional subspaces of R^n.

[FIGURE 3. The projection C|L^⊥ for V ∈ Graff(2, 1): the affine line V = L + p meets C exactly when p lies in the projection of C onto L^⊥.]

A priori, some or maybe all of the valuations µ_j^n ∈ Val^j(n) ⊂ Val(n), 0 ≤ j ≤ n, could be trivial. Note, however, that µ_n^n is vol_n. Furthermore, volume is intrinsic, so µ_n^n is in fact µ_n. We show that in fact all of the µ_j^n are nontrivial.

Proposition 3.34. The valuations
µ_0^n = µ_0, µ_1^n, ..., µ_j^n, ..., µ_n^n ∈ Val(n)
are linearly independent.

Proof. Let B_n(r) denote the closed ball of radius r centered at the origin of R^n. We set B_n = B_n(1). Since µ_j^n is homogeneous of degree j, we have, by (3.7),
µ_j^n(B_n(r)) = r^j µ_j^n(B_n) = r^j ∫_{Gr(n,n−j)} vol_j(B_n|L^⊥) dν_{n−j}^n(L) = r^j ∫_{Gr(n,n−j)} vol_j(B_j) dν_{n−j}^n(L).
Hence
µ_j^n(B_n(r)) = ω_j r^j ∫_{Gr(n,n−j)} dν_{n−j}^n = ω_j [n choose n−j] r^j = ω_j [n choose j] r^j.    (3.8)
Observe that the above equality is also valid for j = 0. Suppose that for some real constants c_0, c_1, ..., c_n we have
∑_{j=0}^n c_j µ_j^n = 0.

Then
∑_{j=0}^n c_j µ_j^n(B_n(r)) = 0, ∀r > 0,
so that
∑_{j=0}^n c_j ω_j [n choose j] r^j = 0, ∀r > 0.
This implies c_j = 0 for all j. ⊓⊔

The valuation µ_j^n is homogeneous of degree j, and it induces a continuous, invariant, homogeneous valuation of degree j on the lattice Pix(n) ⊂ Polycon(n). Observe that if C_1 ⊂ C_2 are two compact convex sets then
µ_j^n(C_1) ≤ µ_j^n(C_2).
Since every n-dimensional parallelotope contains a ball of positive radius, we deduce from (3.8) that µ_j^n(P) ≠ 0 for any n-dimensional parallelotope P. Corollary 2.22 implies that there exists a constant C_j^n such that
C_j^n µ_j^n(P) = µ_j(P), ∀P ∈ Pix(n).    (3.9)
We denote by µ̂_j^n the valuation
µ̂_j^n := C_j^n µ_j^n ∈ Val^j(n).    (3.10)
Since µ_n^n is vol_n, as is µ_n, we know that C_n^n = 1. Thus µ̂_n^n = µ_n^n = µ_n = vol_n. We will prove in later sections that the sequence (µ̂_j^n) defines an intrinsic valuation and that in fact all the constants C_j^n are equal to 1.

Remark 3.35. Observe that
µ_{n−1}^n(B_n) = ω_{n−1} [n choose n−1] = ω_{n−1} [n] = nω_n/2 = σ_{n−1}/2 = (1/2) × surface area of B_n.
This is a special case of a more general fact we discuss in the next subsection, which will imply that the constants C_{n−1}^n in (3.9) are equal to 1. ⊓⊔

§3.7. The Cauchy Projection Formula. We want to investigate in some detail the valuations µ_j^{j+1} ∈ Val(j + 1). Suppose C ∈ K^{j+1} and ℓ ∈ Gr(j + 1, 1). If we denote by (C|ℓ^⊥) the orthogonal projection of C onto the hyperplane ℓ^⊥, then we obtain from (3.7)
µ_j^{j+1}(C) = ∫_{Gr(j+1,1)} µ_j(C|ℓ^⊥) dν_1^{j+1}(ℓ), ∀C ∈ K^{j+1}.    (3.11)
Loosely speaking, µ_j^{j+1}(C) is the average value of the "areas" of the shadows of C on the hyperplanes of R^{j+1}. To clarify the meaning of this average we need the following elementary fact.

Lemma 3.36. For any v ∈ S^j ⊂ R^{j+1},
∫_{S^j} |⟨u, v⟩| du = 2ω_j.    (3.12)

Proof. We recall from elementary calculus that an integral can be treated as a Riemann sum, i.e.,
∫_{S^j} |⟨u, v⟩| du ≈ ∑_i |⟨u_i, v⟩| S(A_i),
where the sphere S^j has been partitioned into many small portions (here denoted A_i), each containing the point u_i and possessing area S(A_i).

Let Ã_i denote the orthogonal projection of A_i onto the hyperplane tangent to S^j at the point u_i, and denote by Ã_i|v^⊥ the orthogonal projection of Ã_i onto the hyperplane v^⊥. Since A_i ⊂ S^j, this projection lands entirely inside the unit disk B_j lying inside v^⊥. Since Ã_i lies in a hyperplane with unit normal u_i, its projected area is the product of its area with the cosine of the angle between the two hyperplanes, i.e.
S(Ã_i|v^⊥) = |⟨u_i, v⟩| S(Ã_i).

For a fine enough partition of S^j we have S(A_i) ≈ S(Ã_i). Clearly then |⟨u_i, v⟩| S(A_i) ≈ |⟨u_i, v⟩| S(Ã_i), and thus S(A_i|v^⊥) ≈ S(Ã_i|v^⊥). Therefore,
∫_{S^j} |⟨u, v⟩| du ≈ ∑_i S(A_i|v^⊥).
Because the collection of sets {A_i|v^⊥} contains matching portions above and below the hyperplane v^⊥, it covers the unit j-dimensional ball B_j once from above and once from below, and we see that
∫_{S^j} |⟨u, v⟩| du ≈ 2ω_j.
The approximations in this argument all converge to equalities in the limit as the mesh of the partition (A_i) goes to zero. ⊓⊔
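Identity (3.12) is easy to verify numerically in low dimensions (the quadrature routines below are ours). For j = 1 take v = (1, 0); then ∫_{S^1} |⟨u, v⟩| du = ∫_0^{2π} |cos θ| dθ = 4 = 2ω_1. For j = 2, by symmetry take v = e_3 and integrate in spherical coordinates: ∫_0^{2π} ∫_0^π |cos φ| sin φ dφ dθ = 2π = 2ω_2.

```python
# Numerical check of Lemma 3.36 for j = 1 (circle) and j = 2 (sphere).
import math

def circle_integral(n=100_000):
    """∫_0^{2π} |cos θ| dθ by a Riemann sum; should equal 2·ω_1 = 4."""
    return sum(abs(math.cos(2 * math.pi * k / n)) for k in range(n)) * 2 * math.pi / n

def sphere_integral(n=100_000):
    """∫_{S^2} |cos φ| du = 2π·∫_0^π |cos φ| sin φ dφ; should equal 2·ω_2 = 2π."""
    h = math.pi / n
    s = sum(abs(math.cos((k + 0.5) * h)) * math.sin((k + 0.5) * h) for k in range(n))
    return 2 * math.pi * s * h

assert abs(circle_integral() - 4.0) < 1e-3           # 2ω_1 = 4
assert abs(sphere_integral() - 2 * math.pi) < 1e-3   # 2ω_2 = 2π
```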

Theorem 3.37 (Cauchy's surface area formula). For every convex polytope P ⊂ R^{j+1} denote by S(P) its surface area. Then
S(P) = (1/ω_j) ∫_{S^j} µ_j(P|u^⊥) du.

Proof. Let P have facets P_i with corresponding unit normal vectors v_i and surface areas α_i. For every unit vector u, the projection (P_i|u^⊥) onto the j-dimensional subspace orthogonal to u has j-dimensional volume
µ_j(P_i|u^⊥) = α_i |⟨u, v_i⟩|.
For u ∈ S^j, the region (P|u^⊥) is covered completely by the projections of the facets P_i,
(P|u^⊥) = ⋃_i (P_i|u^⊥).
For almost every point p in the projection (P|u^⊥), the line containing p and parallel to u intersects the boundary of P in exactly two points. (The set of points p ∈ (P|u^⊥) for which this does not happen is negligible, i.e. it has j-dimensional volume 0.) Thus, in the above decomposition of (P|u^⊥) as a union of projections of facets, almost every point of (P|u^⊥) is contained in exactly two such projections. Therefore
µ_j(P|u^⊥) = (1/2) ∑_i µ_j(P_i|u^⊥),
so that
∫_{S^j} µ_j(P|u^⊥) du = (1/2) ∑_{i=1}^m α_i ∫_{S^j} |⟨u, v_i⟩| du = ∑_{i=1}^m α_i ω_j = ω_j S(P),
where the middle equality uses (3.12). ⊓⊔

Recall that for parallelotopes P ∈ Par(j + 1),
µ_j(P) = (1/2) S(P).
In this light, Cauchy's surface area formula becomes
µ_j(P) = (1/(2ω_j)) ∫_{S^j} µ_j(P|u^⊥) du, ∀P ∈ Par(j + 1).
We want to rewrite this as an integral over the Grassmannian Gr(j + 1, 1). Recall that we have a 2-to-1 map
ℓ : S^j → Gr(j + 1, 1).

An invariant measure µ_c on Gr(j + 1, 1) of total volume c corresponds to an invariant measure µ̃_c on S^j of total volume 2c. If σ denotes the invariant measure on S^j defined by the area element on the sphere of radius 1, then
σ(S^j) = σ_j,  µ̃_c = (2c/σ_j) σ.
Any function f : Gr(j + 1, 1) → R defines a function ℓ*f := f ∘ ℓ : S^j → R, called the pullback of f via ℓ. Conversely, any even function g on the sphere is the pullback of some function ḡ on Gr(j + 1, 1). Moreover
∫_{Gr(j+1,1)} f dµ_c = (1/2) ∫_{S^j} ℓ*f dµ̃_c = (c/σ_j) ∫_{S^j} ℓ*f dσ.
This shows that
∫_{S^j} ℓ*f dσ = (σ_j/c) ∫_{Gr(j+1,1)} f dµ_c.
The measure ν_1^{j+1} has total volume c = [j + 1], so that
µ_j(P) = (1/(2ω_j)) ∫_{S^j} µ_j(P|u^⊥) du = (σ_j/(2[j + 1]ω_j)) ∫_{Gr(j+1,1)} µ_j(P|L^⊥) dν_1^{j+1}(L).

Using (3.11) we deduce
µ_j(P) = (σ_j/(2[j + 1]ω_j)) µ_j^{j+1}(P)
for every parallelotope P ⊂ R^{j+1}. Recalling that
σ_j = (j + 1)ω_{j+1},  [j + 1] = (j + 1)ω_{j+1}/(2ω_j),
we conclude
σ_j/(2[j + 1]ω_j) = 1,
so that finally
µ_j(P) = µ_j^{j+1}(P), ∀P ∈ Par(j + 1).

4. The Characterization Theorem

We are now ready to completely characterize Val(n).

§4.1. The Characterization of Simple Valuations. Before we can state and prove the characterization theorem for simple valuations, we need to recall a few things. E_n denotes the group of rigid motions of R^n, i.e. the subgroup of affine transformations of R^n generated by translations and rotations. Two subsets S_0, S_1 ⊂ R^n are called congruent if there exists φ ∈ E_n such that
φ(S_0) = S_1.
A valuation µ : Polycon(n) → R is called simple if µ(S) = 0 for every S ∈ Polycon(n) such that dim S < n.

The Minkowski sum of two compact convex sets K and L is given by
K + L = {x + y | x ∈ K, y ∈ L}.
We call a finite Minkowski sum of straight line segments a zonotope, and we call a convex set Y that can be approximated in the Hausdorff metric by a convergent sequence of zonotopes a zonoid.

If a compact convex set is symmetric about the origin, then we call it a centered set. We denote the set of centered sets in K^n by K^n_c. The proof of the characterization theorem relies in a crucial way on the following technical result.

Lemma 4.1. Let K ∈ K^n_c. Suppose that the support function of K is smooth. Then there exist zonoids Y_1, Y_2 such that
K + Y_2 = Y_1. ⊓⊔

Idea of proof. We begin by observing that the support function of a centered line segment in Rn with endpoints u, −u is given by n−1 hu : S → R, h(x) := |hu, xi|. n−1 Thus, any function g : S → R which is a linear combinations of hu’s with positive coefficients is the support function of a centered zonotope. In particular any uniform limit of such linear combinations will be the support function of a zonoid. For example, a function g : Sn−1 → R of the form

g(x)= |hu, xi|f(u)dSu, Sn−1 Z with f : Sn−1 → [0, ∞) a continuous, symmetric function, is the support function of a zonoid. The characterization theorem 41

n−1 n−1 Let us denoteby Ceven(S ) the space of continuous, symmetric (even) functions S → R. We obtain a linear map

n−1 n−1 C : Ceven(S ) → Ceven(S ), (Cf)(x) := |hu, xi|f(u)dSu Sn−1 Z called the cosine transform. Thus we can rephrase the last remark by saying that the cosine transform of a nonnegative continuous, even function on Sn−1 is the support function of a zonoid. Note that f is continuous, even, then we can write is as the difference of two continuous, even, nonnegative functions 1 1 f = f − f , f = (f + |f|), f = (|f| − f). + − + 2 − 2 n Thus if h is the support function of some C ∈ Kc such that

h = Cf, then h + Cf₋ = Cf₊.

Note that Cf₋ is the support function of a zonoid Y1 and Cf₊ is the support function of a zonoid Y2, and thus
h + Cf₋ = Cf₊ ⇐⇒ C + Y1 = Y2.
We deduce that any continuous function in the image of the cosine transform is the support function of a set C ∈ K^n_c satisfying the conclusions of the lemma. The punch line is now that any smooth, even function f : S^{n−1} → R is the cosine transform of a smooth, even function on the sphere. This existence statement is a nontrivial result, and its proof is based on a very detailed knowledge of the action of the cosine transform on the harmonic polynomials on S^{n−1}. For a proof along these lines we refer to [Gr, Chap. 3]; for an alternate approach, which reduces the problem to a similar statement about Fourier transforms, we refer to [Gon, Prop. 6.3]. ⊓⊔
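Although the argument above is analytic, the cosine transform is easy to experiment with numerically in the lowest-dimensional case S^1, where points are angles and ⟨u, x⟩ = cos(t − φ). The sketch below is our illustration (the function name is ours, not the text's): for f ≡ 1 the transform is the constant function with value ∫_0^{2π} |cos t| dt = 4, i.e. the support function of a disc of radius 4, a (trivial) zonoid.

```python
import math

def cosine_transform_circle(f, phi, n_steps=20000):
    """Riemann-sum approximation of the cosine transform on S^1,
    parameterizing points by angles so that <u, x> = cos(t - phi)."""
    h = 2 * math.pi / n_steps
    return sum(abs(math.cos(j * h - phi)) * f(j * h) * h for j in range(n_steps))

# For f == 1 (even, nonnegative) the transform is the constant function
# with value integral_0^{2 pi} |cos t| dt = 4: the support function of
# a disc of radius 4, which is a (trivial) zonoid.
val = cosine_transform_circle(lambda u: 1.0, 0.7)
```

The same routine applied to other even, nonnegative densities f produces support functions of genuinely non-round zonoids.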

Remark 4.2. The connection between the classification of valuations and the spectral properties of the cosine transform is, philosophically, the key reason why the classification turns out to be simple. The essence of the classification is invariant-theoretic: roughly speaking, invariance under the group of motions dramatically cuts down the number of possible choices. ⊓⊔

Theorem 4.3. Suppose that µ ∈ Val(n) is a simple valuation. If µ is even, that is, µ(K) = µ(−K) for all K ∈ K^n, then
µ([0, 1]^n) = 0 ⇐⇒ µ is identically zero on K^n.

Proof. Clearly only the implication =⇒ is nontrivial. To prove it we employ a simple strategy: by cleverly cutting and pasting and taking limits, we produce more and more sets S ∈ Polycon(n) with µ(S) = 0, until we get what we want. We argue by induction on the dimension n of the ambient space. The result is true for n = 1, because in this case K^1 = Par(1) and we can invoke the volume theorem for Pix(1), Theorem 2.19.

So, suppose now that n > 1 and the theorem holds for valuations on K^{n−1}. Note that for every positive integer k the cube [0, 1]^n decomposes into k^n cubes congruent to [0, 1/k]^n, overlapping on sets of dimension < n. We deduce that
µ([0, 1/k]^n) = 0.

Since any box whose edges have rational lengths decomposes into cubes congruent to [0, 1/k]^n for some positive integer k, we deduce that µ vanishes on such boxes; by continuity, µ vanishes on all boxes P ∈ Par(n). Using rotation invariance, we conclude that µ(C) = 0 for all boxes C with sides parallel to any other system of orthonormal axes. We now consider orthogonal cylinders with convex bases, i.e. sets congruent to convex sets of the form C × [0, r] ∈ K^n, C ∈ K^{n−1}. For every pair of real numbers a < b, define a valuation τ = τ_{a,b} on K^{n−1} by
τ(K) = µ(K × [a, b]), for all K ∈ K^{n−1}.
Note that [0, 1]^{n−1} × [a, b] ∈ Par(n), so that
τ([0, 1]^{n−1}) = µ([0, 1]^{n−1} × [a, b]) = 0.
Then τ satisfies the inductive hypotheses, so τ = 0. Hence µ vanishes on all orthogonal cylinders with convex bases. Now suppose that M is a prism with congruent top and bottom faces parallel to the hyperplane x_n = 0, but whose cylindrical boundary is not orthogonal to x_n = 0 and instead meets it at some constant angle. See the top of Figure 4.

FIGURE 4. Cutting and pasting a prism.

We now cut M into two pieces M1, M2 by a hyperplane orthogonal to the cylindrical boundary of M. Using translation invariance, we slide M1 and M2 around and glue them together along the original top and bottom faces (see Figure 4). We then obtain a right cylinder C with convex base. Thus,
µ(M) = µ(M1) + µ(M2) = µ(C) = 0.
Note that we must be careful with this cutting and pasting: it is possible that M is too "fat", preventing us from slicing it by a hyperplane that is orthogonal to the cylindrical boundary and separates the top face from the bottom face. If this problem occurs, we can easily remedy it by subdividing the top and bottom of M into sufficiently small convex pieces; using the simplicity of µ, we can then treat each piece separately.
Now let P be a convex polytope with facets P1, ..., Pm and corresponding outward unit normal vectors u1, ..., um. Let v ∈ R^n and let v̄ denote the straight line segment connecting the point v to the origin. Without loss of generality, suppose that P1, ..., Pj are exactly those facets of P such that ⟨u_i, v⟩ > 0, 1 ≤ i ≤ j. We can thus express the Minkowski sum P + v̄ as

P + v̄ = P ∪ ( ⋃_{i=1}^{j} (P_i + v̄) ),
where each term in the above union is either disjoint from the others or intersects them in dimension less than n (see Figure 5).

FIGURE 5. Smearing a convex polytope.

This simply reflects the fact that P + v̄ is the "smearing" of P in the direction of v. Hence, since µ is simple,

µ(P + v̄) = µ(P) + Σ_{i=1}^{j} µ(P_i + v̄).
But each term P_i + v̄ is the smearing of the facet P_i in the direction of v, making P_i + v̄ a prism, so that
µ(P + v̄) = µ(P),

for all convex polytopes P and line segments v̄. By translation invariance, v̄ can be any segment in space. We deduce iteratively that for all convex polytopes P and all zonotopes Z we have
µ(Z) = 0 and µ(P + Z) = µ(P).
By continuity,
µ(Y) = 0 and µ(K + Y) = µ(K)
for all K ∈ K^n and all zonoids Y. Now suppose that K ∈ K^n_c has a smooth support function. Then by Lemma 4.1 there are zonoids Y1, Y2 such that K + Y1 = Y2. Thus, we now have that

µ(K) = µ(K + Y1) = µ(Y2) = 0.
Since any centered convex body can be approximated by a sequence of centered convex sets with smooth support functions, the continuity of µ implies that µ is zero on all centered compact convex sets.
Now let ∆ be an n-dimensional simplex with one vertex at the origin. Let u1, ..., un denote the other vertices of ∆, and let P be the parallelepiped spanned by the vectors u1, ..., un. Let v = u1 + ··· + un, let ξ1 be the hyperplane passing through the points u1, ..., un, and let ξ2 be the hyperplane passing through the points v − u1, ..., v − un. Finally, denote by Q the set of all points of P lying between the hyperplanes ξ1 and ξ2. We now write
P = ∆ ∪ Q ∪ (−∆ + v),
where each term in the union intersects any other in dimension less than n. P and Q are centered (up to a translation, which does not affect µ), so
0 = µ(P) = µ(∆) + µ(Q) + µ(−∆ + v) = µ(∆) + µ(−∆).
Thus µ(∆) = −µ(−∆). Yet by assumption µ(∆) = µ(−∆), so µ(∆) = 0. Since any convex polytope can be expressed as a finite union of simplices, each of which intersects any other in dimension less than n, we deduce that µ(P) = 0 for all convex polytopes P. Since convex polytopes are dense in K^n, µ is zero on K^n, which is what we wanted. ⊓⊔

We now immediately get the following result.

Theorem 4.4. Suppose that µ is a continuous simple valuation on K^n that is translation and rotation invariant. Then there exists some c ∈ R such that
µ(K) + µ(−K) = c µ_n(K) for all K ∈ K^n.

Proof. For K ∈ K^n, define
η(K) = µ(K) + µ(−K) − 2 µ([0, 1]^n) µ_n(K).
Then η satisfies the conditions of the previous theorem, so η is zero on K^n. Thus,

µ(K) + µ(−K) = c µ_n(K), where c = 2 µ([0, 1]^n). ⊓⊔

§4.2. The Volume Theorem.

Theorem 4.5 (Sah). Let ∆ be an n-dimensional simplex. There exist convex polytopes P1, ..., Pm such that ∆ = P1 ∪ ··· ∪ Pm, where each term of the union intersects any other in dimension at most n − 1, and where each polytope P_i is symmetric under a reflection across a hyperplane, i.e. for each P_i there exists a hyperplane such that P_i is invariant under reflection across it.

Proof. Let x0, ..., xn be the vertices of ∆ and let ∆_i be the facet of ∆ opposite x_i. Let z be the center of the inscribed sphere of ∆ and let z_i be the foot of the perpendicular from z to the facet ∆_i. For all i < j, let A_{i,j} denote the convex hull of z, z_i, z_j and the face ∆_i ∩ ∆_j (see Figure 6).

FIGURE 6. Cutting a simplex into symmetric polytopes.

Then
∆ = ⋃_{0≤i<j≤n} A_{i,j},
and each A_{i,j} is symmetric under the reflection across the hyperplane spanned by z and the face ∆_i ∩ ∆_j: this hyperplane bisects the dihedral angle along ∆_i ∩ ∆_j, so the reflection fixes z and ∆_i ∩ ∆_j and interchanges z_i and z_j. ⊓⊔

Theorem 4.6 (Volume Theorem for Polycon(n)). Suppose that µ is a continuous, rigid motion invariant, simple valuation on K^n. Then there exists c ∈ R such that µ(K) = c µ_n(K) for all K ∈ K^n. Since continuous valuations on K^n extend to continuous valuations on Polycon(n), the theorem also holds with K^n replaced by Polycon(n).

Proof. Since µ is translation invariant and simple, by Theorem 4.4 there exists a ∈ R such that µ(K) + µ(−K) = a µ_n(K) for all K ∈ K^n. Let ∆ be an n-simplex in R^n. Then µ(∆) + µ(−∆) = a µ_n(∆).
Suppose n is even. Then ∆ and −∆ differ by a rotation: this is clearly true in R^2, and in general −I acts as a rotation by π in each of the n/2 mutually orthogonal coordinate planes of R^n, so the composition of these rotations carries ∆ to −∆. Then µ(∆) = µ(−∆) = (a/2) µ_n(∆).
Now suppose that n is odd. By Theorem 4.5, there exist polytopes P1, ..., Pm such that ∆ = P1 ∪ ··· ∪ Pm, dim P_i ∩ P_j ≤ n − 1, and each P_i is symmetric under a reflection

across a hyperplane. Denote by ρ_i the reflection leaving P_i invariant; then −P_i = (−I ∘ ρ_i)(P_i), and for n odd the map −I ∘ ρ_i has determinant (−1)^n · (−1) = 1, so it is a rigid motion. Then µ(P_i) = µ(−P_i) and

µ(−∆) = Σ_{i=1}^{m} µ(−P_i) = Σ_{i=1}^{m} µ(P_i) = µ(∆).
Then for all n-simplices ∆, µ(∆) = (a/2) µ_n(∆).
Let c = a/2 and suppose P is a convex polytope in R^n. Then P is a finite union of simplices, i.e. P = ∆1 ∪ ··· ∪ ∆m with dim ∆_i ∩ ∆_j < n. Then

µ(P) = µ(∆1) + ··· + µ(∆m) = c µ_n(∆1) + ··· + c µ_n(∆m) = c µ_n(P).
The set of convex polytopes is dense in K^n, so since µ is continuous, µ(K) = c µ_n(K) for all K ∈ K^n. ⊓⊔

§4.3. Intrinsic Valuations. We would like to show that for each k ≥ 0 the valuations µ̂^n_k determine an intrinsic valuation, meaning that for all N ≥ n and P ∈ Polycon(n),
µ̂^n_k(P) = µ̂^N_k(P).
For uniformity, we set
µ̂^n_k = 0 for all k > n.

Theorem 4.7. For every k ≥ 0 the sequence
(µ̂^n_k)_{n≥0} ∈ ∏_{n≥0} Val(n)
defines an intrinsic valuation, i.e., an element of
Val(∞) = proj lim_n Val(n).

Proof. First of all let us fix k. Since there is no danger of confusion we will write µ̂^n instead of µ̂^n_k. Consider the statement
µ̂^n(P) = µ̂^ℓ(P) for all P ∈ Polycon(ℓ).   (P_{n,ℓ})
Note that ℓ ≤ n. We want to prove by induction over n that the statement
P_{n,ℓ} holds for every ℓ ≤ n   (S_n)
is true for every n. The statement S_0 is trivially true. We will prove that
S_0, ..., S_{n−1} =⇒ S_n.
Thus, we know that P_{m,ℓ} is true for every ℓ ≤ m < n, and we want to prove that
P_{n,ℓ} is true for every ℓ ≤ n.   (T_ℓ)
To prove T_ℓ we argue by induction on ℓ (n is fixed). The result is obviously true for ℓ = 0, 1, since in these cases Pix = Polycon. Thus we assume that P_{n,ℓ} is true and we will show that P_{n,ℓ+1} is true. In other words, we know that
µ̂^n(P) = µ̂^ℓ(P), for all P ∈ Polycon(ℓ),   (4.1)
and we want to prove that
µ̂^n(P) = µ̂^{ℓ+1}(P), for all P ∈ Polycon(ℓ + 1).
Clearly the last statement is trivially true if ℓ + 1 = n, so we may assume ℓ + 1 < n. We denote by ν the restriction of µ̂^n to Polycon(ℓ + 1). Then ν restricts to µ̂^ℓ on Polycon(ℓ), since µ̂^n restricts to µ̂^ℓ by (4.1). Since ℓ + 1 < n, we deduce from S_{ℓ+1} that µ̂^{ℓ+1} also restricts to µ̂^ℓ on Polycon(ℓ). Then ν − µ̂^{ℓ+1} vanishes on Polycon(ℓ), meaning it is a continuous invariant simple valuation on Polycon(ℓ + 1). Theorem 4.6 implies that there exists c ∈ R such that ν − µ̂^{ℓ+1} = c µ_{ℓ+1} on Polycon(ℓ + 1). On the other hand, ν = µ̂^{ℓ+1} on Pix(ℓ + 1), meaning c = 0 and thus ν = µ̂^{ℓ+1}. ⊓⊔

Based on this result, the superscript of µ̂^n_k does not matter, and we define
µ̂_k := µ̂^n_k.
At this point we are able to characterize the continuous, invariant valuations on Polycon(n), as we did for Pix(n) in Theorem 2.20.

Theorem 4.8 (Hadwiger's Characterization Theorem). The valuations µ̂_0, µ̂_1, ..., µ̂_n form a basis of the vector space Val(n).

Proof. Recall that Polycon(1) = Pix(1), so by Theorem 2.20 the statement is true for n = 1. We take this as our base case and proceed by induction.
Let µ ∈ Val(n) and let H be a hyperplane in R^n. The restriction of µ to H is a continuous invariant valuation on H. Note that the choice of H does not matter, since µ is rigid motion invariant and all hyperplanes are obtained from a single one by rigid motions. By the inductive hypothesis, for every polyconvex set A in H,
µ(A) = Σ_{i=0}^{n−1} c_i µ̂_i(A).
Thus
µ − Σ_{i=0}^{n−1} c_i µ̂_i
is a simple valuation in Val(n). Then, by Theorem 4.6,
µ − Σ_{i=0}^{n−1} c_i µ̂_i = c_n µ̂_n.
Moving the sum to the other side, we obtain
µ = Σ_{i=0}^{n} c_i µ̂_i. ⊓⊔

In a definition analogous to that on Pix(n), a valuation µ on Polycon(n) is homogeneous of degree k if for all α ≥ 0 and all K ∈ Polycon(n), µ(αK) = α^k µ(K).

Corollary 4.9. Let µ ∈ Val(n) be homogeneous of degree k. Then there exists c ∈ R such that µ(K) = c µ̂_k(K) for all K ∈ Polycon(n).

Proof. By Theorem 4.8, there exist c_0, ..., c_n ∈ R such that
µ = Σ_{i=0}^{n} c_i µ̂_i.
Let P = [0, 1]^n, the unit cube in R^n, and fix α ≥ 0. Then
µ(αP) = Σ_{i=0}^{n} c_i µ̂_i(αP) = Σ_{i=0}^{n} α^i c_i µ̂_i(P) = Σ_{i=0}^{n} c_i (n choose i) α^i,
since µ̂_i(P) = µ_i(P) = (n choose i); recall that for P ∈ Par(n), µ_i is the i-th elementary symmetric function of the edge lengths. At the same time,
µ(αP) = α^k µ(P) = Σ_{i=0}^{n} c_i µ̂_i(P) α^k = Σ_{i=0}^{n} c_i (n choose i) α^k.
Comparing the coefficients of the powers of α in the two expressions, we conclude that c_i = 0 for i ≠ k. Thus µ = c_k µ̂_k. ⊓⊔
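The elementary symmetric description used in this proof is easy to see concretely. The sketch below is our illustration (the helper name mu_box is ours): it evaluates µ_i on orthogonal boxes as the i-th elementary symmetric function of the edge lengths, and checks the two facts the proof relies on, namely µ̂_i([0, 1]^n) = (n choose i) and homogeneity of degree i.

```python
import math
from itertools import combinations

def mu_box(i, edges):
    """mu_i of an orthogonal box: the i-th elementary symmetric function
    of its edge lengths (the normalization used on Par(n))."""
    return sum(math.prod(c) for c in combinations(edges, i))

n = 4
cube_vals = [mu_box(i, [1.0] * n) for i in range(n + 1)]   # should be (n choose i)
edges, alpha = [1.0, 2.0, 3.0, 4.0], 2.5
homog_ok = all(
    abs(mu_box(i, [alpha * a for a in edges]) - alpha ** i * mu_box(i, edges)) < 1e-9
    for i in range(n + 1)
)
```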


5. Applications

It is payoff time! Having proven Hadwiger's Characterization Theorem, we now seek to use it to extract as much interesting geometric information as possible. Let us recall that
µ^n_k = R_{n,n−k} µ_0 and µ̂^n_k = C^n_k µ^n_k,
where the normalization constants C^n_k are chosen so that
µ̂^n_k([0, 1]^n) = µ_k([0, 1]^n) = (n choose k).
Above, µ_k is the intrinsic valuation on the lattice of pixelations defined in Section 2. We have seen in the last section that the sequence {µ̂^n_k}_{n≥k} defines an intrinsic valuation, and thus we will write µ̂_k instead of µ̂^n_k.

§5.1. The Tube Formula. Let K, L ∈ K^n and let α ≥ 0. Recall the Minkowski sum
K + αL = { x + αy | x ∈ K and y ∈ L }.

Proposition 5.1 (Smearing Formula). Let ū denote the straight line segment connecting u and the origin o. For K ∈ K^n and any unit vector u ∈ R^n,
µ_n(K + εū) = µ_n(K) + ε µ_{n−1}(K|u^⊥).

Proof. Let L = K + εū. We compute the volume of L by integrating the lengths of the one-dimensional slices of L along lines ℓ_x parallel to u and passing through points x ∈ u^⊥, that is,
µ_n(L) = ∫_{u^⊥} µ_1(L ∩ ℓ_x) dx.
Since µ_1(L ∩ ℓ_x) = µ_1(K ∩ ℓ_x) + ε for all x ∈ K|u^⊥, and µ_1(L ∩ ℓ_x) = 0 for x ∉ K|u^⊥, we have
µ_n(L) = ∫_{u^⊥} µ_1(L ∩ ℓ_x) dx = ∫_{K|u^⊥} (µ_1(K ∩ ℓ_x) + ε) dx = µ_n(K) + ε µ_{n−1}(K|u^⊥). ⊓⊔
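The Cavalieri-style slicing in the proof of Proposition 5.1 can be checked numerically on a concrete body. In the sketch below (our illustration), K is the triangle with vertices (0,0), (1,0), (0,1) in R^2 and u = e_1, so every nonempty slice gains exactly ε in length and the area grows by ε · µ_1(K|u^⊥) = ε · 1.

```python
def smeared_area(eps, n_steps=100_000):
    """Midpoint-rule integral of the slice lengths of T + eps*u-bar, where
    T = conv{(0,0), (1,0), (0,1)} and u = e_1: the slice at height
    y in [0,1] has length (1 - y) + eps."""
    h = 1.0 / n_steps
    return sum(((1.0 - (j + 0.5) * h) + eps) * h for j in range(n_steps))

# mu_2(T) + eps * mu_1(T|u^perp) = 1/2 + eps * 1
area = smeared_area(0.25)
```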

Let C_n denote the n-dimensional unit cube. Recall that for 0 ≤ i ≤ n,
µ_i(C_n) = (n choose i).

Proposition 5.2. For ε ≥ 0,
µ_n(C_n + εB_n) = Σ_{i=0}^{n} µ_i(C_n) ω_{n−i} ε^{n−i} = Σ_{i=0}^{n} (n choose i) ω_{n−i} ε^{n−i} = Σ_{i=0}^{n} µ̂_i(C_n) ω_{n−i} ε^{n−i}.

Proof. Let u1, ..., un denote the standard orthonormal basis of R^n, and ū_i the line segment connecting u_i to the origin. By Proposition 5.1 we have
µ_n(B_n + rū_1) = ω_n + r ω_{n−1} = Σ_{i=0}^{1} (1 choose i) ω_{n−i} r^i
for all n ≥ 1. Having proven the base case, we proceed by induction. Suppose that
µ_n(B_n + rū_1 + ··· + rū_k) = Σ_{i=0}^{k} (k choose i) ω_{n−i} r^i,
for some 1 ≤ k < n. Then, by Proposition 5.1, and since the projection of B_n + rū_1 + ··· + rū_k onto u_{k+1}^⊥ is B_{n−1} + rū_1 + ··· + rū_k,
µ_n(B_n + rū_1 + ··· + rū_{k+1}) = µ_n(B_n + rū_1 + ··· + rū_k) + r µ_{n−1}(B_{n−1} + rū_1 + ··· + rū_k)
= Σ_{i=0}^{k} (k choose i) ω_{n−i} r^i + Σ_{i=0}^{k} (k choose i) ω_{n−1−i} r^{i+1}
= Σ_{i=0}^{k+1} [ (k choose i) + (k choose i−1) ] ω_{n−i} r^i = Σ_{i=0}^{k+1} (k+1 choose i) ω_{n−i} r^i.
Thus, by induction,
µ_n(B_n + rū_1 + ··· + rū_n) = Σ_{i=0}^{n} (n choose i) ω_{n−i} r^i.
Noting that µ_n is homogeneous of degree n, we have
µ_n(B_n + rū_1 + ··· + rū_n) = µ_n(B_n + rC_n) = µ_n(r((1/r)B_n + C_n)) = r^n µ_n((1/r)B_n + C_n).
Letting ε = 1/r, the previous two equalities give us
µ_n(C_n + εB_n) = ε^n Σ_{i=0}^{n} (n choose i) ω_{n−i} ε^{−i} = Σ_{i=0}^{n} (n choose i) ω_{n−i} ε^{n−i}. ⊓⊔

We can now prove the following remarkable result, first proved by Jakob Steiner in dimensions ≤ 3 in the 19th century.

Theorem 5.3 (Tube Formula). For K ∈ K^n and ε ≥ 0,
µ_n(K + εB_n) = Σ_{i=0}^{n} µ̂_i(K) ω_{n−i} ε^{n−i}.   (5.1)

Proof. Let η(K) = µ_n(K + B_n) for K ∈ K^n. Because B_n ∈ K^n, for K, L ∈ K^n we have
(K ∪ L) + B_n = (K + B_n) ∪ (L + B_n),
and clearly
(K + B_n) ∩ (L + B_n) = (K ∩ L) + B_n,
so we have
η(K ∪ L) = µ_n((K ∪ L) + B_n) = µ_n((K + B_n) ∪ (L + B_n))
= µ_n(K + B_n) + µ_n(L + B_n) − µ_n((K + B_n) ∩ (L + B_n))
= η(K) + η(L) − η(K ∩ L).
Thus η is a valuation on K^n. The continuity and invariance of µ_n, and the symmetry of B_n under Euclidean transformations, show that η is a convex-continuous invariant valuation on K^n, which according to the Extension Theorem 3.17 extends to a valuation in Val(n). By Hadwiger's Characterization Theorem we have
η(K) = Σ_{i=0}^{n} c_i µ̂_i(K)
for all K ∈ K^n. Therefore, for ε > 0,
µ_n(K + εB_n) = ε^n µ_n((1/ε)K + B_n) = ε^n Σ_{i=0}^{n} c_i (1/ε^i) µ̂_i(K) = Σ_{i=0}^{n} c_i µ̂_i(K) ε^{n−i}.
In particular, letting K = C_n in the previous equation and comparing with the results of the previous Proposition, we find that
Σ_{i=0}^{n} c_i µ̂_i(C_n) ε^{n−i} = Σ_{i=0}^{n} µ̂_i(C_n) ω_{n−i} ε^{n−i},
i.e. c_i = ω_{n−i}. ⊓⊔
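The tube formula can be sanity-checked on boxes, where the intrinsic volumes are elementary symmetric functions of the edge lengths and ω_k = π^{k/2}/Γ(k/2 + 1). The sketch below is ours (the helper names are hypothetical); for the unit cube in R^3 at ε = 1 it reproduces the familiar decomposition of the rounded cube into the cube itself, six slabs, twelve quarter-cylinders, and eight ball octants.

```python
import math
from itertools import combinations

def omega(k):
    """Volume of the unit ball in R^k: pi^(k/2) / Gamma(k/2 + 1)."""
    return math.pi ** (k / 2) / math.gamma(k / 2 + 1)

def mu_box(i, edges):
    # intrinsic volumes of a box: elementary symmetric functions of the edges
    return sum(math.prod(c) for c in combinations(edges, i))

def steiner(edges, eps):
    """Right-hand side of (5.1) for K an orthogonal box."""
    n = len(edges)
    return sum(mu_box(i, edges) * omega(n - i) * eps ** (n - i) for i in range(n + 1))

# unit cube in R^3 at eps = 1: cube + 6 slabs + 12 quarter-cylinders + 8 octants
expected = 1 + 6 + 3 * math.pi + 4 * math.pi / 3
vol = steiner([1.0, 1.0, 1.0], 1.0)
```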

Theorem 5.4 (The intrinsic volumes of the unit ball). For 0 ≤ i ≤ n,
µ̂_i(B_n) = (n choose i) ω_n / ω_{n−i} = [n choose i] ω_i,
where [n choose i] := [n]!/([i]![n−i]!) denotes the flag coefficient.

Proof. Applying the tube formula to K = B_n we obtain
Σ_{i=0}^{n} µ̂_i(B_n) ω_{n−i} ε^{n−i} = µ_n(B_n + εB_n) = µ_n((1 + ε)B_n)
= (1 + ε)^n µ_n(B_n) = Σ_{i=0}^{n} (n choose i) ω_n ε^{n−i},
for all ε > 0. Comparing coefficients of powers of ε, we uncover the desired result. ⊓⊔
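Theorem 5.4 can be checked against familiar low-dimensional facts. The sketch below is ours; it uses the standard interpretations µ_n(B_n) = ω_n, µ_1(B_2) as half the circumference of the circle, and µ_2(B_3) as half the surface area of the sphere.

```python
import math

def omega(k):
    return math.pi ** (k / 2) / math.gamma(k / 2 + 1)

def mu_ball(i, n):
    """Theorem 5.4: mu_i(B_n) = (n choose i) * omega_n / omega_{n-i}."""
    return math.comb(n, i) * omega(n) / omega(n - i)

# mu_n(B_n) = omega_n; mu_1(B_2) = pi (half the circumference 2*pi);
# mu_2(B_3) = 2*pi (half the surface area 4*pi)
checks = (
    abs(mu_ball(2, 2) - omega(2)) < 1e-12,
    abs(mu_ball(1, 2) - math.pi) < 1e-12,
    abs(mu_ball(2, 3) - 2 * math.pi) < 1e-12,
)
```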

§5.2. Universal Normalization and Crofton's Formulæ. It turns out that the intrinsic volumes of the unit ball are the final piece of the puzzle in understanding the constants C^n_k in the formula µ̂^n_k = C^n_k µ^n_k, where
µ^n_k = R_{n,n−k} µ_0 and µ̂^n_k = µ_k on Pix(n).
We have the following very pleasant surprise.

Theorem 5.5. For 0 ≤ k ≤ n,
µ̂^n_k = µ^n_k = µ_k.
In other words, no more hats!

Proof. From Theorem 5.4 we deduce
C^n_k µ^n_k(B_n) = µ̂^n_k(B_n) = (n choose k) ω_n/ω_{n−k} = [n choose k] ω_k.
On the other hand, (3.8) shows that
µ^n_k(B_n) = [n choose k] ω_k.
Equating the two, we readily see that C^n_k = 1 for all n, k ≥ 0. In fact, since both µ̂_k and µ_k are intrinsic, we have shown that µ̂_k = µ_k. ⊓⊔

We have thus proved that the intrinsic volumes µ_k on Polycon(n) are exactly the Radon transforms R_{n,n−k} µ_0, a result to which we feel obliged to "tip our hats". The following theorem is a generalization of the previous one.

Theorem 5.6 (Crofton's formula). For 0 ≤ i, j ≤ n and K ∈ Polycon(n),
(R_{n,n−i} µ_j)(K) = [i+j choose j] µ_{i+j}(K).

Proof. From the earlier section on Radon transforms we already know that R_{n,n−i} µ_j ∈ Val_{i+j}(n). By Corollary 4.9 there exists c ∈ R such that R_{n,n−i} µ_j = c µ_{i+j}. To find c we simply evaluate (R_{n,n−i} µ_j)(B_n):
(R_{n,n−i} µ_j)(B_n) = (R_{n,n−i}(R_{n−i,n−i−j} µ_0))(B_n)
= ∫_{Graff(n,n−i)} ( ∫_{Graff(V,n−i−j)} µ_0(B_n ∩ W) dλ_{n−i−j}(W) ) dλ_{n−i}(V)
= ∫_{Gr(n,n−i)} ( ∫_{Gr(V,n−i−j)} ∫_{L^⊥} µ_0(B_n ∩ (L + p)) dp dν_{n−i−j}(L) ) dν_{n−i}(V)
= ∫_{Gr(n,n−i)} ( ∫_{Gr(V,n−i−j)} ( ∫_{B_{i+j}} dp ) dν_{n−i−j}(L) ) dν_{n−i}(V)
= ∫_{Gr(n,n−i)} [n−i choose n−i−j] ω_{i+j} dν_{n−i}(V)
= [n choose n−i] [n−i choose n−i−j] ω_{i+j},
since an affine plane L + p, p ∈ L^⊥, meets B_n exactly when p lies in the unit ball B_{i+j} of L^⊥.
But we also have by Theorem 5.4 that
c µ_{i+j}(B_n) = c [n choose i+j] ω_{i+j},
and thus
c = [n choose i+j]^{−1} [n choose n−i] [n−i choose n−i−j] = [i+j choose j]. ⊓⊔

Remark 5.7. If we define a weighted Radon transform
R*_w : Val(k) → Val(k + w), R*_w := [w]! R_{k+w,k},
and we set µ*_i := [i]! µ_i, then we can rewrite Crofton's formulæ in the more symmetric form
µ*_{i+j} = R*_j µ*_i.
Note that the µ*_i are also intrinsic valuations, i.e. independent of the dimension of the ambient space. ⊓⊔

§5.3. The Mean Projection Formula Revisited. The conclusion of Theorem 5.5 allows us to restate Cauchy's surface area formula as follows:

Theorem 5.8 (The mean projection formula). For 0 ≤ k ≤ n and C ∈ K^n,
(R_{n,n−k} µ_0)(C) = µ_k(C) = ∫_{Gr(n,k)} µ_k(C|V_0) dν^n_k(V_0). ⊓⊔

It turns out that Cauchy's mean projection formula is a special case of a more general formula.

Theorem 5.9 (Kubota). For 0 ≤ k ≤ l ≤ n and C ∈ K^n,
∫_{Gr(n,l)} µ_k(C|V) dν^n_l(V) = [n−k choose l−k] µ_k(C).

Proof. Define a valuation ν on K^n by
ν(C) = ∫_{Gr(n,l)} µ_k(C|V) dν^n_l(V).
Arguing as in the proof of Proposition 3.30, we deduce that ν ∈ Val_k(n). By Corollary 4.9 there exists a constant c ∈ R such that ν = c µ_k. As before, we compute the constant c by considering what happens when C = B_n:
c µ_k(B_n) = ν(B_n) = ∫_{Gr(n,l)} µ_k(B_n|V) dν^n_l(V) = µ_k(B_l) [n choose l].
Therefore
c = µ_k(B_l) [n choose l] / µ_k(B_n) = ([l choose k] ω_k) [n choose l] ([n choose k] ω_k)^{−1}
= ([l]!/([k]![l−k]!)) ([n]!/([l]![n−l]!)) ([k]![n−k]!/[n]!) = [n−k]!/([n−l]![l−k]!) = [n−k choose l−k]. ⊓⊔

§5.4. Product Formulæ. We now examine the intrinsic volumes of products.

Theorem 5.10. Let 0 ≤ k ≤ n, K ∈ Polycon(k) and L ∈ Polycon(n − k). Then
µ_i(K × L) = Σ_{r+s=i} µ_r(K) µ_s(L).

Proof. The set function µ_i(K × L) is a continuous valuation in each of the variables K and L when the other is fixed. Also, note that every rigid motion φ ∈ E_k is the restriction of some rigid motion Φ ∈ E_n that restricts to the identity on the orthogonal complement R^{n−k}. Thus, by the invariance of µ_i,
µ_i(φ(K) × L) = µ_i(Φ(K × L)) = µ_i(K × L).
The characterization theorem then tells us that for every L there exist constants c_r(L), depending on L, such that
µ_i(K × L) = Σ_{r=0}^{k} c_r(L) µ_r(K),
for all K ∈ K^k. Repeating the same argument with K fixed and L varying, we see that the c_r(L) are continuous invariant valuations on K^{n−k}, so we apply the characterization theorem again, collect terms appropriately, and conclude that there are constants c_{rs} ∈ R such that
µ_i(K × L) = Σ_{r=0}^{k} Σ_{s=0}^{n−k} c_{rs} µ_r(K) µ_s(L),
for all K ∈ K^k and L ∈ K^{n−k}.
Now we determine the constants using C_m, the unit m-dimensional cube. Let α, β ≥ 0. Then
µ_i(αC_k × βC_{n−k}) = Σ_{r=0}^{k} Σ_{s=0}^{n−k} c_{rs} µ_r(C_k) µ_s(C_{n−k}) α^r β^s = Σ_{r=0}^{k} Σ_{s=0}^{n−k} c_{rs} (k choose r)(n−k choose s) α^r β^s.
On the other hand, we already know how to compute this from the product formula in Theorem 2.17:
µ_i(αC_k × βC_{n−k}) = Σ_{r+s=i} (k choose r)(n−k choose s) α^r β^s.
Comparing these two, we see that for 0 ≤ r ≤ k and 0 ≤ s ≤ n − k we have c_{rs} = 1 if r + s = i and c_{rs} = 0 otherwise. Hence,
µ_i(K × L) = Σ_{r+s=i} µ_r(K) µ_s(L). ⊓⊔
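For boxes, this product formula reduces to the classical identity e_i(a ∪ b) = Σ_{r+s=i} e_r(a) e_s(b) for elementary symmetric functions, since the edge list of K × L is the concatenation of the edge lists of K and L. A quick numerical check (our code; mu_box is an assumed helper):

```python
import math
from itertools import combinations

def mu_box(i, edges):
    # mu_i of a box = i-th elementary symmetric function of its edges
    return sum(math.prod(c) for c in combinations(edges, i))

K, L = [2.0, 3.0], [5.0, 7.0, 11.0]   # boxes in R^2 and R^3; K x L lives in R^5
product_ok = all(
    abs(mu_box(i, K + L)
        - sum(mu_box(r, K) * mu_box(i - r, L) for r in range(i + 1))) < 1e-9
    for i in range(6)
)
```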

As a result of this theorem, we get the following corollary.

Corollary 5.11. Suppose that µ is a convex-continuous invariant valuation on Polycon(n) such that
µ(K × L) = µ(K) µ(L)
for all K ∈ Polycon(k), L ∈ Polycon(n − k), 0 ≤ k ≤ n. Then either µ = 0 or there exists c ∈ R such that
µ = µ_0 + c µ_1 + c² µ_2 + ··· + c^n µ_n.
Conversely, any valuation of this form satisfies µ(K × L) = µ(K) µ(L).

Proof. By the characterization theorem, there are real constants c_i such that
µ = c_0 µ_0 + c_1 µ_1 + ··· + c_n µ_n.
Let C_k denote the unit cube in R^k. Then for all α, β ≥ 0 the multiplicativity condition implies that
µ(αC_k × βC_{n−k}) = µ(αC_k) µ(βC_{n−k}) = (Σ_{r=0}^{k} c_r α^r µ_r(C_k)) (Σ_{s=0}^{n−k} c_s β^s µ_s(C_{n−k}))
= Σ_{r=0}^{k} Σ_{s=0}^{n−k} c_r c_s µ_r(C_k) µ_s(C_{n−k}) α^r β^s.
On the other hand, by the previous theorem,
µ(αC_k × βC_{n−k}) = Σ_{i=0}^{n} c_i µ_i(αC_k × βC_{n−k}) = Σ_{i=0}^{n} c_i Σ_{r+s=i} µ_r(C_k) µ_s(C_{n−k}) α^r β^s
= Σ_{r=0}^{k} Σ_{s=0}^{n−k} c_{r+s} µ_r(C_k) µ_s(C_{n−k}) α^r β^s.
Comparing these polynomials in α and β, we see that the coefficients must be equal. Hence,
c_r c_s = c_{r+s} for all r, s ≥ 0 with r + s ≤ n.
Thus c_0 = c_{0+0} = c_0², so c_0 is either 0 or 1. If c_0 = 0, then c_r = c_{r+0} = c_r c_0 = 0 for every r, making µ zero. If c_0 = 1, relabel c := c_1. Then, for r > 0, c_r = c_{1+···+1} = c_1^r = c^r, and we are done. The converse follows by simply applying the previous theorem. ⊓⊔
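On boxes the corollary is transparent: Σ_i c^i e_i(a_1, ..., a_n) = Π_j (1 + c a_j), which is visibly multiplicative under concatenation of edge lists. A small sketch of this (ours; the helper names are hypothetical):

```python
import math
from itertools import combinations

def mu_box(i, edges):
    return sum(math.prod(c) for c in combinations(edges, i))

def val(edges, c):
    """mu = sum_i c^i mu_i on a box; note sum_i c^i e_i(a) = prod_j (1 + c a_j)."""
    return sum(c ** i * mu_box(i, edges) for i in range(len(edges) + 1))

c, K, L = 0.5, [1.0, 2.0], [3.0, 4.0]
mult_ok = abs(val(K + L, c) - val(K, c) * val(L, c)) < 1e-9
factor_ok = abs(val(K, c) - (1 + c * 1.0) * (1 + c * 2.0)) < 1e-12
```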

§5.5. Computing Intrinsic Volumes. Now that we have a better understanding of the intrinsic measures µ_i, it would be nice to be able to compute the intrinsic volumes of various types of sets. We already know how to compute them for pixelations; here we explain how to determine them for convex polytopes. In a later section we will explain how to compute them for arbitrary polytopes using triangulations and the Möbius inversion formula.
Suppose P ∈ K^n is a convex polytope. Using the tube formula (5.1) we deduce that for all r > 0,
µ_n(P + rB) = Σ_{i=0}^{n} µ_i(P) ω_{n−i} r^{n−i},
where the superscript of B_n has been dropped for notational convenience. Suppose x ∈ P + rB. Because P is compact and convex, there exists a unique point x_P ∈ P such that
|x − x_P| ≤ |x − y| for all y ∈ P.
If x ∈ P, then clearly x = x_P. If, on the other hand, x ∉ P, then x_P ∈ ∂P. Furthermore, if x ∉ P and y ∈ ∂P, then
y = x_P ⇐⇒ x − y ⊥ H,
where H is a support plane of P with y ∈ P ∩ H. That is to say, the line connecting x and x_P must be perpendicular to the boundary. Denote by P_i(r) the set of all x ∈ P + rB such that x_P lies in the relative interior of an i-face of P. If x is in the interior of P, then it is contained in the n-face of P, and consequently so is x_P = x. If x is on the boundary, then it is in the relative interior of one of P's faces, and again so is x_P = x. Finally, if x is in neither the interior of P nor the boundary, then as noted above there is a unique x_P on the boundary, and hence in the relative interior of some i-face of P. Therefore
P + rB = ⋃_{i=0}^{n} P_i(r).
The uniqueness of x_P implies that this union is disjoint. Denote by F_i(P) the set of all i-faces of P. Let Q be a face of P and, for y ∈ relint(Q), let v be any outward unit normal to the boundary of P at y. We then set
M(Q, r) = { y + δv | y ∈ relint(Q), 0 ≤ δ ≤ r }.

FIGURE 7. A decomposition of P + rB_2.

Proposition 5.12.
P_i(r) = ⋃_{Q∈F_i(P)} M(Q, r).

Proof. First, suppose x ∈ P_i(r). From before, x_P ∈ relint(Q) for some Q ∈ F_i(P). If x = x_P, then x ∈ relint(Q), and therefore x ∈ M(Q, r) for any r. On the other hand, if x ≠ x_P, then let v denote the unit vector parallel to the line from x_P to x. From the above discussion we know that v is perpendicular to the boundary of P at x_P, and therefore x = x_P + δv, where δ = |x − x_P| ≤ r. Thus P_i(r) ⊂ ⋃_{Q∈F_i(P)} M(Q, r).
Conversely, suppose x ∈ M(Q, r), so that x = y + δv for some y ∈ relint(Q). If δ = 0, then x = y ∈ P_i(r). If δ ≠ 0, then x − y = δv is perpendicular to the boundary of P at y. Therefore y = x_P and x ∈ P_i(r). Hence P_i(r) ⊃ ⋃_{Q∈F_i(P)} M(Q, r), and we have equality. ⊓⊔

For A ⊂ R^n, let the affine hull of A be the intersection of all planes in R^n containing A. We then denote by A^⊥ the set of all vectors in R^n orthogonal to the affine hull of A.
Let Q ∈ F_i(P). Then Q^⊥ has dimension n − i. Because of this, M(Q, r) is like M(Q, 1), except that it has been "stretched" by a factor of r in n − i dimensions. Thus µ_n(M(Q, r)) = r^{n−i} µ_n(M(Q, 1)). This fact, coupled with the fact that the sets M(Q, r), Q ∈ F_i(P), are disjoint, gives us
µ_n(P_i(r)) = µ_n( ⋃_{Q∈F_i(P)} M(Q, r) ) = r^{n−i} µ_n( ⋃_{Q∈F_i(P)} M(Q, 1) ) = r^{n−i} µ_n(P_i(1)).
Furthermore,
µ_n(P + rB) = µ_n( ⋃_{i=0}^{n} P_i(r) ) = Σ_{i=0}^{n} µ_n(P_i(r)) = Σ_{i=0}^{n} µ_n(P_i(1)) r^{n−i}
for all r > 0. Comparing with the tube formula (5.1), we see that
Σ_{i=0}^{n} µ_i(P) ω_{n−i} r^{n−i} = Σ_{i=0}^{n} µ_n(P_i(1)) r^{n−i}.
By comparing coefficients of powers of r, we get
µ_i(P) = µ_n(P_i(1)) / ω_{n−i}.   (5.2)

FIGURE 8. M(z_i, 1) from Example 5.13.

Example 5.13. Consider a convex polyhedron P in R^3 with edges z_1, ..., z_m. Each edge z_i lies in exactly two facets of P; denote these facets Q_{i1} and Q_{i2}, and denote their outward unit normals by u_{i1} and u_{i2}, respectively. The solid M(z_i, 1) is a cylindrical wedge of radius 1 over z_i, of angle θ_i = arccos(u_{i1} · u_{i2}), so its volume is
µ_3(M(z_i, 1)) = (θ_i / 2π) (π · 1²) µ_1(z_i) = µ_1(z_i) θ_i / 2.
Refer to Figure 8 for an example. Therefore, by (5.2),
µ_1(P) = µ_3(P_1(1)) / ω_2 = (1/2π) Σ_{i=1}^{m} µ_1(z_i) θ_i.
This remarkable formula has immediate applications to specific polyhedra of interest. For example, take an orthogonal parallelotope P = a_1 × a_2 × a_3. There are four edges of length a_i for each i, and the angle θ_i = π/2 at every edge. It follows that
µ_1(P) = (1/2π) Σ_{i=1}^{3} 4 a_i (π/2) = a_1 + a_2 + a_3.
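Example 5.13 is easy to turn into a computation. The sketch below (our code; edge data is supplied as (length, normal angle) pairs) recovers µ_1 = a_1 + a_2 + a_3 for an orthogonal box.

```python
import math

def mu1_from_edges(edges):
    """Example 5.13: mu_1(P) = (1 / 2 pi) * sum_i l_i * theta_i, where l_i is
    an edge length and theta_i the angle between the outward normals of the
    two facets meeting along that edge."""
    return sum(l * theta for l, theta in edges) / (2 * math.pi)

# orthogonal box a x b x c: four edges of each length, normal angle pi/2 each
a, b, c = 2.0, 3.0, 5.0
box_edges = [(l, math.pi / 2) for l in (a, b, c) for _ in range(4)]
mu1 = mu1_from_edges(box_edges)
```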

6. Simplicial complexes and polytopes

At this point we understand Val(n) to our satisfaction. We would now like to focus more concretely on computing these valuations for a special class of polyconvex sets, called polytopes: finite unions of convex polytopes. The inclusion-exclusion principle suggests dissecting these into smaller and simpler pieces. Technically, this dissection operation is called triangulation, and the simpler pieces are called simplices. In this section we formalize the notion of triangulation and investigate some of its combinatorial features.

§6.1. Combinatorial Simplicial Complexes. A combinatorial simplicial complex (CSC for brevity) with vertex set in V is a collection K of non-empty subsets of the finite set V such that
τ ∈ K =⇒ σ ∈ K for all σ ⊂ τ, σ ≠ ∅.
The elements of K are called the (open) faces of the simplicial complex. If σ is a face, we define dim σ := #σ − 1. A subset σ of a face τ is also called a face of τ; if additionally #σ = #τ − 1, we say that σ is an (open) facet of τ. If dim σ = 0, we say that σ is a vertex of K. We denote by V_K the collection of vertices of K; in other words, the vertices are exactly the singletons belonging to K. For simplicity we write v ∈ V_K instead of {v} ∈ V_K.
Two CSCs K and K′ are called isomorphic, written K ≅ K′, if there exists a bijection from the vertex set of K to the vertex set of K′ which induces a bijection between the faces of K and the faces of K′. We denote by Σ(V) ⊂ P(P(V)) the collection of all CSCs with vertices in V. Observe that
K_1, K_2 ∈ Σ(V) =⇒ K_1 ∩ K_2, K_1 ∪ K_2 ∈ Σ(V),
so that Σ(V) is a sublattice of P(P(V)).

Example 6.1. For every subset S ⊂ V define ∆_S ∈ Σ(V) to be the CSC consisting of all non-empty subsets of S. ∆_S is called the elementary simplex with vertex set S. Observe that
∆_{S1} ∩ ∆_{S2} = ∆_{S1∩S2}.
For every CSC K ∈ Σ(V), the simplices ∆_σ, σ ∈ K, are called the closed faces of K. We deduce that the collection
∆(V) := { ∆_S | S ⊂ V }
is a generating set of the lattice Σ(V). ⊓⊔

Example 6.2. Let K ∈ Σ(V). For every nonnegative integer m we define
K_m := { σ ∈ K | dim σ ≤ m }.
K_m is a CSC called the m-skeleton of K. Observe that
K_0 ⊂ K_1 ⊂ ···, K = ⋃_{m≥0} K_m. ⊓⊔

Definition 6.3. The Euler characteristic of a CSC K ∈ Σ(V) is the integer

χ(K) := Σ_{σ∈K} (−1)^{dim σ}. ⊓⊔

Proposition 6.4. The map χ : Σ(V) → Z, K ↦ χ(K), is a valuation.

Proof. For every K ∈ Σ(V) and every nonnegative integer m we denote by F_m(K) the collection of its m-dimensional faces, i.e.
F_m(K) := { σ ∈ K | dim σ = m }.
We then have
χ(K) = Σ_{m≥0} (−1)^m #F_m(K).
Since
F_m(K_1 ∪ K_2) = F_m(K_1) ∪ F_m(K_2), F_m(K_1 ∩ K_2) = F_m(K_1) ∩ F_m(K_2),
the claim of the proposition follows from the inclusion-exclusion property of cardinality. ⊓⊔

Example 6.5. Suppose ∆_S is an elementary simplex with #S = n + 1. Then
χ(∆_S) = Σ_{m≥0} ( Σ_{σ⊂S, #σ=m+1} (−1)^m ) = Σ_{m≥0} (−1)^m (n+1 choose m+1) = 1. ⊓⊔
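Definition 6.3 and Proposition 6.4 translate directly into code. The sketch below (ours; the helper names are hypothetical) builds the CSC generated by a list of top faces, computes χ, and checks Example 6.5 as well as the valuation property on a small example.

```python
from itertools import combinations

def closure(faces):
    """CSC generated by the given top faces: every nonempty subset of a
    face is again a face (the defining property of a CSC)."""
    K = set()
    for f in faces:
        f = sorted(f)
        for r in range(1, len(f) + 1):
            K.update(frozenset(c) for c in combinations(f, r))
    return K

def euler(K):
    # chi(K) = sum over faces sigma of (-1)^(dim sigma), dim sigma = #sigma - 1
    return sum((-1) ** (len(s) - 1) for s in K)

chi_simplex = euler(closure([{1, 2, 3}]))              # an elementary simplex
chi_circle = euler(closure([{1, 2}, {2, 3}, {1, 3}]))  # boundary of a triangle

# chi is a valuation: chi(K1 | K2) + chi(K1 & K2) = chi(K1) + chi(K2)
K1, K2 = closure([{1, 2, 3}]), closure([{3, 4}])
incl_excl = euler(K1 | K2) + euler(K1 & K2) == euler(K1) + euler(K2)
```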

§6.2. The Nerve of a Family of Sets.

Definition 6.6. Fix a finite set V and consider a family
A := { A_v ; v ∈ V }
of subsets of a set X parameterized by V. For every σ ⊂ V we set
A_σ := ⋂_{v∈σ} A_v.
The nerve of the family A is the collection
N(A) := { σ ⊂ V | A_σ ≠ ∅ }. ⊓⊔

Clearly the nerve of a family A = { A_v | v ∈ V } is a combinatorial simplicial complex with vertices in V.

Definition 6.7. Suppose V is a finite set and K ∈ Σ(V). For every σ ∈ K we define the star of σ in K to be the subset
St_K(σ) := { τ ∈ K | τ ⊃ σ }.
Observe that
K = ⋃_{v∈V_K} St_K(v). ⊓⊔

Proposition 6.8. For every CSC K ∈ Σ(V) we have the equalities
St_K(σ) = ⋂_{v∈σ} St_K(v) for all σ ∈ K, and K = ⋃_{σ∈K} ∆_σ.
In particular, we deduce that K is the nerve of the family of stars at vertices
St_K := { St_K(v) | v ∈ V_K }. ⊓⊔

We can now rephrase the inclusion-exclusion formula using the language of nerves.

Corollary 6.9. Suppose L is a lattice of subsets of a set X and µ : L → R is a valuation into a commutative ring R with 1. Then for every finite family A = { A_v }_{v∈V} of sets in L we have

µ( ⋃_{v∈V} A_v ) = Σ_{σ∈N(A)} (−1)^{dim σ} µ(A_σ).
In particular, for the universal valuation A ↦ I_A we have
I_{⋃_v A_v} = Σ_{σ∈N(A)} (−1)^{dim σ} I_{A_σ}. ⊓⊔

Corollary 6.10. Suppose C = { C_s }_{s∈S} is a finite family of compact convex subsets of R^n. Denote by N(C) its nerve and set
C := ⋃_{s∈S} C_s.
Then µ_0(C) = χ(N(C)). In particular, if
⋂_{s∈S} C_s ≠ ∅,
then µ_0(C) = 1, where µ_0 : Polycon(n) → Z denotes the Euler characteristic.

Proof. Let
C_σ := ⋂_{s∈σ} C_s for all σ ⊂ S.
Note that if σ ∈ N(C), then C_σ is nonempty, compact and convex, and thus µ_0(C_σ) = 1. Hence
µ_0(C) = Σ_{σ∈N(C)} (−1)^{dim σ} µ_0(C_σ) = χ(N(C)).
If ⋂_{s∈S} C_s ≠ ∅, then C_σ ≠ ∅ for every σ ⊂ S, and we deduce that N(C) is the elementary simplex ∆_S. In particular,
µ_0(C) = χ(∆_S) = 1. ⊓⊔

§6.3. Geometric Realizations of Combinatorial Simplicial Complexes. Recall that a convex polytope in R^n is the convex hull of a finite collection of points, or equivalently, a compact set which is the intersection of finitely many half-spaces.

Definition 6.11. (a) A subset V = {v_0, v_1, ..., v_k} of k+1 points of a real vector space E is called affinely independent if the collection of vectors v_0v_1→, ..., v_0v_k→ is linearly independent.
(b) Let 0 ≤ k ≤ n be nonnegative integers. An affine k-simplex in R^n is the convex hull of a collection of k+1 affinely independent points {v_0, ..., v_k}, called the vertices of the simplex. We will denote it by [v_0, v_1, ..., v_k].
(c) If S is an affine simplex in R^n, then we denote by V(S) the set of its vertices and we write dim S := #V(S) − 1. An affine simplex T is said to be a face of S, and we write this T ≺ S, if V(T) ⊂ V(S). ⊓⊔

Example 6.12. A set of three points is affinely independent if the points are not collinear, and a set of four points is affinely independent if the points are not coplanar. If v_0 ≠ v_1 then [v_0, v_1] is the line segment connecting v_0 to v_1. If v_0, v_1, v_2 are not collinear then [v_0, v_1, v_2] is the triangle spanned by v_0, v_1, v_2 (see Figure 9). ⊓⊔

FIGURE 9. A 1-simplex and a 2-simplex.

Proposition 6.13. Suppose [v_0, ..., v_k] is an affine k-simplex in R^n. Then for every point p ∈ [v_0, v_1, ..., v_k] there exist real numbers t_0, t_1, ..., t_k uniquely determined by the requirements

t_i ∈ [0, 1], ∀i = 0, 1, ..., k,    Σ_{i=0}^k t_i = 1,    p = Σ_{i=0}^k t_i v_i.

Proof. The existence follows from the fact that [v_0, ..., v_k] is the convex hull of the finite set {v_0, ..., v_k}. To prove uniqueness, suppose

Σ_{i=0}^k s_i v_i = Σ_{i=0}^k t_i v_i,  Σ_{i=0}^k s_i = Σ_{i=0}^k t_i = 1.

Subtracting (s_0 + ··· + s_k)v_0 = (t_0 + ··· + t_k)v_0 from both sides, we obtain

Σ_{i=1}^k s_i (v_i − v_0) = Σ_{i=1}^k t_i (v_i − v_0),  v_i − v_0 = v_0v_i→.

From the linear independence of the vectors v_0v_i→ we deduce s_i = t_i, ∀i = 1, ..., k. Finally

s_0 = 1 − (s_1 + ··· + s_k) = 1 − (t_1 + ··· + t_k) = t_0. ⊓⊔

The numbers (t_i) postulated by Proposition 6.13 are called the barycentric coordinates of the point p. We will denote them by (t_i(p)). In particular, the barycenter of a k-simplex σ is the unique point b_σ whose barycentric coordinates are all equal, i.e.

t_0(b_σ) = ··· = t_k(b_σ) = 1/(k+1).
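Barycentric coordinates can be computed by solving the linear system appearing in the proof of Proposition 6.13: Σ_{i≥1} t_i(v_i − v_0) = p − v_0, then t_0 = 1 − Σ_{i≥1} t_i. A minimal sketch (plain Python with hand-rolled Gaussian elimination; the function name is ours), restricted for simplicity to a full-dimensional simplex in R^n:

```python
def barycentric_coordinates(p, verts):
    """Barycentric coordinates of p in a full-dimensional simplex
    [v0, ..., vn] in R^n (Proposition 6.13): solve the square system
    sum_i t_i (v_i - v_0) = p - v_0, then set t_0 = 1 - sum t_i."""
    v0, rest = verts[0], verts[1:]
    n = len(rest)
    # Augmented matrix of the n x n system in the unknowns t_1..t_n.
    A = [[rest[j][r] - v0[r] for j in range(n)] + [p[r] - v0[r]]
         for r in range(n)]
    # Gaussian elimination with partial pivoting.
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n + 1):
                A[r][j] -= f * A[c][j]
    t = [0.0] * n
    for r in reversed(range(n)):
        t[r] = (A[r][n] - sum(A[r][j] * t[j]
                              for j in range(r + 1, n))) / A[r][r]
    return [1 - sum(t)] + t

# Barycenter of the triangle (0,0), (1,0), (0,1): all three coordinates
print(barycentric_coordinates((1/3, 1/3), [(0, 0), (1, 0), (0, 1)]))  # ~ 1/3
```

Affine independence of the vertices guarantees the system is nonsingular, so the pivots never vanish.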

The relative interior of ∆[v_0, v_1, ..., v_k] is the convex set

∆(v_0, v_1, ..., v_k) := { p ∈ ∆[v_0, ..., v_k] | t_i(p) > 0, ∀i = 0, 1, ..., k }.

Definition 6.14. An affine simplicial complex in R^n (ASC for brevity) is a pair (C, T) satisfying the following conditions.
(a) C is a compact subset of R^n.
(b) T is a triangulation of C, i.e. a finite collection of affine simplices satisfying the conditions
  (b1) If T ∈ T and S is a face of T then S ∈ T.
  (b2) If T_0, T_1 ∈ T then T_0 ∩ T_1 is a face of both T_0 and T_1.
  (b3) C is the union of all the affine simplices in T. ⊓⊔

FIGURE 10. An ASC in the plane.

Remark 6.15. One can prove that any polytope in R^n admits a triangulation. ⊓⊔

In Figure 10 we depicted an ASC in the plane consisting of six simplices of dimension 0, nine simplices of dimension 1 and two simplices of dimension 2.

To an ASC (C, T) we can associate in a natural way a CSC K(C, T) as follows. The set of vertices V is the collection of 0-dimensional simplices in T. Then

K = { V(S) | S ∈ T },  (6.1)

where we recall that V(S) denotes the set of vertices of the affine simplex S ∈ T.

Definition 6.16. Suppose K is a CSC with vertex set V. An affine realization of K is an ASC (C, T) such that K ≅ K(C, T). ⊓⊔

Proposition 6.17. Let V be a finite set. Then any CSC K ∈ Σ(V) admits an affine realization.

Proof. Denote by R^V the space of functions f : V → R. This is a vector space of the same dimension as V. It has a natural basis determined by the Dirac functions δ_u : V → R, u ∈ V, where

δ_u(v) = 1 if v = u, and 0 if v ≠ u.

The set {δ_u | u ∈ V} ⊂ R^V is affinely independent. For every σ ∈ K we denote by [σ] the affine simplex in R^V spanned by the set {δ_u | u ∈ σ}. Now define

C = ∪_{σ∈K} [σ],  T = { [σ] | σ ∈ K }.

Then (C, T) is an ASC and by construction K = K(C, T). ⊓⊔

Suppose (C, T) is an ASC and denote by K the associated CSC. Denote by f_k = f_k(C, T) the number of k-dimensional simplices of T. According to the Euler-Schläfli-Poincaré formula we have

µ_0(C) = Σ_k (−1)^k f_k(C, T).

The sum in the right-hand side is precisely χ(K), the Euler characteristic of K. For this reason, we will use the symbols χ and µ_0 interchangeably to denote the Euler characteristic of a polytope.

7. The Möbius inversion formula

§7.1. A Linear Algebra Interlude. As in the previous section, for any finite set A we denote by R^A the space of functions A → R. This is a vector space with a canonical basis given by the Dirac functions

δ_a : A → R,  a ∈ A.

Note that to any linear transformation T : R^A → R^B we can associate a function

S = S_T : B × A → R

uniquely determined by the equalities

T δ_a = Σ_{b∈B} S(b, a) δ_b,  a ∈ A.  (7.1)

We say that S_T is the scattering matrix of T. If f : A → R is a function then

f = Σ_{a∈A} f(a) δ_a,

and we deduce that the function Tf : B → R is given by the equalities

Tf = Σ_{a∈A, b'∈B} S(b', a) f(a) δ_{b'}  ⟺  (Tf)(b) = Σ_{a∈A} S(b, a) f(a), ∀b ∈ B.  (7.2)

Conversely, to any map S : B × A → R we can associate a linear transformation T : R^A → R^B whose action is determined by (7.2).

Lemma 7.1. Suppose A_0, A_1, A_2 are finite sets and

T_0 : R^{A_0} → R^{A_1},  T_1 : R^{A_1} → R^{A_2}

are linear transformations with scattering matrices

S_0 : A_1 × A_0 → R,  S_1 : A_2 × A_1 → R.

Then the scattering matrix of T_1 ∘ T_0 : R^{A_0} → R^{A_2} is the map S_1 ∗ S_0 : A_2 × A_0 → R defined by

(S_1 ∗ S_0)(a_2, a_0) = Σ_{a_1∈A_1} S_1(a_2, a_1) S_0(a_1, a_0).

Proof. Denote by S the scattering matrix of T_1 ∘ T_0, so that

(T_1 ∘ T_0) δ_{a_0} = Σ_{a_2} S(a_2, a_0) δ_{a_2}.

On the other hand,

(T_1 ∘ T_0) δ_{a_0} = T_1 ( Σ_{a_1} S_0(a_1, a_0) δ_{a_1} )
= Σ_{a_2} ( Σ_{a_1} S_1(a_2, a_1) S_0(a_1, a_0) ) δ_{a_2} = Σ_{a_2} (S_1 ∗ S_0)(a_2, a_0) δ_{a_2}.

Comparing the two equalities we obtain the sought-for identity

S(a_2, a_0) = (S_1 ∗ S_0)(a_2, a_0). ⊓⊔
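In matrix language, Lemma 7.1 says that scattering matrices compose by ordinary matrix multiplication, with entries indexed (target, source). A minimal sketch (plain Python; the function name and dict encoding are ours):

```python
def convolve(S1, S0, A2, A1, A0):
    """Scattering matrix of a composition (Lemma 7.1):
    (S1 * S0)(a2, a0) = sum over a1 in A1 of S1(a2, a1) * S0(a1, a0).
    Matrices are dicts keyed by (target, source) pairs."""
    return {(a2, a0): sum(S1[(a2, a1)] * S0[(a1, a0)] for a1 in A1)
            for a2 in A2 for a0 in A0}

# With A0 = A1 = A2 = {0, 1} this is ordinary 2x2 matrix multiplication:
A = [0, 1]
S0 = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
S1 = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # a row swap
print(convolve(S1, S0, A, A, A))
# -> {(0, 0): 3, (0, 1): 4, (1, 0): 1, (1, 1): 2}
```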

Lemma 7.2. Suppose that A is a finite set equipped with a partial order < (we use the convention that a ≮ a, ∀a ∈ A). Suppose S : A × A → R is strictly upper triangular, i.e. it satisfies the condition

S(a_0, a_1) ≠ 0 ⟹ a_0 < a_1.

Then the linear transformation T : R^A → R^A determined by S is nilpotent, i.e. there exists a positive integer n such that T^n = 0.

Proof. Let a, b ∈ A and denote by S_n the scattering matrix of T^n. By repeated application of the previous lemma,

S_n(a, b) = Σ_{c_1,...,c_{n−1}∈A} S(a, c_1) S(c_1, c_2) ··· S(c_{n−1}, b).

Since S is strictly upper triangular, the only possibly nonzero terms in the above sum are the 'monomials'

S(a, c_1) S(c_1, c_2) ··· S(c_{n−1}, b)  with  a < c_1 < c_2 < ··· < c_{n−1} < b.

For n > #A we cannot find such a strictly increasing sequence, so S_n = 0, meaning that T^n = 0. ⊓⊔

Lemma 7.3. Suppose E is a finite dimensional vector space and T : E → E is a nilpotent linear transformation. Then the linear map 1_E + T is invertible and

(1_E + T)^{−1} = Σ_{k≥0} (−1)^k T^k.

Proof. Since T is nilpotent, there is some positive integer m such that T^m = 0. We may even assume that m is odd. Thus

1_E = 1_E + T^m = (1_E + T)(1_E − T + T^2 − ··· + T^{m−1}) = (1_E + T) Σ_{k≥0} (−1)^k T^k. ⊓⊔

§7.2. The Möbius Function of a Simplicial Complex. Suppose V is a finite set and K ∈ Σ(V) is a CSC. The zeta function of K is the map

ζ = ζ_K : K × K → Z,  ζ(σ, τ) = 1 if σ ⊆ τ, and 0 otherwise.

Define ξ = ξ_K : K × K → Z by

ξ(σ, τ) = 1 if σ ⊊ τ, and 0 otherwise.

ζ defines a linear map Z : R^K → R^K and ξ defines a linear map Ξ : R^K → R^K. Note that for every function f : K → R we have

(Zf)(σ) = Σ_{τ∈K} ζ(σ, τ) f(τ) = Σ_{τ⊇σ} f(τ).  (7.3)

Proposition 7.4. (a) Z = 1 + Ξ.
(b) The linear map Ξ is nilpotent. In particular, Z is invertible.
(c) Let µ : K × K → R be the scattering matrix of Z^{−1}. Then µ satisfies the following conditions.

µ(σ, τ) ≠ 0 ⟹ σ ⊆ τ,  (7.4a)
µ(σ, σ) = 1, ∀σ ∈ K,  (7.4b)
µ(σ, τ) = − Σ_{σ⊆ϕ⊊τ} µ(σ, ϕ), ∀σ ⊊ τ.  (7.4c)

Proof. First, we denote by δ(σ, τ) the delta function, that is, δ(σ, τ) = 1 if σ = τ, and 0 if σ ≠ τ.

(a) From above we have

(Zf)(σ) = Σ_{τ∈K} ζ(σ, τ) f(τ) = Σ_{τ⊇σ} f(τ) = f(σ) + Σ_{τ⊋σ} f(τ)
= Σ_{τ∈K} δ(σ, τ) f(τ) + Σ_{τ∈K} ξ(σ, τ) f(τ) = ( (1 + Ξ)f )(σ).

(b) We have seen that Ξ is defined by a scattering matrix ξ which is strictly upper triangular, i.e.

ξ(σ, τ) ≠ 0 ⟹ σ ⊊ τ.

By Lemma 7.2, Ξ is nilpotent. Thus, because Z = 1 + Ξ, Lemma 7.3 shows that Z is invertible.

(c) From Lemma 7.3 we have

Z^{−1} = (1 + Ξ)^{−1} = Σ_{k≥0} (−1)^k Ξ^k = 1 − Ξ + Ξ^2 − ···

Let us examine Ξ^i. By an iteration of Lemma 7.1, the scattering matrix of Ξ^i is

ξ_i(σ, τ) = Σ_{ϕ_1,...,ϕ_{i−1}∈K} ξ(σ, ϕ_1) ξ(ϕ_1, ϕ_2) ··· ξ(ϕ_{i−1}, τ).

Like ξ, this scattering matrix is strictly upper triangular; in particular, for i ≥ 1 it vanishes on the diagonal σ = τ. Furthermore, the identity operator 1 has scattering matrix δ(σ, τ), which equals 1 on the diagonal and zero elsewhere. This proves (7.4a) and (7.4b). To prove (7.4c), we convert the equality 1 = Z^{−1} Z into a statement about scattering matrices, δ = µ ∗ ζ. By Lemma 7.1,

δ(σ, τ) = Σ_{ϕ∈K} µ(σ, ϕ) ζ(ϕ, τ).

Thus, for σ ⊊ τ, we have

0 = Σ_{ϕ∈K} µ(σ, ϕ) ζ(ϕ, τ) = Σ_{σ⊆ϕ⊆τ} µ(σ, ϕ),

and thus

µ(σ, τ) = − Σ_{σ⊆ϕ⊊τ} µ(σ, ϕ). ⊓⊔

Definition 7.5. (a) Suppose A, B are two subsets of a set V. A chain of length k ≥ 0 between A and B is a sequence of subsets

A = C_0 ⊊ C_1 ⊊ ··· ⊊ C_k = B.

We denote by c_k(A, B) the number of chains of length k between A and B. In particular,

c_0(A, B) = 1 if B = A, and 0 if B ≠ A,

and we set

c(A, B) := Σ_k (−1)^k c_k(A, B).

(b) We set

c_k(n) := c_k(∅, B),  c(n) := c(∅, B),

where B is a set of cardinality n. ⊓⊔

Lemma 7.6. If σ, τ are faces of the combinatorial simplicial complex K then µ(σ, τ) = c(σ, τ).

Proof. µ is the scattering matrix of

Z^{−1} = Σ_{k≥0} (−1)^k Ξ^k = 1 − Ξ + Ξ^2 − ··· ,

so µ equals the scattering matrix of the right-hand side, i.e.

µ(σ, τ) = Σ_{k≥0} (−1)^k ξ_k(σ, τ),

where, as before,

ξ_k(σ, τ) = Σ_{ϕ_1,...,ϕ_{k−1}∈K} ξ(σ, ϕ_1) ξ(ϕ_1, ϕ_2) ··· ξ(ϕ_{k−1}, τ) = Σ_{σ=ϕ_0⊊ϕ_1⊊···⊊ϕ_{k−1}⊊ϕ_k=τ} 1 = c_k(σ, τ).

Furthermore, because ξ_0(σ, τ) = δ(σ, τ) = c_0(σ, τ), we have

µ(σ, τ) = Σ_{k≥0} (−1)^k c_k(σ, τ) = c(σ, τ). ⊓⊔

Lemma 7.7. We have the following equalities.
(a) c_k(n) = Σ_{j>0} (n choose j) c_{k−1}(n − j).
(b) c(n) = (−1)^n.
(c) If A ⊂ B are two finite sets then c(A, B) = (−1)^{#B−#A}.

Proof of (a). Recall that c_k(n) is the number of chains of length k between ∅ and B, where #B = n. Consider such a chain

∅ = C_0 ⊊ C_1 ⊊ ··· ⊊ C_k = B.

Its next-to-last term A := C_{k−1} is determined by the set J := B ∖ A of elements added at the last step. For each j > 0 there are (n choose j) ways to choose J with #J = j, i.e. to take A = B ∖ J with #A = n − j, and for each such A there are c_{k−1}(n − j) chains of length k − 1 between ∅ and A, each of which extends in exactly one way to a chain of length k between ∅ and B. Thus

c_k(n) = Σ_{j>0} (n choose j) c_{k−1}(n − j). ⊓⊔

Proof of (b). The statement is trivially true for n = 0, and we use this as the base case of an induction. Note also that c_0(n) = 0 for n ≠ 0. By part (a),

c(n) = Σ_{k≥0} (−1)^k c_k(n) = − Σ_{k≥1} Σ_{j>0} (n choose j) (−1)^{k−1} c_{k−1}(n − j)
= − Σ_{j>0} (n choose j) Σ_{k≥0} (−1)^k c_k(n − j) = − Σ_{j>0} (n choose j) c(n − j).

By the induction hypothesis,

− Σ_{j>0} (n choose j) c(n − j) = − Σ_{j>0} (n choose j) (−1)^{n−j} = (−1)^n − Σ_{j≥0} (n choose j) (−1)^{n−j}.

The last sum is the Newton binomial expansion of (1 − 1)^n, so it is equal to zero. Hence c(n) = (−1)^n. ⊓⊔

Proof of (c). Suppose A = {α_1, ..., α_j} and B = {α_1, ..., α_j, β_1, ..., β_m}. A chain of length k between A and B can be identified with a chain of length k between ∅ and the set

B ∖ A = {β_1, ..., β_m}.

Therefore c_k(A, B) = c_k(∅, B ∖ A) = c_k(#B − #A), and in particular c(A, B) = (−1)^{#B−#A}. ⊓⊔
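Lemma 7.7(b) can be checked by brute-force enumeration of chains. A minimal sketch (plain Python; the function name is ours) that computes the signed chain count c(A, B) of Definition 7.5 recursively:

```python
from itertools import combinations

def signed_chain_count(A, B):
    """c(A, B) = sum over k of (-1)^k c_k(A, B), where c_k counts the
    chains A = C0 < C1 < ... < Ck = B (Definition 7.5). Each recursive
    step adds one nonempty set of new elements and flips the sign."""
    A, B = frozenset(A), frozenset(B)
    def count(current, sign):
        if current == B:
            return sign
        middle = list(B - current)
        return sum(count(current | frozenset(extra), -sign)
                   for m in range(1, len(middle) + 1)
                   for extra in combinations(middle, m))
    return count(A, 1)

# Lemma 7.7(b): c(n) = c(empty, n-set) = (-1)^n.
print([signed_chain_count((), range(n)) for n in range(5)])
# -> [1, -1, 1, -1, 1]
```

The same routine confirms part (c): for example c({0}, {0,1,2}) = (−1)^2 = 1.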

Theorem 7.8 (Möbius inversion formula). Let V be a finite set and K ∈ Σ(V), and suppose that f, u : K → R are two functions satisfying the equality

u(σ) = Σ_{τ⊇σ} f(τ), ∀σ ∈ K.

Then

f(σ) = Σ_{τ⊇σ} (−1)^{dim τ − dim σ} u(τ).

Proof. Note that u = Zf, so that Z^{−1}u = (Z^{−1}Z)f = f. Using (7.2), and recalling that µ is the scattering matrix of Z^{−1}, we deduce

f(σ) = (Z^{−1}u)(σ) = Σ_{τ∈K} µ(σ, τ) u(τ).

By Lemma 7.6,

f(σ) = Σ_{τ⊇σ} c(σ, τ) u(τ),

and then, by Lemma 7.7,

f(σ) = Σ_{τ⊇σ} (−1)^{dim τ − dim σ} u(τ). ⊓⊔
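The inversion formula is easy to verify numerically: start from arbitrary data f, accumulate u = Zf, and invert. A minimal sketch (plain Python; the function name is ours), again on the full simplex on three vertices:

```python
from itertools import combinations

def moebius_invert(K, u):
    """Theorem 7.8: given u(s) = sum_{t >= s} f(t), recover
    f(s) = sum_{t >= s} (-1)^(dim t - dim s) u(t).
    Faces are frozensets, so dim t - dim s = len(t) - len(s)."""
    return {s: sum((-1) ** (len(t) - len(s)) * u[t] for t in K if s <= t)
            for s in K}

K = [frozenset(c) for m in (1, 2, 3) for c in combinations(range(3), m)]
f = {s: len(s) ** 2 for s in K}                      # arbitrary test data
u = {s: sum(f[t] for t in K if s <= t) for s in K}   # u = Zf
print(moebius_invert(K, u) == f)  # -> True
```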

Corollary 7.9. Suppose (C, T) is an ASC. Then

I_C = Σ_{S∈T} m(S) I_S,

where

m(S) := Σ_{T⊇S} (−1)^{dim T − dim S} = (−1)^{dim S} Σ_{T⊇S} (−1)^{dim T}.

In particular, we deduce that the coefficients m(S) depend only on the combinatorial simplicial complex associated to (C, T).

Proof. Define f, u : T → Z by

f(T) := (−1)^{dim T},  u(S) := (−1)^{dim S} m(S) = Σ_{T⊇S} f(T).

We apply the Möbius inversion formula and see that

f(S) = Σ_{T⊇S} (−1)^{dim T − dim S} u(T) = (−1)^{dim S} Σ_{T⊇S} m(T).

Then

1 = (−1)^{dim S} f(S) = Σ_{T⊇S} m(T).  (7.5)

Now consider the function L : R^n → Z,

L(x) = Σ_{S∈T} m(S) I_S(x).

Clearly L(x) = 0 if x ∉ C. Suppose x ∈ C. Denote by T_x the collection of simplices T ∈ T such that x ∈ T. Then

S_x := ∩_{T∈T_x} T

is a simplex in T, and in fact it is the minimal affine simplex in T containing x. By (7.5) we have

L(x) = Σ_{S∈T} m(S) I_S(x) = Σ_{S⊇S_x} m(S) I_S(x) = Σ_{S⊇S_x} m(S) = 1.

Therefore

Σ_{S∈T} m(S) I_S = I_C. ⊓⊔

§7.3. The Local Euler Characteristic. We begin by defining an important concept in the theory of affine simplicial complexes, namely the concept of barycentric subdivision. As its definition is a bit involved, we first discuss a simple example.

Example 7.10. (a) Suppose [v_0, v_1] is an affine 1-simplex in some Euclidean space R^n. Its barycenter is precisely the midpoint of the segment and we denote it by v_{01}. The barycentric subdivision of the line segment [v_0, v_1] is the triangulation depicted at the top of Figure 11, consisting of the affine simplices

v_0, v_{01}, v_1, [v_0, v_{01}], [v_{01}, v_1].

FIGURE 11. Barycentric subdivisions.

(b) Suppose [v_0, v_1, v_2] is an affine 2-simplex in some Euclidean space R^n. We denote the barycenter of the face [v_i, v_j] by v_{ij} = v_{ji}, and we denote by v_{012} the barycenter of the 2-simplex itself. Observe that there exists a natural bijection between the faces of [v_0, v_1, v_2] and the nonempty subsets of {0, 1, 2}. For every such subset σ ⊂ {0, 1, 2} we denote by v_σ the barycenter of the face corresponding to σ; for example, v_{{0,1}} = v_{01}, etc.

To any chain of subsets of {0, 1, 2}, i.e. a strictly increasing family of subsets, we can associate a simplex. For example, to the increasing family {2} ⊂ {0, 2} ⊂ {0, 1, 2} we associate in Figure 11 the triangle [v_2, v_{02}, v_{012}]. We obtain in this fashion a triangulation of the 2-simplex [v_0, v_1, v_2] whose simplices correspond bijectively to the chains of subsets. ⊓⊔

The next two results follow directly from the definitions.

Lemma 7.11. Suppose [v_0, ..., v_k] is an affine simplex in R^n. For every nonempty subset σ ⊂ {0, ..., k} we denote by v_σ the barycenter of the affine simplex spanned by the points {v_s | s ∈ σ}. Then for every chain

∅ ⊊ σ_0 ⊊ σ_1 ⊊ ··· ⊊ σ_ℓ

of subsets of {0, 1, ..., k}, the barycenters v_{σ_0}, ..., v_{σ_ℓ} are affinely independent. ⊓⊔

Proposition 7.12. Suppose (C, T) is an affine simplicial complex. For every chain

S_0 ⊊ S_1 ⊊ ··· ⊊ S_k

of simplices in T we denote by ∆[S_0, ..., S_k] the affine simplex spanned by the barycenters of S_0, ..., S_k, and we denote by T′ the collection of all the simplices ∆[S_0, ..., S_k] obtained in this fashion. Then T′ is a triangulation of C. ⊓⊔

Definition 7.13. The triangulation T′ constructed in Proposition 7.12 is called the barycentric subdivision of the triangulation T. ⊓⊔

Definition 7.14. Suppose (C, T) is an affine simplicial complex and v is a vertex of T.
(a) The star of v in T is the collection of all simplices of T which contain v as a vertex. We denote it by St_T(v). For every simplex S ∈ St_T(v) we denote by S/v the face of S opposite to v.
(b) The link of v in T is the polytope

lk_T(v) := ∪_{S∈St_T(v)} S/v,

with the triangulation induced from T′.
(c) For every face S of T we denote by b_S its barycenter. We define the link of S in T to be the link of b_S in the barycentric subdivision,

lk_T(S) := lk_{T′}(b_S).

(d) For every face S of T we define the local Euler characteristic of (C, T) along S to be the integer

χ_S(C, T) := 1 − χ(lk_T(S)). ⊓⊔

FIGURE 12. An open book with three pages.

Example 7.15. Consider the polyconvex set C consisting of three triangles in space which have a common edge. Denote these triangles by [a, b, a_i], i = 1, 2, 3 (see Figure 12). We denote by o the barycenter of [a, b], by c_i the barycenter of [a, a_i], and by d_i the barycenter of [a, b, a_i]. Then the link of the vertex a is the star with three arms joined at o, depicted at the top right-hand side of Figure 12; it has Euler characteristic 1. The link of [a, b] is the set consisting of the three points {d_1, d_2, d_3}; it has Euler characteristic 3. The local Euler characteristic along a is therefore 0, while the local Euler characteristic along [a, b] is −2. ⊓⊔

Remark 7.16. Suppose (C, T) is an ASC. Denote by V_T ⊂ T the set of vertices, i.e. the collection of 0-dimensional simplices. Recall that the associated combinatorial complex K consists of the collection of sets V(S), S ∈ T, where V(S) denotes the vertex set of S. The m-dimensional simplices of the barycentric subdivision correspond bijectively to chains of length m in K. Let S ∈ T and denote by σ its vertex set. The number of m-dimensional simplices in St_{T′}(b_S) is equal to

Σ_{τ⊇σ} c_m(σ, τ). ⊓⊔

Proposition 7.17. Suppose (C, T) is an ASC. Then for every S ∈ T we have

χ_S(C, T) = Σ_{T⊇S} (−1)^{dim T − dim S},

so that I_C = Σ_{S∈T} χ_S(C, T) I_S. In particular,

Σ_{S∈T} (−1)^{dim S} = χ(C) = ∫ I_C dµ_0 = Σ_{S∈T} χ_S(C, T).

Proof. To begin, for a simplicial complex K, let F_m(K) = { faces of dimension m in K } and recall that

χ(K) = Σ_{m≥0} (−1)^m #F_m(K).

The (m−1)-dimensional faces of lk_T(S) (m ≥ 1) correspond bijectively to the m-dimensional simplices of T′ containing b_S. According to Remark 7.16 there are Σ_{T⊇S} c_m(S, T) of them. Hence

#F_{m−1}(lk_T(S)) = Σ_{T⊇S} c_m(S, T),

so that

χ(lk_T(S)) = − Σ_{m>0} Σ_{T⊇S} (−1)^m c_m(S, T),

and therefore

χ_S(C, T) = 1 − χ(lk_T(S)) = 1 + Σ_{m>0} Σ_{T⊇S} (−1)^m c_m(S, T)
= Σ_{m≥0} Σ_{T⊇S} (−1)^m c_m(S, T) = Σ_{T⊇S} c(S, T) = Σ_{T⊇S} (−1)^{dim T − dim S}. ⊓⊔

The above proof implies the following result.

Corollary 7.18. For any vertex v ∈ V_T we have

χ(lk_T(v)) = − Σ_{S⊋{v}} (−1)^{dim S}. ⊓⊔

Corollary 7.19. Suppose (C, T) is an ASC in R^n, R is a commutative ring with 1, and µ : Polycon(n) → R is a valuation. Then

µ(C) = Σ_{S∈T} χ_S(C, T) µ(S).

In particular, if we let µ be the Euler characteristic we deduce

Σ_{S∈T} (−1)^{dim S} = χ(C) = Σ_{S∈T} χ_S(C, T).

Proof. Denote by ∫ · dµ the integral defined by µ. Then

µ(C) = ∫ I_C dµ = ∫ ( Σ_{S∈T} χ_S(C, T) I_S ) dµ = Σ_{S∈T} χ_S(C, T) ∫ I_S dµ = Σ_{S∈T} χ_S(C, T) µ(S). ⊓⊔

8. Morse theory on polytopes in R^3

§8.1. Linear Morse Functions on Polytopes. Suppose (C, T) is an ASC in R^3. For simplicity we assume that all the faces of T have dimension < 3. Denote by ⟨−, −⟩ the canonical inner product in R^3 and by S^2 the unit sphere centered at the origin of R^3. Any vector u ∈ S^2 defines a linear map L_u : R^3 → R given by

L_u(x) := ⟨u, x⟩, ∀x ∈ R^3.

We denote by ℓ_u its restriction to C. We say that u is T-nondegenerate if the restriction of L_u to the set of vertices of T is injective. Otherwise u is called T-degenerate. We denote by ∆_T the set of degenerate vectors.

Definition 8.1. A linear Morse function on (C, T) is a function of the form ℓ_u, where u is a T-nondegenerate unit vector. ⊓⊔

Lemma 8.2 (Bertini-Sard). ∆_T is a subset of S^2 of zero (surface) area.

Proof. Let v_1, ..., v_k be the vertices of T. For u ∈ S^2 to be T-degenerate, it must be perpendicular to a segment connecting two of the v_i. The vectors perpendicular to a fixed segment form a plane in R^3, and the intersection of such a plane with S^2 is a great circle. Therefore ∆_T is contained in a union of at most k(k−1)/2 great circles, and thus has zero surface area. ⊓⊔

Suppose u is T-nondegenerate. For every t ∈ R we set

C_t = { x ∈ C | ℓ_u(x) = t }.

In other words, C_t is the intersection of C with the affine plane H_t := { x | ⟨u, x⟩ = t }. We denote by V_T the set of vertices of T. For every nondegenerate vector u the function ℓ_u defines an injection ℓ_u : V_T → R. Its image is a subset K_u ⊂ R called the critical set of ℓ_u; the elements of K_u are called the critical values of ℓ_u.

Lemma 8.3. For every t ∈ R the slice C_t is a polytope of dimension ≤ 1.

Proof. C_t is the intersection of C with the plane H_t. The plane H_t contains at most one vertex of T, so it intersects each face of T transversally, and the intersection with each face has codimension 1 inside that face. Since all faces have dimension ≤ 2, no intersection can have dimension greater than 1. ⊓⊔

The above proof also shows that the slice C_t has a natural simplicial structure induced from the simplicial structure of C: the faces of C_t are the nonempty intersections of the faces of C with the hyperplane H_t.

Remark 8.4. Define a binary relation

R = R_{t,t+ǫ} ⊂ V(C_{t+ǫ}) × V(C_t)

by declaring, for a ∈ V(C_{t+ǫ}) and b ∈ V(C_t),

a R_{t,t+ǫ} b ⟺ a and b lie on the same edge of C.

Observe that for fixed t and ǫ ≪ 1, the binary relation R_{t,t+ǫ} is the graph of a map

V(C_{t+ǫ}) → V(C_t)

which preserves the incidence relation. This means the following:

• ∀a ∈ V(C_{t+ǫ}) there exists a unique b ∈ V(C_t) such that a R b, and
• if [a_0, a_1, ..., a_k] is a face of C_{t+ǫ} and a_i R b_i with b_i ∈ V(C_t), then [b_0, b_1, ..., b_k] is a face of C_t.

We will denote the map V(C_{t+ǫ}) → V(C_t) by the same symbol R. ⊓⊔

Denote by χ_u(t) the Euler characteristic of C_t. We know that

χ(C) = Σ_t j_u(t),

where j_u(t) denotes the jump of χ_u at t,

j_u(t) := χ_u(t) − χ_u(t + 0).

Every nondegenerate vector u defines a map

j(u|−) : V_T → Z,  j(u|x) := the jump of χ_u at the critical value ℓ_u(x).

Lemma 8.5. j_u(t) ≠ 0 ⟹ t ∈ K_u.

Proof. By definition j_u(t) ≠ 0 ⟹ χ(C_t) ≠ χ(C_{t+0}). We would like to better understand the relationship between C_t and C_{t+ǫ} for ǫ ≪ 1. By Remark 8.4, for ǫ sufficiently small, R_{t,t+ǫ} defines a map V(C_{t+ǫ}) → V(C_t) which preserves the incidence relation. Assume that t ∉ K_u. Then all points in V(C_t) and V(C_{t+ǫ}) come from transversal intersections of edges of C with the corresponding hyperplanes. Because these edges cannot terminate between the two hyperplanes, and because each transversal intersection is unique, for small ǫ the map V(C_{t+ǫ}) → V(C_t) is a bijection. Any edge which connects two vertices in C_{t+ǫ} is mapped to the edge connecting the corresponding vertices in C_t. By the Euler-Schläfli-Poincaré formula, χ(C_{t+ǫ}) = χ(C_t). But this implies that j_u(t) = 0, a contradiction. Thus t ∈ K_u. ⊓⊔

Lemma 8.5 shows that

χ(C) = Σ_{x∈V_T} j(u|x).  (8.1)

For every x ∈ V_T we have a function

j(−|x) : S^2 ∖ ∆_T → Z,  u ↦ j(u|x).

Now, for every vertex x we set

ρ(x) := (1/4π) ∫_{S^2∖∆_T} j(u|x) dσ(u),  (8.2)

where dσ(u) denotes the area element on S^2, so that

Area(S^2) = Area(S^2 ∖ ∆_T) = 4π.

Integrating (8.1) over S^2 ∖ ∆_T we deduce

4π χ(C) = Σ_{x∈V_T} ∫_{S^2∖∆_T} j(u|x) dσ(u) = 4π Σ_{x∈V_T} ρ(x),

so that

χ(C) = Σ_{x∈V_T} ρ(x).  (8.3)

We dedicate the remainder of this section to giving a more concrete description of ρ(x).

§8.2. The Morse Index. We begin by providing a combinatorial description of the jumps. Let u ∈ S^2 be a T-nondegenerate vector and x_0 a vertex of the triangulation T. We set

t_0 := ℓ_u(x_0) = ⟨u, x_0⟩.

We denote by St_T^+(x_0, u) the collection of simplices in T which admit x_0 as a vertex and are contained in the half-space

H_{u,x_0}^+ = { v ∈ R^3 | ⟨u, v⟩ ≥ ⟨u, x_0⟩ }.

We define the Morse index of u at x_0 to be the integer

µ(u|x_0) := Σ_{S∈St_T^+(x_0,u)} (−1)^{dim S}.

Example 8.6. Consider the situation depicted in Figure 13.

FIGURE 13. Slicing a polytope.

The hyperplanes ℓ_u = const are depicted as vertical lines. The slice C_{t_0} is a segment, and for every ǫ > 0 sufficiently small the slice C_{t_0+ǫ} consists of two segments and two points. Hence

χ_u(t_0) = 1,  χ_u(t_0 + ǫ) = 4,  j(u|x_0) = j_u(t_0) = 1 − 4 = −3.

Now observe that St_T^+(x_0, u) consists of the simplices

{x_0}, [x_0, x_1], [x_0, x_2], [x_0, x_3], [x_0, x_5], [x_0, x_6], [x_0, x_1, x_5],

that is, one simplex of dimension zero, five simplices of dimension 1 and one simplex of dimension 2, so that

µ(u|x_0) = 1 − 5 + 1 = −3 = j(u|x_0).

We will see that the above equality is no accident. ⊓⊔

Proposition 8.7. Let (C, T) be a polytope in R^3 such that all its simplices have dimension ≤ 2. Then for any T-nondegenerate vector u and any vertex x_0 of T, the jump of ℓ_u at x_0 is equal to the Morse index of u at x_0, i.e.

j(u|x_0) = µ(u|x_0).

Proof. By definition,

j(u|x_0) = χ(C_{t_0}) − χ(C_{t_0+0}).

We again use the binary relation R defined in Remark 8.4. Since t_0 is a critical value, C_{t_0} contains a unique vertex x_0 of C. Denote by R^{−1}(x_0) the set of all vertices of V(C_{t_0+ǫ}) which are mapped to x_0 by R_{t_0,t_0+ǫ}. The induced map

V(C_{t_0+ǫ}) ∖ R^{−1}(x_0) → V(C_{t_0}) ∖ {x_0}

behaves as in Lemma 8.5: it is a bijection and it preserves the face incidence relation (see Figure 14). Using the Euler-Schläfli-Poincaré formula we then deduce that

χ(C_{t_0}) − χ(C_{t_0+ǫ}) = #{x_0} − ( #R^{−1}(x_0) − #{ edges in C_{t_0+ǫ} connecting vertices in R^{−1}(x_0) } ).

Note here that

#R^{−1}(x_0) = #{ edges of C inside H_{u,x_0}^+ containing x_0 },

and

#{ edges in C_{t_0+ǫ} connecting vertices in R^{−1}(x_0) } = #{ triangles of C inside H_{u,x_0}^+ containing x_0 }.

Thus we see that

j(u|x_0) = #{x_0} − #{ edges of C inside H_{u,x_0}^+ containing x_0 } + #{ triangles of C inside H_{u,x_0}^+ containing x_0 }
= Σ_{S∈St_T^+(x_0,u)} (−1)^{dim S} = µ(u|x_0). ⊓⊔

FIGURE 14. The behavior of the map V(C_{t_0+ǫ}) → V(C_{t_0}).

§8.3. Combinatorial Curvature. To formulate the notion of combinatorial curvature we need to introduce the notion of conormal cone. For u, x ∈ R^n define

H_{u,x}^+ := { y ∈ R^n | ⟨u, y⟩ ≥ ⟨u, x⟩ }.

Note that if u ≠ 0 then H_{u,x}^+ is a half-space containing x on its boundary; u is normal to the boundary and points towards the interior of this half-space.

Definition 8.8. Suppose P is a convex polytope in R^n and x is a point in P. The conormal cone of x in P is the set

C_x(P) = C_x(P, R^n) := { u ∈ R^n | P ⊂ H_{u,x}^+ }. ⊓⊔

Unwinding the definition, we obtain the equivalent and more useful description

C_x(P) = { u ∈ R^n | ⟨u, y⟩ ≥ ⟨u, x⟩, ∀y ∈ P }.

The following result follows immediately from the definition.

Proposition 8.9. The conormal cone is a convex cone, i.e. it satisfies the conditions

u ∈ C_x(P), t ∈ [0, ∞) ⟹ tu ∈ C_x(P),
u_0, u_1 ∈ C_x(P) ⟹ u_0 + u_1 ∈ C_x(P). ⊓⊔

Proposition 8.9 shows that the conormal cone is an (infinite) union of rays (half-lines) starting at the origin. Each of these rays intersects the unit sphere S^{n−1}, and as a ray sweeps the cone C_x(P), its intersection with the sphere sweeps a region Ω_x(P) on the sphere,

Ω_x(P) = Ω_x(P, R^n) := C_x(P) ∩ S^{n−1}.

The more elaborate notation Ω_x(P, R^n) is meant to emphasize that the region Ω_x(P) depends on the ambient space R^n. Thus we also have

Ω_x(P) = { u ∈ S^{n−1} | ⟨u, y⟩ ≥ ⟨u, x⟩, ∀y ∈ P }.

We denote by σ_{n−1} the total (n−1)-dimensional surface area of S^{n−1} and by ω_x(P) the (n−1)-dimensional "surface area" of Ω_x(P) divided by σ_{n−1},

ω_x(P) := area_{n−1}(Ω_x(P)) / area_{n−1}(S^{n−1}) = area_{n−1}(Ω_x(P)) / σ_{n−1}.

Remark 8.10. One can show that ω_x(P) is independent of the dimension n of the ambient space R^n, i.e. if we regard P as a polytope in a Euclidean space R^N ⊃ R^n then we obtain the same value for ω_x(P). We will not pursue this aspect here. ⊓⊔

Proposition 8.11. (a) If P ⊂ R^3 is a zero simplex [x] then ω_x([x]) = 1.
(b) If P = [x_0, x_1] ⊂ R^3 is a 1-simplex then ω_{x_0}([x_0, x_1]) = 1/2.
(c) If P = [x_0, x_1, x_2] ⊂ R^3 is a 2-simplex and the angle at the vertex x_i is r_iπ, r_i ∈ (0, 1), then

r_0 + r_1 + r_2 = 1,  ω_{x_i}([x_0, x_1, x_2]) = 1/2 − r_i/2.

Proof. (a) For a singleton [x] ⊂ R^3, clearly

Ω_x([x]) = { u ∈ S^2 | ⟨u, y⟩ ≥ ⟨u, x⟩, ∀y ∈ [x] } = { u ∈ S^2 | ⟨u, x⟩ ≥ ⟨u, x⟩ } = S^2,

and thus ω_x([x]) = 1.

(b) We can fix coordinates such that [x_0, x_1] lies horizontally with x_0 located at the origin. It is then easy to see that the conormal cone is the half-space with boundary perpendicular to [x_0, x_1] passing through x_0, consisting of the rays pointing "towards" x_1. Therefore ω_{x_0} = 1/2.

(c) Without loss of generality, we consider ω_{x_0}. We fix coordinates such that x_0 lies at the origin and [x_0, x_1, x_2] lies in a plane with [x_0, x_1] lying along an axis. The conormal cones of [x_0, x_1] and of [x_0, x_2] at x_0 are each a half-space as described above, and the conormal cone of [x_0, x_1, x_2] is the intersection of these two half-spaces. This intersection is bounded by the planes through x_0 perpendicular to [x_0, x_1] and to [x_0, x_2]; in other words, the intersection of the conormal cone with the sphere is a lune. Call the angle of the opening of the lune θ_0 (see Figure 15). The angles between the perpendicular planes and the sides of the triangle are π/2 − πr_0 (possibly negative), so the total angle is

θ_0 = (π/2 − πr_0) + (π/2 − πr_0) + πr_0 = π − πr_0.

The area of the lune of opening θ_0 is, in spherical coordinates,

∫_0^{θ_0} ( ∫_0^π sin(φ) dφ ) dθ = 2θ_0.

Thus

ω_{x_0} = 2θ_0/4π = 1/2 − r_0/2.

⊓⊔

FIGURE 15. The angle between the perpendiculars of [x_0, x_1] and [x_0, x_2].

Suppose (C, T) is an affine simplicial complex in R^3 such that all the simplices in T have dimension ≤ 2. Recall that for every vertex v ∈ V_T we defined its star to be the collection of all simplices in T which admit v as a vertex. We now define the combinatorial curvature of (C, T) at v ∈ V_T to be the quantity

κ(v) := Σ_{S∈St_T(v)} (−1)^{dim S} ω_v(S).

Example 8.12. (a) Consider a rectangle A_1A_2A_3A_4. Any interior point O determines a triangulation of the rectangle as in Figure 16.

FIGURE 16. A simple triangulation of a rectangle.

Suppose that ∡(A_iOA_{i+1}) = r_iπ, i = 1, 2, 3, 4 (indices mod 4), so that

r_1 + r_2 + r_3 + r_4 = 2.

The star of O in the above triangulation consists of the simplices

{O}, [OA_i], [OA_iA_{i+1}], i = 1, 2, 3, 4.

Then

ω_O(O) = 1,  ω_O([OA_i]) = 1/2,  ω_O([OA_iA_{i+1}]) = 1/2 − r_i/2.

We deduce that the combinatorial curvature at O is

1 − Σ_{i=1}^4 1/2 + Σ_{i=1}^4 (1/2 − r_i/2) = 1 − 2 + 2 − (r_1 + r_2 + r_3 + r_4)/2 = 0.

This corresponds to our intuition that a rectangle is "flat". Note that the above equality can be written as

2πκ(O) = 2π − Σ_{i=1}^4 ∡(A_iOA_{i+1}).

(b) Suppose (C, T) is the standard triangulation of the boundary of a tetrahedron [v_0, v_1, v_2, v_3] in R^3 (see Figure 17).

FIGURE 17. The boundary of a tetrahedron.

Set θijk = ∡(vivjvk), i, j, k =0, 1, 2, 3..

Then the star of v0 consists of the simplices

{v0}, [v0vi], [v0vivj], i, j ∈{1, 2, 3}, i =6 j. We deduce 1 1 ω (v )=1, ω ([v v ]) = , ω ([v v v ]) = (π − θ ) v0 0 v0 0 i 2 v0 0 i j 2π i0j and 3 1 1 κ(v )=1 − + (π − θ )= 2π − θ . 0 2 2π i0j 2π i0j 1≤i

Motivated by the above example we introduce the notion of angular defect. Definition 8.13. Suppose (C, T) is an affine simplicial complex in R3 such that all simplices have dimension ≤ 2. For every vertex v of T. we denote by Θ(v) the sum of the angles at v of the triangles in T which have v as a vertex. The defect at v is the quantity def(v):=2π − Θ(v). ⊓⊔ Proposition 8.14. If (C, T) is an affine simplicial complex in R3 such that all simplices have dimension ≤ 2 then for every vertex v of T we have 1 1 κ(v)= def(v) − χ( lkT(v)). 2π 2

Proof. Using Proposition 8.11 we deduce that
\[ \kappa(v) = 1 - \frac{1}{2}\bigl(\#\text{ edges at } v\bigr) + \frac{1}{2}\bigl(\#\text{ triangles at } v\bigr) - \frac{1}{2\pi}\Theta(v) = \frac{1}{2\pi}\operatorname{def}(v) - \frac{1}{2}\bigl(\#\text{ edges at } v - \#\text{ triangles at } v\bigr). \]
The proposition now follows from Corollary 7.18, which states that
\[ \chi(\operatorname{lk}_T(v)) = \#\text{ edges at } v - \#\text{ triangles at } v. \] ⊓⊔
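The identity in Proposition 8.14 is an algebraic rearrangement of the weight count in the proof, and can be checked mechanically. The Python sketch below (our own illustration; the function names are not from the text) evaluates both sides for arbitrary values of the edge count $E_v$, the triangle count $T_v$, and the cone angle $\Theta(v)$:

```python
import math
import random

def kappa_direct(E, T, Theta):
    """kappa(v) = 1 - E/2 + T/2 - Theta/(2 pi), from the weights omega_v."""
    return 1 - E / 2 + T / 2 - Theta / (2 * math.pi)

def kappa_via_defect(E, T, Theta):
    """kappa(v) = def(v)/(2 pi) - chi(lk(v))/2, as in Proposition 8.14."""
    defect = 2 * math.pi - Theta   # def(v)
    chi_link = E - T               # Corollary 7.18: chi(lk(v)) = E_v - T_v
    return defect / (2 * math.pi) - chi_link / 2

random.seed(0)
for _ in range(5):
    E = random.randint(3, 10)
    T = random.randint(3, 10)
    Theta = random.uniform(0, 4 * math.pi)
    assert abs(kappa_direct(E, T, Theta) - kappa_via_defect(E, T, Theta)) < 1e-12
print("Proposition 8.14 identity holds for all sampled values")
```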

Theorem 8.15 (Combinatorial Gauss-Bonnet). If $(C, T)$ is an affine simplicial complex in $\mathbb{R}^3$ such that all the simplices in $T$ have dimension $\le 2$, then
\[ \chi(C) = \sum_{v\in V_T}\kappa(v) = \frac{1}{2\pi}\sum_{v\in V_T}\operatorname{def}(v) - \frac{1}{2}\sum_{v\in V_T}\chi(\operatorname{lk}_T(v)). \]

Proof. Let $V$ denote the number of vertices in $C$, $E$ the number of edges in $C$, and $T$ the number of triangles in $C$. For $v \in V_T$ we denote by $E_v$, $T_v$ the number of edges and triangles in $\operatorname{St}_T(v)$, respectively. We note that
\[ \chi(\operatorname{lk}_T(v)) = \sum_{S\in\operatorname{St}_T(v)\setminus\{v\}} (-1)^{\dim S - 1} = E_v - T_v. \]
Each edge belongs to two stars and each triangle to three, so
\[ -\frac{1}{2}\sum_{v\in V_T}\chi(\operatorname{lk}_T(v)) = -E + \frac{3}{2}T. \]
Recall that $\operatorname{def}(v) = 2\pi - \Theta(v)$. Therefore, $\frac{1}{2\pi}\operatorname{def}(v) = 1 - \frac{\Theta(v)}{2\pi}$. Since the angles of each triangle add up to $\pi$, we have $\sum_{v\in V_T}\Theta(v) = \pi T$, and thus
\[ \frac{1}{2\pi}\sum_{v\in V_T}\operatorname{def}(v) = \sum_{v\in V_T}\Bigl(1 - \frac{\Theta(v)}{2\pi}\Bigr) = V - \frac{T}{2}. \]
Consequently,
\[ \sum_{v\in V_T}\kappa(v) = V - E + T = \chi(C). \] ⊓⊔
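Theorem 8.15 can be tested on an irregular tetrahedron, where no symmetry helps. In the Python sketch below (ours; the coordinates are chosen arbitrarily for illustration), the individual defects vary from vertex to vertex, but their total is always $2\pi\chi = 4\pi$:

```python
import math

# Total angular defect of an irregular tetrahedron boundary.
verts = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 2.0, 0.0), (1.0, 1.0, 5.0)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def angle(p, q, r):
    """Angle at q in the triangle [p, q, r]."""
    u = [a - b for a, b in zip(p, q)]
    v = [a - b for a, b in zip(r, q)]
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(dot / (math.dist(p, q) * math.dist(r, q)))

total_defect = 0.0
for i in range(4):
    Theta = 0.0                       # sum of face angles at vertex i
    for f in faces:
        if i in f:
            a, b = (w for w in f if w != i)
            Theta += angle(verts[a], verts[i], verts[b])
    total_defect += 2 * math.pi - Theta

print(total_defect / (2 * math.pi))  # Euler characteristic of the boundary, = 2
```

The check works because the three angles of every triangle sum to $\pi$, exactly the fact used in the proof above.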

Recall that a combinatorial surface is an ASC with simplices of dimension $\le 2$ such that every edge is the face of exactly two triangles. In this case, the link of every vertex is a simple cycle, that is, a 1-dimensional ASC whose vertex set has a cyclic ordering
\[ v_1, v_2, \dots, v_n, v_{n+1} = v_1, \]
and whose only edges are $[v_i, v_{i+1}]$, $i = 1, \dots, n$. The Euler characteristic of such a cycle is zero. We thus obtain the following result of T. Banchoff, [Ban].

Corollary 8.16. If $C$ is a combinatorial surface with vertex set $V$ then
\[ \chi(C) = \frac{1}{2\pi}\sum_{v\in V}\operatorname{def}(v). \] ⊓⊔

We can now finally close the circle and relate the combinatorial curvature to the average Morse index.

Theorem 8.17 (Microlocal Gauss-Bonnet). If $(C, T)$ is an affine simplicial complex in $\mathbb{R}^3$ such that all the simplices in $T$ have dimension $\le 2$, then
\[ \kappa(v) = \rho(v), \quad \forall v \in V_T, \]
where $\rho$ is defined by (8.2).

Proof. Fix $x \in V_T$. We now recall some previous ideas. Proposition 8.7 tells us that for any $u \in S^2 \setminus \Delta_T$,
\[ \sum_{S\in\operatorname{St}_T^+(x,u)} (-1)^{\dim S} = \mu(u|x) = j(u|x). \]
We then recall from the proof of Lemma 8.2 that $\Delta_T$ is a finite union of great circles on $S^2$. Thus, $S^2 \setminus \Delta_T$ consists of a finite union of chambers $A_1, \dots, A_m$. We now note that for any $i$, $1 \le i \le m$, we have $\operatorname{St}_T^+(x, u) = \operatorname{St}_T^+(x, v)$ for any $u, v \in A_i$. So, letting $v_i$ be an element of $A_i$, we have
\[ \rho(x) = \frac{1}{4\pi}\int_{S^2\setminus\Delta_T} j(u|x)\,d\sigma(u) = \frac{1}{4\pi}\int_{S^2\setminus\Delta_T} \mu(u|x)\,d\sigma(u) = \frac{1}{4\pi}\int_{S^2\setminus\Delta_T} \sum_{S\in\operatorname{St}_T^+(x,u)} (-1)^{\dim S}\,d\sigma(u) \]
\[ = \frac{1}{4\pi}\sum_{i=1}^{m}\int_{A_i} \sum_{S\in\operatorname{St}_T^+(x,u)} (-1)^{\dim S}\,d\sigma(u) = \frac{1}{4\pi}\sum_{i=1}^{m} \sum_{S\in\operatorname{St}_T^+(x,v_i)} (-1)^{\dim S}\int_{A_i} d\sigma(u). \]

FIGURE 18. The chambers which coincide with the conormal section $C_x(S) \cap S^2$.

Considering this sum, we see that every term of it has a $(-1)^{\dim S}$ in it for some $S \in \operatorname{St}_T(x)$. We also notice that every $S \in \operatorname{St}_T(x)$ appears at least once. So, we expand the sum, collect the coefficients of each $(-1)^{\dim S}$, and if we set
\[ k_S := \#\bigl\{i;\ S \in \operatorname{St}_T^+(x, v_i)\bigr\} \]
and denote by $A_{i_1}, \dots, A_{i_{k_S}}$ the corresponding chambers, we obtain
\[ \rho(x) = \frac{1}{4\pi}\sum_{i=1}^{m} \sum_{S\in\operatorname{St}_T^+(x,v_i)} (-1)^{\dim S}\int_{A_i} d\sigma(u) = \frac{1}{4\pi}\sum_{S\in\operatorname{St}_T(x)} (-1)^{\dim S}\Bigl(\sum_{j=1}^{k_S}\int_{A_{i_j}} d\sigma(u)\Bigr). \]
Now we consider the sum $\sum_{j=1}^{k_S}\int_{A_{i_j}} d\sigma(u)$. This is the area of the set of vectors $u$ on the unit sphere such that $S \in \operatorname{St}_T^+(x, u)$. That is, this sum is the area of the set of vectors $u$ on the unit sphere such that $S$ lies in $H_{u,x}^+$. But this is precisely the area of $C_x(S) \cap S^2$ (see Figure 18). Now recall that $\omega_x(S) = \frac{\operatorname{area}(C_x(S)\cap S^2)}{4\pi}$ (since the area of the unit sphere in $\mathbb{R}^3$ is $4\pi$). Thus, we have
\[ \rho(x) = \frac{1}{4\pi}\sum_{S\in\operatorname{St}_T(x)} (-1)^{\dim S}\Bigl(\sum_{j=1}^{k_S}\int_{A_{i_j}} d\sigma(u)\Bigr) = \frac{1}{4\pi}\sum_{S\in\operatorname{St}_T(x)} (-1)^{\dim S}\operatorname{area}\bigl(C_x(S) \cap S^2\bigr) = \sum_{S\in\operatorname{St}_T(x)} (-1)^{\dim S}\omega_x(S) = \kappa(x). \] ⊓⊔
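The averaging in this proof lends itself to a Monte Carlo experiment. The Python sketch below (our own illustration, not part of the text) estimates $\rho(v_0)$ at a vertex of the regular tetrahedron by sampling random directions $u$; here $\operatorname{St}_T^+(x, u)$ is read as the set of simplices of the star on which the linear function $\langle u, \cdot\rangle$ attains its maximum at $x$, which is our reading of Proposition 8.7 and is an assumption of this sketch:

```python
import math
import random

# Monte Carlo check of Theorem 8.17 at a vertex of the regular tetrahedron:
# the average over u in S^2 of mu(u|x) should equal kappa(x) = 1/2.

random.seed(1)
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
# Star of vertex 0 in the boundary complex of the tetrahedron.
star = [(0,), (0, 1), (0, 2), (0, 3), (0, 1, 2), (0, 1, 3), (0, 2, 3)]

def random_direction():
    """Uniform point on S^2 via normalized Gaussians."""
    while True:
        u = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(a * a for a in u))
        if n > 1e-9:
            return [a / n for a in u]

def mu(u):
    """Sum of (-1)^{dim S} over star simplices whose other vertices lie
    strictly below vertex 0 in the direction u."""
    h = [sum(a * b for a, b in zip(u, v)) for v in verts]
    return sum((-1) ** (len(S) - 1)
               for S in star
               if all(h[w] < h[0] for w in S if w != 0))

N = 50_000
rho = sum(mu(random_direction()) for _ in range(N)) / N
print(rho)  # Monte Carlo estimate; should be close to kappa(v0) = 1/2
```

With enough samples the estimate settles near $1/2$, the curvature computed for the regular tetrahedron in Example 8.12(b).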

Remark 8.18. The above equality can be interpreted as saying that the curvature at a vertex $x_0$ is the average Morse index at $x_0$ of a linear Morse function $\ell_u$. ⊓⊔

References

[Ban] T.F. Banchoff: Critical points and curvature for embedded polyhedral surfaces, Amer. Math. Monthly, 77 (1970), 475-485.
[Gon] A.B. Goncharov: Differential equations and integral geometry, Adv. in Math., 131 (1997), 279-343.
[Gr] H. Groemer: Geometric Applications of Fourier Series and Spherical Harmonics, Cambridge University Press, 1996.
[KR] D.A. Klain, G.-C. Rota: Introduction to Geometric Probability, Cambridge University Press, 1997.
[San] L. Santaló: Integral Geometry and Geometric Probability, Cambridge University Press, 2004.

DEPARTMENT OF MATH. U.C. BERKELEY, BERKELEY, CA 94720 E-mail address: [email protected]

DEPARTMENT OF MATH., UNIV. OF CHICAGO, CHICAGO, IL 60637 E-mail address: [email protected]

DEPARTMENT OF MATH., UNIV. OF NOTRE DAME, NOTRE DAME, IN 46556 E-mail address: [email protected]