MAGIC: Ergodic Theory Lecture 8 - Topological entropy

Charles Walkden

March 13th 2013

Introduction

Let T be an m.p.t. of a prob. space (X, B, µ). Last time we defined the entropy hµ(T).

In this lecture we recap some basic facts about entropy.

In the context of a continuous transformation of a compact metric space we study how hµ(T) depends on µ.

We also relate entropy to another important quantity: topological entropy.

Throughout: metric entropy = measure-theoretic entropy = hµ(T).

Recap on entropy

Let ζ = {Aj} be a finite partition of a prob. space (X, B, µ).

Define the entropy of ζ to be
$$H_\mu(\zeta) = -\sum_{A \in \zeta} \mu(A) \log \mu(A).$$

If ζ, η are partitions then ζ ∨ η = {A ∩ B | A ∈ ζ, B ∈ η}.

If T : X → X is measurable then T^{-1}ζ = {T^{-1}A | A ∈ ζ}.

The entropy of T relative to ζ is
$$h_\mu(T, \zeta) = \lim_{n \to \infty} \frac{1}{n} H_\mu\Big( \bigvee_{j=0}^{n-1} T^{-j}\zeta \Big).$$

The entropy of T is
$$h_\mu(T) = \sup\{ h_\mu(T, \zeta) \mid \zeta \text{ a finite partition} \}.$$
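As a sanity check on these definitions, here is a minimal numerical sketch. It assumes, purely as an illustration (the example is not in the lecture), the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure and ζ = {[0, 1/2), [1/2, 1)}; the function names are hypothetical. Here the join ⋁_{j=0}^{n−1} T^{-j}ζ is the partition into the 2^n dyadic intervals of measure 2^{-n}, so the estimate should sit near log 2 for every n.

```python
import numpy as np

def partition_entropy(probs):
    """H_mu(zeta) = -sum_{A in zeta} mu(A) log mu(A), skipping empty cells."""
    p = np.asarray([q for q in probs if q > 0])
    return float(-np.sum(p * np.log(p)))

def relative_entropy_estimate(n, n_samples=200_000, seed=0):
    """Monte Carlo estimate of (1/n) H_mu(join_{j<n} T^{-j} zeta) for the
    doubling map T(x) = 2x mod 1 with Lebesgue measure."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)
    codes = np.zeros(n_samples, dtype=np.int64)
    for _ in range(n):
        codes = 2 * codes + (x >= 0.5)   # which half T^j(x) lies in
        x = (2.0 * x) % 1.0
    _, counts = np.unique(codes, return_counts=True)
    return partition_entropy(counts / n_samples) / n

for n in (1, 4, 8):
    print(n, relative_entropy_estimate(n), "vs log 2 =", np.log(2))
```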

Sinai's theorem

A finite partition ζ is a generator if ⋁_{j=−n}^{n} T^{-j}ζ ↗ B. (Equivalently, ⋁_{j=−n}^{n} T^{-j}ζ separates µ-a.e. pair of points.)

Theorem (Sinai)
Suppose T is an invertible m.p.t. and ζ is a generator. Then
$$h_\mu(T) = h_\mu(T, \zeta).$$

Let σ be the full k-shift with the Bernoulli (p1,..., pk)-measure µ. Then ζ = {[1],..., [k]} is a generator, and
$$h_\mu(\sigma) = h_\mu(\sigma, \zeta) = -\sum_{j=1}^{k} p_j \log p_j.$$
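The Bernoulli formula is immediate to evaluate; a one-line sketch (the helper name is ours, not the lecture's):

```python
import math

def bernoulli_entropy(ps):
    """h_mu(sigma) = -sum_j p_j log p_j for the Bernoulli (p_1,...,p_k)-measure."""
    return -sum(p * math.log(p) for p in ps if p > 0)

print(bernoulli_entropy([0.5, 0.5]))   # log 2 ~ 0.6931, the maximum for k = 2
print(bernoulli_entropy([0.9, 0.1]))   # ~ 0.3251
```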

The weak∗ topology

Let (X, B) be a compact metric space with the Borel σ-algebra. Let T : X → X be continuous.

Let M(X) = {all Borel probability measures}. Let M(X, T) = {all T-invariant Borel probability measures}.

A sequence µn ∈ M(X) weak∗-converges to µ (written µn ⇀ µ) if
$$\int f \, d\mu_n \to \int f \, d\mu \quad \forall f \in C(X, \mathbb{R}).$$

Q: How does the entropy hµ(T) vary as a function of µ?

The entropy map is not continuous

Let Σ2 be the full 2-sided 2-shift with shift map σ.

A point x is periodic with period n iff x = (··· x0 x1 ··· xn−1 x0 x1 ··· xn−1 ···), i.e. the block x0 x1 ··· xn−1 repeats. There are 2^n points of period n.

Let
$$\mu_n = \frac{1}{2^n} \sum_{x = \sigma^n x} \delta_x \in M(X, T).$$

Then hµn(σ) = 0 (as µn is supported on a finite set).

However, µn ⇀ µ, where µ is the Bernoulli (1/2, 1/2)-measure. Note hµ(σ) = log 2.

Proof (sketch): Let f = χ[0]. Note that
$$\int f \, d\mu_2 = \frac{1}{2^2}\big( \chi_{[0]}(\ldots 00 \ldots) + \chi_{[0]}(\ldots 01 \ldots) + \chi_{[0]}(\ldots 10 \ldots) + \chi_{[0]}(\ldots 11 \ldots) \big) = \frac{2}{2^2} = \frac{1}{2} = \mu([0]) = \int f \, d\mu.$$

In general, $\int \chi_{[i_0, \ldots, i_{m-1}]} \, d\mu_n = \int \chi_{[i_0, \ldots, i_{m-1}]} \, d\mu$ when n ≥ m. Characteristic functions of cylinders are continuous. Finite linear combinations of characteristic functions of cylinders are dense in C(X, R) (by the Stone-Weierstrass theorem). Hence µn ⇀ µ.

Is the entropy map upper semi-continuous? I.e. does µn ⇀ µ imply lim sup_{n→∞} hµn(T) ≤ hµ(T)?

Answer: no in general, yes in many important cases.
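A small brute-force check of the weak∗ convergence claim, identifying period-n points with words of length n; mu_n_cylinder is a hypothetical helper. For a cylinder [w] with |w| = m ≤ n, exactly 2^{n−m} of the 2^n period-n points lie in [w], giving mass (1/2)^m, the Bernoulli value:

```python
from itertools import product

def mu_n_cylinder(word, n):
    """mu_n-mass of the cylinder [word]: the fraction of the 2^n period-n
    points whose coordinates 0..m-1 spell out `word` (requires n >= m)."""
    m = len(word)
    assert n >= m
    hits = sum(1 for x in product((0, 1), repeat=n) if x[:m] == tuple(word))
    return hits / 2**n

print(mu_n_cylinder((0,), 3), mu_n_cylinder((0, 1), 4), mu_n_cylinder((0, 1, 1), 5))
# 0.5 0.25 0.125 -- i.e. (1/2)^m for m = 1, 2, 3
```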

Expansive homeomorphisms

Definition
A homeomorphism T is expansive if: ∃δ > 0 s.t. if d(T^n x, T^n y) ≤ δ for all n ∈ Z then x = y.

Example
A shift of finite type is expansive. Recall d(x, y) = 1/2^n, where n is the index of the first disagreement. Let δ < 1. If xn ≠ yn then d(σ^n x, σ^n y) = 1 ≥ δ.

Example
Let T : R^k/Z^k → R^k/Z^k, Tx = Ax mod 1, be a toral automorphism given by A ∈ SL(k, Z). Then T is expansive iff A is hyperbolic (no eigenvalues of modulus 1).

Other examples: all Anosov diffeomorphisms, the Smale horseshoe, the solenoid, ...
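The hyperbolicity criterion is easy to test numerically; a sketch with a hypothetical helper (the cat map below is a standard example, not taken from the slide):

```python
import numpy as np

def is_hyperbolic(A, tol=1e-9):
    """True iff no eigenvalue of A lies on the unit circle, the expansiveness
    criterion for the toral automorphism x -> Ax mod 1."""
    return bool(np.all(np.abs(np.abs(np.linalg.eigvals(A)) - 1.0) > tol))

print(is_hyperbolic(np.array([[2, 1], [1, 1]])))   # True: Arnold's cat map
print(is_hyperbolic(np.array([[0, -1], [1, 0]])))  # False: rotation by 90 degrees
```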

Expansive homeomorphisms

Theorem
Let T be an expansive homeomorphism of a compact metric space. Then the entropy map is upper semi-continuous: if µn, µ ∈ M(X, T) and µn ⇀ µ, then lim sup hµn(T) ≤ hµ(T).

Proof (sketch):

Fact: Suppose µn ⇀ µ. If B ∈ B is s.t. µ(∂B) = 0 then µn(B) → µ(B).

If ζ is a partition such that µ(∂A) = 0 ∀A ∈ ζ then Hµn(ζ) → Hµ(ζ). Hence hµn(T, ζ) → hµ(T, ζ).

Let δ be an expansive constant. If diam ζ < δ then ζ is a generator. So hµ(T) = hµ(T, ζ) by Sinai's theorem. Alter ζ slightly to ensure µ(∂A) = 0 ∀A ∈ ζ.

Topological entropy

Let X be compact metric and let T : X → X be continuous. Recall X compact ⇒ every open cover of X has a finite subcover.

Definition
Let α be an open cover of X. Let N(α) < ∞ be the cardinality of the smallest finite subcover of X. Define the entropy of α to be
$$H_{top}(\alpha) = \log N(\alpha).$$

Definition
Let α = {Ai}, β = {Bj} be open covers. The join is the open cover α ∨ β = {Ai ∩ Bj | Ai ∈ α, Bj ∈ β}.

Definition
We say α ≤ β if every element of β is a subset of an element of α. (Example: α ≤ α ∨ β.) Easy check: α ≤ β ⇒ Htop(α) ≤ Htop(β).

Definition
T^{-1}α is the open cover {T^{-1}A | A ∈ α}.
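N(α) can be found by brute force for small covers. A sketch, with all names hypothetical: it approximates X by a finite grid of sample points, so it only estimates N(α), and the search is exponential in the number of cover elements:

```python
from itertools import combinations
import numpy as np

def N_alpha(cover, samples):
    """Cardinality of the smallest subfamily of `cover` (each element a
    predicate point -> bool) that still covers every sample point."""
    member = np.array([[U(x) for x in samples] for U in cover])
    for r in range(1, len(cover) + 1):
        for idx in combinations(range(len(cover)), r):
            if member[list(idx)].any(axis=0).all():
                return r
    raise ValueError("not a cover of the sample set")

# alpha: four overlapping open arcs covering the circle [0, 1).
samples = np.linspace(0.0, 1.0, 1000, endpoint=False)
arc = lambda a, b: (lambda x: (x - a) % 1.0 < (b - a) % 1.0)
alpha = [arc(0.0, 0.35), arc(0.3, 0.65), arc(0.6, 0.95), arc(0.9, 1.25)]
n = N_alpha(alpha, samples)
print(n, np.log(n))   # 4 and H_top(alpha) = log 4: every arc has a private gap
```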

n−1  1 _ −j htop(T , α) = lim Htop  T α . n→∞ n j=0

Remark Wn−1 −j  The limit exists as Hn = Htop j=0 T α is subadditive: Hn+m ≤ Hn + Hm. Definition The topological entropy of T is

htop(T ) = sup{htop(T , α) | α is an open cover of X }.

Topological entropy Remark Wn−1 −j  The limit exists as Hn = Htop j=0 T α is subadditive: Hn+m ≤ Hn + Hm. Definition The topological entropy of T is

htop(T ) = sup{htop(T , α) | α is an open cover of X }.

Topological entropy

Definition The topological entropy of T relative to the open cover α is

n−1  1 _ −j htop(T , α) = lim Htop  T α . n→∞ n j=0 Definition The topological entropy of T is

htop(T ) = sup{htop(T , α) | α is an open cover of X }.

Topological entropy

Definition The topological entropy of T relative to the open cover α is

n−1  1 _ −j htop(T , α) = lim Htop  T α . n→∞ n j=0

Remark Wn−1 −j  The limit exists as Hn = Htop j=0 T α is subadditive: Hn+m ≤ Hn + Hm. Topological entropy

Definition The topological entropy of T relative to the open cover α is

n−1  1 _ −j htop(T , α) = lim Htop  T α . n→∞ n j=0

Remark Wn−1 −j  The limit exists as Hn = Htop j=0 T α is subadditive: Hn+m ≤ Hn + Hm. Definition The topological entropy of T is

htop(T ) = sup{htop(T , α) | α is an open cover of X }. Let X be compact metric with metric d. Let T : X → X be continuous.

An alternative definition

Let X be compact metric with metric d. Let T : X → X be continuous.

Let dn(x, y) = max_{0≤j≤n−1} d(T^j x, T^j y). The balls in this metric are
$$B_n(x, \varepsilon) = \{ y \mid d(T^j x, T^j y) < \varepsilon, \ 0 \le j \le n-1 \}.$$

So x, y are dn-close if the first n points in the orbits of x, y are close.

Idea: suppose we can't distinguish two orbits if they are close for the first n iterates. How many such orbits are there?
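The metric dn is a direct transcription; the doubling map on the circle is again an assumed test case:

```python
def d_circle(x, y):
    """Arc-length metric on the circle X = [0, 1)."""
    t = abs(x - y) % 1.0
    return min(t, 1.0 - t)

def d_n(T, x, y, n, d=d_circle):
    """Bowen metric d_n(x, y) = max_{0 <= j <= n-1} d(T^j x, T^j y)."""
    best = 0.0
    for _ in range(n):
        best = max(best, d(x, y))
        x, y = T(x), T(y)
    return best

T = lambda x: (2.0 * x) % 1.0
print(d_n(T, 0.1, 0.1001, 1))    # ~ 1e-4: the points look identical for n = 1
print(d_n(T, 0.1, 0.1001, 12))   # ~ 0.2: after 12 steps the orbits have separated
```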

Spanning sets

Definition
Let n ≥ 1, ε > 0. F ⊂ X is (n, ε)-spanning if the dn-balls of radius ε with centres in F cover X:
$$X = \bigcup_{x \in F} B_n(x, \varepsilon).$$

We want to make spanning sets as small as possible. Let pn(ε) be the cardinality of the smallest (n, ε)-spanning set.

Let $p(\varepsilon) = \limsup_{n \to \infty} \frac{1}{n} \log p_n(\varepsilon)$.

Let hspanning(T) = lim_{ε→0} p(ε).

Separated sets

Definition
Let n ≥ 1, ε > 0. E ⊂ X is (n, ε)-separated if: x, y ∈ E, x ≠ y implies dn(x, y) > ε.

We want to make separated sets as large as possible. Let qn(ε) be the cardinality of the largest (n, ε)-separated set.

Remark
pn(ε) ≤ qn(ε) ≤ pn(ε/2). Let E be (n, ε)-separated of cardinality qn(ε). Then E is (n, ε)-spanning (if some z ∈ X were not within dn-distance ε of E, then E ∪ {z} would be a larger (n, ε)-separated set). Hence pn(ε) ≤ qn(ε).

Separated sets

Remark (continued)
pn(ε) ≤ qn(ε) ≤ pn(ε/2).

Suppose E is (n, ε)-separated of cardinality qn(ε).

Suppose F is (n, ε/2)-spanning of cardinality pn(ε/2).

For every x ∈ E there exists a y ∈ F such that x ∈ Bn(y, ε/2).

This map E → F : x ↦ y is injective. (If not, then x, x′ ∈ E could map to the same y ∈ F. Then dn(x, x′) ≤ dn(x, y) + dn(y, x′) < ε, so x = x′ as E is (n, ε)-separated.)

Hence qn(ε) ≤ pn(ε/2).
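As an aside (an illustration under assumptions, not part of the lecture), these counting quantities can be estimated numerically. The sketch below greedily builds a maximal (n, ε)-separated set for the doubling map on a grid; a maximal separated set is also spanning, so its size is squeezed between pn(ε) and qn(ε). The values (1/n) log(count) decrease toward log 2 ≈ 0.693 only slowly, since the ε-dependence enters as log(1/ε)/n:

```python
import numpy as np

def greedy_separated_count(T, n, eps, grid_size=4000):
    """Size of a maximal (n, eps)-separated subset of a grid on the
    circle [0, 1), built greedily; a grid-based approximation only."""
    x = np.linspace(0.0, 1.0, grid_size, endpoint=False)
    orbits = np.empty((grid_size, n))
    for j in range(n):            # precompute T^j of every candidate point
        orbits[:, j] = x
        x = T(x)
    kept = []
    for i in range(grid_size):
        if kept:
            diff = np.abs(orbits[i] - orbits[kept]) % 1.0
            dn = np.max(np.minimum(diff, 1.0 - diff), axis=1)  # Bowen distances
            if dn.min() <= eps:
                continue          # too close to a point we already kept
        kept.append(i)
    return len(kept)

T = lambda x: (2.0 * x) % 1.0
for n in (4, 6, 8):
    q = greedy_separated_count(T, n, eps=0.05)
    print(n, np.log(q) / n)       # slowly decreasing toward log 2 ~ 0.693
```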

Spanning and separated sets

Hence
$$h_{spanning}(T) = \lim_{\varepsilon \to 0} \limsup_{n \to \infty} \frac{1}{n} \log p_n(\varepsilon) = \lim_{\varepsilon \to 0} \limsup_{n \to \infty} \frac{1}{n} \log q_n(\varepsilon).$$

Theorem (Bowen)
The definition of topological entropy using open covers agrees with the definition of topological entropy using spanning/separated sets.

Proof (sketch): Careful analysis using Lebesgue numbers of open covers...

Calculating topological entropy

Let α = {A1,..., Ak} be a finite open cover. For each x look at the sequence of elements of α the orbit of x visits. This codes the orbit of x by a bi-infinite sequence of symbols from {1,..., k}.

This coding may not be 'nice': different points may have the same coding, the coding may not be unique, and the set of all sequences may be complicated (e.g. not of finite type).

α is a (topological) generator if each sequence codes at most one point. Precisely, α is a generator if for each sequence (i_j)_{j=−∞}^{∞},
$$\operatorname{card} \bigcap_{j=-\infty}^{\infty} T^{-j} A_{i_j} = 0 \text{ or } 1.$$
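The coding itself is a one-liner. A sketch for the one-sided analogue with the (non-invertible) doubling map, where the itinerary with respect to {[0, 1/2), [1/2, 1)} is just the binary expansion; names are hypothetical:

```python
def itinerary(T, x, symbol, n):
    """First n symbols coding the orbit of x: symbol maps a point to the
    index of the cover/partition element containing it."""
    word = []
    for _ in range(n):
        word.append(symbol(x))
        x = T(x)
    return word

T = lambda x: (2.0 * x) % 1.0
symbol = lambda x: 1 if x >= 0.5 else 0   # alpha = {[0,1/2), [1/2,1)}
print(itinerary(T, 0.3, symbol, 8))       # [0, 1, 0, 0, 1, 1, 0, 0], binary digits of 0.3
```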

Calculating topological entropy

Proposition
T has a (topological) generator iff T is expansive.

Proof (sketch): Suppose T is expansive with expansive constant δ. Consider the open cover by balls of radius δ/2. Let α be a finite subcover. Then α is a (topological) generator. The converse is slightly more involved.

Calculating topological entropy using generators

Proposition
Let T be an expansive homeomorphism and let α be a generator. Then htop(T) = htop(T, α).

Proof (sketch):

Step 1: Clearly htop(T, α) ≤ htop(T).

Step 2: diam ⋁_{j=−n}^{n} T^{-j}α → 0. (If diam ⋁_{j=−n}^{n} T^{-j}α → ε0 > 0 then two points could have the same coding, contradicting α being a generator.)

Step 3: Let β be any open cover. Let r > 0 be a Lebesgue number for β. Choose n s.t. diam ⋁_{j=−n}^{n} T^{-j}α ≤ r. Then β ≤ ⋁_{j=−n}^{n} T^{-j}α. Then
$$h_{top}(T, \beta) \le h_{top}\Big(T, \bigvee_{j=-n}^{n} T^{-j}\alpha\Big) = h_{top}(T, \alpha).$$
Take the supremum over all β.

Topological entropy of shifts

Let σ : Σk → Σk be the full two-sided k-shift.

Let α = {[1],..., [k]}. Note α is an open cover of Σk. It's clear that α is a (topological) generator.

Note ⋁_{j=0}^{n−1} σ^{-j}α is the open cover of Σk by all cylinders of length n. There are k^n of these and all of them are needed to cover Σk. Hence
$$h_{top}(\sigma) = h_{top}(\sigma, \alpha) = \lim_{n \to \infty} \frac{1}{n} H_{top}\Big( \bigvee_{j=0}^{n-1} \sigma^{-j}\alpha \Big) = \lim_{n \to \infty} \frac{1}{n} \log k^n = \log k.$$

Wn−1 −j  Now Htop j=0 σ α = log(no. of cylinders of length n in ΣA).

The number of words in Σ_A of length n starting at i and ending at j is (A^n)_{i,j}. Hence the number of cylinders of length n is Σ_{i,j=1}^{k} (A^n)_{i,j} = ‖A^n‖. Hence

h_top(σ) = h_top(σ, α) = lim_{n→∞} (1/n) H_top(⋁_{j=0}^{n−1} σ^{−j} α) = lim_{n→∞} (1/n) log ‖A^n‖ = log λ,

where λ > 0 is the largest eigenvalue of A, by the spectral radius formula.
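As a quick numerical illustration (a sketch, not part of the lecture: the matrix below is the golden mean shift, an arbitrary example, and numpy is assumed), one can compute log λ directly and watch (1/n) log ‖A^n‖ converge to it:

import numpy as np

# Golden mean shift: transitions 1->1, 1->2, 2->1 allowed, 2->2 forbidden.
A = np.array([[1, 1],
              [1, 0]], dtype=float)

# log of the largest eigenvalue of A (here log of the golden ratio).
lam = max(abs(np.linalg.eigvals(A)))
print("log lambda =", np.log(lam))

# ||A^n|| = sum of the entries of A^n = number of cylinders of length n,
# so (1/n) log ||A^n|| should approach log lambda as n grows.
for n in (5, 20, 80):
    print(n, np.log(np.linalg.matrix_power(A, n).sum()) / n)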

The variational principle

We can relate metric and topological entropy.

Theorem (The variational principle)
Let T be a continuous transformation of a compact metric space X. Then h_top(T) = sup{h_µ(T) | µ ∈ M(X, T)}.

Remark
There are examples to show that this supremum need not be achieved.

Definition
Let M_max(X, T) = {µ ∈ M(X, T) | h_top(T) = h_µ(T)} denote the set of all measures of maximal entropy.
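For the full 2-shift the variational principle can be seen concretely: the Bernoulli (p, 1 − p)-measure has entropy −p log p − (1 − p) log(1 − p), and the supremum of these entropies over p is log 2 = h_top(σ), attained at p = 1/2. A minimal sketch (numpy assumed; the grid of p values is arbitrary):

import numpy as np

def bernoulli_entropy(p):
    # Metric entropy of the Bernoulli (p, 1-p)-measure on the full 2-shift.
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

ps = np.linspace(0.01, 0.99, 99)
hs = bernoulli_entropy(ps)
print("sup of sampled entropies:", hs.max())  # equals log 2 at p = 1/2
print("log 2 =", np.log(2))
print("maximising p:", ps[hs.argmax()])       # 0.5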

The variational principle and measures of maximal entropy

Proposition
If the entropy map is upper semi-continuous then M_max(X, T) ≠ ∅.

Proof: an upper semi-continuous function on a compact metric space achieves its supremum. Note that M(X, T) is a compact metric space.

Remark
Hence expansive homeomorphisms always have at least one measure of maximal entropy. In many cases, there is a unique measure of maximal entropy.

Measures of maximal entropy for shifts

Proposition
Let σ : Σ_k → Σ_k be the full two-sided k-shift. Then the Bernoulli (1/k, ..., 1/k)-measure is the unique measure of maximal entropy.

Proof. We know the topological entropy of σ is log k, and we know the entropy of the Bernoulli (1/k, ..., 1/k)-measure is log k. We show: if µ ∈ M(Σ_k, σ) has h_µ(σ) = log k then µ is the Bernoulli (1/k, ..., 1/k)-measure.

Let ζ = {[1], ..., [k]}. This is a generator, so by Sinai's theorem log k = h_µ(σ) = h_µ(σ, ζ).

Note that h_µ(σ, ζ) ≤ (1/n) H_µ(⋁_{j=0}^{n−1} σ^{−j} ζ), as H_n = H_µ(⋁_{j=0}^{n−1} σ^{−j} ζ) is subadditive, so (1/n) H_n → inf_n (1/n) H_n = h_µ(σ, ζ).

We need the following fact.

Fact
If η = {A_1, ..., A_ℓ} is a finite partition then H(η) = −Σ_{i=1}^{ℓ} µ(A_i) log µ(A_i) ≤ log ℓ, with equality iff µ(A_i) = 1/ℓ, 1 ≤ i ≤ ℓ. (This follows from concavity of −t log t.)

Since ⋁_{j=0}^{n−1} σ^{−j} ζ is the partition into the k^n cylinders of length n, the fact gives

log k = h_µ(σ) = h_µ(σ, ζ) ≤ (1/n) H_µ(⋁_{j=0}^{n−1} σ^{−j} ζ) ≤ (1/n) log k^n = log k.

Hence H_µ(⋁_{j=0}^{n−1} σ^{−j} ζ) = log k^n. So by the fact, each element of ⋁_{j=0}^{n−1} σ^{−j} ζ has the same measure 1/k^n, i.e. µ assigns measure 1/k^n to each cylinder of length n. So µ and the Bernoulli (1/k, ..., 1/k)-measure agree on cylinders. By the Kolmogorov Extension Theorem, µ is the Bernoulli (1/k, ..., 1/k)-measure.
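The fact is easy to test numerically: a random probability vector of length ℓ always has entropy at most log ℓ, with the maximum attained only by the uniform vector. A small sketch (numpy assumed; ℓ = 4 and the random seed are arbitrary choices):

import numpy as np

def H(p):
    # Entropy of a probability vector, with the convention 0 log 0 = 0.
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(0)
l = 4
for _ in range(5):
    print(H(rng.dirichlet(np.ones(l))), "<=", np.log(l))
print("uniform:", H(np.full(l, 1.0 / l)), "= log l =", np.log(l))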

The Parry measure

Let A be an irreducible 0-1 matrix with corresponding shift of finite type Σ_A. We show how to construct the measure of maximal entropy.

Theorem (Perron-Frobenius)
Let A be a non-negative irreducible matrix. Then

- there is a positive maximal eigenvalue λ > 0 s.t. all other eigenvalues satisfy |λ_j| < λ; moreover, λ is simple;

- there are positive left- and right-eigenvectors u = (u_1, ..., u_k), v = (v_1, ..., v_k)^T, with Σ u_i = Σ v_i = 1, s.t. uA = λu, Av = λv.

Apply Perron-Frobenius to A and define

P_{i,j} = A_{i,j} v_j / (λ v_i),   p_i = u_i v_i / c,   where c = Σ_{j=1}^{k} u_j v_j.

Then P is stochastic and pP = p. We define the Parry measure to be the Markov measure

µ[i_0, i_1, ..., i_n] = p_{i_0} P_{i_0,i_1} ⋯ P_{i_{n−1},i_n}.
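A sketch of this construction in code (numpy assumed; the golden mean matrix is again an arbitrary illustrative choice, and extracting the Perron eigendata via numpy's generic eigensolver is a shortcut of this sketch, not part of the lecture):

import numpy as np

A = np.array([[1, 1],
              [1, 0]], dtype=float)

def perron(M):
    # Largest real eigenvalue of M and a positive eigenvector, normalised to sum 1.
    w, V = np.linalg.eig(M)
    i = np.argmax(w.real)
    vec = np.abs(V[:, i].real)
    return w[i].real, vec / vec.sum()

lam, v = perron(A)    # right eigenvector: A v = lam v
_, u = perron(A.T)    # left eigenvector:  u A = lam u

P = A * v[None, :] / (lam * v[:, None])  # P[i,j] = A[i,j] v[j] / (lam v[i])
p = u * v / (u * v).sum()                # p[i] = u[i] v[i] / c

print("row sums of P:", P.sum(axis=1))   # all 1: P is stochastic
print("p P =", p @ P, " p =", p)         # equal: p is P-stationary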

Recall that for a Markov measure µ given by the stochastic matrix P we have

h_µ(σ) = −Σ_{i,j} p_i P_{i,j} log P_{i,j}.

It is an easy check that the Parry measure µ has h_µ(σ) = log λ.

We already know that the topological entropy of σ is log λ. Hence the Parry measure is a measure of maximal entropy.

Proposition
Let A be an irreducible 0-1 matrix with corresponding shift of finite type Σ_A. Then the Parry measure is the unique measure of maximal entropy.
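Continuing the sketch, one can verify numerically that the Parry measure attains entropy log λ (a self-contained repeat of the construction above, under the same assumptions):

import numpy as np

A = np.array([[1, 1],
              [1, 0]], dtype=float)

def perron(M):
    w, V = np.linalg.eig(M)
    i = np.argmax(w.real)
    vec = np.abs(V[:, i].real)
    return w[i].real, vec / vec.sum()

lam, v = perron(A)
_, u = perron(A.T)
P = A * v[None, :] / (lam * v[:, None])
p = u * v / (u * v).sum()

# h_mu(sigma) = -sum_{i,j} p_i P_ij log P_ij, with the convention 0 log 0 = 0.
logP = np.zeros_like(P)
logP[P > 0] = np.log(P[P > 0])
h = -(p[:, None] * P * logP).sum()
print("h =", h, " log lambda =", np.log(lam))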

Towards thermodynamic formalism

Many other dynamical systems have measures of maximal entropy. For example, Lebesgue measure is the unique measure of maximal entropy for a linear hyperbolic toral automorphism.

If the dynamical system T is 'hyperbolic' in an appropriate sense (this includes Anosov diffeomorphisms, Axiom A diffeomorphisms on basic sets such as the Smale horseshoe, and, in continuous time, geodesic flows on compact negatively curved Riemannian manifolds), then there is a unique measure of maximal entropy.

These measures of maximal entropy can often be related to the spectral properties (= the maximal eigenvalue) of an associated operator. We will discuss this further in the next lecture.