
Fractal Intersections and Products via Algorithmic Dimension

Neil Lutz

arXiv:1612.01659v6 [cs.CC] 1 Mar 2021

Abstract

Algorithmic fractal dimensions quantify the algorithmic information density of individual points and may be defined in terms of Kolmogorov complexity. This work uses these dimensions to bound the classical Hausdorff and packing dimensions of intersections and Cartesian products of fractals in Euclidean spaces. This approach shows that two prominent, fundamental results about the dimension of Borel or analytic sets also hold for arbitrary sets.

1 Introduction

Classical fractal dimensions, among which Hausdorff dimension [12] and packing dimension [37] are the most important, refine notions of measure to quantitatively classify sets of measure 0. In 2000, J. H. Lutz [15] showed that Hausdorff dimension can be simply characterized using betting strategies called gales, and that this characterization can be effectivized in order to quantitatively classify non-random infinite data objects. The resulting algorithmic fractal dimensions quantify the algorithmic information density of individual points; they have been applied to multiple areas of computer science and have proved especially useful in algorithmic information theory [25].

The connection between algorithmic and classical dimensions has more recently been exploited in the other direction, i.e., to apply algorithmic information theoretic methods and intuition to classical fractal geometry (e.g., [28, 2]). J. H. Lutz and N. Lutz [16] showed that the point-to-set principle, stated here as Theorem 7, characterizes the classical Hausdorff dimension of any set in R^n in terms of the algorithmic dimensions of its individual points. This principle gives rise to a new, pointwise technique for dimensional lower bounds and, as a proof of concept, was used to give an algorithmic information theoretic proof of Davies's 1971 theorem [8] stating that every Kakeya set in R^2 has Hausdorff dimension 2. This technique has since been used by N. Lutz and Stull [18] to make new progress on a problem in classical fractal geometry by deriving an improved lower bound on the Hausdorff dimension of generalized Furstenberg sets, as defined by Molter and Rela [26].

The same algorithmic dimensional technique is applied here to intersections and products of fractals, yielding two main results. First, we extend the following intersection formula, previously shown to hold when E and F are Borel sets [11], to arbitrary sets:

Theorem 1. For all E, F ⊆ R^n, and for (Lebesgue) almost all z ∈ R^n,

    dim_H(E ∩ (F + z)) ≤ max{0, dim_H(E × F) − n},

where F + z = {x + z : x ∈ F}.

This result is closely related to Marstrand's slicing theorem, as stated in the excellent recent book by Bishop and Peres [5].(1) The formula is illustrated in Figure 1.

Second, we prove that the following characterization of packing dimension, previously known to hold when E is an analytic set, holds for arbitrary sets:

Theorem 2. For every set E ⊆ R^n,

    dim_P(E) = sup_{F⊆R^n} ( dim_H(E × F) − dim_H(F) ).

(1) The proof given there assumes that a set is Borel, but this assumption was inadvertently omitted from the theorem statement [3].

Figure 1: Let E and F both be Koch snowflakes, which have Hausdorff dimension log_3 4 ≈ 1.26. Left: Theorem 1 states that, for almost all translation parameters z ∈ R^2, the Hausdorff dimension of the intersection E ∩ (F + z) is at most 2 log_3 4 − 2 ≈ 0.52. Right: For a measure zero set of translations, the intersection may have Hausdorff dimension as large as that of the original sets. Note that Koch snowflakes are Borel sets, so the new generality introduced by Theorem 1 is not required for this example.
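The numerical claims in the Figure 1 caption are quick to verify. The following snippet (a reader's sanity check, not part of the paper's argument) computes the dimension log_3 4 of the Koch snowflake boundary and the intersection bound 2·log_3 4 − 2 from Theorem 1; it assumes dim_H(E × F) = 2 log_3 4 here, which holds because Hausdorff and packing dimensions coincide for this self-similar example.

```python
import math

# Hausdorff dimension of the Koch snowflake boundary: log_3 4,
# since each segment is replaced by 4 copies scaled by 1/3.
dim_koch = math.log(4) / math.log(3)

# Theorem 1's bound for E = F = Koch snowflake in R^2 (n = 2):
# dim_H(E ∩ (F + z)) <= max{0, dim_H(E × F) - n}.
bound = max(0.0, 2 * dim_koch - 2)

print(round(dim_koch, 4))  # ≈ 1.2619
print(round(bound, 4))     # ≈ 0.5237
```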

Our approach also yields a simplified proof of Marstrand's [19] product formula for general sets:

Theorem 3 ([19]). For all E ⊆ R^m and F ⊆ R^n,

    dim_H(E) + dim_H(F) ≤ dim_H(E × F).

There are previously known analogues of Theorems 1 and 3 for packing dimension [37, 10], and we use symmetric arguments to derive these as well. These results are included here to showcase the versatility of this technique and its ability to capture the exact duality between Hausdorff and packing dimensions.

2 Classical Fractal Dimensions

We begin by stating classical, measure-theoretic definitions of the two most well-studied notions of fractal dimension, Hausdorff dimension and packing dimension. These definitions are included here for completeness but are not used directly in the remainder of this work; we will instead apply equivalent characterizations in terms of algorithmic information, as described in Section 3.

Definition 1 ([12]). For E ⊆ R^n, let 𝒰_δ(E) be the collection of all countable covers of E by sets of positive diameter at most δ. For all s ≥ 0, let

    H^s_δ(E) = inf { Σ_{i∈N} diam(U_i)^s : {U_i}_{i∈N} ∈ 𝒰_δ(E) }.

The s-dimensional Hausdorff (outer) measure of E is

    H^s(E) = lim_{δ→0+} H^s_δ(E),

and the Hausdorff dimension of E is

    dim_H(E) = inf { s > 0 : H^s(E) = 0 }.

Three desirable properties have made dim_H the most standard notion of fractal dimension since it was introduced by Hausdorff in 1919 [12]. First, it is defined on every set in R^n. Second, it is monotone: if E ⊆ F, then dim_H(E) ≤ dim_H(F). Third, it is countably stable: if E = ⋃_{i∈N} E_i, then dim_H(E) = sup_{i∈N} dim_H(E_i). These three properties also hold for packing dimension, which was defined much later, independently by Tricot [37] and by Sullivan [36].
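The covering idea behind Definition 1 can be made concrete numerically. Box-counting dimension upper-bounds Hausdorff dimension and coincides with it for well-behaved self-similar sets such as the middle-thirds Cantor set, whose dimension is log_3 2 ≈ 0.6309. A minimal illustrative sketch (standard facts about the Cantor set; not from the paper):

```python
from fractions import Fraction
import math

def cantor_box_count(k):
    """Count side-3^(-k) grid boxes meeting the level-k middle-thirds Cantor set."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(k):
        nxt = []
        for a, b in intervals:
            w = (b - a) / 3
            nxt.append((a, a + w))   # keep left third
            nxt.append((b - w, b))   # keep right third
        intervals = nxt
    # Each remaining interval aligns exactly with one grid box of side 3^(-k).
    return len({a * 3**k for a, _ in intervals})

k = 8
est = math.log(cantor_box_count(k)) / math.log(3**k)  # log N(δ) / log(1/δ)
print(round(est, 4))  # ≈ log 2 / log 3 ≈ 0.6309
```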

Definition 2 ([37, 36]). For all x ∈ R^n and ρ > 0, let B_ρ(x) denote the open ball of radius ρ and center x. For all E ⊆ R^n, let 𝒱_δ(E) be the class of all countable collections of pairwise disjoint open balls with centers in E and diameters at most δ. That is, for every i ∈ N, we have V_i = B_{ρ_i}(x_i) for some x_i ∈ E and ρ_i ∈ [0, δ/2], and for every j ≠ i, V_i ∩ V_j = ∅. For all s ≥ 0, define

    P^s_δ(E) = sup { Σ_{i∈N} diam(V_i)^s : {V_i}_{i∈N} ∈ 𝒱_δ(E) },

and let

    P^s_0(E) = lim_{δ→0+} P^s_δ(E).

The s-dimensional packing (outer) measure of E is

    P^s(E) = inf { Σ_{i∈N} P^s_0(E_i) : E ⊆ ⋃_{i∈N} E_i },

and the packing dimension of E is

    dim_P(E) = inf { s : P^s(E) = 0 }.

Notice that defining packing dimension in this way requires an extra step of optimization compared to Hausdorff dimension. More properties and details about classical fractal dimensions may be found in standard references such as [23, 11, 35].
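The packing pre-measure can be probed the same way. At stage k of the middle-thirds Cantor construction there are 2^k pairwise disjoint balls (here, intervals) of diameter 3^(-k) centered in the set, so Σ_i diam(V_i)^s = (2·3^(-s))^k; this stays bounded exactly when s ≥ log_3 2, locating the packing dimension of the Cantor set. A small illustration (standard Cantor-set facts; not from the paper):

```python
def packing_sums(s, kmax=12):
    """Sum of diam(V_i)^s over 2^k disjoint balls of diameter 3^(-k),
    one centered in each level-k Cantor interval: (2 * 3^(-s))^k."""
    return [(2 * 3 ** (-s)) ** k for k in range(1, kmax + 1)]

low = packing_sums(0.5)   # s below log_3 2 ≈ 0.6309: sums grow without bound
high = packing_sums(0.7)  # s above log_3 2: sums shrink toward 0

assert all(b > a for a, b in zip(low, low[1:]))    # increasing
assert all(b < a for a, b in zip(high, high[1:]))  # decreasing
```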

3 Algorithmic Fractal Dimensions

This section defines algorithmic fractal dimensions in terms of algorithmic information, i.e., Kolmogorov complexity. We also discuss some properties of these dimensions, including their relationships to classical Hausdorff and packing dimensions.

3.1 Kolmogorov Complexity

Kolmogorov complexity quantifies the incompressibility of finite data objects.

Definition 3 (see [14]). The (prefix-free) conditional Kolmogorov complexity of a string σ ∈ {0,1}* given another string τ ∈ {0,1}* is the length of the shortest binary program that outputs σ when given τ as an input. Formally, it is

    K(σ|τ) = min_{π∈{0,1}*} { |π| : U(π, τ) = σ },

where U is a fixed universal Turing machine that is prefix-free in its first input and |π| denotes the length of the string π.

The Kolmogorov complexity of σ ∈ {0,1}* is the conditional Kolmogorov complexity of σ given λ, the empty string:

    K(σ) = K(σ|λ).

This quantity is also called the algorithmic information content of σ. The symmetry of algorithmic information (Levin and Kolmogorov, see [38]) is an essential property of conditional Kolmogorov complexity stating that for all σ, τ ∈ {0,1}*,

    K(στ) = K(σ|τ) + K(τ) + O(log|στ|).   (1)

The definitions in this subsection are readily extended to discrete domains other than binary strings. For the purposes of this work, the complexity of rational points is most relevant. Hence, fix some standard binary encoding for n-tuples of rationals.
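Kolmogorov complexity is uncomputable, but its central intuition — incompressibility — can be illustrated with an ordinary compressor as a crude, computable upper-bound proxy for K. This is only a heuristic stand-in for intuition (zlib is far weaker than a universal machine) and is not used anywhere in the paper:

```python
import random
import zlib

def c_len(data: bytes) -> int:
    """Compressed length: a computable upper-bound proxy for K(data)."""
    return len(zlib.compress(data, 9))

rng = random.Random(0)
incompressible = bytes(rng.randrange(256) for _ in range(4096))  # pseudorandom
compressible = bytes(4096)                                       # all zeros

# Random-looking data has near-maximal proxy complexity; structured data does not.
assert c_len(incompressible) > 4000
assert c_len(compressible) < 100
```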

3.2 Algorithmic Dimensions of a Point

Using approximation by rationals, Kolmogorov complexity may be further extended to Euclidean spaces [17]. For every E ⊆ R^n, define

    K(E) = min { K(p) : p ∈ E ∩ Q^n },

where the minimum is understood to be infinite if E ∩ Q^n is empty. This is the length of the shortest program that outputs some rational point in E. The Kolmogorov complexity of x ∈ R^n at precision r ∈ N is given by

    K_r(x) = K(B_{2^{−r}}(x)),

the length of the shortest program that outputs any precision-r rational approximation of x. K_r(x) may also be described as the algorithmic information content of x at precision r, and similarly, K_r(x)/r is the algorithmic information density of x at precision r. This ratio does not necessarily converge as r → ∞, but it does have limiting bounds in [0, n]. These limits are used to define algorithmic dimensions.

Definition 4 ([15, 24, 1, 17]). Let x ∈ R^n.

1. The (algorithmic) dimension of x is

    dim(x) = liminf_{r→∞} K_r(x)/r.

2. The strong (algorithmic) dimension of x is

    Dim(x) = limsup_{r→∞} K_r(x)/r.
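The gap between dim and Dim can be seen in a toy model where K_r is idealized as the number of "random" bits among the first r bits of a sequence. For a sequence built from alternating random and all-zero blocks of doubling length, the density K_r/r oscillates forever, so the liminf and limsup in Definition 4 genuinely differ. A sketch of that oscillation (an idealized model, not actual Kolmogorov complexity):

```python
# Block i has length 2^i: "random" when i is even, all zeros when i is odd.
# Idealized model: K_r = number of random bits among the first r bits.
ratios = []
r = 0   # bits emitted so far
k = 0   # random bits emitted so far (proxy for K_r)
for i in range(60):
    length = 2 ** i
    if i % 2 == 0:
        k += length
    r += length
    ratios.append(k / r)   # density K_r / r at each block boundary

tail = ratios[-10:]
print(round(min(tail), 3), round(max(tail), 3))  # dim ≈ 1/3, Dim ≈ 2/3
```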

These dimensions were originally defined by J. H. Lutz [15] and Athreya, Hitchcock, J. H. Lutz, and Mayordomo [1], respectively. The original definitions were in Cantor space and used gales, which are betting strategies that generalize martingales, emphasizing the unpredictability of a sequence instead of its incompressibility. The Kolmogorov complexity characterizations and translation to Euclidean spaces are due to Mayordomo [24] and J. H. Lutz and Mayordomo [17]. Relationships between Hausdorff dimension and Kolmogorov complexity were also studied earlier by Ryabko [29, 30, 31, 32], Staiger [33, 34], and Cai and Hartmanis [6]; see Section 6 of [15] for a detailed discussion of this history.

We will use the fact that these dimensions are preserved by sufficiently well-behaved functions, namely bi-Lipschitz computable functions.

Lemma 4 ([27, 7]). If f : R^m → R^n is computable and bi-Lipschitz, then dim(x) = dim(f(x)) and Dim(x) = Dim(f(x)) for all x ∈ R^m.

It will sometimes be convenient to identify a Euclidean point x ∈ R^n with an infinite binary sequence that interleaves the binary expansions of x's coordinates to the right of the binary point. That is, if x = (x^1, x^2, ..., x^n), where each x^i = x^i_{−k_i} x^i_{1−k_i} ... x^i_0 . x^i_1 x^i_2 ..., for some k_i ∈ N, then we associate x with the binary sequence

    ẋ = x^1_1 x^2_1 ... x^{n−1}_1 x^n_1 x^1_2 x^2_2 ... x^{n−1}_2 x^n_2 ....

Discarding the bits to the left of the binary point is convenient and will affect the complexity of prefixes of ẋ by only a constant amount. Given an infinite binary sequence y = y_1 y_2 y_3 ... and a, b ∈ N, we let y[a : b] denote y_{a+1} y_{a+2} ... y_{b−1} y_b. If a ≥ b, then y[a : b] is the empty string λ. Thus, for any point x ∈ [0, 1]^n and precision parameter r ∈ N, the string ẋ[0 : nr] specifies a dyadic point that is within distance 2^{−r}√n of x. Naturally,

    K(ẋ[0 : nr]) = K_r(x) + O(log r).   (2)

(Throughout this work, the dimensions m and n of Euclidean spaces are treated as constants in asymptotic notation.)
See [18] for a detailed analysis of the relationship between these two quantities.
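The interleaved expansion ẋ and its prefixes are straightforward to compute for points with known coordinates. A small sketch (the function name is ours, not the paper's), using exact rational arithmetic to emit ẋ[0 : nr] for x ∈ [0, 1)^n:

```python
from fractions import Fraction

def interleaved_prefix(x, r):
    """Return the string x_dot[0 : n*r] for x in [0,1)^n: at each precision
    level k = 1..r, emit the k-th binary digit of every coordinate in turn."""
    fracs = [Fraction(v) for v in x]
    bits = []
    for _ in range(r):
        for i in range(len(fracs)):
            fracs[i] *= 2
            b = int(fracs[i] >= 1)   # next binary digit of coordinate i
            fracs[i] -= b
            bits.append(str(b))
    return "".join(bits)

# x = (1/2, 1/4): expansions .1000... and .0100..., interleaved pairwise.
print(interleaved_prefix((Fraction(1, 2), Fraction(1, 4)), 3))  # → 100100
```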

3.3 Conditional and Relative Dimensions

By analogy with Definition 4, J. H. Lutz and N. Lutz [16] defined conditional dimensions. For every E ⊆ R^m and F ⊆ R^n, define

    K(E|F) = max_{q∈F∩Q^n} min_{p∈E∩Q^m} K(p|q),

where this quantity is understood to be infinite if E ∩ Q^m is empty and K(E) if F ∩ Q^n is empty. This is the length of the shortest program that outputs some rational point in E when given the "least helpful" rational point from F as an input. The conditional Kolmogorov complexity of x ∈ R^m given y ∈ R^n at precision r ∈ N is given by

    K_r(x|y) = K(B_{2^{−r}}(x) | B_{2^{−r}}(y)),

the length of the shortest program that outputs any precision-r rational approximation of x, given an adversarially selected precision-r rational approximation of y. These quantities can also be put in terms of binary expansions with only logarithmic additive error [18]: For all x ∈ R^m, y ∈ R^n, and r ∈ N,

    K(ẋ[0 : mr] | ẏ[0 : nr]) = K_r(x|y) + O(log r).   (3)

Definition 5 ([16]). Let x ∈ R^m and y ∈ R^n. The lower and upper conditional dimensions of x given y are

    dim(x|y) = liminf_{r→∞} K_r(x|y)/r   and   Dim(x|y) = limsup_{r→∞} K_r(x|y)/r,

respectively.

Conditional dimensions obey the following chain rule for dimensions, which plays a key role in the proofs of our main results.

Theorem 5 ([16]). For all x ∈ R^m and y ∈ R^n,

    dim(x|y) + dim(y) ≤ dim(x, y)
                      ≤ Dim(x|y) + dim(y)
                      ≤ Dim(x, y)
                      ≤ Dim(x|y) + Dim(y).

In addition to conditional dimensions, which are based on bounded-precision access to the given point y, we consider relativized dimensions, which allow for arbitrary access to a given point or set. By making the fixed universal machine U an oracle machine, the algorithmic information quantities above may be defined relative to any oracle A ⊆ N. The definitions of K^A(σ), K^A(σ|τ), K^A_r(x), K^A_r(x|y), dim^A(x), Dim^A(x), dim^A(x|y), and Dim^A(x|y) all exactly mirror their unrelativized versions, except that U is permitted to query membership in A as a computational step. For y ∈ R^n, we let A_y ⊆ N be the set defined by i ∈ A_y ⟺ ẏ_i = 1, and we write dim^y(x) and Dim^y(x) as shorthand for dim^{A_y}(x) and Dim^{A_y}(x).

Relativization and conditioning never increase the dimension of a point; the length of an instruction to ignore the additional information is asymptotically negligible. The following lemma formalizes the related intuition that, when approximating a point x, arbitrary access to another point y is asymptotically at least as helpful as bounded-precision access to y.
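The shape of the chain rule in Theorem 5 — joint information splitting into marginal plus conditional — can be illustrated with the same compression proxy used for K above: approximate K(x|y) by how much appending x to y adds to y's compressed length. This is a loose heuristic for intuition only; zlib is not a universal machine and nothing in the paper relies on it:

```python
import random
import zlib

def c_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def cond_proxy(x: bytes, y: bytes) -> int:
    """Proxy for K(x|y): extra compressed length of x once y is already known."""
    return c_len(y + x) - c_len(y)

rng = random.Random(1)
x = bytes(rng.randrange(256) for _ in range(2000))
y = bytes(rng.randrange(256) for _ in range(2000))

# Knowing an unrelated y barely helps: the proxy K(x|y) stays near K(x).
assert cond_proxy(x, y) > 1500
# Knowing x itself makes a second copy of x nearly free: K(x|x) is tiny.
assert cond_proxy(x, x) < 300
```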

Lemma 6 ([16]). If x ∈ R^m, y ∈ R^n, and r ∈ N, then K^y_r(x) ≤ K_r(x|y) + O(log r), and therefore dim^y(x) ≤ dim(x|y) and Dim^y(x) ≤ Dim(x|y).

3.4 Point-to-Set Principle

Algorithmic fractal dimensions were conceived as effective versions of classical Hausdorff dimension and packing dimension [15, 1]. J. H. Lutz and N. Lutz [16] established the following point-to-set principle, which uses relativization to precisely characterize the relationship of these dimensions to their non-algorithmic precursors.

Theorem 7 ([16]). For every E ⊆ R^n, the Hausdorff dimension and packing dimension of E are

1. dim_H(E) = min_{A⊆N} sup_{x∈E} dim^A(x),

2. dim_P(E) = min_{A⊆N} sup_{x∈E} Dim^A(x).

We call A ⊆ N a Hausdorff oracle for a set E ⊆ R^n if dim_H(E) = sup_{x∈E} dim^A(x), and a packing oracle for E if dim_P(E) = sup_{x∈E} Dim^A(x). Notice that because relativization cannot increase dimension, Theorem 7 implies that pairing a Hausdorff oracle for E with any other oracle results in another Hausdorff oracle for E, and similarly for packing oracles.

This theorem allows us to prove lower bounds on classical dimensions in a pointwise manner. In a typical proof using this technique, we apply Theorem 7 twice, in two different ways. For the sake of clarity about how it is being applied each time, we enumerate its immediate consequences in the following corollary.

Corollary 8. Let E ⊆ R^n.

1. There is an oracle A ⊆ N such that dim_H(E) = sup_{x∈E} dim^A(x).

2. For every A ⊆ N and every ε > 0, there is a point x ∈ E such that dim^A(x) > dim_H(E) − ε.

3. There is an oracle A ⊆ N such that dim_P(E) = sup_{x∈E} Dim^A(x).

4. For every A ⊆ N and every ε > 0, there is a point x ∈ E such that Dim^A(x) > dim_P(E) − ε.

Unlike earlier applications of this bounding technique [16, 18], the proofs in Sections 4 and 5 do not directly invoke Kolmogorov complexity; the only tools needed are Lemma 4, Theorem 5, Lemma 6, and Theorem 7 (in the form of Corollary 8). The proof of our product characterization of packing dimension in Section 6 employs both the point-to-set principle and a construction that explicitly uses Kolmogorov complexity.

4 Intersections of Fractals

In this section we prove Theorem 1. We then use a symmetric argument to give a new proof of the corresponding statement for packing dimension, which was originally proved by Falconer [10]. The case of Theorem 1 in which E, F ⊆ R^n are Borel sets was also proved by Falconer [11]. The special case where E ⊆ R^2 is a Borel set and F is a vertical line is Marstrand's slicing theorem [19, 5]. Closely related results, which also place restrictions on E and F, were proved by Mattila [21, 22] and Kahane [13].

Theorem 1. For all E, F ⊆ R^n, and for (Lebesgue) almost all z ∈ R^n,

    dim_H(E ∩ (F + z)) ≤ max{0, dim_H(E × F) − n},   (4)

where F + z = {x + z : x ∈ F}.

Proof. Let E, F ⊆ R^n and z ∈ R^n. If E ∩ (F + z) = ∅, then inequality (4) holds trivially, so assume that the intersection is nonempty. Applying Corollary 8(1), let A ⊆ N be a Hausdorff oracle for E × F, i.e.,

    dim_H(E × F) = sup_{(x,y)∈E×F} dim^A(x, y).   (5)

Fix ε > 0. Corollary 8(2), applied to E ∩ (F + z), gives a point x ∈ E ∩ (F + z) such that

    dim^{A,z}(x) ≥ dim_H(E ∩ (F + z)) − ε.   (6)

Since (x, x − z) ∈ E × F, we have

    dim_H(E × F) ≥ dim^A(x, x − z)                    (by equation (5))
                 = dim^A(x, z)                         (by Lemma 4)
                 ≥ dim^A(z) + dim^A(x | z)             (by Theorem 5 relative to A)
                 ≥ dim^A(z) + dim^{A,z}(x)             (by Lemma 6 relative to A)
                 ≥ dim^A(z) + dim_H(E ∩ (F + z)) − ε.  (by inequality (6))

Letting ε → 0, we have

    dim_H(E ∩ (F + z)) ≤ dim_H(E × F) − dim^A(z).

Thus, inequality (4) holds whenever dim^A(z) = n. In particular, it holds when z is Martin-Löf random relative to A, i.e., for Lebesgue almost all z ∈ R^n [14, 20].

The corresponding intersection formula for packing dimension has been shown for arbitrary E, F ⊆ R^n by Falconer [10]. That proof is not difficult or long, but an algorithmic dimensional proof is presented here as an instance where this technique applies symmetrically to both Hausdorff and packing dimension.

Theorem 9 ([10]). For all E, F ⊆ R^n, and for (Lebesgue) almost all z ∈ R^n,

    dim_P(E ∩ (F + z)) ≤ max{0, dim_P(E × F) − n}.

Proof. As in Theorem 1, we may assume that the intersection is nonempty. Apply Corollary 8(3) to find a packing oracle B ⊆ N for E × F, i.e.,

    dim_P(E × F) = sup_{(x,y)∈E×F} Dim^B(x, y)   (7)

and, given ε > 0, apply Corollary 8(4) to E ∩ (F + z) to find a point y ∈ E ∩ (F + z) satisfying

    Dim^{B,z}(y) ≥ dim_P(E ∩ (F + z)) − ε.   (8)

Then (y, y − z) ∈ E × F, and we may proceed much as before:

    dim_P(E × F) ≥ Dim^B(y, y − z)                    (by equation (7))
                 = Dim^B(y, z)                         (by Lemma 4)
                 ≥ dim^B(z) + Dim^B(y | z)             (by Theorem 5 relative to B)
                 ≥ dim^B(z) + Dim^{B,z}(y)             (by Lemma 6 relative to B)
                 ≥ dim^B(z) + dim_P(E ∩ (F + z)) − ε.  (by inequality (8))

Again, dim^B(z) = n for almost every z ∈ R^n, so this completes the proof.

5 Product Inequalities for Fractals

In this section we give new proofs of four known product inequalities for fractal dimensions. Inequality (9) below, which was stated in the introduction as Theorem 3, is due to Marstrand [19]. The other three inequalities are due to Tricot [37]. Reference [23] gives a more detailed account of this history.

Theorem 10 ([19, 37]). For all E ⊆ R^m and F ⊆ R^n,

    dim_H(E) + dim_H(F) ≤ dim_H(E × F)          (9)
                        ≤ dim_P(E) + dim_H(F)   (10)
                        ≤ dim_P(E × F)          (11)
                        ≤ dim_P(E) + dim_P(F).  (12)

When E and F are Borel sets, it is simple to prove inequality (9) by using Frostman's Lemma, but the argument for general sets using net measures is considerably more difficult [23, 9]. The new proof of inequality (9) given here applies to general sets but is similar in length to the previous proof for Borel sets. Although previous proofs of inequalities (10)–(12) for general sets were already short, we present new, algorithmic information-theoretic proofs of those inequalities. Notice the superficial resemblance of Theorem 10 to Theorem 5. This is not a coincidence; each inequality in Theorem 10 follows from the corresponding line in Theorem 5.

Proof. Corollary 8(1) gives a Hausdorff oracle for E × F, i.e., a set A ⊆ N such that

    dim_H(E × F) = sup_{z∈E×F} dim^A(z),   (13)

and, for every ε > 0, Corollary 8(2) gives points x ∈ E and y ∈ F such that

    dim^A(x) ≥ dim_H(E) − ε   (14)

and

    dim^{A,x}(y) ≥ dim_H(F) − ε.   (15)

Then we have

    dim_H(E × F) ≥ dim^A(x, y)                    (by equation (13))
                 ≥ dim^A(x) + dim^A(y | x)         (by Theorem 5 relative to A)
                 ≥ dim^A(x) + dim^{A,x}(y)         (by Lemma 6 relative to A)
                 ≥ dim_H(E) + dim_H(F) − 2ε.       (by inequalities (14) and (15))

Since ε > 0 was arbitrary, we conclude that inequality (9) holds.

For inequality (10), we use a packing oracle for E and a Hausdorff oracle for F. Apply Corollary 8(3) to find a set B ⊆ N such that

    dim_P(E) = sup_{z∈E} Dim^B(z),   (16)

and apply Corollary 8(1) to find a set C ⊆ N such that

    dim_H(F) = sup_{z∈F} dim^C(z).   (17)

Given ε > 0, apply Corollary 8(2) to find a point (u, v) ∈ E × F such that

    dim^{B,C}(u, v) ≥ dim_H(E × F) − ε.   (18)

Then we have

    dim_P(E) + dim_H(F) ≥ Dim^B(u) + dim^C(v)             (by equations (16) and (17))
                        ≥ Dim^{B,C}(u) + dim^{B,C}(v)      (relativization cannot increase dimension)
                        ≥ Dim^{B,C}(u | v) + dim^{B,C}(v)  (conditioning cannot increase dimension)
                        ≥ dim^{B,C}(u, v)                  (by Theorem 5 relative to B, C)
                        ≥ dim_H(E × F) − ε.                (by inequality (18))

Again, ε > 0 was arbitrary, so inequality (10) holds.

For inequalities (11) and (12), we use essentially the same arguments as above. We begin by finding a Hausdorff oracle for E and packing oracles for E × F and F. Apply Corollary 8(1) to find a set B′ ⊆ N such that

    dim_H(E) = sup_{z∈E} dim^{B′}(z),   (19)

and apply Corollary 8(3) to find sets A′, C′ ⊆ N such that

    dim_P(E × F) = sup_{z∈E×F} Dim^{A′}(z),   (20)

and

    dim_P(F) = sup_{z∈F} Dim^{C′}(z).   (21)

Given ε > 0, apply Corollary 8(2) to find a point y′ ∈ F such that

    dim^{A′}(y′) ≥ dim_H(F) − ε,   (22)

and apply Corollary 8(4) to find points x′, u′ ∈ E and v′ ∈ F such that

    Dim^{A′,y′}(x′) ≥ dim_P(E) − ε   (23)

and

    Dim^{B,C′}(u′, v′) ≥ dim_P(E × F) − ε,   (24)

where B is as in equation (16). Then we have

    dim_P(E) + dim_P(F) ≥ Dim^B(u′) + Dim^{C′}(v′)            (by equations (16) and (21))
                        ≥ Dim^{B,C′}(u′) + Dim^{B,C′}(v′)      (relativization cannot increase dimension)
                        ≥ Dim^{B,C′}(u′ | v′) + Dim^{B,C′}(v′) (conditioning cannot increase dimension)
                        ≥ Dim^{B,C′}(u′, v′)                   (by Theorem 5 relative to B, C′)
                        ≥ dim_P(E × F) − ε                     (by inequality (24))
                        ≥ Dim^{A′}(x′, y′) − ε                 (by equation (20))
                        ≥ Dim^{A′}(x′ | y′) + dim^{A′}(y′) − ε (by Theorem 5 relative to A′)
                        ≥ Dim^{A′,y′}(x′) + dim^{A′}(y′) − ε   (by Lemma 6 relative to A′)
                        ≥ dim_P(E) + dim_H(F) − 3ε.            (by inequalities (22) and (23))

Letting ε → 0 completes the proof.

6 Product Characterization of Packing Dimension

Theorem 10 is clearly tight in the sense that, for each of inequalities (9)–(12), there exist sets E and F that make equality hold; for example, let E = F = ∅. In fact, for inequalities (9), (11), and (12), an even stronger tightness property holds: For every set E ⊆ R^m, there is a set F (e.g., any singleton) such that equality holds. Does this latter property hold for inequality (10)? The answer is much less obvious, but Bishop and Peres [4] proved that for every analytic set E, there are compact sets F that make inequality (10) approach equality, in the sense that

    dim_P(E) = sup ( dim_H(E × F) − dim_H(F) ),

where the supremum is over all compact sets F ⊆ R^n. We now prove that for arbitrary E ⊆ R^n, there are (not necessarily compact) sets F ⊆ R^n that make inequality (10) approach equality.

Theorem 2. For every set E ⊆ R^n,

    dim_P(E) = sup_{F⊆R^n} ( dim_H(E × F) − dim_H(F) ).

Proof. Let E ⊆ R^n, and let ε > 0. By inequality (10), it suffices to show that there is some set F ⊆ R^n satisfying

    dim_P(E) ≤ dim_H(E × F) − dim_H(F) + ε.   (25)

We define the set F ⊆ R^n by

    F = { y ∈ R^n : dim(y) ≤ n − dim_P(E) + ε }.   (26)

Then by Theorem 7,

    dim_H(F) ≤ sup_{y∈F} dim(y) ≤ n − dim_P(E) + ε,

so it only remains to show that

    dim_H(E × F) ≥ n.   (27)

Corollary 8(1) tells us that there is a Hausdorff oracle for E × F, i.e., an oracle A ⊆ N such that

    dim_H(E × F) = sup_{(x,y)∈E×F} dim^A(x, y).   (28)

We now construct a point (x, y) ∈ E × F whose dimension relative to A is close to n. Let

    δ ∈ (0, ε/(2n + 1)).   (29)

By Corollary 8(4), there is some point x ∈ E such that

    Dim^A(x) > dim_P(E) − δ.   (30)

We want the point y ∈ F to complement x, in the sense that y's information density should be high at precisions where x's information density is low, so that the pair (x, y) can never have information density much lower than n. At the same time, in order for y to belong to F, y needs to have low information density — arbitrarily close to n − dim_P(E) + ε — at infinitely many precisions.

To this end, we construct y by appending zeros to its binary expansion when the information density of (x, y) exceeds n, and appending random bits when that information density drops below n. This decision — whether to append random bits or zeros — is made at each of an exponentially increasing sequence of precisions.

Formally, for each j ∈ N, let r_j = ⌊(1 + δ)^j⌋. Notice that r_{j+1} − r_j = δr_j + O(1) and log(r_j) = Θ(j). Define the point y ∈ [0, 1]^n by, for all j ∈ N,

    ẏ[nr_j : nr_{j+1}] = { u_j   if K^A_{r_j}(x, y) > nr_j
                         { v_j   otherwise,

where u_j = 0^{nr_{j+1} − nr_j} and v_j ∈ {0, 1}* is an (nr_{j+1} − nr_j)-bit string that is random relative to A, x, and ẏ[0 : nr_j], in the sense that K^{A,x}(v_j | ẏ[0 : nr_j]) ≥ nr_{j+1} − nr_j. Hence, for all r ≤ r_{j+1},

    K^{A,x}(v_j[0 : nr − nr_j] | ẏ[0 : nr_j]) = nr − nr_j + O(j).   (31)

Constructed in this way, the point y has two key properties. First, the dimension and strong dimension of (x, y) are both close to n. Second, access to information about A and x cannot affect the complexity of y very much. These properties are formalized in the following two lemmas.

Lemma 11. n − δn ≤ dim^A(x, y) ≤ Dim^A(x, y) ≤ n + 2δn.

Proof. First, we will prove by induction on j that there is a constant c such that, for all j ≥ 1 and all integer precisions r ∈ [r_{j−1}, r_j],

    (n − δn)r − cj² ≤ K^A_r(x, y) ≤ (n + 2δn)r + cj².   (32)

Recall that r_j = ⌊(1 + δ)^j⌋. We can choose c large enough to make inequality (32) hold for r = r_0 = 1. Fix j ≥ 1 and r ∈ [r_j, r_{j+1}], and let c be a sufficiently large constant satisfying inequality (32) for r = r_j. We consider two cases.

First, if K^A_{r_j}(x, y) ≤ nr_j, then ẏ[nr_j : nr_{j+1}] = v_j. Therefore, by equations (1) and (2), assuming that c is large enough to accommodate the implicit constants in each appearance of O(j), we have
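The zeros-vs-random decision rule above can be simulated with an idealized complexity model to watch Lemma 11 emerge. Below, n = 1, x is modeled as having constant information density d (so K_r(x) ≈ d·r), and K_{r_j}(x, y) is tracked as d·r_j plus the number of random bits placed into y so far. This is a toy model with assumed parameters, not part of the proof; it shows the joint density pinned near n = 1 while y's own density hovers near 1 − d, as membership in F requires:

```python
# Toy model of the construction: n = 1, dim(x) = Dim(x) = d, delta as in (29).
d, delta = 0.5, 0.05
R = 0                  # random bits placed into y so far (proxy for K_r(y))
joint, own = [], []    # densities K_{r_j}(x,y)/r_j and K_{r_j}(y)/r_j
prev = 1
for j in range(1, 400):
    r = int((1 + delta) ** j)       # r_j = floor((1+delta)^j)
    if r <= prev:
        continue
    # Decision at the previous checkpoint: zeros if joint complexity > n*prev,
    # otherwise append random bits over the interval [prev, r).
    if d * prev + R <= prev:
        R += r - prev
    prev = r
    joint.append((d * r + R) / r)
    own.append(R / r)

# Lemma 11's conclusion in the toy model: joint density stays near n = 1 ...
assert all(0.85 <= rho <= 1.15 for rho in joint[-50:])
# ... while y's density stays near 1 - d = n - dim_P(E) in this model.
assert all(abs(rho - (1 - d)) <= 0.2 for rho in own[-50:])
```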

    K^A_r(x, y) ≥ K^A_{r_j}(x, y) + K^A(v_j[0 : nr − nr_j] | (x, y)˙[0 : 2nr_j]) − O(j)
                ≥ K^A_{r_j}(x, y) + nr − nr_j − O(j)       (by equation (31))
                ≥ (n − δn)r_j − cj² + nr − nr_j − O(j)     (by inductive hypothesis)
                = nr − δnr_j − cj² − O(j)
                ≥ (n − δn)r − cj² − O(j)
                ≥ (n − δn)r − c·(j + 1)²

and

    K^A_r(x, y) ≤ K^A_{r_j}(x, y) + K^A((x, y)˙[2nr_j : 2nr]) + O(j)
                ≤ nr_j + K^A((x, y)˙[2nr_j : 2nr]) + O(j)
                ≤ nr_j + 2nr − 2nr_j + O(j)
                ≤ nr_j + 2n(1 + δ)r_j − 2nr_j + O(j)
                = (n + 2δn)r_j + O(j)
                ≤ (n + 2δn)r + c·(j + 1)².

Otherwise, if K^A_{r_j}(x, y) > nr_j, then ẏ[nr_j : nr_{j+1}] = u_j. Applying equations (1) and (2) and again assuming that c is sufficiently large to accommodate the constants in each O(j), we have

    K^A_r(x, y) ≥ K^A_{r_j}(x, y) − O(j)
                ≥ nr_j − O(j)
                ≥ (1 − δ²)nr_j − O(j)
                = (n − δn)(1 + δ)r_j − O(j)
                ≥ (n − δn)r − O(j)
                ≥ (n − δn)r − c·(j + 1)²

and

    K^A_r(x, y) ≤ K^A_{r_j}(x, y) + K^A(0^{nr−nr_j}) + K^A(ẋ[nr_j : nr]) + O(j)
                ≤ K^A_{r_j}(x, y) + nr − nr_j + O(j)
                ≤ (n + 2δn)r_j + cj² + nr − nr_j + O(j)    (by inductive hypothesis)
                ≤ (n + 2δn)r + c·(j + 1)².

Thus, by induction, there is a constant c such that (32) holds for all j ≥ 1 and all r ∈ [r_{j−1}, r_j]. Since j = O(log r), dividing by r and taking limits completes the proof of the lemma.

Lemma 12. For all r ∈ N, K_r(y) ≤ K^A_r(y | x) + O(log r).

Proof. First observe that for all j ∈ N,

    K(ẏ[nr_j : nr_{j+1}] | ẏ[0 : nr_j]) ≤ K^{A,x}(ẏ[nr_j : nr_{j+1}] | ẏ[0 : nr_j]) + O(j).   (33)

This is true because ẏ[nr_j : nr_{j+1}] ∈ {u_j, v_j},

    K(u_j) ≤ log|u_j| + 2 log log|u_j| + O(1) = O(j),

and

    K(v_j) ≤ |v_j| + 2 log|v_j| + O(1) = |v_j| + O(j),

which is at most O(j) greater than K^{A,x}(v_j | ẏ[0 : nr_j]) by inequality (31).

Now let r ∈ N, and let j ∈ N satisfy r_j ≤ r < r_{j+1}. For exactly the same reason as inequality (33),

    K(ẏ[nr_j : nr] | ẏ[0 : nr_j]) ≤ K^{A,x}(ẏ[nr_j : nr] | ẏ[0 : nr_j]) + O(j).   (34)

We now use inequalities (33) and (34) together with equation (2) and a chained application of the symmetry of information:

    K_r(y) = K(ẏ[0 : nr]) + O(j)
           = K(ẏ[nr_j : nr] | ẏ[0 : nr_j]) + Σ_{i=0}^{j−1} ( K(ẏ[nr_i : nr_{i+1}] | ẏ[0 : nr_i]) + O(i) ) + O(j)
           ≤ K^{A,x}(ẏ[nr_j : nr] | ẏ[0 : nr_j]) + Σ_{i=0}^{j−1} ( K^{A,x}(ẏ[nr_i : nr_{i+1}] | ẏ[0 : nr_i]) + O(i) ) + O(j)
           = K^{A,x}(ẏ[0 : nr]) + O(j²)
           = K^{A,x}_r(y) + O(log r)
           ≤ K^A_r(y | x) + O(log r),

where the last inequality is an application of Lemma 6 relative to A.

We will use two simple consequences of Lemma 12 in terms of dimension, as given in the following corollaries.

Corollary 13. dim(y) ≤ dim^A(y).

Proof. Dividing the conclusion of Lemma 12 by r and taking limits inferior gives dim(y) ≤ dim^{A,x}(y). Since relativization cannot increase dimension, dim^{A,x}(y) ≤ dim^A(y).

Corollary 14. Dim^A(x) ≤ Dim^A(x | y).

Proof. By symmetry of information,

    K^A_r(x) + K^A_r(y | x) ≤ K^A_r(y) + K^A_r(x | y) + O(log r).

KA(x) KA(x y)+ O(log r) . r ≤ r | Dividing by r and taking limits superior shows that DimA(x) DimA(x y). ≤ | We are now ready to complete the proof of Theorem 2. First,

    dim(y) ≤ dim^A(y)                        (by Corollary 13)
            ≤ Dim^A(x, y) − Dim^A(x | y)      (by Theorem 5 relative to A)
            ≤ Dim^A(x, y) − Dim^A(x)          (by Corollary 14)
            ≤ n + 2δn − Dim^A(x)              (by Lemma 11)
            < n + 2δn − dim_P(E) + δ          (by inequality (30))
            ≤ n − dim_P(E) + ε,               (by inequality (29))

so y ∈ F, and hence (x, y) ∈ E × F. Therefore, by equation (28) and Lemma 11,

    dim_H(E × F) ≥ dim^A(x, y) ≥ n − δn.

Letting δ → 0 gives inequality (27), completing the proof.

7 Conclusion

The applications of theoretical computer science to pure mathematics in this paper yielded significant generalizations to basic theorems on Hausdorff and packing dimensions, as well as simpler arguments for other such theorems. Understanding classical fractal dimensions as pointwise, algorithmic information theoretic quantities enables reasoning about them in a way that is both fine-grained and intuitive, and the proofs in this work are further evidence of the power and versatility of bounding techniques using the point-to-set principle (Theorem 7). In particular, Theorems 1 and 2 demonstrate that this approach can be used to strengthen the foundations of fractal geometry. Therefore, in addition to further applications of these techniques, broadening and refining our understanding of the relationship between classical geometric measure theory and Kolmogorov complexity is an appealing direction for future investigations.

References

[1] Krishna B. Athreya, John M. Hitchcock, Jack H. Lutz, and Elvira Mayordomo. Effective strong dimension in algorithmic information and computational complexity. SIAM Journal on Computing, 37(3):671–705, 2007. doi:10.1137/s0097539703446912.

[2] Verónica Becher, Jan Reimann, and Theodore A. Slaman. Irrationality exponent, Hausdorff dimension and effectivization. Monatshefte für Mathematik, 185(2):167–188, Feb 2018. doi:10.1007/s00605-017-1094-2.

[3] Christopher J. Bishop. Personal communication, April 27, 2017.

[4] Christopher J. Bishop and Yuval Peres. Packing dimension and Cartesian products. Transactions of the American Mathematical Society, 348:4433–4445, 1996. doi:10.1090/S0002-9947-96-01750-3.

[5] Christopher J. Bishop and Yuval Peres. Fractals in Probability and Analysis. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2016. doi:10.1017/9781316460238.

[6] Jin-Yi Cai and Juris Hartmanis. On Hausdorff and topological dimensions of the Kolmogorov complexity of the real line. Journal of Computer and System Sciences, 49(3):605–619, 1994. doi:10.1016/S0022-0000(05)80073-X.

[7] Adam Case and Jack H. Lutz. Mutual dimension. ACM Transactions on Computation Theory, 7(3):12, 2015. doi:10.1145/2786566.

[8] R. O. Davies. Some remarks on the Kakeya problem. Proceedings of the Cambridge Philosophical Society, 69:417–421, 1971. doi:10.1017/s0305004100046867.

[9] Kenneth J. Falconer. The Geometry of Fractal Sets. Cambridge University Press, 1985. doi:10.1017/cbo9780511623738.

[10] Kenneth J. Falconer. Sets with large intersection properties. Journal of the London Mathematical Society, 49(2):267–280, 1994. doi:10.1112/jlms/49.2.267.

[11] Kenneth J. Falconer. Fractal Geometry: Mathematical Foundations and Applications. Wiley, third edition, 2014. doi:10.1002/0470013850.

[12] Felix Hausdorff. Dimension und äusseres Mass. Mathematische Annalen, 79:157–179, 1919. doi:10.1007/978-3-642-59483-0_2.

[13] Jean-Pierre Kahane. Sur la dimension des intersections. In Jorge Alberto Barroso, editor, Aspects of Mathematics and its Applications, North-Holland Mathematical Library, 34, pages 419–430. Elsevier, 1986. doi:10.1016/s0924-6509(09)70272-7.

[14] Ming Li and Paul M. B. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, third edition, 2008. doi:10.1007/978-0-387-49820-1.

[15] Jack H. Lutz. The dimensions of individual strings and sequences. Information and Computation, 187(1):49–79, 2003. doi:10.1016/s0890-5401(03)00187-1.

[16] Jack H. Lutz and Neil Lutz. Algorithmic information, plane Kakeya sets, and conditional dimension. ACM Transactions on Computation Theory, 10(2):7:1–7:22, 2018. doi:10.1145/3201783.

[17] Jack H. Lutz and Elvira Mayordomo. Dimensions of points in self-similar fractals. SIAM Journal on Computing, 38(3):1080–1112, 2008. doi:10.1007/978-3-540-69733-6_22.

[18] Neil Lutz and D. M. Stull. Bounding the dimension of points on a line. Information and Computation, 275, 2020. doi:10.1016/j.ic.2020.104601.

[19] John M. Marstrand. Some fundamental geometrical properties of plane sets of fractional dimensions. Proceedings of the London Mathematical Society, 4(3):257–302, 1954. doi:10.1112/plms/s3-4.1.257.

[20] Per Martin-Löf. The definition of random sequences. Information and Control, 9(6):602–619, 1966. doi:10.1016/s0019-9958(66)80018-9.

[21] Pertti Mattila. Hausdorff dimension and capacities of intersections of sets in n-space. Acta Mathematica, 152:77–105, 1984. doi:10.1007/bf02392192.

[22] Pertti Mattila. On the Hausdorff dimension and capacities of intersections. Mathematika, 32:213–217, 1985. doi:10.1112/s0025579300011001.

[23] Pertti Mattila. Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability. Cambridge University Press, 1995. doi:10.1017/cbo9780511623813.

[24] Elvira Mayordomo. A Kolmogorov complexity characterization of constructive Hausdorff dimension. Information Processing Letters, 84(1):1–3, 2002. doi:10.1016/s0020-0190(02)00343-5.

[25] Elvira Mayordomo. Effective fractal dimension in algorithmic information theory. In S. Barry Cooper, Benedikt Löwe, and Andrea Sorbi, editors, New Computational Paradigms: Changing Conceptions of What is Computable, pages 259–285. Springer New York, 2008. doi:10.1007/978-0-387-68546-5_12.

[26] Ursula Molter and Ezequiel Rela. Furstenberg sets for a fractal set of directions. Proceedings of the American Mathematical Society, 140:2753–2765, 2012. doi:10.1090/s0002-9939-2011-11111-0.

[27] Jan Reimann. Computability and Fractal Dimension. PhD thesis, Heidelberg University, 2004. doi:10.11588/heidok.00005543.

[28] Jan Reimann. Effectively closed sets of measures and randomness. Annals of Pure and Applied Logic, 156(1):170–182, 2008. doi:10.1016/j.apal.2008.06.015.

[29] Boris Ryabko. Coding of combinatorial sources and Hausdorff dimension. Soviet Mathematics Doklady, 30:219–222, 1984.

[30] Boris Ryabko. Noiseless coding of combinatorial sources. Problems of Information Transmission, 22:170–179, 1986.

[31] Boris Ryabko. Algorithmic approach to the prediction problem. Problems of Information Transmission, 29:186–193, 1993.

[32] Boris Ryabko. The complexity and effectiveness of prediction algorithms. Journal of Complexity, 10(3):281–295, 1994. doi:10.1006/jcom.1994.1015.

[33] Ludwig Staiger. Kolmogorov complexity and Hausdorff dimension. Information and Computation, 103:159–194, 1993. doi:10.1007/3-540-51498-8_42.

[34] Ludwig Staiger. A tight upper bound on Kolmogorov complexity and uniformly optimal prediction. Theory of Computing Systems, 31:215–229, 1998. doi:10.1007/s002240000086.

[35] Elias M. Stein and Rami Shakarchi. Real Analysis: Measure Theory, Integration, and Hilbert Spaces. Princeton Lectures in Analysis. Princeton University Press, 2005.

[36] Dennis Sullivan. Entropy, Hausdorff measures old and new, and limit sets of geometrically finite Kleinian groups. Acta Mathematica, 153(1):259–277, 1984. doi:10.1007/bf02392379.

[37] Claude Tricot. Two definitions of fractional dimension. Mathematical Proceedings of the Cambridge Philosophical Society, 91(1):57–74, 1982. doi:10.1017/s0305004100059119.

[38] A. K. Zvonkin and L. A. Levin. The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms. Russian Mathematical Surveys, 25:83–124, 1970. doi:10.1070/RM1970v025n06ABEH001269.
