Dimension and Entropy

§7. Dimension and Entropy. This section will first describe several possible definitions for the concept of "dimension", and then use related methods to define the "topological entropy" of a compact dynamical system.

§7A. Box Dimension. There are several possible real valued invariants for a compact metric space $X$ which can be called the "dimension" of $X$. Let us first describe two dimension functions which measure how efficiently $X$ can be covered by balls (or boxes) of equal size. Following Falconer, I will call these the upper and lower "box-counting dimension", or briefly box dimension, of $X$. The presentation will depend on some elementary inequalities, which will also be needed in §7C on topological entropy.

Remarks. This concept of metric dimension has a long history, and has been studied under a variety of names. The oldest form of the definition, based on studying the volume of the $\epsilon$-neighborhood of $X$ as a function of $\epsilon$, was introduced by Bouligand, generalizing earlier work by Steiner and Minkowski. (See Problem 7-d below.) More modern forms of the definition were introduced by Pontryagin and Schnirelman, and later by Kolmogorov and Tihomirov. (Compare Mandelbrot, §39.) It should not be confused with Hausdorff dimension or with topological dimension. The Hausdorff dimension, introduced by Felix Hausdorff in 1919, is also a real valued metric invariant, but measures how efficiently $X$ can be covered by balls of varying size. It coincides with the box dimension in many special cases, but is better behaved in general. The topological dimension, introduced by Lebesgue, Brouwer, Menger and Urysohn, is quite different, being integer valued, and invariant under arbitrary homeomorphisms. These other concepts of dimension will be discussed in §7B. (For further information, see Edgar, and Falconer, as well as the classical book of Hurewicz and Wallman.)

Let $X$ be compact metric. Suppose that we can only distinguish points of $X$ to an accuracy of $\epsilon$. How many distinct points can we distinguish? Here are three possible definitions. The distance between points of $X$ will be written as $d(x, x')$.

By the $\epsilon$-covering number $C_\epsilon(X)$ will be meant the smallest number $k \ge 0$ so that $X$ can be covered by sets $X_1, \dots, X_k$ of diameter $\mathrm{diam}(X_i) \le \epsilon$.

By the $\epsilon$-separated number $S_\epsilon(X)$ will be meant the largest cardinality of a subset $\{x_1, \dots, x_\ell\} \subset X$ such that $d(x_h, x_j) > \epsilon$ for $h \ne j$.

Figure 33. A set $X \subset \mathbb{R}^2$, covered by $B_\epsilon(X) = 13$ $\epsilon$-boxes. This $\epsilon$-box number $B_\epsilon(X)$ is relatively easy to compute, while the numbers $S_\epsilon(X)$ and $C_\epsilon(X)$ are more difficult to compute.

Now suppose that $X$ is contained in the Euclidean space $\mathbb{R}^n$. Then the $\epsilon$-box number $B_\epsilon(X)$ is defined as follows. Cover $\mathbb{R}^n$ by countably many disjoint "half-open boxes"
$$\mathrm{box}_\epsilon(i_1, \dots, i_n) \ =\ [\,i_1\epsilon,\ (i_1+1)\epsilon\,) \times \cdots \times [\,i_n\epsilon,\ (i_n+1)\epsilon\,)$$
of edge length $\epsilon$, indexed by $n$-tuples of integers. Define $B_\epsilon(X)$ to be the number of these boxes $\mathrm{box}_\epsilon(i_1, \dots, i_n)$ which intersect $X$. Evidently this is a quantity which is particularly well adapted to computer calculations.

An easy compactness argument shows that the number $C_\epsilon(X)$ is always finite. The number $S_\epsilon(X)$ is also finite, with
$$C_{2\epsilon}(X) \ \le\ S_\epsilon(X) \ \le\ C_\epsilon(X)\,. \tag{7:1}$$
For given sets $X_1 \cup \dots \cup X_k = X$ and $\{x_1, \dots, x_\ell\} \subset X$ as above, we have $\ell \le k$ since each $X_i$ can contain at most one $x_j$, and the left hand inequality is true since the closed $\epsilon$-neighborhoods of the $x_j$ are sets of diameter $\le 2\epsilon$ covering $X$. Similarly, note that
$$C_{\epsilon\sqrt{n}}(X) \ \le\ B_\epsilon(X) \ \le\ 2^n\, C_\epsilon(X)\,. \tag{7:2}$$
The left hand inequality is true since each such $\epsilon$-box has diameter $\epsilon\sqrt{n}$, and the right hand inequality since a set of diameter $\le \epsilon$ can intersect at most $2^n$ such $\epsilon$-boxes.
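Since computing $B_\epsilon(X)$ only requires rounding coordinates down to the nearest multiple of $\epsilon$, the box count really is easy to produce by machine. The following sketch is an illustration of this remark and is not part of the original notes; it assumes $X$ is represented by a finite sample of points, uses NumPy, and the function names are my own.

```python
import numpy as np

def box_count(points, eps):
    """Number of half-open eps-boxes [i1*eps,(i1+1)*eps) x ... x [in*eps,(in+1)*eps)
    that meet the finite sample `points` (one point per row)."""
    points = np.asarray(points, dtype=float)
    box_indices = np.floor(points / eps).astype(np.int64)  # integer n-tuple of the box containing each point
    return len({tuple(idx) for idx in box_indices})         # number of distinct boxes hit

if __name__ == "__main__":
    # Finite sample of the middle-thirds Cantor set: the 2**10 left endpoints of the
    # depth-10 intervals, i.e. numbers whose first ten ternary digits are 0 or 2.
    depth = 10
    sample = np.array([[sum(2 * ((m >> j) & 1) * 3.0 ** -(j + 1) for j in range(depth))]
                       for m in range(2 ** depth)])
    for k in range(1, 8):
        eps = 3.0 ** -k
        ratio = np.log(box_count(sample, eps)) / np.log(1.0 / eps)
        print(f"eps = 3^-{k}:  log B_eps / log(1/eps) = {ratio:.4f}")
    # Each printed ratio comes out near log 2 / log 3 ~ 0.6309; these ratios are
    # exactly the quantities studied in the paragraphs that follow.
```

Note that for a sample of limited resolution the count is only meaningful for $\epsilon$ well above the sample spacing, which is why the loop stops at $\epsilon = 3^{-7}$.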
How rapidly do the numbers $C_\epsilon(X)$ or $S_\epsilon(X)$ or $B_\epsilon(X)$ grow as $\epsilon$ tends to zero? For many interesting examples, these numbers grow roughly like a power of $1/\epsilon$. In other words, the logarithms
$$\log C_\epsilon(X) \ \approx\ \log S_\epsilon(X) \ \approx\ \log B_\epsilon(X)$$
are roughly linear functions of $\log(1/\epsilon)$, so that the ratios
$$\frac{\log C_\epsilon(X)}{\log(1/\epsilon)} \ \approx\ \frac{\log S_\epsilon(X)}{\log(1/\epsilon)} \ \approx\ \frac{\log B_\epsilon(X)}{\log(1/\epsilon)}$$
converge to a common limit as $\epsilon \to 0$. This limit does not always exist. (See Lemma 7.2 below.) However we can always consider the $\liminf$, which we write as
$$\dim_B^-(X) \ =\ \liminf_{\epsilon \to 0} \frac{\log C_\epsilon(X)}{\log(1/\epsilon)} \ =\ \liminf_{\epsilon \to 0} \frac{\log S_\epsilon(X)}{\log(1/\epsilon)}\,,$$
as well as the $\limsup$
$$\dim_B^+(X) \ =\ \limsup_{\epsilon \to 0} \frac{\log C_\epsilon(X)}{\log(1/\epsilon)} \ =\ \limsup_{\epsilon \to 0} \frac{\log S_\epsilon(X)}{\log(1/\epsilon)}\,.$$
Here the equalities on the right follow easily from formula (7:1) above. Similarly, if $X$ is contained in the Euclidean space $\mathbb{R}^n$, then it follows from (7:2) that
$$\dim_B^-(X) \ =\ \liminf_{\epsilon \to 0} \frac{\log B_\epsilon(X)}{\log(1/\epsilon)}\,, \qquad \dim_B^+(X) \ =\ \limsup_{\epsilon \to 0} \frac{\log B_\epsilon(X)}{\log(1/\epsilon)}\,.$$
Furthermore, in this case it is easy to check that $\dim_B^+(X) \le n$. In all cases, note that
$$0 \ \le\ \dim_B^-(X) \ \le\ \dim_B^+(X) \ \le\ \infty$$
provided that $X \ne \emptyset$.

Definition. Whether or not $X$ is contained in $\mathbb{R}^n$, we will call $\dim_B^-(X)$ the lower box dimension and $\dim_B^+(X)$ the upper box dimension of $X$. If $\dim_B^-(X) = \dim_B^+(X)$, then we will call this common value simply the box dimension $\dim_B(X)$. (For a well defined invariant intermediate between the upper and lower box dimensions, see Problem 7-e.)

In terms of information theory, the number $\log B_\epsilon(X)$ can be described as the quantity of information needed to specify the coordinates of a point of $X$ to within an accuracy of $\epsilon$. If we use logarithms to the base two, so as to measure information in bits, and set $\epsilon = 1/2^k$, then the statement that $\log_2 B_\epsilon(X)/\log_2(1/\epsilon) \approx \dim_B(X)$ means that we need roughly $k\,\dim_B(X)$ bits of information in order to specify a point of $X$ to an accuracy of $1/2^k$.

§7B. Other Definitions of Dimension. The box dimension is a rather crude invariant, well suited to empirical studies. The Hausdorff dimension is a more sophisticated invariant. For our purposes, it can be defined as follows.

Definition. The Hausdorff dimension $\dim_H(X)$ of a (not necessarily compact) metric space $X$ is the infimum of real numbers $\delta \ge 0$ with the following property: For any $\epsilon > 0$, $X$ can be covered by at most countably many sets $\{A_i\}$ with $\mathrm{diam}(A_i) < \epsilon$ and with
$$\sum_i \mathrm{diam}(A_i)^\delta \ <\ \epsilon\,.$$
If there is no such number $\delta$, then by definition $\dim_H(X) = +\infty$.

Closely associated is the concept of Hausdorff measure. For an arbitrary subset $S \subset X$ and an arbitrary real number $\delta \ge 0$, the $\delta$-dimensional outer Hausdorff measure $\eta_\delta(S)$ is defined by the formula
$$\eta_\delta(S) \ =\ \lim_{\epsilon \to 0}\, \inf\Bigl\{\ \sum_i \mathrm{diam}(A_i)^\delta\ \Bigr\} \ \in\ [0, \infty]\,.$$
Here $\{A_i\}$ is to range over all finite or countably infinite coverings of $S$ by sets of diameter less than $\epsilon$. It is not difficult to check that $\eta_\delta(X) = 0$ whenever $\delta > \dim_H(X)$, and that $\eta_\delta(X) = \infty$ whenever $\delta < \dim_H(X)$. Thus this concept of measure is possibly non-trivial only when $\delta$ is precisely equal to the Hausdorff dimension of $X$.

Definition. Following Lebesgue, the topological dimension $\dim_{\mathrm{top}}(X)$ can be defined as the smallest integer $n$ such that $X$ can be covered by arbitrarily small open sets with the property that no point belongs to more than $n+1$ of these sets. (This is sometimes called the covering dimension, to distinguish it from an alternative inductive definition of topological dimension, due to Menger and Urysohn. However, these two definitions of topological dimension always agree, provided that $X$ is metrizable with a countable dense subset. Compare [Hurewicz-Wallman].)
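As a quick illustration of this last definition (a standard example, not part of the original notes): for any $\epsilon > 0$ the open intervals
$$U_i \ =\ \bigl( (i-1)\tfrac{\epsilon}{2},\ (i+1)\tfrac{\epsilon}{2} \bigr), \qquad i \in \mathbb{Z},$$
have length $\epsilon$, cover the unit interval $[0,1]$, and no point belongs to more than two of them, so that $\dim_{\mathrm{top}}[0,1] \le 1$. On the other hand, in any covering of $[0,1]$ by open sets of diameter less than one, some point must belong to at least two of the sets, since otherwise the covering would split the connected interval into disjoint nonempty open pieces; hence $\dim_{\mathrm{top}}[0,1] = 1$.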
As an example, if $X$ is an $n$-dimensional simplicial complex, then the "open star neighborhoods" of the vertices form a covering with this property, and by subdividing the complex we can make this covering arbitrarily fine. In fact the topological dimension is precisely equal to $n$ in this case. (Compare Problem 7-g, as well as Lemma 7.1 below.)

Note that all four definitions of dimension clearly have the following monotonicity property:
$$X \subset Y \ \Longrightarrow\ \dim(X) \le \dim(Y)\,. \tag{7:3}$$

The relationship between the four definitions can be described as follows.

Lemma 7.1. The inequalities
$$0 \ \le\ \dim_{\mathrm{top}}(X) \ \le\ \dim_H(X) \ \le\ \dim_B^-(X) \ \le\ \dim_B^+(X) \ \le\ \infty$$
are valid whenever they make sense. Furthermore, if $X$ is a compact region in Euclidean $n$-space, then all four dimensions take the value $n$.

In fact the inequality $\dim_H(X) \le \dim_B^-(X)$ follows easily from the definitions. (Compare Problem 7-h below.) The proof that $\dim_{\mathrm{top}}(X) \le \dim_H(X)$ is more difficult, and will not be given here. It is due to Nöbeling for subsets of Euclidean space and to Szpilrajn in general. (See [Hurewicz and Wallman] for further information.) For the proof that $\dim_{\mathrm{top}}(X) \ge n$ when $X$ contains an open subset of $\mathbb{R}^n$, see Problem 7-g. The rest of the proof is straightforward. $\square$

If $X$ can be described as the union of finitely many smooth compact pieces, then it is not hard to show that these four definitions of dimension all coincide.
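The following two worked examples are standard (they are not taken from the notes above), and show that the inequalities of Lemma 7.1 can be strict.

For the middle-thirds Cantor set $C \subset [0,1]$, suppose $3^{-(k+1)} \le \epsilon < 3^{-k}$. The $2^{k+1}$ intervals of the $(k+1)$-st stage of the construction have diameter $3^{-(k+1)} \le \epsilon$ and cover $C$, while choosing one point of $C$ from each of the $2^k$ intervals of the $k$-th stage gives an $\epsilon$-separated subset, since distinct stage-$k$ intervals are at distance at least $3^{-k} > \epsilon$ from each other. Hence
$$2^k \ \le\ S_\epsilon(C) \ \le\ C_\epsilon(C) \ \le\ 2^{k+1}, \qquad k \log 3 \ <\ \log(1/\epsilon) \ \le\ (k+1)\log 3\,,$$
and letting $\epsilon \to 0$ gives $\dim_B(C) = \log 2/\log 3 \approx 0.631$. A slightly longer covering argument shows that $\dim_H(C) = \log 2/\log 3$ as well, while $\dim_{\mathrm{top}}(C) = 0$ since $C$ is totally disconnected; so the inequality $\dim_{\mathrm{top}}(X) \le \dim_H(X)$ can be strict.

For the compact countable set $X = \{0\} \cup \{1, 1/2, 1/3, \dots\}$ we have $\dim_{\mathrm{top}}(X) = \dim_H(X) = 0$, since any countable set has Hausdorff dimension zero. On the other hand, taking $\epsilon = 1/k(k+1)$, the $k$ points $1, 1/2, \dots, 1/k$ are pairwise more than $\epsilon$ apart, while $X$ is covered by these $k$ points together with $k+1$ intervals of length $\epsilon$ covering $[0, 1/k]$; thus $k \le S_\epsilon(X) \le C_\epsilon(X) \le 2k+1$ with $\log(1/\epsilon) \approx 2\log k$, and a short interpolation argument for intermediate values of $\epsilon$ gives $\dim_B(X) = 1/2$. So the inequality $\dim_H(X) \le \dim_B^-(X)$ can also be strict.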