Dimensionality Reduction in Euclidean Space


Jelani Nelson

Jelani Nelson is a professor of electrical engineering and computer science at the University of California, Berkeley. His email address is minilek@berkeley.edu. The author's research was supported by NSF award CCF-1951384, ONR grant N00014-18-1-2562, ONR DORECG award N00014-17-1-2127, an Alfred P. Sloan Research Fellowship, and a Google Faculty Research Award. Due to publisher constraints, only a limited number of references could be included; for a version of this article with a full list of references, please see the full version on the arXiv. Communicated by Notices Associate Editor Reza Malek-Madani. For permission to reprint this article, please contact: [email protected]. DOI: https://doi.org/10.1090/noti2166

I begin with a description of what this article is not about. It is not about Principal Component Analysis (PCA), Kernel PCA, Multidimensional Scaling, ISOMAP, Hessian Eigenmaps, or other methods of dimensionality reduction primarily created to help understand high-dimensional datasets. Rather, this article focuses on high dimensionality as a barrier to algorithmic efficiency (i.e., low running time and/or memory consumption), and explores how dimension reduction can be used as an algorithmic tool to overcome this barrier. In fact, as we discuss at more length in Section 5.2, this view is not only different from but complementary to the above-mentioned approaches, as the form of dimension reduction we focus on here can for example be used to obtain faster algorithms for approximate PCA.

Moving back a few steps from dimension reduction, an effective technique in the design of algorithms processing geometric data is, more generally, to employ a metric embedding to transform the input in one given metric space to another that is computationally friendlier, and then to work over the latter space (see the survey [Ind01]). To measure the quality of such an embedding, we use the following terminology: given a host metric space 풳 = (푋, 푑_푋) and a target space 풴 = (푌, 푑_푌), a map 푓 : 푋 → 푌 is said to be a bi-Lipschitz embedding with distortion 퐷 if there exists a (scaling) constant 푐 such that for all 푥, 푦 ∈ 푋,

    푐 ⋅ 푑_푋(푥, 푦) ≤ 푑_푌(푓(푥), 푓(푦)) ≤ 푐퐷 ⋅ 푑_푋(푥, 푦).    (1)

To illustrate the embedding paradigm in action, consider the 푘-median problem. The input is a finite metric space 풳 = (푋, 푑_푋), |푋| = 푛, together with an integer 1 ≤ 푘 ≤ 푛. The goal is to compute

    푆* = argmin_{푆 ⊂ 푋, |푆| = 푘} ∑_{푥 ∈ 푋} min_{푐 ∈ 푆} 푑_푋(푥, 푐).    (2)

That is, we would like to partition 푋 into 푘 clusters, together with identifying a cluster center 푐 in each cluster, so as to minimize the sum of distances from every 푥 ∈ 푋 to its closest cluster center. If 풳 can be an arbitrary 푛-point metric space, then this problem is known to be NP-hard. Meanwhile, when 풳 is the shortest path metric on a tree, the problem can be solved exactly in time 푂(푘푛²) via the Kariv–Hakimi dynamic programming algorithm.¹

¹We use standard asymptotic notation. For functions 푓, 푔: 푓 = 푂(푔) if lim sup_{푥→∞} |푓(푥)/푔(푥)| < ∞; 푓 = Ω(푔) if 푔 = 푂(푓); 푓 = Θ(푔) if both 푓 = 푂(푔) and 푓 = Ω(푔); 푓 = 표(푔) if lim_{푥→∞} 푓(푥)/푔(푥) = 0; and 푓 = 휔(푔) if 푔 = 표(푓).
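To make objective (2) concrete, here is a minimal Python sketch (my own illustration, not code from the article; the function names and the use of a precomputed distance matrix are assumptions) that evaluates the 푘-median cost of a candidate center set and solves tiny instances exactly by brute force over all size-푘 subsets — exponential time in general, consistent with the NP-hardness just mentioned.

```python
import itertools
import numpy as np

def kmedian_cost(dist, centers):
    """Cost of a center set S: sum over all points of the distance to the
    nearest center.  dist is the (n, n) matrix of pairwise distances d_X(x, y);
    centers is a collection of indices into X."""
    return dist[:, list(centers)].min(axis=1).sum()

def kmedian_bruteforce(dist, k):
    """Exact k-median by trying all C(n, k) center sets (tiny n only)."""
    n = dist.shape[0]
    return min(itertools.combinations(range(n), k),
               key=lambda S: kmedian_cost(dist, S))

# Tiny example: 6 random points in the plane, k = 2.
rng = np.random.default_rng(0)
P = rng.standard_normal((6, 2))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
best = kmedian_bruteforce(D, 2)
print(best, kmedian_cost(D, best))
```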
Tree shortest path metrics are thus an example of what we would call a computationally friendly metric space for the 푘-median problem. Thus if 풳 admits an algorithmically efficient embedding into a tree metric with some small distortion 퐷, we can obtain a fast 퐷-approximation algorithm for 푘-median on 풳 (i.e., achieving a clustering cost that is at most a factor 퐷 larger than optimal) by first embedding our original metric into some tree 푇 and then solving 푘-median exactly in 푇. In fact, it has been shown by Fakcharoenphol et al., following previous work of Bartal, that any 푛-point metric space embeds into a distribution over tree metrics with distortion 푂(log 푛). We will not discuss here what distortion means for probabilistic embeddings into a distribution over target spaces, but to make our case for the embedding paradigm it suffices to point out that these results implied the first ever polynomial time algorithms for 푘-median computation in arbitrary metric spaces with approximation factor at most polylogarithmic in 푛.

In this article we focus on embeddings in which both the host and target spaces are normed spaces, in which case we can drop the scaling factor 푐 in equation (1). We even more specifically focus on the case when 풳, 풴 are finite-dimensional subspaces of the same normed space 풵, with dim(풴) ≤ dim(풳), so that 푓 provides us with the algorithmic advantage of dimension reduction. As one might imagine, several algorithms for high-dimensional computational geometry problems have running times or memory requirements which grow (sometimes poorly) with the dimension of the input. An example is the nearest neighbor search data structural problem, in which one wants to preprocess a set of input points 푥_1, …, 푥_푛 ∈ ℝ^푑 to create a low-memory data structure 풟 such that later one can quickly identify the closest 푥_푖 to some query point 푞 ∈ ℝ^푑 by querying 풟.² The best known algorithms for this problem with fast query time (in terms of 푛) either have running time or memory usage exponential in 푑 (see the discussion in [HPIM12]).

²Though specifically for the nearest neighbor problem, an embedding satisfying a weaker guarantee suffices for applications.

A natural question is then: for which normed spaces do there exist such dimensionality-reducing maps with low distortion? An early and seminal result in this direction was given by Johnson and Lindenstrauss [JL84], who showed that near-isometric embeddings exist when 풳, 풴 are Euclidean.

Lemma 1 (JL lemma [JL84]). Let 휀 ∈ (0, 1) and let 푋 ⊂ ℝ^푑 be arbitrary, having size |푋| = 푛 > 1. Then there exists 푓 : 푋 → ℝ^푚 with 푚 = 푂(휀⁻² log 푛) such that for all 푥, 푦 ∈ 푋,

    ‖푥 − 푦‖_2 ≤ ‖푓(푥) − 푓(푦)‖_2 ≤ (1 + 휀)‖푥 − 푦‖_2.    (3)

In fact, all proofs of the JL lemma show that 푓 can be taken to be a linear map. The various known proofs of the JL lemma all identify a distribution Γ over ℝ^{푚×푑} such that if one draws a random Π ∼ Γ, then 푓(푥) = Π푥 satisfies equation (3) with high probability. In the original proof [JL84], Γ was taken to be a scaled orthogonal projection onto a random 푚-dimensional subspace of ℝ^푑 (and hence their technique is often called the random projection method), though since then several other distributions have been shown to provide a similar guarantee.
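As one concrete instance of such a distribution Γ (an illustrative choice on my part — the article names only the original subspace projection at this point), a matrix with i.i.d. Gaussian entries scaled by 1/√푚 is among the distributions known to satisfy the lemma. The sketch below draws such a Π and empirically checks guarantee (3) on a random point set; the constant 4 in the choice of 푚 is mine, for illustration only.

```python
import numpy as np

def jl_transform(X, eps, rng):
    """Embed the rows of X (n points in R^d) into R^m with m = O(eps^-2 log n),
    via a random Gaussian matrix Pi scaled by 1/sqrt(m)."""
    n, d = X.shape
    m = int(np.ceil(4 * np.log(n) / eps**2))  # constant 4 is illustrative
    Pi = rng.standard_normal((m, d)) / np.sqrt(m)
    return X @ Pi.T  # f(x) = Pi x, applied to every row

rng = np.random.default_rng(1)
n, d, eps = 200, 10_000, 0.25
X = rng.standard_normal((n, d))
Y = jl_transform(X, eps, rng)

# Check the symmetric variant |ratio - 1| <= eps over all pairs; this implies
# (3) after rescaling f and adjusting eps by a constant factor.
i, j = np.triu_indices(n, k=1)
orig = np.linalg.norm(X[i] - X[j], axis=1)
emb = np.linalg.norm(Y[i] - Y[j], axis=1)
print("max |ratio - 1| =", np.abs(emb / orig - 1).max())  # typically ~ eps
```

On typical draws the reported maximum relative distortion is on the order of 휀, matching the 푚 = Θ(휀⁻² log 푛) trade-off in the lemma.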
Hearing of such a result naturally inspires certain follow-up questions. Is low-distortion dimension reduction possible in other normed spaces, e.g., ℓ_푝 for 푝 ≠ 2? Is the 푚 = 푂(휀⁻² log 푛) bound in the JL lemma the best possible? Is it possible to obtain a distribution Γ providing the JL lemma as mentioned above such that Π ∼ Γ can be sampled using few random bits? Given that the stated primary motivation of dimension reduction is algorithmic efficiency, just how fast can the mapping 푥 ↦ Π푥 be performed?

1. Dimension Reduction in Other Spaces

Given the dimension reduction possible in Euclidean space, one might wonder in what other spaces such a result is possible. A negative result was proven by Johnson and Naor, who showed that, at least for linear embeddings, spaces that enjoy dimension reduction as good as in the Euclidean case must themselves be nearly Euclidean.

Theorem 1 ([JN10]). Suppose 푍 is a normed space satisfying the property that for every 푋 ⊂ 푍 with |푋| = 푛, there exists a linear mapping 푓 : 푍 → 퐸 for some 푂(log 푛)-dimensional subspace 퐸 ⊂ 푍 such that 푓 has 푂(1) distortion when restricted to 푋. Then every 푘-dimensional linear subspace of 푍 embeds into Euclidean space with distortion 2^{2^{푂(log* 푘)}}.

In the above, log* 푚 is the number of times one must take the iterated logarithm of 푚, base two, to obtain a number which is at most 1. For example, log*(2^{2^{2^2}}) = 4. The key takeaway here is that log* 푚 is a very slow-growing function, so that the distance to being Euclidean is small.

The theorem though does not preclude the existence of some form of dimension reduction in spaces that are not nearly Euclidean. In particular, one can still shoot for dimension reduction bounds that are 휔(log 푛), or potentially achieve 푂(log 푛) target dimension with 푂(1) distortion via nonlinear embeddings. Several results exist showing that some nontrivial dimension reduction in ℓ_푝-spaces, for example, is possible. On the negative side, Brinkman and Charikar have shown that for an 푛-point set endowed with the ℓ_1 metric, an embedding into low-dimensional ℓ_1 with distortion 퐷 can require target dimension 푛^{Ω(1/퐷²)}.

Unfortunately, the above approach does not extend to show that 푚 must grow by more than a constant factor beyond log_8 푛 as 휀 → 0. Subsequently, Alon showed the lower bound Ω(휀⁻² log 푛 / log(1/휀)) for 휀 > 1/√푛. Roughly, the approach was to let 푋 be as above (0, together with the simplex), and to again let 푓 be a low-distortion embedding as above with 푓(0) = 0.
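The hard instance just described — the origin together with the standard simplex {푒_1, …, 푒_푛}, which is my reading of "the simplex" above — is easy to experiment with. The sketch below measures the distortion a random Gaussian map attains on it for several target dimensions 푚; note this is only an upper-bound experiment, since the lower-bound proofs must reason about all possible embeddings, not just random linear ones.

```python
import numpy as np

def max_distortion(X, m, rng):
    """Multiplicative distortion of the pairwise distances of X under a random
    Gaussian Pi: R^d -> R^m, after choosing the best scaling constant c."""
    Pi = rng.standard_normal((m, X.shape[1])) / np.sqrt(m)
    Y = X @ Pi.T
    i, j = np.triu_indices(len(X), k=1)
    ratio = (np.linalg.norm(Y[i] - Y[j], axis=1)
             / np.linalg.norm(X[i] - X[j], axis=1))
    return ratio.max() / ratio.min()  # optimal rescaling makes this the distortion

# X = {0, e_1, ..., e_n}: the origin together with the standard simplex.
n = 128
X = np.vstack([np.zeros(n), np.eye(n)])
rng = np.random.default_rng(2)
for m in (8, 16, 32, 64):
    # distortion decays roughly like 1 + sqrt(log(n)/m) as m grows
    print(m, max_distortion(X, m, rng))
```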