Degrees of Freedom in Quadratic Goodness of Fit

Submitted to the Annals of Statistics

DEGREES OF FREEDOM IN QUADRATIC GOODNESS OF FIT

By Bruce G. Lindsay*, Marianthi Markatou† and Surajit Ray
Pennsylvania State University, Columbia University, Boston University

We study the effect of degrees of freedom on the level and power of quadratic distance based tests. The concept of an eigendepth index is introduced and discussed in the context of selecting the optimal degrees of freedom, where optimality refers to high power. We introduce the class of diffusion kernels through the properties we require of them, and give a method for constructing them by exponentiating the rate matrix of a Markov chain. Product kernels and their spectral decomposition are discussed and shown to be useful for high dimensional data problems.

*Supported by NSF grant DMS-04-05637.
†Supported by NSF grant DMS-05-04957.
AMS 2000 subject classifications: Primary 62F99, 62F03; secondary 62H15, 62F05.
Keywords and phrases: Degrees of freedom, eigendepth, high dimensional goodness of fit, Markov diffusion kernels, quadratic distance, spectral decomposition in high dimensions.

1. Introduction. Lindsay et al. (2008) developed a general theory for goodness of fit testing based on quadratic distances. This class of tests is enormous, encompassing many of the tests found in the literature. It includes tests based on characteristic functions, density estimation, and the chi-squared tests, as well as providing quadratic approximations to many other tests, such as those based on likelihood ratios. The flexibility of the methodology is particularly important for enabling statisticians to readily construct tests for model fit in higher dimensions and in more complex data.

The paper by Lindsay et al. introduced, as a unifying concept, a formula for the spectral degrees of freedom (DOF) of the test statistic. It was based on a functional spectral decomposition of the quadratic kernel, but could be calculated without knowing the decomposition. In essence, the limiting null distribution of the test statistic was shown to be approximately chi-squared, with DOF being its degrees of freedom.

One feature of building tests within the quadratic distance framework is that the DOF is often a continuously tuneable parameter, i.e., one to be selected by the user. In our examples here, which involve kernel density estimation, DOF is a decreasing function of the smoothing parameter $h$ (illustrated numerically below). This is a particularly valuable characteristic in higher dimensions, as one can then adjust the degrees of freedom to compensate for both dimension and sample size.

Lindsay et al. (2008) offered some possible guidelines for selecting degrees of freedom. They were based solely on the heuristic that DOF should be chosen as if carrying out a chi-squared test. Ray and Lindsay (2008) created a useful risk assessment tool based on tuneable quadratic distances and their associated degrees of freedom, but again provided little guidance on the selection of the tuning parameter.

Our first goal here is to show that the choice of DOF is indeed important for obtaining good power properties. Just as in a chi-squared test procedure, selecting DOF too large or too small can lead to very weak power against important alternatives. We demonstrate this through a careful simulation. As in a chi-squared test, choosing too many degrees of freedom, relative to the sample size, creates procedures with low power ("cells with small counts").
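To make the relationship between $h$ and DOF concrete, the following sketch (not from the paper) forms the Gram matrix of the 1-d normal kernel on a simulated sample and reports a Satterthwaite-style effective degrees of freedom, $(\sum_i \lambda_i)^2 / \sum_i \lambda_i^2$ over the matrix eigenvalues. The sample, the bandwidth grid, and this particular matrix-level DOF formula are illustrative assumptions; Lindsay et al. (2008) define spectral DOF for the centered kernel operator itself, so the numbers here only illustrate the qualitative decrease in DOF as $h$ grows.

```python
import numpy as np

def gaussian_kernel_matrix(x, h):
    """Gram matrix of the 1-d normal kernel K_h(s, t) on a sample x."""
    diff = x[:, None] - x[None, :]
    return np.exp(-diff**2 / (2 * h**2)) / np.sqrt(2 * np.pi * h**2)

def spectral_dof(K):
    """Satterthwaite-style effective DOF (assumed form):
    (sum of eigenvalues)^2 / (sum of squared eigenvalues),
    computed via traces: trace(K)^2 / trace(K @ K) for symmetric K."""
    return np.trace(K) ** 2 / np.sum(K * K)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
for h in (0.05, 0.1, 0.25, 0.5, 1.0):
    dof = spectral_dof(gaussian_kernel_matrix(x, h))
    print(f"h = {h:4.2f}  ->  DOF estimate = {dof:7.1f}")
# The estimate decreases as h grows: heavier smoothing leaves fewer
# effective dimensions of discrimination.
```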
Conversely, too few degrees of freedom, especially in higher dimensional data, can fail to provide enough dimensions of discrimination.

In this paper, we will show that many weaknesses of the standard chi-squared methodology can be overcome by careful choice of the kernel in the quadratic distance. As one example of this, we will examine goodness of fit in the multivariate normal (or mixtures of normals) case, where the kernel of the distance is also taken to be a multivariate normal density. In this example, and others like it, the quadratic testing method requires no cell creation, the degrees of freedom are continuously adjustable, the calculations do not require numerical integration, and the power is global. By using this kernel we can always compute explicitly the distance between the data and the hypothesized multivariate normal, or mixtures of normals, model, as well as the degrees of freedom.

Our second goal is to provide methodology that is useful for choosing DOF in such a way that one has good power across a range of alternatives in a high dimensional setting. For this we first need to develop a set of tools for the analysis of DOF, especially as it relates to the dimension of the data $d$. To do so, we will focus on an important class of distance kernels that we call diffusion kernels. These kernels are of special interest because they are tuneable and they allow easy computation of the distance in certain high dimensional models. They also enable explicit spectral decompositions and formulas for the corresponding degrees of freedom.

We will also focus on a special mechanism for constructing distance kernels in higher dimensions that we call product kernels. The use of such kernels enables one to construct distances for data vectors not only of arbitrary dimension, but also those that have a mixture of discrete and continuous coordinates.

Finally, we will introduce here a new tool for choosing DOF in higher dimensions. Based on spectral theory, we derive a simple but informative function of DOF and the data dimension $d$ that we call the eigendepth index $\hat{k}$. It is valid for kernels with geometrically decaying eigenvalues. It describes how the test statistic weights different eigenspaces in higher dimensions. This index proved particularly useful for understanding how the results of our simulation study depended on data dimension.

The paper is organized as follows. In Section 2 we develop some essential background on quadratic distance testing, especially as it relates to degrees of freedom. We then introduce in Section 3 the kernels that will be of particular interest to us. These kernels, which we call diffusion kernels, are constructed by exponentiating the rate matrix of a Markov chain to fit a variety of sample spaces; a minimal numerical sketch of this construction appears at the end of this overview. We consider these kernels to be the natural generalization of the normal kernel to other sample spaces.

We then turn to the challenge of degrees of freedom in higher dimensions. In Section 4 we develop a description of the standard eigenanalysis of the diffusion kernel type. This in turn leads to a recognition that in higher dimensions, the product kernel eigendecomposition has a beautiful structure that can be exploited to understand how the kernel weights deviations from the model. This leads to a proposed eigendepth index $\hat{k}$ to measure this effect.
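The sketch promised above: a minimal, hypothetical instance of the diffusion construction, assuming a nearest-neighbour random walk as the underlying Markov chain on a small finite state space, with the kernel family obtained as the matrix exponential $K_t = \exp(tQ)$ of the rate matrix $Q$. The chain, the state-space size, and the time points are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

def random_walk_rate_matrix(m):
    """Rate matrix Q of a nearest-neighbour random walk on states 0..m-1:
    unit rates to each neighbour, diagonal chosen so each row sums to zero."""
    Q = np.zeros((m, m))
    for i in range(m):
        if i > 0:
            Q[i, i - 1] = 1.0
        if i < m - 1:
            Q[i, i + 1] = 1.0
        Q[i, i] = -Q[i].sum()
    return Q

Q = random_walk_rate_matrix(8)
for t in (0.1, 1.0, 10.0):
    K = expm(t * Q)                      # diffusion kernel at time t
    lam = np.sort(np.linalg.eigvalsh(K))[::-1]
    print(f"t = {t:5.1f}  leading eigenvalues: {np.round(lam[:4], 3)}")
# Small t: K is near the identity (many effective dimensions, high DOF).
# Large t: K collapses toward the chain's stationary projection (low DOF).
```

Since this $Q$ is symmetric, the kernel eigenvalues are $e^{t\mu_i}$ for the rates $\mu_i \le 0$ of the chain, so the single time parameter $t$ tunes how fast the spectrum decays, which is loosely the kind of decaying-spectrum setting in which the eigendepth index is stated to be valid.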
The eigendepth index $\hat{k}$ is then transformed into a simple formula depending only on the dimension $d$ and DOF.

Our final sections are devoted to a detailed study of testing for multivariate normality using a quadratic distance. After a brief review of preceding literature in Section 5, we turn to a detailed simulation study in Section 6. Here we will show that power as a function of eigendepth is remarkably homogeneous across $d = 2$, $4$, and $8$, with the peak power typically occurring at $\hat{k} = 4$.

1.1. Quadratic distances. Lindsay et al. (2008) introduced a unified framework for the study of quadratic form distance measures that are used to assess model fit. We briefly review some fundamentals of this framework.

Let $\mathcal{X}$ be a sample space and let $du(s)$ be the canonical uniform measure on this space; this could be Lebesgue measure, counting measure, or spherical volume measure, depending on the application. The building block of a statistical distance is the function $K(s,t)$, a bounded, symmetric, non-negative definite kernel defined on $\mathcal{X} \times \mathcal{X}$.

Definition 1. Given a CNND kernel function $K(s,t)$, possibly depending on a distribution $G$ whose goodness of fit we wish to assess, the $K$-based quadratic distance between two probability measures $F$ and $G$ is defined as
$$d_K(F, G) = \iint K_G(s, t) \, d(F - G)(s) \, d(F - G)(t).$$

An important example of quadratic distance is Pearson's chi-squared. The kernel of the Pearson chi-squared distance is given by
$$K_G(r, s) = \sum_{i=1}^{m} \frac{I(r \in A_i)\, I(s \in A_i)}{G(A_i)},$$
where $I$ is the indicator function and $A_1, A_2, \ldots, A_m$ is a partitioning of the sample space into $m$ bins.

If $x_1, \ldots, x_n$ is a random sample with empirical distribution $\hat{F}$, then we have a natural construction of an empirical distance between the data and the model as $d(\hat{F}, G)$. This becomes the building block for goodness of fit procedures. To obtain the asymptotic theory, we modify the kernel $K_G$ by centering it to obtain $K_{c,G}$ (details in Section 3), in which case we can write
$$d_K(F, G) = \iint K_{c,G}(s, t) \, dF(s) \, dF(t).$$

In this form it is clear that $d(\hat{F}, G)$ is a V-statistic. Suppose that $F_\tau$ is the true distribution. The U-statistic that unbiasedly estimates the distance $d(F_\tau, G)$ is given by the expression
$$U_n = \frac{1}{n(n-1)} \sum_{i} \sum_{j \neq i} K_{c,G}(x_i, x_j)$$
(sketched numerically below).

The family of possible quadratic distances is enormous, needing just the specification of the kernel $K(x, y)$. Our particular interest here will be in kernels of the diffusion type, or products thereof.

1.2. The $L_2$ representation. The spectral representation theory of Lindsay et al. (2008) shows that for a given symmetric kernel $K(s,t)$, there generally exists a symmetric "square root" kernel $K^{1/2}(s,t)$ satisfying the relationship
$$\int K^{1/2}(s, r)\, K^{1/2}(r, t) \, du(r) = K(s, t).$$
For example, if one uses as a distance kernel the normal $K_h(x, y) = (\sqrt{2\pi h^2})^{-1} \exp\left(-(x - y)^2 / (2h^2)\right)$, then the square root kernel is a normal kernel with variance $h^2/2$.
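To close this review, here is a minimal numerical sketch of the U-statistic $U_n$ from Section 1.1, under two stated assumptions: it uses the 1-d normal kernel $K_h$ above, and it centers by the standard double-centering $K_{c,G}(s,t) = K(s,t) - E_G K(s,T) - E_G K(S,t) + E_G K(S,T)$ (with $S, T$ independent draws from $G$), approximating the $G$-expectations by Monte Carlo; the paper defers its exact centering construction to Section 3, which may differ. Double centering makes $\iint K_{c,G}\, dF\, dF$ equal the displayed quadratic distance $\iint K_G\, d(F-G)\, d(F-G)$, so $U_n$ should be near zero when the data really come from $G$.

```python
import numpy as np

def normal_kernel(s, t, h):
    """1-d normal distance kernel K_h(s, t) from Section 1.2."""
    return np.exp(-(s - t) ** 2 / (2 * h**2)) / np.sqrt(2 * np.pi * h**2)

def u_statistic(x, h, g_sample):
    """U_n = (1/(n(n-1))) sum_{i != j} K_{c,G}(x_i, x_j), where K_{c,G} is the
    (assumed) double-centered kernel, with expectations under G replaced by
    averages over g_sample, a Monte Carlo sample from the hypothesized G."""
    n = len(x)
    K = normal_kernel(x[:, None], x[None, :], h)                        # K(x_i, x_j)
    row = normal_kernel(x[:, None], g_sample[None, :], h).mean(axis=1)  # E_G K(x_i, .)
    grand = normal_kernel(g_sample[:, None], g_sample[None, :], h).mean()
    Kc = K - row[:, None] - row[None, :] + grand                        # double centering
    np.fill_diagonal(Kc, 0.0)                                           # keep only i != j
    return Kc.sum() / (n * (n - 1))

rng = np.random.default_rng(1)
g_sample = rng.normal(size=2000)        # hypothesized model G = N(0, 1)
x_null = rng.normal(size=100)           # data actually drawn from G
x_alt = rng.normal(loc=0.7, size=100)   # data from a shifted alternative
print("U_n under H0:", u_statistic(x_null, 0.5, g_sample))
print("U_n under H1:", u_statistic(x_alt, 0.5, g_sample))
# U_n is near zero under H0 and clearly positive under the alternative,
# consistent with its role as an unbiased estimate of d(F_tau, G).
```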