Cargèse 2015


Cargèse 2015

Nisheeth K. Vishnoi

Preface

These notes are intended for the attendees of the Sixth Cargèse Workshop on Combinatorial Optimization, to accompany the lectures delivered by the author in the Fall of 2015. These notes draw heavily from a set of surveys/lecture notes written by the author on the underlying techniques for fast algorithms (see [Vis13, SV14, Vis14]) and are intended for an audience familiar with the basics. Many thanks to the many people who have contributed to these notes and to my understanding. Please feel free to email your comments and suggestions.

Nisheeth K. Vishnoi
September 2015, EPFL
nisheeth.vishnoi@epfl.ch

Contents

1 Solving Equations via the Conjugate Gradient Method
  1.1 A Gradient Descent Based Linear Solver
  1.2 The Conjugate Gradient Method
  1.3 Matrices with Clustered Eigenvalues
2 Preconditioning for Laplacian Systems
  2.1 Preconditioning
  2.2 Combinatorial Preconditioning via Trees
  2.3 An $\tilde{O}(m^{4/3})$-Time Laplacian Solver
3 Newton's Method and the Interior Point Method
  3.1 Newton's Method and its Quadratic Convergence
  3.2 Constrained Convex Optimization via Barriers
  3.3 Interior Point Method for Linear Programming
  3.4 Self-Concordant Barrier Functions
  3.5 Appendix: The Dikin Ellipsoid
  3.6 Appendix: The Length of Newton Step
4 Volumetric Barrier, Universal Barrier and John Ellipsoids
  4.1 Restating the Interior Point Method for LPs
  4.2 Vaidya's Volumetric Barrier
  4.3 Appendix: The Universal Barrier
  4.4 Appendix: Calculating the Gradient and the Hessian of the Volumetric Barrier
  4.5 Appendix: John Ellipsoid
References

1 Solving Linear Equations via the Conjugate Gradient Method

We discuss the problem of solving a system of linear equations. We first present the Conjugate Gradient method, which works when the corresponding matrix is positive definite. We then give a simple proof of the rate of convergence of this method by using low-degree polynomial approximations for $x^s$.[1]

Given a matrix $A \in \mathbb{R}^{n \times n}$ and a vector $v \in \mathbb{R}^n$, our goal is to find a vector $x \in \mathbb{R}^n$ such that $Ax = v$. The exact solution $x^\star \stackrel{\mathrm{def}}{=} A^{-1}v$ can be computed by Gaussian elimination, but the fastest known implementation requires the same time as matrix multiplication (currently $O(n^{2.373})$). For many applications, the number of non-zero entries in $A$ (denoted by $m$), or its sparsity, is much smaller than $n^2$ and, ideally, we would like linear solvers which run in time $\tilde{O}(m)$,[2] roughly the time it takes to multiply a vector with $A$. While we are far from this goal for general matrices, iterative methods, based on techniques such as gradient descent or the Conjugate Gradient method, reduce the problem of solving a system of linear equations to the computation of a small number of matrix-vector products with the matrix $A$ when $A$ is symmetric and positive definite (PD). These methods often produce only approximate solutions; however, these approximate solutions suffice for most applications. While the running time of the gradient descent-based method varies linearly with the condition number of $A$, that of the Conjugate Gradient method depends on the square root of the condition number; the quadratic saving occurs precisely because there exist $\sqrt{s}$-degree polynomials approximating $x^s$.

[1] This is almost identical to Chapter 9 from Sachdeva and Vishnoi [SV14].
[2] The $\tilde{O}$ notation hides polynomial factors in $\log n$.

The guarantees of the Conjugate Gradient method are summarized in the following theorem:

Theorem 1.1. Given an $n \times n$ symmetric matrix $A \succ 0$ and a vector $v \in \mathbb{R}^n$, the Conjugate Gradient method can find a vector $x$ such that $\|x - A^{-1}v\|_A \le \delta \|A^{-1}v\|_A$ in time $O((t_A + n) \cdot \sqrt{\kappa(A)} \log 1/\delta)$, where $t_A$ is the time required to multiply $A$ with a given vector, and $\kappa(A)$ is the condition number of $A$.
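Before deriving the method, it may help to see the guarantee of Theorem 1.1 numerically. The sketch below is an illustration only (not from the notes): it assumes NumPy and SciPy are available and uses SciPy's `scipy.sparse.linalg.cg` as a stand-in for the Conjugate Gradient method developed in this chapter, measuring the relative error in the $\|\cdot\|_A$ norm.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # symmetric positive definite by construction
v = rng.standard_normal(n)

x, info = cg(A, v)                 # info == 0 means the solver converged
x_star = np.linalg.solve(A, v)     # exact solution A^{-1} v, for comparison

def A_norm(z):
    # ||z||_A = sqrt(z^T A z), the norm in which Theorem 1.1 is stated
    return np.sqrt(z @ A @ z)

print(A_norm(x - x_star) / A_norm(x_star))   # the observed delta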
1.1 A Gradient Descent Based Linear Solver

The gradient descent method is a general method to solve convex programs; here we only focus on its application to linear systems. The PD assumption on $A$ allows us to formulate the problem of solving $Ax = v$ as a convex programming problem: for the function
$$f_A(x) \stackrel{\mathrm{def}}{=} \|x - x^\star\|_A^2 = (x - x^\star)^\top A (x - x^\star) = x^\top A x - 2x^\top v + x^{\star\top} A x^\star,$$
find the vector $x$ that minimizes $f_A(x)$. Since $A$ is symmetric and PD, this is a convex function and has a unique minimizer, $x = x^\star$.

Each iteration of the gradient descent method for minimizing $f_A$ is as follows: start from the current estimate of $x^\star$, say $x_t$, and move along the direction of maximum rate of decrease of the function $f_A$, i.e., against its gradient, to the point that minimizes the function along this line. We can explicitly compute the gradient of $f_A$ to be $\nabla f_A(x) = 2A(x - x^\star) = 2(Ax - v)$. Thus, we have
$$x_{t+1} = x_t - \alpha_t \nabla f_A(x_t) = x_t - 2\alpha_t (Ax_t - v)$$
for some $\alpha_t$. Define the residual $r_t \stackrel{\mathrm{def}}{=} v - Ax_t$, so that $x_{t+1} = x_t + 2\alpha_t r_t$. Thus,
$$f_A(x_{t+1}) = f_A(x_t + 2\alpha_t r_t) = (x_t - x^\star + 2\alpha_t r_t)^\top A (x_t - x^\star + 2\alpha_t r_t) = (x_t - x^\star)^\top A (x_t - x^\star) - 4\alpha_t\, r_t^\top r_t + 4\alpha_t^2\, r_t^\top A r_t,$$
where we used $A(x_t - x^\star) = -r_t$. This is a quadratic function in $\alpha_t$, and we can analytically compute the $\alpha_t$ that minimizes $f_A$ and find that it is $\frac{1}{2} \cdot \frac{r_t^\top r_t}{r_t^\top A r_t}$. Substituting this value of $\alpha_t$, and using $x^\star - x_t = A^{-1} r_t$, we obtain
$$\|x_{t+1} - x^\star\|_A^2 = \|x_t - x^\star\|_A^2 - \frac{(r_t^\top r_t)^2}{r_t^\top A r_t} = \|x_t - x^\star\|_A^2 \left(1 - \frac{r_t^\top r_t}{r_t^\top A r_t} \cdot \frac{r_t^\top r_t}{r_t^\top A^{-1} r_t}\right).$$

Now we relate the rate of convergence to the optimal solution to the condition number of $A$. Towards this, note that for any $z$, we have
$$z^\top A z \le \lambda_1\, z^\top z \quad \text{and} \quad z^\top A^{-1} z \le \lambda_n^{-1}\, z^\top z,$$
where $\lambda_1$ and $\lambda_n$ are the largest and the smallest eigenvalues of $A$, respectively. Thus,
$$\|x_{t+1} - x^\star\|_A^2 \le (1 - \kappa^{-1}) \|x_t - x^\star\|_A^2,$$
where $\kappa \stackrel{\mathrm{def}}{=} \kappa(A) = \lambda_1/\lambda_n$ is the condition number of $A$. Hence, assuming we start with $x_0 = 0$, we can find an $x_t$ such that $\|x_t - x^\star\|_A \le \delta \|x^\star\|_A$ in approximately $\kappa \log 1/\delta$ iterations, with the cost of each iteration dominated by $O(1)$ multiplications of the matrix $A$ with a given vector (and $O(1)$ dot-product computations). Thus, this gradient descent-based method allows us to compute a $\delta$-approximate solution to $x^\star$ in time $O((t_A + n)\, \kappa \log 1/\delta)$.
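As a concrete companion to this analysis, here is a minimal sketch of the solver in Python/NumPy (my illustration, not code from the notes; the helper name `gd_solve` is hypothetical). The step size is exactly the optimal $\alpha_t$ derived above, and each iteration performs a single multiplication of $A$ with a vector.

```python
import numpy as np

def gd_solve(A, v, delta=1e-8, max_iters=10**6):
    """Approximately solve Ax = v for symmetric PD A by gradient descent.

    Implements x_{t+1} = x_t + 2*alpha_t*r_t with the optimal step
    alpha_t = (1/2) * (r_t^T r_t) / (r_t^T A r_t) derived in the text.
    """
    x = np.zeros_like(v)              # start at x_0 = 0, as in the analysis
    r = v - A @ x                     # residual r_t = v - A x_t
    for _ in range(max_iters):
        if np.linalg.norm(r) <= delta * np.linalg.norm(v):
            break                     # practical proxy for the A-norm guarantee
        Ar = A @ r                    # the single matrix-vector product per step
        step = (r @ r) / (r @ Ar)     # equals 2*alpha_t
        x = x + step * r
        r = r - step * Ar             # update residual without a second product
    return x
```

Consistent with the bound above, on ill-conditioned test matrices the iteration count for a fixed $\delta$ grows roughly linearly with $\kappa(A)$.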
1.2 The Conjugate Gradient Method

Observe that at any step $t$ of the gradient descent method, we have $x_{t+1} \in \mathrm{Span}\{x_t, Ax_t, v\}$. Hence, for $x_0 = 0$, it follows by induction that for any positive integer $k$,
$$x_k \in \mathrm{Span}\{v, Av, \ldots, A^k v\}.$$
The running time of $k$ steps of the gradient descent-based method is dominated by the time required to compute a basis for this subspace. However, the vector $x_k$ may not be the vector from this subspace that minimizes $f_A$. The essence of the Conjugate Gradient method, on the other hand, is that it finds the vector in this subspace that minimizes $f_A$, in essentially the same amount of time required by $k$ iterations of the gradient descent-based method. We must address two important questions about the Conjugate Gradient method: (1) Can the best vector be computed efficiently? (2) What is the approximation guarantee achieved after $k$ iterations? We show that the best vector can be found efficiently, and prove, using low-degree polynomial approximations to $x^k$, that the Conjugate Gradient method achieves a quadratic improvement over the gradient descent-based method in terms of its dependence on the condition number of $A$.

Finding the best vector efficiently. Let $\{v_0, \ldots, v_k\}$ be a basis for $K \stackrel{\mathrm{def}}{=} \mathrm{Span}\{v, Av, \ldots, A^k v\}$ (called the Krylov subspace of order $k$). Hence, any vector in this subspace can be written as $\sum_{i=0}^{k} \alpha_i v_i$. Our objective then becomes
$$\Big\|x^\star - \sum_i \alpha_i v_i\Big\|_A^2 = \Big(\sum_i \alpha_i v_i\Big)^\top A \Big(\sum_i \alpha_i v_i\Big) - 2\Big(\sum_i \alpha_i v_i\Big)^\top v + \|x^\star\|_A^2.$$
Solving this optimization problem for the $\alpha_i$ in general requires matrix inversion, the very problem we set out to mitigate. The crucial observation is that if the $v_i$ are $A$-orthogonal, i.e., $v_i^\top A v_j = 0$ for $i \ne j$, then all the cross-terms disappear. Thus,
$$\Big\|x^\star - \sum_i \alpha_i v_i\Big\|_A^2 = \sum_i \big(\alpha_i^2\, v_i^\top A v_i - 2\alpha_i\, v_i^\top v\big) + \|x^\star\|_A^2,$$
and, as in the gradient descent-based method, we can analytically compute the set of values $\alpha_i$ that minimize the objective: $\alpha_i = \frac{v_i^\top v}{v_i^\top A v_i}$.

Hence, if we can construct an $A$-orthogonal basis $\{v_0, \ldots, v_k\}$ for $K$ efficiently, we do at least as well as the gradient descent-based method. If we start with an arbitrary set of vectors and try to $A$-orthogonalize them via the Gram-Schmidt process (with inner products with respect to $A$), we need to compute $k^2$ inner products and, hence, for large $k$, it is no more efficient than the gradient descent-based method. An efficient construction of such a basis is one of the key ideas here. We proceed iteratively, starting with $v_0 = v$. At the $i$-th iteration, we compute $Av_{i-1}$ and $A$-orthogonalize it with respect to $v_0, \ldots, v_{i-1}$ to obtain $v_i$. It is straightforward to verify that the vectors $v_0, \ldots, v_k$ are $A$-orthogonal.
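To make the construction concrete, here is a minimal sketch (my illustration, assuming NumPy; the helper name `krylov_A_orthogonal_solve` is hypothetical) that builds the $A$-orthogonal basis exactly as described, orthogonalizing $Av_{i-1}$ against all previous vectors, and then assembles $x = \sum_i \alpha_i v_i$ with $\alpha_i = v_i^\top v / v_i^\top A v_i$. Efficient Conjugate Gradient implementations exploit the fact that only the last two projections are nonzero; this naive version simply mirrors the text.

```python
import numpy as np

def krylov_A_orthogonal_solve(A, v, k):
    """Return the minimizer of f_A over the order-k Krylov subspace.

    Builds an A-orthogonal basis v_0, ..., v_k iteratively: v_i is
    A v_{i-1} minus its A-projections onto v_0, ..., v_{i-1}.
    For exposition only; not numerically robust for large k.
    """
    basis = [v]                       # v_0 = v
    for _ in range(k):
        w = A @ basis[-1]
        for u in basis:               # naive Gram-Schmidt in the A-inner-product
            w = w - ((u @ A @ w) / (u @ A @ u)) * u
        if np.linalg.norm(w) <= 1e-12 * np.linalg.norm(v):
            break                     # Krylov subspace saturated; stop early
        basis.append(w)
    # alpha_i = (v_i^T v) / (v_i^T A v_i); cross-terms vanish by A-orthogonality
    x = np.zeros_like(v)
    for u in basis:
        x = x + ((u @ v) / (u @ A @ u)) * u
    return x
```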