Discrete Convex Analysis I

Hausdorff School: Economics and Tropical Geometry, Bonn, May 9-13, 2016

Discrete Convex Analysis I: Concepts of Discrete Convex Functions

Kazuo Murota (Tokyo Metropolitan University)


Convex Function

f : R^n → R̄ is convex ⟺ λf(x) + (1 − λ)f(y) ≥ f(λx + (1 − λ)y) for all 0 < λ < 1, where R̄ = R ∪ {+∞}.

(Figure: graphs of a convex and a nonconvex function, with the chord above the graph at λx + (1 − λ)y.)


Features of Convex Functions

• Occurrence in many models (motivations, applications)
• Operations and transformations
• Sufficient structure for a theory (mathematically beautiful, practically useful)
• Minimization algorithms


Features of Convex Functions = Issues in Discrete Convex Analysis

• Occurrence in many models? (motivations, applications)
• Operations and transformations?
• Sufficient structure for a theory? (mathematically beautiful, practically useful)
• Minimization algorithms?


Contents of Part I: Concepts of Discrete Convex Functions

C1. Univariate Discrete Convex Functions
C2. Classes of Discrete Convex Functions
C3. L-convex Functions
C4. M-convex Functions
C5. Remarks on Submodular Set Functions

(Part II: Properties, Part III: Algorithms)


C1. Univariate Discrete Convex Functions

Ingredients of convex analysis.


Definition of Convex Function

For f : Z → R̄ with R̄ = R ∪ {+∞}:

f(x − 1) + f(x + 1) ≥ 2f(x) for all x
⟺ f(x) + f(y) ≥ f(x + 1) + f(y − 1) for all x < y
⟺ f is convex-extensible, i.e., there exists a convex f̄ : R → R̄ such that f̄(x) = f(x) for all x ∈ Z.

(Figure: a convex and a non-convex function on Z.)


Local vs Global Optimality

For convex f : Z → R̄:

Theorem: x* is a global optimum (minimum) ⟺ x* is a local optimum (minimum), i.e., f(x*) ≤ min{f(x* − 1), f(x* + 1)}.

(Figure: convex vs non-convex case.)


Intuition of Legendre Transformation

(Figures: a convex function f(x) and its supporting lines, whose slopes and intercepts encode the transform.)


Legendre Transformation

For integer-valued f : Z → Z, define the discrete Legendre transform of f by

f•(p) = sup{ px − f(x) | x ∈ Z }  (p ∈ Z).

(Figure: f(x) and a supporting line of slope p with intercept −f•(p).)

Theorem:
(1) f• is a Z-valued convex function, f• : Z → Z.
(2) (f•)• = f (biconjugacy).


Separation Theorem

For f : Z → R̄ convex and h : Z → R̄ concave:

Theorem (Discrete Separation Theorem):
(1) f(x) ≥ h(x) (∀x ∈ Z) ⟹ there exist α* ∈ R and p* ∈ R such that f(x) ≥ α* + p*x ≥ h(x) (∀x ∈ Z).
(2) If f and h are integer-valued, then α* ∈ Z and p* ∈ Z.

(Figure: a line of slope p* separating the graphs of f and h.)


Fenchel Duality (Min-Max)

For f : Z → Z convex and h : Z → Z concave, with Legendre transforms

f•(p) = sup{ px − f(x) | x ∈ Z },  h°(p) = inf{ px − h(x) | x ∈ Z },

Theorem: inf_{x ∈ Z} { f(x) − h(x) } = sup_{p ∈ Z} { h°(p) − f•(p) }.


Five Properties of "Convex" Functions

1. convex extension
2. local opt = global opt
3. Legendre transform (biconjugacy)
4. separation theorem
5. Fenchel duality

These hold for univariate discrete convex functions.


C2. Classes of Discrete Convex Functions


Classes of Discrete Convex Functions

1. Submodular set functions (on {0,1}^n)
1. Separable-convex functions on Z^n
1. Integrally convex functions on Z^n
2. L-convex (L♮-convex) functions on Z^n
2. M-convex (M♮-convex) functions on Z^n
3. M-convex functions on jump systems
3. L-convex functions on graphs


Submodular Function

A set function ρ : 2^V → R̄ (R̄ = R ∪ {+∞}) is submodular ⟺

ρ(X) + ρ(Y) ≥ ρ(X ∪ Y) + ρ(X ∩ Y);

cf. |X| + |Y| = |X ∪ Y| + |X ∩ Y|.

A set function is the same as a function on {0,1}^n.
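To make the submodular inequality concrete, here is a minimal Python sketch (an illustration, not part of the slides) that brute-force checks ρ(X) + ρ(Y) ≥ ρ(X ∪ Y) + ρ(X ∩ Y) over all pairs of subsets of a small ground set; the helper names and the coverage-style example function are assumptions made for illustration.

```python
from itertools import combinations

def subsets(V):
    """All subsets of the ground set V, as frozensets."""
    return [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def is_submodular(rho, V):
    """Brute-force check of rho(X) + rho(Y) >= rho(X | Y) + rho(X & Y) over all pairs of subsets."""
    S = subsets(V)
    return all(rho(X) + rho(Y) >= rho(X | Y) + rho(X & Y) for X in S for Y in S)

# Illustrative example (assumption): a coverage function rho(X) = |union of the sets indexed by X|,
# a standard instance of a submodular set function.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
rho = lambda X: len(set().union(*(sets[i] for i in X)))
print(is_submodular(rho, {1, 2, 3}))  # expected: True
```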
Separable-convex Function

f : Z^n → R̄ is separable-convex ⟺ f(x) = φ_1(x_1) + φ_2(x_2) + ··· + φ_n(x_n) with each φ_i univariate convex.

(Figure: graph of one univariate piece φ_i.)


Five Properties of "Convex" Functions

1. convex extension
2. local opt = global opt
3. Legendre transform (biconjugacy)
4. separation theorem
5. Fenchel duality

These hold for separable discrete convex functions.


Some History

1935  Matroid: Whitney, Nakasawa
1965  Submodular function: Edmonds
1969  Convex network flow (electric circuits): Iri
1982  Submodularity and convexity: Frank, Fujishige, Lovász
1990  Valuated matroid: Dress–Wenzel; Integrally convex functions: Favati–Tardella
1996  Discrete convex analysis: Murota
2000  Submodular minimization algorithms: Iwata–Fleischer–Fujishige, Schrijver
2006  M-convex functions on jump systems: Murota
2012  L-convex functions on graphs: Hirai, Kolmogorov


Motivations/Applications/Connections

1. submodular: MANY problems (graph cut, convex games)
1. separable-convex: MANY problems (min-cost flow, resource allocation)
1. integrally convex: [mathematical aesthetics]
2. L-convex (Z^n): network tension, image processing, OR (inventory, scheduling)
2. M-convex (Z^n): network flow, congestion games, economics (games, auctions), mixed polynomial matrices
3. M-convex (jump): degree sequences, (2-)matching polynomials (half-plane property)
3. L-convex (graph): multiflow, multifacility location


Books (discrete convex analysis)

2000: Murota, Matrices and Matroids for Systems Analysis, Springer
2003: Murota, Discrete Convex Analysis, SIAM
2005: Fujishige, Submodular Functions and Optimization, 2nd ed., Elsevier
2014: Simchi-Levi, Chen, Bramel, The Logic of Logistics, 3rd ed., Springer


Convex Extension

• f : Z^n → R̄ is convex-extensible ⟺ there exists a convex f̄ : R^n → R̄ with f̄(x) = f(x) for all x ∈ Z^n.
• f̄ is a convex extension of f.
• Convex closure = (pointwise) maximal convex extension.

(Figure: a convex-extensible and a NOT convex-extensible function.)


Integrally Convex Function (Favati–Tardella 1990)

N(x) = { y ∈ Z^n | ‖x − y‖_∞ < 1 }  (x ∈ R^n).

Local convex extension:
f̃(x) = sup_{p,α} { ⟨p, x⟩ + α | ⟨p, y⟩ + α ≤ f(y) (∀y ∈ N(x)) }.

Def: f is integrally convex ⟺ f̃ is convex.

Ex: f(x_1, x_2) = |x_1 − 2x_2| is NOT integrally convex: at x = (1, 1/2) the local convex extension gives f̃(x) = 1, although f = 0 at both (0, 0) and (2, 1), so f̃ is not convex.


Integrally Convex Set

(Figure: examples of integrally convex sets (YES) and sets that are not (NO).)


Five Properties of "Convex" Functions

1. convex extension
2. local opt = global opt

hold for integrally convex functions;

3. Legendre transform (biconjugacy)
4. separation theorem
5. Fenchel duality

fail for integrally convex functions.


Definitions

1. submodular (set fn): ρ(X) + ρ(Y) ≥ ρ(X ∪ Y) + ρ(X ∩ Y)
1. separable-convex: f(x) = φ_1(x_1) + φ_2(x_2) + ··· + φ_n(x_n), with φ_i(t − 1) + φ_i(t + 1) ≥ 2φ_i(t) (∀t ∈ Z)
1. integrally convex: the local convex extension f̃(x) is convex
2. L-convex (Z^n)
2. M-convex (Z^n)
3. M-convex (jump)
3. L-convex (graph)


Classes of Discrete Convex Functions

(Figure: inclusion diagram for f : Z^n → R̄, relating convex-extensible, integrally convex, submodular set functions, separable convex, L-convex / L♮-convex, M-convex / M♮-convex, and the classes on graphs and jump systems.)


Bivariate L♮- and M♮-convex Functions

(Figure: surface plots of a bivariate L♮-convex function and a bivariate M♮-convex function.)


C3. L-convex Functions


L-convex Function (L = Lattice) (Murota 98)

g : Z^n → R ∪ {+∞}. For p, q ∈ Z^n, p ∨ q denotes the componentwise max and p ∧ q the componentwise min.

Def: g is L-convex ⟺
• Submodular: g(p) + g(q) ≥ g(p ∨ q) + g(p ∧ q)
• Translation: ∃r, ∀p: g(p + 1) = g(p) + r, where 1 = (1, 1, ..., 1).

(Figure: the lattice of p, q, p ∨ q, p ∧ q, and the relation between L and L♮.)
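As a sanity check on this definition, the following Python sketch (an illustration, not part of the slides; the function name, the test box, and the example are assumptions) verifies the two conditions, submodularity under componentwise max/min and the translation property g(p + 1) = g(p) + r, at all integer points of a small box. A True result only certifies the inequalities on the sampled box, not on all of Z^n.

```python
import itertools
import numpy as np

def looks_L_convex_on_box(g, low, high, n):
    """Test the two L-convexity conditions on every pair of integer points in [low, high]^n.

    A True result only certifies the inequalities on this finite box, not on all of Z^n.
    """
    pts = [np.array(p) for p in itertools.product(range(low, high + 1), repeat=n)]
    # Submodularity: g(p) + g(q) >= g(p v q) + g(p ^ q), with componentwise max / min.
    submodular = all(
        g(p) + g(q) >= g(np.maximum(p, q)) + g(np.minimum(p, q)) - 1e-9
        for p in pts for q in pts
    )
    # Translation: g(p + 1) = g(p) + r for a constant r (estimate r from the first point).
    one = np.ones(n, dtype=int)
    r = g(pts[0] + one) - g(pts[0])
    translation = all(abs(g(p + one) - g(p) - r) < 1e-9 for p in pts)
    return submodular and translation

# Illustrative example (assumption): the range function plus a linear term,
# g(p) = max(p) - min(p) + sum(p), which satisfies both conditions.
g = lambda p: float(np.max(p) - np.min(p) + np.sum(p))
print(looks_L_convex_on_box(g, -2, 2, n=2))  # expected: True
```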
L♮-convexity from Submodularity (Murota 98, Fujishige–Murota 2000)

g : Z^n → R̄ is L♮-convex ⟺ g̃(p_0, p) = g(p − p_0 1) is submodular in (p_0, p), where g̃ : Z^{n+1} → R̄ and 1 = (1, 1, ..., 1).

Submodularity of g̃ means g̃(p̃) + g̃(q̃) ≥ g̃(p̃ ∨ q̃) + g̃(p̃ ∧ q̃).

L_{n+1} ≃ L♮_n ⊃ L_n: L-convex functions in n + 1 variables correspond to L♮-convex functions in n variables, which include the L-convex functions.


L♮-convexity from Midpoint Convexity (Favati–Tardella 1990, Fujishige–Murota 2000)

Midpoint convexity (g : R^n → R):
g(p) + g(q) ≥ 2 g((p + q)/2)

⟹ Discrete midpoint convexity (g : Z^n → R̄):
g(p) + g(q) ≥ g(⌈(p + q)/2⌉) + g(⌊(p + q)/2⌋),
with ⌈·⌉ and ⌊·⌋ taken componentwise. This defines the L♮-convex functions (L = Lattice).


Midpoint Convexity for 0-1 Vectors

For p, q ∈ {0,1}^n: ⌈(p + q)/2⌉ = p ∨ q and ⌊(p + q)/2⌋ = p ∧ q.

Hence discrete midpoint convexity
g(p) + g(q) ≥ g(⌈(p + q)/2⌉) + g(⌊(p + q)/2⌋)
⟺ submodularity
g(p) + g(q) ≥ g(p ∨ q) + g(p ∧ q).


Translation Submodularity (L♮)

g(p) + g(q) ≥ g((p − α1) ∨ q) + g(p ∧ (q + α1))  (α ≥ 0).

(Figure: p, q, p − α1, q + α1 for α = 2, illustrating discrete midpoint convexity.)

Equivalent characterizations:
g̃(p_0, p) = g(p − p_0 1) is submodular in (p_0, p) (Fujishige–Murota 00)
⟺ translation submodular (Fujishige–Murota 00)
⟺ discrete midpoint convex (Favati–Tardella 90)
⟺ submodular integrally convex.


Remark: L♮-convex vs Submodular (n = 1)

Fact 1: Every g : Z → R̄ is submodular.
Fact 2: A function g : Z → R̄ is L♮-convex ⟺ g(p − 1) + g(p + 1) ≥ 2g(p) for all p ∈ Z.


L♮-convex Function: Examples

Quadratic: g(p) = Σ_i Σ_j a_ij p_i p_j is L♮-convex ⟺ a_ij ≤ 0 (i ≠ j) and Σ_j a_ij ≥ 0 (∀i).

Energy function: g(p) = Σ_i ψ_i(p_i) + Σ_{i≠j} ψ_ij(p_i − p_j) for univariate convex ψ_i and ψ_ij.

Range: g(p) = max{p_1, p_2, ..., p_n} − min{p_1, p_2, ..., p_n}.

Submodular set function: ρ : 2^V → R̄ is submodular ⟺ ρ(X) = g(χ_X) for some L♮-convex g.

Multimodular: h : Z^n → R̄ is multimodular ⟺ h(p) = g(p_1, p_1 + p_2, ..., p_1 + ··· + p_n) for some L♮-convex g.

(A numerical check of discrete midpoint convexity for the range example is sketched at the end of this section.)


Five Properties of "Convex" Functions

1. convex extension
2. local opt = global opt
3. Legendre transform (biconjugacy)
4. separation theorem
5. Fenchel duality
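To connect the range example above with the discrete midpoint convexity characterization, here is a small Python sketch (an illustration, not part of the slides; the function name and the sampled box are assumptions) that tests g(p) + g(q) ≥ g(⌈(p + q)/2⌉) + g(⌊(p + q)/2⌋) over a finite box. The function f(x_1, x_2) = |x_1 − 2x_2| from the integrally convex slide serves as a function that fails the test.

```python
import itertools
import numpy as np

def midpoint_convex_on_box(g, low, high, n):
    """Check discrete midpoint convexity
       g(p) + g(q) >= g(ceil((p+q)/2)) + g(floor((p+q)/2))
    for all pairs of integer points in the box [low, high]^n (componentwise rounding)."""
    pts = [np.array(p) for p in itertools.product(range(low, high + 1), repeat=n)]
    for p in pts:
        for q in pts:
            up = np.ceil((p + q) / 2).astype(int)    # componentwise round up
            dn = np.floor((p + q) / 2).astype(int)   # componentwise round down
            if g(p) + g(q) < g(up) + g(dn) - 1e-9:
                return False
    return True

# Range function from the examples above: g(p) = max(p) - min(p), an L♮-convex function.
g_range = lambda p: float(np.max(p) - np.min(p))
print(midpoint_convex_on_box(g_range, -2, 2, n=2))   # expected: True

# The non-integrally-convex function f(x1, x2) = |x1 - 2*x2| violates the inequality,
# e.g. for p = (0, 0), q = (2, 1).
f_bad = lambda x: float(abs(x[0] - 2 * x[1]))
print(midpoint_convex_on_box(f_bad, -2, 2, n=2))     # expected: False
```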