Problem Analysis and Complexity Theory SS16

Institute for Software Technology, Graz University of Technology

Welcome to ... Problem Analysis and Complexity Theory
716.054, 3 VU
Birgit Vogtenhuber, Institute for Software Technology, Graz University of Technology
email: [email protected]
office: Inffeldgasse 16B/II, room IC02044
office hour: Wednesday 10:30-11:30
slides: http://www.ist.tugraz.at/pact16.html

Birgit Vogtenhuber, Problem Analysis and Complexity Theory, 716.054, summer term 2016

Last Time
finished Space Complexity
• showed that TQBF is PSPACE-complete
• summarized space complexity
• discussed questions

Last Time
started Optimization and Approximation
• Discussed Decision vs. Optimization
• Formalized the notion of Approximation:
  ◦ approximation ratio R(x, y) = max{ v(x, y)/opt(x), opt(x)/v(x, y) }
  ◦ defined r(n)-approximation algorithms
• Defined complexity classes for optimization problems: PO, NPO, APX(r(n)), APX, APX∗
• Considered the following optimization problems: MIN VERTEX COVER, MAX (k-DEG.) INDEPENDENT SET
• Showed that for some NP-hard problems, the optimization problems can be efficiently approximated
• For some factors, approximation may still be NP-hard
Question: How are these classes defined? What are their containment relations?
Question: What did we show for these problems?

This Time / Next Time
continue Optimization and Approximation
• Repeat some essentials
• Consider MIN TSP, MIN TSP∆, and MAX KNAPSACK
• Introduce problems with small solution values
• Reductions for approximation problems
• New concept / class PCP: Probabilistically Checkable Proofs
• Topics for Presentations

Optimization Problems
Definition: An optimization problem A can be defined as a 4-tuple A = (I, S, v, d), where
• I is the set of instances of A,
• S(x), for x ∈ I, is the set of solutions for instance x,
• v(x, s), for s ∈ S(x), is the value of solution s for instance x,
• d ∈ {min, max} is the optimization direction.
Definition: The class of all optimization problems whose decision variant is in NP is called NPO. For a problem in NPO:
⇒ x ∈ I is decidable in polynomial time,
⇒ y ∈ S(x) is decidable in polynomial time, and |y| ≤ p(|x|) for some polynomial p,
⇒ v(x, s) is computable in polynomial time.
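The 4-tuple definition above can be rendered directly as code. A minimal sketch in Python, instantiated for MIN VERTEX COVER; all names are illustrative, and opt() is a brute-force evaluator meant only for tiny instances:

```python
from itertools import combinations

# A minimal rendering of the 4-tuple (I, S, v, d), instantiated for
# MIN VERTEX COVER. Names are illustrative, not from the slides.

def is_instance(x):
    """I: an instance is a graph, given as (vertex set, edge list)."""
    vertices, edges = x
    return all(u in vertices and w in vertices for (u, w) in edges)

def is_solution(x, s):
    """S(x): a solution is a vertex subset that covers every edge."""
    vertices, edges = x
    return s <= vertices and all(u in s or w in s for (u, w) in edges)

def value(x, s):
    """v(x, s): the value of a cover is its size."""
    return len(s)

direction = min  # d = min: smaller covers are better

def opt(x):
    """Exhaustively compute opt(x) (exponential time, illustration only)."""
    vertices, _ = x
    candidates = (set(c) for k in range(len(vertices) + 1)
                  for c in combinations(vertices, k))
    return direction(value(x, s) for s in candidates if is_solution(x, s))
```

On the 4-cycle, for example, the optimal cover has size 2 (two opposite vertices).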
Measuring Approximation
Definition: For an optimization problem A = (I, S, v, d), an instance x ∈ I, and a solution y ∈ S(x), the approximation ratio of y with respect to x is defined as

    R(x, y) := max{ v(x, y)/opt(x), opt(x)/v(x, y) } ≥ 1,

where the first term is the relevant one for d = min and the second for d = max.
Definition: Let r : ℕ → [1, ∞) with r(n + 1) ≥ r(n). M is an r(n)-approximation algorithm for A if for all x ∈ I: M(x) ∈ S(x) and R(x, M(x)) ≤ r(|x|).
The worst-case approximation ratio rM(n) of M is defined as rM(n) := sup{ R(x, M(x)) : |x| ≤ n }.

The Class APX
Definition: Let r : ℕ → [1, ∞) with r(n + 1) ≥ r(n).
• The complexity class APX(r(n)) contains all optimization problems which can be approximated by a polynomial-time algorithm M with worst-case approximation ratio rM(n) ≤ r(n).
• APX = ∪c≥1 APX(c) is the class of optimization problems admitting a polynomial-time c-approximation algorithm for some constant c ≥ 1.
• APX∗ = ∩c>1 APX(c) is the class of optimization problems admitting a polynomial-time c-approximation algorithm for every constant c > 1.

Optimization Problems
• Considered several optimization problems:

    Problem            Instance            Parameter        In APX?
    MIN VERTEX COVER   graph               |vertex cover|   APX
    MAX CLIQUE         graph               |clique|
    MAX INDEP. SET     graph               |indep. set|     APX?
    MAX 2SAT           2CNF formula        |sat. clauses|   APX
    MIN TSP            weighted graph      cost of tour
    MAX KNAPSACK       objects, t. weight  total value

Question: What is the relation between APX and APX∗ if P ≠ NP?
Question: Which of the problems are in APX, in APX∗? Make your guess list!
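The ratio R(x, y) from the definition above is essentially a one-liner; a minimal sketch (the function name and argument names are mine):

```python
def approx_ratio(value, opt_value):
    """R(x, y) = max{ v(x, y)/opt(x), opt(x)/v(x, y) }, always >= 1.
    The first term matters for minimization (value >= opt), the second
    for maximization (value <= opt); the max covers both directions."""
    if value <= 0 or opt_value <= 0:
        raise ValueError("values must be positive")
    return max(value / opt_value, opt_value / value)
```

For example, a vertex cover of size 6 when the optimum is 4 gives R = 1.5, and an independent set of size 4 when the optimum is 8 gives R = 2.0; an optimal solution always gives R = 1.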
MIN TSP
TSP / MIN TSP: the Travelling Salesman Problem
• Problem: How efficiently can a salesman make a round trip visiting given locations?
• Problem instance: A set L = {l1, ..., ln} of locations, with travel costs / times (Ti,j) between them, plus a maximum total time t.
• Decision problem: Is it possible to visit all locations of L within time t?
• Optimization problem: What is the minimum time t needed for a round trip that visits all locations of L?
Theorem: TSP is (strongly) NP-complete.
Corollary: MIN TSP is in NPO.

MIN TSP and APX
Theorem: If P ≠ NP, then MIN TSP is not in APX.
Proof:
• Assume there exists a polynomial-time c-approximation algorithm A for MIN TSP for some constant c > 1.
• Given a directed graph G = (V, E), we construct an input for A by assigning weights to the edges of the complete graph on V:

    Ti,j = 1      if (i, j) ∈ E
    Ti,j = c|V|   if (i, j) ∉ E

⇒ If G contains a Hamiltonian cycle, then the optimal tour has cost |V| and A(G) ≤ c|V|. Otherwise every tour uses at least one non-edge, so A(G) ≥ |V| − 1 + c|V| > c|V|.
⇒ Algorithm A decides HAMCYCLE in polynomial time, a contradiction.
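The weight construction from the proof is easy to make concrete. A sketch with names of my choosing; since no c-approximation algorithm A for MIN TSP exists unless P = NP, an exact brute-force optimum stands in for A on toy instances:

```python
from itertools import permutations

def tsp_weights(n, edges, c):
    """Weights on the complete directed graph over vertices 0..n-1:
    1 for edges of G, c*n for non-edges (the construction in the proof)."""
    return {(i, j): 1 if (i, j) in edges else c * n
            for i in range(n) for j in range(n) if i != j}

def min_tour_cost(n, T):
    """Exact MIN TSP by brute force (only to exhibit the gap)."""
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        best = min(best, sum(T[tour[k], tour[k + 1]] for k in range(n)))
    return best

def has_ham_cycle_via_gap(n, edges, c=2):
    """G has a Hamiltonian cycle iff the optimal tour costs at most n,
    in which case any c-approximate value stays <= c*n; otherwise every
    tour costs at least n - 1 + c*n > c*n."""
    return min_tour_cost(n, tsp_weights(n, edges, c)) <= c * n
```

On the directed 3-cycle {(0,1), (1,2), (2,0)} the optimal tour costs 3 ≤ c·3; dropping the edge (2,0) forces one non-edge of weight c·3 into every tour, pushing the cost above the threshold.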
The Gap Technique
Theorem: Given an optimization problem such that
1. for every input x ∈ I and every solution s ∈ S(x), the value v(x, s) does not lie in the interval (a, b), i.e., either v(x, s) ≤ a or v(x, s) ≥ b, and
2. it is NP-hard to decide whether opt(x) ≤ a or opt(x) ≥ b,
then it is NP-hard to obtain a worst-case approximation ratio smaller than b/a.
Proof: Exercise.
Question: What was the gap in our proof for MIN TSP ∉ APX if P ≠ NP?
Remark: The proof for MIN TSP ∉ APX if P ≠ NP used a gap of (|V|, c|V| + |V| − 1).

MIN TSP with ∆-Inequality
Just proved: If P ≠ NP, MIN TSP has no polynomial-time constant-factor approximation for any constant factor c.
Question: Does this imply that route planning is completely intractable?
MIN TSP∆: Given a set L = {l1, ..., ln} of locations with travel times (Ti,j) between them such that

    Ti,j + Tj,k ≥ Ti,k   for all li, lj, lk ∈ L,

what is the minimum time t needed for a round trip that visits all locations of L?
[Figure: triangle on the locations li, lj, lk with edge labels Ti,j, Tj,k, Ti,k]
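The ∆-inequality precondition that distinguishes MIN TSP∆ instances can be checked directly; a minimal sketch, assuming the travel times are given as an n × n matrix:

```python
def satisfies_triangle_inequality(T):
    """Check Ti,j + Tj,k >= Ti,k for all i, j, k -- the ∆-inequality
    from the MIN TSP∆ definition. T is an n x n matrix of travel times."""
    n = len(T)
    return all(T[i][j] + T[j][k] >= T[i][k]
               for i in range(n) for j in range(n) for k in range(n))
```

Note that the weights from the gap construction (1 versus c|V|) generally violate this condition, which is why the inapproximability proof for MIN TSP says nothing about MIN TSP∆.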