Algorithms Outline the Class


6117CIT - Advanced Topics in Computing Science (Nathan)
Algorithms: the intelligence behind the hardware

Outline
• Approximation algorithms: the class APX
• Some complexity classes, like PTAS and FPTAS
• Illustration of some PTAS
• Based on:
  - P. Schuurman and G. Woeginger (2001), Approximation Schemes - A Tutorial.
  - M. Mastrolilli, course notes.
© V. Estivill-Castro

The class APX
• APX is an abbreviation of "approximable".
• It is the set of NP optimization problems that allow polynomial-time approximation algorithms with approximation ratio bounded by a constant (constant-factor approximation algorithms, for short).
• Problems in this class have efficient algorithms that can find an answer within some fixed percentage of the optimal answer.
• An approximation algorithm is called a ρ-approximation algorithm for some constant ρ if it can be proven that the solution the algorithm finds is at most ρ times worse than the optimal solution.

Review from week 2
• The vertex cover problem and the traveling salesman problem with the triangle inequality each have simple 2-approximation algorithms.
• The traveling salesman problem with arbitrary edge lengths cannot be approximated with an approximation ratio bounded by a constant unless the Hamiltonian-path problem can be solved in polynomial time.

Alternative view on PTAS
• If there is a polynomial-time algorithm that solves a problem within every fixed percentage of optimal (one algorithm for each percentage), then the problem is said to have a polynomial-time approximation scheme (PTAS).
• Unless P = NP, it can be shown that there are problems that are in APX but not in PTAS; that is, problems that can be approximated within some constant factor, but not within every constant factor.

More on APX
• A problem is APX-hard if there is a PTAS reduction from every problem in APX to that problem.
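The 2-approximation for vertex cover recalled from week 2 is easy to make concrete. A minimal sketch (illustrative code, not from the slides): greedily build a maximal matching and output both endpoints of every matched edge; any cover must contain at least one endpoint of each matched edge, so the output is at most twice optimal.

```python
def vertex_cover_2approx(edges):
    """2-approximation for vertex cover via a greedy maximal matching:
    for every edge with both endpoints still uncovered, take both."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # (u, v) joins the matching
    return cover

# A path on 4 vertices: optimum is {2, 3} (size 2); we return size 4.
edges = [(1, 2), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
```

Since the matched edges are vertex-disjoint, the optimum has at least one vertex per matched edge, which is exactly the factor-2 argument.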
• A problem is APX-complete if it is APX-hard and also in APX.
• As a consequence of PTAS ⊆ APX (and, by the previous slide, the containment is strict unless P = NP), no APX-hard problem is in PTAS.

Focus on optimization problems
• Notation:
  - We use I for an instance of an optimization problem.
  - We use |I| = n for the length of the input instance.
  - We use Opt(I) for the value of the optimal solution.
• We focus on minimization problems in this lecture, but all concepts are symmetric for maximization problems.

ρ-approximation algorithm
• We denote by A(I) the value of the solution produced by algorithm A.
• An algorithm is a "ρ-approximation algorithm" if A(I) ≤ ρ·Opt(I) for all instances and its running time is polynomial in |I|.
• ρ is the worst-case approximation ratio; ρ ≥ 1. Good: ρ close to 1.

PTAS: polynomial-time approximation scheme
• A PTAS is a family {A_ε}, ε > 0, of (1+ε)-approximation algorithms with running time polynomial in |I|.
• As observed in the last lecture, the scheme still fits the definition if the running time is exponential in 1/ε, e.g. O(|I|^(1/ε)).
• NOTE — FPTAS (fully PTAS): the running time is also polynomial in 1/ε, e.g. O(|I|/ε³).

Strongly and weakly NP-hard
• If a problem is NP-hard even when the input is encoded in unary, it is called strongly NP-hard (= NP-hard in the strong sense = unary NP-hard).
• If a problem is polynomially solvable under a unary encoding, then it is solvable in pseudo-polynomial time.
• NP-hardness in the strong sense is contained within NP-hardness in the weak sense.

Complexity class relationships
(Figure: nested regions P, FPTAS, PTAS, APX inside the NP optimization problems, together with the pseudo-polynomially solvable problems.)

Some known approximation algorithms
• Non-constant worst-case ratio:
  - Graph coloring: O(n^(1/2−ε))
  - Total flow time: O(n^(1/2))
  - Set covering: O(log n)
  - Vertex cover: O(log n)
• Constant worst-case ratio:
  - TSP with triangle inequality: 3/2
  - Max Sat: 1.2987
• PTAS: bin packing
• FPTAS: makespan on 2 machines
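The O(log n) entry for set covering in the table above comes from the classic greedy algorithm. A minimal sketch (illustrative, not from the slides): repeatedly pick the subset that covers the most still-uncovered elements.

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: an O(log n)-approximation.
    Repeatedly choose the subset covering the most uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("subsets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

sets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover({1, 2, 3, 4, 5}, sets))  # picks {1,2,3}, then {4,5}
```

The logarithmic guarantee follows because each greedy pick covers at least a 1/Opt fraction of what remains uncovered.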
The first approximation algorithm (Graham '66)
• P||Cmax: makespan minimization on m identical machines (strongly NP-hard).
• Input: m identical machines M1, M2, …, Mm, and n jobs with lengths p1, p2, …, pn.
• Objective: the smallest makespan Cmax.
(Figure: Gantt chart of machines M1, M2, …, Mm with makespan Cmax.)

Algorithm: List Scheduling (LS)
• LS: schedule the jobs, in any given order, on the first available (i.e. idle) machine.
(Figure: list J1, J2, J3, J4, J5, J6 distributed over M1 and M2 in list order.)

Analysis of the LS algorithm
• Define the lower bound LB = max{ max_j pj ; (Σ_j pj)/m }.
• Let sf be the starting time of the FINAL job, i.e. the job that completes last, and pf its length.
• Observation (for the schedule produced by LS): Cmax^LS = sf + pf.
• Let Ei be the completion (end) time of machine Mi.

LS analysis (cont.)
• LS places the last job on the machine that is least loaded at that moment:
  - sf ≤ Ei for all other machines Mi (i ≠ f), and sf = Ef − pf.
• Summing over all machines:
  - m·sf ≤ [Σ_{i=1..m} Ei] − pf
  - sf ≤ (1/m)([Σ_{i=1..m} Ei] − pf) = (1/m)([Σ_j pj] − pf)
• But then for LS:
  - Cmax^LS = sf + pf ≤ (1/m)·Σ_j pj + pf·(1 − 1/m)
• Thus, since (1/m)·Σ_j pj ≤ Opt and pf ≤ Opt:
  - Cmax^LS ≤ [2 − 1/m]·Opt

LS: analysis
• Theorem: LS is a (2 − 1/m)-approximation algorithm, and the approximation ratio is tight.
• Tight example (m = 2): p1 = p2 = 1 and p3 = 2. LS puts J1 on M1, J2 on M2, then J3 on M1, giving Cmax = 3, while the optimum pairs J1 and J2, giving Cmax = 2.

Linear-programming-based approximation algorithms
• IDEA: formulate the problem as an integer linear program (ILP, difficult), relax it to a linear program (LP, solvable in polynomial time) with OptLP ≤ Opt, and round the fractional LP solution to integral values so that A(I) ≤ α·Opt.

Example: R2||Cmax
• R2||Cmax: makespan minimization on 2 unrelated machines (weakly NP-hard).
• Instance I: 2 unrelated machines and n jobs; job j has length p1j on machine M1 and p2j on machine M2.

Integer linear program (ILP)
• We encode by xij the fact that job j is placed on machine i. Then the ILP looks as follows:
  - Minimize Cmax
  - Subject to:
  - x1j + x2j = 1, for j = 1, …, n (each job is assigned once)
  - Σ_{j=1..n} p1j x1j ≤ Cmax
  - Σ_{j=1..n} p2j x2j ≤ Cmax
  - x1j, x2j ∈
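Graham's list scheduling is short to implement with a priority queue of machine loads (a sketch under the slides' model; the heap makes each assignment O(log m)):

```python
import heapq

def list_schedule(lengths, m):
    """Graham's List Scheduling: assign each job, in the given order,
    to the machine that becomes idle first. (2 - 1/m)-approximation."""
    loads = [0] * m
    heap = [(0, i) for i in range(m)]
    for p in lengths:
        load, i = heapq.heappop(heap)      # least-loaded machine so far
        loads[i] = load + p
        heapq.heappush(heap, (loads[i], i))
    return max(loads)                      # the makespan Cmax

# The tight example from the slides: p1 = p2 = 1, p3 = 2 on m = 2 machines.
print(list_schedule([1, 1, 2], 2))  # 3, while Opt = 2
```

Feeding the same jobs in the order [2, 1, 1] yields the optimal makespan 2, which is why ordering heuristics (e.g. longest job first) improve the ratio.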
{0, 1}, for j = 1, …, n (each job must be on one machine or the other).

Linear program (LP) relaxation
• Same encoding: xij is the (now possibly fractional) extent to which job j is placed on machine i. The relaxed program is:
  - Minimize Cmax
  - Subject to:
  - x1j + x2j = 1, for j = 1, …, n (each job is assigned once)
  - Σ_{j=1..n} p1j x1j ≤ Cmax
  - Σ_{j=1..n} p2j x2j ≤ Cmax
  - x1j, x2j ≥ 0, for j = 1, …, n (each job must be on some machine, or split between them).

Analysis of the number of fractional jobs
• Known: a basic optimal LP solution has the property that the number of variables with positive values is at most the number of rows of the constraint matrix.
  - Thus, there are at most n + 2 variables with positive values.
• Since Cmax is always positive, at most n + 1 of the xij variables are positive.
  - (We could reduce the value of Cmax if we made both variables of some job zero.)
• Every job has at least one positive variable associated with it, because x1j + x2j = 1.
• CONCLUSION: at most 1 (ONE) job has been split onto two machines.

Rounding
(Figure: J5 is the one fractional job, split between M1 and M2; rounding assigns J5 entirely to one machine.)
• OptLP ≤ Opt, and moving the whole of J5 onto one machine adds at most the length of J5, which is at most Opt; hence ROUNDING gives Cmax ≤ 2·Opt.

How to get a PTAS
• Input I → Algorithm A → output A(I), a feasible solution for I.
• IDEA: add more structure (depending on ε); as ε → 0, the additional structure → 0.
• Compare: as ε → 0, A(I) → Opt(I).

Structuring the input
• Transform the difficult instance I into a structured instance I^ε that is solvable in polynomial time, then translate the solution back to I in polynomial time, so that A(I) ≤ (1+ε)·Opt.

Example: P2||Cmax
• P2||Cmax: makespan minimization on 2 identical machines (weakly NP-hard).
• Instance I: 2 identical machines and n jobs with lengths p1, p2, …, pn.
• The lower bound is again LB = max{ max_j pj ; (Σ_j pj)/2 }.
• Thus LB ≤ Opt ≤ 2·LB.

How to round the input: I → I^ε
• "Big" jobs (pj > ε·LB): keep them unchanged, pj := pj.
• "Small" jobs (pj ≤ ε·LB): replace them all by ⌊S/(ε·LB)⌋ jobs of length ε·LB, where S = Σ_{small} pj.

Analysis of the rounded instance I^ε
• Recall that
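Before relaxing, it helps to see exactly what the ILP computes. A tiny brute-force evaluator over all 0/1 assignments (a hypothetical helper for intuition only; it enumerates the ILP's feasible set x1j + x2j = 1 and is exponential in n, which is precisely why the LP relaxation is used instead):

```python
from itertools import product

def r2_cmax_opt(p1, p2):
    """Exact R2||Cmax by enumerating every 0/1 assignment.
    x[j] == 1 means job j runs on M1 (so x1j = 1, x2j = 0)."""
    n = len(p1)
    best = float("inf")
    for x in product((0, 1), repeat=n):
        load1 = sum(p1[j] for j in range(n) if x[j] == 1)
        load2 = sum(p2[j] for j in range(n) if x[j] == 0)
        best = min(best, max(load1, load2))   # this max is Cmax
    return best

# "Unrelated" machines: job 1 is fast on M1, job 2 is fast on M2.
print(r2_cmax_opt([1, 9], [9, 1]))  # 1
```

The LP relaxation replaces the 2^n enumeration with one polynomial-size LP, at the cost of at most one fractional job that must then be rounded.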
Σ_j pj ≤ 2·LB (since LB ≥ (Σ_j pj)/2).
• How many big jobs?
  - A big job has pj > ε·LB.
  - This implies #(big) ≤ 2/ε.
• How many conglomerate jobs (of small jobs)?
  - A conglomerate of small jobs has length ε·LB.
  - This implies #(conglomerates) ≤ 2/ε.
• LEMMA: the rounded instance has a constant(ε) number of jobs.
• COROLLARY: we can find its optimal solution in constant time!
  - PROOF: use exhaustive search.

Back to a feasible solution
(Figure: the optimal schedule of I^ε places the big jobs and the ε·LB conglomerates on M1 and M2; each conglomerate is then refilled with the actual small jobs, each of length ≤ ε·LB, so each machine overflows by at most one small job.)
• Thus Cmax ≤ Opt^ε + ε·LB ≤ (1+ε)·Opt^ε.

How much error is introduced?
• Cmax ≤ Opt^ε + ε·LB.
• Opt^ε ≤ Opt + ε·LB (wait till the next slide).
• Hence Cmax ≤ Opt + 2ε·LB ≤ (1 + 2ε)·Opt, using LB ≤ Opt.

Opt vs. Opt^ε
• Case 1 (LUCKY): the optimal schedule of I can be mimicked by the rounded jobs, so Opt^ε ≤ Opt.
• Case 2 (the optimal solution here has to be as good as LS):
  - Opt^ε ≤ Cmax^LS = sf + pf ≤ (1/m)·Σ_j pj + pf·(1 − 1/m).
  - Since m = 2, (1/m)·Σ_j pj ≤ Opt, and pf/2 ≤ ε·LB, we have Opt^ε ≤ Opt + ε·LB.

Structuring the execution of an algorithm
• IDEA: take an exact but slow algorithm A and interact with it while it is working.
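The round-then-search half of the P2||Cmax scheme can be sketched end to end. A simplified illustration (assumed helper, not the slides' code): it builds the rounded instance I^ε and solves it exactly by exhaustive search; converting the result back to a feasible schedule for I costs at most an extra ε·LB, per the analysis above.

```python
from itertools import product

def rounded_opt_p2(p, eps):
    """PTAS skeleton for P2||Cmax: keep big jobs (> eps*LB), merge the
    small ones into floor(S/(eps*LB)) conglomerates of length eps*LB,
    then solve the constant(eps)-size rounded instance exhaustively."""
    LB = max(max(p), sum(p) / 2)
    big = [x for x in p if x > eps * LB]
    S = sum(x for x in p if x <= eps * LB)
    jobs = big + [eps * LB] * int(S // (eps * LB))
    best = float("inf")
    for x in product((0, 1), repeat=len(jobs)):  # at most ~4/eps jobs
        l1 = sum(q for q, a in zip(jobs, x) if a)
        best = min(best, max(l1, sum(jobs) - l1))
    return best  # Opt of the rounded instance, within eps*LB of Opt

print(rounded_opt_p2([3, 3, 2, 1, 1], eps=0.5))  # 5.5 (here Opt = 5, LB = 5)
```

With eps = 0.5 the rounded instance is [3, 3, 2.5], and its optimum 5.5 indeed satisfies Opt^ε ≤ Opt + ε·LB = 7.5.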