Algorithm Design Using Spectral Graph Theory

Total Pages: 16

File Type: PDF, Size: 1020 KB

Algorithm Design Using Spectral Graph Theory

Richard Peng
CMU-CS-13-121
August 2013

School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Thesis Committee:
Gary L. Miller, Chair
Guy E. Blelloch
Alan Frieze
Daniel A. Spielman, Yale University

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Copyright © 2013 Richard Peng

This research was supported in part by the National Science Foundation under grant number CCF-1018463, by a Microsoft Research PhD Fellowship, by the National Eye Institute under grant numbers R01-EY01317-08 and R01-EY11289-22, and by the Natural Sciences and Engineering Research Council of Canada under grant numbers M-377343-2009 and D-390928-2010. Parts of this work were done while at Microsoft Research New England. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the funding parties.

Keywords: Combinatorial Preconditioning, Linear System Solvers, Spectral Graph Theory, Parallel Algorithms, Low Stretch Embeddings, Image Processing

To my parents and grandparents

Abstract

Spectral graph theory is the interplay between linear algebra and combinatorial graph theory. Laplace's equation and its discrete form, the Laplacian matrix, appear ubiquitously in mathematical physics. Due to the recent discovery of very fast solvers for these equations, they are also becoming increasingly useful in combinatorial optimization, computer vision, computer graphics, and machine learning.

In this thesis, we develop highly efficient and parallelizable algorithms for solving linear systems involving graph Laplacian matrices. These solvers can also be extended to symmetric diagonally dominant matrices and M-matrices, both of which are closely related to graph Laplacians. Our algorithms build upon two decades of progress on combinatorial preconditioning, which connects numerical and combinatorial algorithms through spectral graph theory. They in turn rely on tools from numerical analysis, metric embeddings, and random matrix theory.

We give two solver algorithms that take diametrically opposite approaches. The first is motivated by combinatorial algorithms, and aims to gradually break the problem into several smaller ones. It represents major simplifications over previous solver constructions, and has theoretical running time comparable to sorting. The second is motivated by numerical analysis, and aims to rapidly improve the algebraic connectivity of the graph. It is the first highly efficient solver for Laplacian linear systems that parallelizes almost completely.

Our results improve the performances of applications of fast linear system solvers ranging from scientific computing to algorithmic graph theory. We also show that these solvers can be used to address broad classes of image processing tasks, and give some preliminary experimental results.

Acknowledgments

This thesis is written with the help of many people. First and foremost I would like to thank my advisor, Gary Miller, who introduced me to most of the topics discussed here and helped me throughout my studies at CMU. My thesis committee members, Guy Blelloch, Alan Frieze, and Daniel Spielman, provided me with invaluable advice during the dissertation process. I am also grateful towards Ian Munro and Daniel Sleator for their constant guidance.
I am indebted to the co-authors who I had the fortune to work with during my graduate studies. Many of the ideas and results in this thesis are due to collaborations with Yiannis Koutis, Kanat Tangwongsan, Hui Han Chin, and Shen Chen Xu. Michael Cohen, Jakub Pachocki, and Shen Chen Xu also provided very helpful comments and feedback on earlier versions of this document. I was fortunate to be hosted by Aleksander Mądry and Madhu Sudan while interning at Microsoft Research New England. While there, I also had many enlightening discussions with Jonathan Kelner and Alex Levin.

I would like to thank my parents and grandparents who encouraged and cultivated my interests, and friends who helped me throughout graduate school: Bessie, Eric, Hui Han, Mark, Neal, Tom, and Travis. Finally, I want to thank the CNOI, CEMC, IOI, USACO, and SGU for developing my problem solving abilities, stretching my imagination, and always giving me something to look forward to.

Contents

1 Introduction
  1.1 Graphs and Linear Systems
  1.2 Related Work
  1.3 Structure of This Thesis
  1.4 Applications
  1.5 Solving Linear Systems
  1.6 Matrices and Similarity
2 Nearly O(m log n) Time Solver
  2.1 Reductions in Graph Size
  2.2 Ultra-Sparsification Algorithm
  2.3 Recursive Preconditioning and Nearly-Linear Time Solvers
  2.4 Reuse, Recycle, and Reduce the Tree
  2.5 Tighter Bound for High Stretch Edges
  2.6 Stability Under Fixed Point Arithmetic
3 Polylog Depth, Nearly-Linear Work Solvers
  3.1 Overview of Our Approach
  3.2 Parallel Solver
  3.3 Nearly-Linear Sized Parallel Solver Chains
  3.4 Alternate Construction of Parallel Solver Chains
4 Construction of Low-Stretch Subgraphs
  4.1 Low Stretch Spanning Trees and Subgraphs
  4.2 Parallel Partition Routine
  4.3 Parallel Construction of Low Stretch Subgraphs
  4.4 Construction of low ℓα-stretch spanning trees
5 Applications to Image Processing
  5.1 Background and Formulations
  5.2 Applications
  5.3 Related Algorithms
  5.4 Approximating Grouped Least Squares Using Quadratic Minimization
  5.5 Evidence of Practical Feasibility
  5.6 Relations between Graph Problems and Minimizing LASSO Objectives
  5.7 Other Variants
  5.8 Remarks
6 Conclusion and Open Problems
  6.1 Low Stretch Spanning Trees
  6.2 Interaction between Iterative Methods and Sparsification
Bibliography
A Deferred Proofs
B Spectral Sparsification by Sampling
  B.1 From Matrix Chernoff Bounds to Sparsification Guarantees
  B.2 Proof of Concentration Bound
C Partial Cholesky Factorization
  C.1 Partial Cholesky Factorization on Graph Laplacians
  C.2 Errors Under Partial Cholesky Factorization
D Iterative Methods
  D.1 Richardson Iteration
  D.2 Chebyshev Iteration
  D.3 Chebyshev Iteration with Round-off Errors

List of Figures

1.1 Representing a social network as a graph.
1.2 A resistive electric network with resistors labeled by their conductances (f).
1.3 Call structure of a 3-level W-cycle algorithm corresponding to the runtime recurrence given in Equation 1.5 with t = 2. Each level makes 2 calls in succession to a problem that's 4 times smaller.
1.4 Call structure of a 3-level V-cycle algorithm. Each level makes 1 call to a problem of comparable size. A small number of levels leads to a fast algorithm.
2.1 The effective resistance R_T(e) of the blue off-tree edge in the red tree is 1/4 + 1/5 + 1/2 = 0.95. Its stretch str_T(e) = w_e · R_T(e) is (1/4 + 1/5 + 1/2)/(1/2) = 1.9.
4.1 Cycle on n vertices. Any tree T can include only n − 1 of these edges. If the graph is unweighted, the remaining edge has stretch n − 1.
4.2 Two possible spanning trees of the unit weighted square grid, shown with red edges.
4.3 Low stretch spanning tree for the square grid (in red) with extra edges of high stretch (narrower, in green). These edges can be viewed either as edges to be resolved later in the recursive algorithm, or as forming a subgraph along with the tree.
5.1 Outputs of various denoising algorithms on an image with AWGN noise. From left to right: noisy version, Split Bregman [GO09], Fixed Point [MSX11], and Grouped Least Squares. Errors listed below each figure from left to right are ℓ2 and ℓ1 norms of differences with the original.
5.2 Applications to various image processing tasks. From left to right: image segmentation, global feature extraction / blurring, and decoloring.
5.3 Example of seamless cloning using Poisson Image Editing.

Chapter 1: Introduction

Solving a system of linear equations is one of the oldest and possibly most studied computational problems. There is evidence that humans were solving linear systems to facilitate economic activities over two thousand years ago. Over the past two hundred years, advances in the physical sciences, engineering, and statistics made linear system solvers a central topic of applied mathematics. This popularity of solving linear systems as a subroutine is partly due to the existence of efficient algorithms: small systems involving tens of variables can be solved reasonably fast even by hand.

Over the last two decades, the digital revolution has led to a wide variety of new applications for linear system solvers. At the same time, improvements in sensor capabilities, storage devices, and networks led to sharp increases in data sizes. Methods suitable for systems with a small number of variables have proven challenging to scale to the large, sparse instances common in these applications. As a result, the design of efficient linear system solvers remains a crucial algorithmic problem.

Many of the newer applications model entities and their relationships as networks, also known as graphs, and solve linear systems based on these networks. The resulting matrices are called graph Laplacians, which we will formally define in the next section. This approach of extracting information using graph Laplacians has been referred to as the Laplacian Paradigm [Ten10]. It has been successfully applied to a growing number of areas including clustering, machine learning, computer vision, and algorithmic graph theory. The main focus of this dissertation is the routine at the heart of this paradigm: solvers for linear systems involving graph Laplacians.
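As a concrete illustration of the objects above (a minimal sketch of my own, not drawn from the thesis; the example graph and right-hand side are made up), the following Python snippet builds the Laplacian L = D − A of a small weighted graph and solves a Laplacian system with a dense direct method. The thesis is about replacing this last step with nearly-linear time solvers for large sparse Laplacians.

    import numpy as np

    # Hypothetical example: a weighted graph on 4 vertices given as (u, v, weight) edges.
    edges = [(0, 1, 2.0), (1, 2, 4.0), (2, 3, 5.0), (3, 0, 2.0), (0, 2, 1.0)]
    n = 4

    # Graph Laplacian: L = D - A, where D is the diagonal matrix of weighted degrees
    # and A is the weighted adjacency matrix.
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w

    # A Laplacian system Lx = b is solvable when the entries of b sum to zero
    # (b is orthogonal to the all-ones nullspace). Here b encodes one unit of
    # electrical flow entering at vertex 0 and leaving at vertex 3.
    b = np.array([1.0, 0.0, 0.0, -1.0])

    # Small dense solve via the pseudoinverse; fast Laplacian solvers replace this step.
    x = np.linalg.pinv(L) @ b
    print(x)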
Recommended publications
  • How to Morph Graphs on the Torus
How to Morph Graphs on the Torus
Erin Wolf Chambers, Jeff Erickson, Patrick Lin, Salman Parsa

Abstract. We present the first algorithm to morph graphs on the torus. Given two isotopic essentially 3-connected embeddings of the same graph on the Euclidean flat torus, where the edges in both drawings are geodesics, our algorithm computes a continuous deformation from one drawing to the other, such that all edges are geodesics at all times. Previously even the existence of such a morph was not known. Our algorithm runs in O(n^{1+ω/2}) time, where ω is the matrix multiplication exponent, and the computed morph consists of O(n) parallel linear morphing steps. Existing techniques for morphing planar straight-line graphs do not immediately generalize to graphs on the torus; in particular, Cairns' original 1944 proof and its more recent improvements rely on the fact that every planar graph contains a vertex of degree

… first algorithm to morph graphs on any higher-genus surface. In fact, it is the first algorithm to compute any form of isotopy between surface graphs; existing algorithms to test whether two graphs on the same surface are isotopic are non-constructive [29]. Our algorithm outputs a morph consisting of O(n) steps; within each step, all vertices move along parallel geodesics at (different) constant speeds, and all edges remain geodesics ("straight line segments"). Our algorithm runs in O(n^{1+ω/2}) time; the running time is dominated by repeatedly solving a linear system encoding a natural generalization of Tutte's spring embedding theorem.

1.1 Prior Results (and Why They Don't Generalize)
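Since the running time in this excerpt is dominated by solving Tutte-style linear systems, here is a minimal planar (not toroidal) sketch of that idea, using a made-up graph of my own: pin the outer face to a convex polygon and place every interior vertex at the average of its neighbors by solving one small linear system per coordinate.

    import numpy as np

    # Hypothetical tiny planar graph: outer cycle 0-1-2-3 plus an interior vertex 4
    # adjacent to all four outer vertices.
    n = 5
    adj = {0: [1, 3, 4], 1: [0, 2, 4], 2: [1, 3, 4], 3: [0, 2, 4], 4: [0, 1, 2, 3]}
    outer = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}  # pinned positions

    interior = [v for v in range(n) if v not in outer]
    index = {v: i for i, v in enumerate(interior)}

    # For each interior vertex v: deg(v) * pos[v] - sum of interior-neighbor positions
    # equals the sum of pinned-neighbor positions (moved to the right-hand side).
    A = np.zeros((len(interior), len(interior)))
    bx = np.zeros(len(interior))
    by = np.zeros(len(interior))
    for v in interior:
        i = index[v]
        A[i, i] = len(adj[v])
        for u in adj[v]:
            if u in outer:
                bx[i] += outer[u][0]
                by[i] += outer[u][1]
            else:
                A[i, index[u]] -= 1.0

    x = np.linalg.solve(A, bx)
    y = np.linalg.solve(A, by)
    print(list(zip(x, y)))  # interior vertex positions; here the single vertex lands at (0.5, 0.5)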
  • 1 Introduction and Terminology
A Survey of Solved Problems and Applications on Bandwidth, Edgesum and Profile of Graphs
Yung-Ling Lai, National Chiayi Teacher College, Chiayi, Taiwan, R.O.C.
Kenneth Williams, Western Michigan University

Abstract. This paper provides a survey of results on the exact bandwidth, edgesum, and profile of graphs. A bibliography of work in these areas is provided. The emphasis is on composite graphs. This may be regarded as an update of the original survey of solved bandwidth problems by Chinn, Chvatalova, Dewdney, and Gibbs [10] in 1982. Also several of the application areas involving these graph parameters are described.

1 Introduction and terminology

For a graph G, V(G) denotes the set of vertices of G and E(G) denotes the set of edges of G. Let G = (V, E) be a graph on n vertices. A 1-1 mapping f : V → {1, 2, ..., n} is called a proper numbering of G. The bandwidth B_f(G) of a proper numbering f of G is the number

    B_f(G) = max{ |f(u) − f(v)| : uv ∈ E },

and the bandwidth B(G) of G is the number

    B(G) = min{ B_f(G) : f is a proper numbering of G }.

A proper numbering f is called a bandwidth numbering of G if B_f(G) = B(G). For example, Figure 1 shows bandwidth numberings for the graphs P_4, C_5, K_{1,4} and K_{2,3}. In general, B(P_n) = 1, B(C_n) = 2, B(K_{1,n}) = ⌈n/2⌉ and B(K_{m,n}) = m + ⌈n/2⌉ − 1 for m ≤ n.
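To make the definitions concrete, here is a small sketch of my own (not from the survey) that evaluates B_f(G) for a given numbering and finds B(G) by brute force over all numberings; this is only feasible for tiny graphs, since computing bandwidth exactly is NP-hard in general.

    from itertools import permutations

    def bandwidth_of_numbering(edges, numbering):
        # numbering maps each vertex to a distinct label in 1..n
        return max(abs(numbering[u] - numbering[v]) for u, v in edges)

    def bandwidth(vertices, edges):
        # Brute force over all proper numberings; exponential, tiny graphs only.
        best = None
        for perm in permutations(range(1, len(vertices) + 1)):
            numbering = dict(zip(vertices, perm))
            b = bandwidth_of_numbering(edges, numbering)
            if best is None or b < best:
                best = b
        return best

    # Example: the path P_4 has bandwidth 1 and the cycle C_5 has bandwidth 2.
    p4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])
    c5 = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
    print(bandwidth(*p4), bandwidth(*c5))  # expected: 1 2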
  • RESOURCES in NUMERICAL ANALYSIS, Kendall E. Atkinson
RESOURCES IN NUMERICAL ANALYSIS
Kendall E. Atkinson, University of Iowa

Introduction
I. General Numerical Analysis
   A. Introductory Sources
   B. Advanced Introductory Texts with Broad Coverage
   C. Books With a Sampling of Introductory Topics
   D. Major Journals and Serial Publications
      1. General Surveys
      2. Leading journals with a general coverage in numerical analysis
      3. Other journals with a general coverage in numerical analysis
   E. Other Printed Resources
   F. Online Resources
II. Numerical Linear Algebra, Nonlinear Algebra, and Optimization
   A. Numerical Linear Algebra
      1. General references
      2. Eigenvalue problems
      3. Iterative methods
      4. Applications on parallel and vector computers
      5. Over-determined linear systems
   B. Numerical Solution of Nonlinear Systems
      1. Single equations
      2. Multivariate problems
   C. Optimization
III. Approximation Theory
   A. Approximation of Functions
      1. General references
      2. Algorithms and software
      3. Special topics
      4. Multivariate approximation theory
      5. Wavelets
   B. Interpolation Theory
      1. Multivariable interpolation
      2. Spline functions
   C. Numerical Integration and Differentiation
      1. General references
      2. Multivariate numerical integration
IV. Solving Differential and Integral Equations
   A. Ordinary Differential Equations
   B. Partial Differential Equations
   C. Integral Equations
V. Miscellaneous Important References
VI. History of Numerical Analysis

INTRODUCTION. Numerical analysis is the area of mathematics and computer science that creates, analyzes, and implements algorithms for solving numerically the problems of continuous mathematics. Such problems originate generally from real-world applications of algebra, geometry, and calculus, and they involve variables that vary continuously; these problems occur throughout the natural sciences, social sciences, engineering, medicine, and business. During the second half of the twentieth century and continuing up to the present day, digital computers have grown in power and availability.
  • 1 Introduction and Contents
University of Padova - DEI
Course "Introduction to Natural Language Processing", Academic Year 2002-2003
Term paper: Fast Matrix Multiplication
PhD Student: Carlo Fantozzi, XVI ciclo

1 Introduction and Contents

This term paper illustrates some results concerning the fast multiplication of n × n matrices: we use the adjective "fast" as a synonym of "in time asymptotically lower than n^3". This subject is relevant to natural language processing since, in 1975, Valiant showed [27] that Boolean matrix multiplication can be used to parse context-free grammars (or CFGs, for short): as a consequence, a fast Boolean matrix multiplication algorithm yields a fast CFG parsing algorithm. Indeed, Valiant's algorithm parses a string of length n in time proportional to T_BOOL(n), i.e. the time required to multiply two n × n Boolean matrices. Although impractical because of its high constants, Valiant's algorithm is the asymptotically fastest CFG parsing solution known to date. A simpler (hence nearer to practicality) version of Valiant's algorithm has been devised by Rytter [20].

One might hope to find a fast, practical parsing scheme which does not rely on matrix multiplication: however, some results seem to suggest that this is a hard quest. Satta has demonstrated [21] that tree-adjoining grammar (TAG) parsing can be reduced to Boolean matrix multiplication. Subsequently, Lee has proved [18] that any CFG parser running in time O(gn^{3−ε}), with g the size of the context-free grammar, can be converted into an O(m^{3−ε/3}) Boolean matrix multiplication algorithm; the constants involved in the translation process are small. Since canonical parsing schemes exhibit a linear dependence on g, it can be reasonably stated that fast, practical CFG parsing algorithms can be translated into fast matrix multiplication algorithms.
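As a reminder of what a sub-cubic multiplication algorithm looks like (this sketch is my own illustration, not taken from the term paper), below is Strassen's recursion for square matrices whose size is a power of two; it uses 7 recursive products instead of 8, giving roughly O(n^{2.81}) arithmetic operations.

    import numpy as np

    def strassen(A, B, cutoff=64):
        # Strassen's algorithm for n x n matrices with n a power of two.
        n = A.shape[0]
        if n <= cutoff:                      # fall back to the classical product on small blocks
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        # Seven recursive products instead of eight.
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C

    A = np.random.rand(128, 128)
    B = np.random.rand(128, 128)
    print(np.allclose(strassen(A, B), A @ B))  # expected: True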
  • Parallelization of Reordering Algorithms for Bandwidth and Wavefront Reduction
Parallelization of Reordering Algorithms for Bandwidth and Wavefront Reduction
Konstantinos I. Karantasis, Andrew Lenharth, Donald Nguyen, María J. Garzarán, Keshav Pingali
Department of Computer Science, University of Illinois at Urbana-Champaign; Institute for Computational Engineering and Sciences and Department of Computer Science, University of Texas at Austin

Abstract—Many sparse matrix computations can be speeded up if the matrix is first reordered. Reordering was originally developed for direct methods but it has recently become popular for improving the cache locality of parallel iterative solvers since reordering the matrix to reduce bandwidth and wavefront can improve the locality of reference of sparse matrix-vector multiplication (SpMV), the key kernel in iterative solvers.

In this paper, we present the first parallel implementations of the two widely used reordering algorithms: Reverse Cuthill-McKee (RCM) and Sloan. On 16 cores of the Stampede supercomputer, our parallel RCM is 5.56 times faster on the average than a state-of-the-art sequential implementation of RCM in the HSL

More recently, reordering has become popular even in the context of iterative sparse solvers where problems like minimizing fill do not arise. The key computation in an iterative sparse solver is sparse matrix-vector multiplication (SpMV) (say y = Ax). If the matrix is stored in compressed row-storage (CRS) and the SpMV computation is performed by rows, the accesses to y and A enjoy excellent locality, but the accesses to x may not. One way to improve the locality of accesses to the elements of x is to reorder the sparse matrix A using a bandwidth-reducing ordering (RCM is popular).
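For readers who want to try a bandwidth-reducing ordering directly, SciPy ships a (sequential) RCM implementation; the sketch below is my own illustration on an arbitrary random sparse matrix, not the authors' code, and simply permutes a symmetric sparse matrix with the ordering returned by scipy.sparse.csgraph.reverse_cuthill_mckee.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    # Arbitrary sparse symmetric matrix, a stand-in for a mesh or graph Laplacian pattern.
    rng = np.random.default_rng(0)
    A = sp.random(200, 200, density=0.02, random_state=rng)
    A = ((A + A.T) > 0).astype(float).tocsr()

    def bandwidth(M):
        # Largest distance of a nonzero entry from the diagonal.
        coo = M.tocoo()
        return int(np.max(np.abs(coo.row - coo.col))) if coo.nnz else 0

    perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # permutation as an index array
    A_rcm = A[perm, :][:, perm]                            # symmetric reordering P A P^T

    print("bandwidth before:", bandwidth(A), "after RCM:", bandwidth(A_rcm))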
  • Mathematics People
Mathematics People

Srinivas Receives TWAS Prize in Mathematics

Vasudevan Srinivas of the Tata Institute of Fundamental Research, Mumbai, has been named the winner of the 2008 TWAS Prize in Mathematics, awarded by the Academy of Sciences for the Developing World (TWAS). He was honored "for his basic contributions to algebraic geometry that have helped deepen our understanding of cycles, motives, and K-theory." Srinivas will receive a cash prize of US$15,000 and will deliver a lecture at the academy's twentieth general meeting, to be held in South Africa in September 2009.

—From a TWAS announcement

Strassen Awarded ACM Knuth Prize

Volker Strassen of the University of Konstanz has been awarded the 2008 Knuth Prize of the Association for Computing Machinery (ACM) Special Interest Group on Algorithms and Computation Theory (SIGACT). He was honored for his contributions to the theory and practice of algorithm design. The award carries a cash prize of US$5,000.

According to the prize citation, "Strassen's innovations enabled fast and efficient computer algorithms, the sequence of instructions that tells a computer how to solve a particular problem. His discoveries resulted in some of the most important algorithms used today on millions if not billions of computers around the world and fundamentally altered the field of cryptography, which uses secret codes to protect data from theft or alteration." His algorithms include fast matrix multiplication, integer multiplication, and a test for the primality

Pujals Awarded ICTP/IMU Ramanujan Prize
  • Prahladh Harsha
Prahladh Harsha
Toyota Technological Institute, Chicago
University Press Building, 1427 East 60th Street, Second Floor, Chicago, IL 60637
phone: +1-773-834-2549, fax: +1-773-834-9881
email: [email protected], http://ttic.uchicago.edu/∼prahladh

Research Interests: Computational Complexity, Probabilistically Checkable Proofs (PCPs), Information Theory, Property Testing, Proof Complexity, Communication Complexity.

Education
• Doctor of Philosophy (PhD), Computer Science, Massachusetts Institute of Technology, 2004. Research Advisor: Professor Madhu Sudan. PhD Thesis: Robust PCPs of Proximity and Shorter PCPs.
• Master of Science (SM), Computer Science, Massachusetts Institute of Technology, 2000.
• Bachelor of Technology (BTech), Computer Science and Engineering, Indian Institute of Technology, Madras, 1998.

Work Experience
• Toyota Technological Institute, Chicago, September 2004 – Present: Research Assistant Professor.
• Technion, Israel Institute of Technology, Haifa, February 2007 – May 2007: Visiting Scientist.
• Microsoft Research, Silicon Valley, January 2005 – September 2005: Postdoctoral Researcher.

Honours and Awards
• Summer Research Fellow 1997, Jawaharlal Nehru Center for Advanced Scientific Research, Bangalore.
• Rajiv Gandhi Science Talent Research Fellow 1997, Jawaharlal Nehru Center for Advanced Scientific Research, Bangalore.
• Award Winner in the Indian National Mathematical Olympiad (INMO) 1993, National Board of Higher Mathematics (NBHM).
• National Board of Higher Mathematics (NBHM) Nurture Program award 1995-1998. The Nurture program 1995-1998, coordinated by Prof. Alladi Sitaram, Indian Statistical Institute, Bangalore, involves various topics in higher mathematics.
• Ranked 7th in the All India Joint Entrance Examination (JEE) for admission into the Indian Institutes of Technology (among the 100,000 candidates who appeared for the examination).
• Papers invited to special issues – "Robust PCPs of Proximity, Shorter PCPs, and Applications to Coding" (with Eli Ben-Sasson, Oded Goldreich, Madhu Sudan, and Salil Vadhan).
  • Amortized Circuit Complexity, Formal Complexity Measures, and Catalytic Algorithms
Electronic Colloquium on Computational Complexity, Report No. 35 (2021)

Amortized Circuit Complexity, Formal Complexity Measures, and Catalytic Algorithms
Robert Robere (McGill University) and Jeroen Zuiddam (Courant Institute, NYU, and U. of Amsterdam)
March 12, 2021

Abstract. We study the amortized circuit complexity of boolean functions. Given a circuit model F and a boolean function f : {0,1}^n → {0,1}, the F-amortized circuit complexity is defined to be the size of the smallest circuit that outputs m copies of f (evaluated on the same input), divided by m, as m → ∞. We prove a general duality theorem that characterizes the amortized circuit complexity in terms of "formal complexity measures". More precisely, we prove that the amortized circuit complexity in any circuit model composed out of gates from a finite set is equal to the pointwise maximum of the family of "formal complexity measures" associated with F. Our duality theorem captures many of the formal complexity measures that have been previously studied in the literature for proving lower bounds (such as formula complexity measures, submodular complexity measures, and branching program complexity measures), and thus gives a characterization of formal complexity measures in terms of circuit complexity. We also introduce and investigate a related notion of catalytic circuit complexity, which we show is "intermediate" between amortized circuit complexity and standard circuit complexity, and which we also characterize (now, as the best integer solution to a linear program). Finally, using our new duality theorem as a guide, we strengthen the known upper bounds for non-uniform catalytic space, introduced by Buhrman et al.
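As a compact restatement of the quantity this abstract describes (my own notation, not taken verbatim from the report), the F-amortized circuit complexity can be written as a limit of normalized circuit sizes:

    % size_F(g) denotes the size of the smallest F-circuit computing g, and f^{(m)}
    % denotes the function that outputs m copies of f evaluated on the same input x.
    \[
      \mathrm{size}^{\mathrm{amort}}_{\mathcal{F}}(f)
        \;=\; \lim_{m \to \infty} \frac{\mathrm{size}_{\mathcal{F}}\bigl(f^{(m)}\bigr)}{m},
      \qquad
      f^{(m)}(x) = \bigl(\underbrace{f(x), \dots, f(x)}_{m \text{ copies}}\bigr).
    \]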
  • Efficient Space-Time Sampling with Pixel-Wise Coded Exposure for High Speed Imaging
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

Efficient Space-Time Sampling with Pixel-wise Coded Exposure for High Speed Imaging
Dengyu Liu, Jinwei Gu, Yasunobu Hitomi, Mohit Gupta, Tomoo Mitsunaga, Shree K. Nayar

Abstract—Cameras face a fundamental tradeoff between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this tradeoff without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing and reconstructing the space-time volume in order to overcome this tradeoff. Our approach has two important distinctions compared to previous works: (1) we achieve sparse representation of videos by learning an over-complete dictionary on video patches, and (2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach - sampling function and sparse representation - by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a Liquid Crystal on Silicon (LCoS) device. System characteristics such as field of view, Modulation Transfer Function (MTF) are evaluated for our imaging system. Both simulations and experiments on a wide range of
  • Planar Diameter Via Metric Compression
Planar Diameter via Metric Compression
Jason Li (CMU), Merav Parter (Weizmann Institute)

Abstract. We develop a new approach for distributed distance computation in planar graphs that is based on a variant of the metric compression problem recently introduced by Abboud et al. [SODA'18]. In our variant of the Planar Graph Metric Compression Problem, one is given an n-vertex planar graph G = (V, E), a set of S ⊆ V source terminals lying on a single face, and a subset of target terminals T ⊆ V. The goal is to compactly encode the S × T distances.

One of our key technical contributions is in providing a compression scheme that encodes all S × T distances using Õ(|S| · poly(D) + |T|) bits, for unweighted graphs with diameter D. This significantly improves the state of the art of Õ(|S| · 2^D + |T| · D) bits. We also consider an approximate version of the problem for weighted graphs, where the goal is to encode a (1 + ε) approximation of the S × T distances, for a given input parameter ε ∈ (0, 1]. Here, our compression scheme uses Õ(poly(|S|/ε) + |T|) bits. In addition, we describe how these compression schemes can be computed in near-linear time. At the heart of this compact compression scheme lies a VC-dimension type argument on planar graphs, using the well-known Sauer's lemma.

This efficient compression scheme leads to several improvements and simplifications in the setting of diameter computation, most notably in the distributed setting:
• There is an Õ(D^5)-round randomized distributed algorithm for computing the diameter in planar graphs, w.h.p.
  • Applied Analysis & Scientific Computing / Discrete Mathematics & Operations Research
CENTER FOR NONLINEAR ANALYSIS. The CNA provides an environment to enhance and coordinate research and training in applied analysis, including partial differential equations, calculus of variations, numerical analysis and scientific computation. It advances research and educational opportunities at the broad interface between mathematics and physical sciences and engineering. The CNA fosters networks and collaborations within CMU and with US and international institutions.

Research groups: Applied Analysis & Scientific Computing; Discrete Mathematics & Operations Research; Logic.

RANKINGS
U.S. News & World Report: #16 Applied Mathematics; #7 Discrete Mathematics and Combinatorics; #6 Best Graduate Schools for Logic.
Quantnet: #4 Best Financial Engineering Programs.

DOCTOR OF PHILOSOPHY IN ALGORITHMS, COMBINATORICS, AND OPTIMIZATION. Carnegie Mellon University offers an interdisciplinary Ph.D program in Algorithms, Combinatorics, and Optimization. This program is the first of its kind in the United States. It is administered jointly by the Tepper School of Business (Operations Research group), the Computer Science Department (Algorithms and Complexity group), and the Department of Mathematical Sciences (Discrete Mathematics group).

CONTACT: Department of Mathematical Sciences, 5000 Forbes Avenue

Carnegie Mellon University does not discriminate in admission, employment, or administration of its programs or activities on the basis of race, color, national origin, sex, handicap or disability, age, sexual orientation, gender identity, religion, creed, ancestry,
  • Metal Complexes of Penicillin and Cephalosporin Antibiotics
METAL COMPLEXES OF PENICILLIN AND CEPHALOSPORIN ANTIBIOTICS

A thesis submitted to THE UNIVERSITY OF CAPE TOWN in fulfilment of the requirements for the degree of DOCTOR OF PHILOSOPHY by GRAHAM E. JACKSON, Department of Chemistry, University of Cape Town, Rondebosch, Cape, South Africa. September 1975.

The copyright of this thesis is held by the University of Cape Town. Reproduction of the whole or any part may be made for study purposes only, and not for publication. The copyright of this thesis vests in the author. No quotation from it or information derived from it is to be published without full acknowledgement of the source. The thesis is to be used for private study or non-commercial research purposes only. Published by the University of Cape Town (UCT) in terms of the non-exclusive license granted to UCT by the author.

ACKNOWLEDGEMENTS. I would like to express my sincere thanks to my supervisors: Dr. L.R. Nassimbeni, Dr. P.W. Linder and Dr. G.V. Fazakerley for their invaluable guidance and friendship throughout the course of this work. I would also like to thank my colleagues, Jill Russel, Melanie Wolf and Graham Mortimor for their many useful comments. I am indebted to AE & CI for financial assistance during the course of this study.

ABSTRACT. The interaction between metal-ions and the penicillin and cephalosporin antibiotics has been studied in an attempt to determine both the site and mechanism of this interaction. The solution conformation of the Cu(II) and Mn(II) complexes were determined using an n.m.r. line broadening technique.