KTH Royal Institute of Technology

Doctoral Thesis in Computer Science

Dynamic Matrix Algorithms and Applications in Convex and Combinatorial Optimization

JAN VAN DEN BRAND

Stockholm, Sweden 2021


Academic Dissertation which, with due permission of the KTH Royal Institute of Technology, is submitted for public defence for the Degree of Doctor of Philosophy on Wednesday the 9th of June 2021, at 3:00 p.m. in F3, Lindstedtsvägen 26, Stockholm.

Doctoral Thesis in Computer Science
KTH Royal Institute of Technology
Stockholm, Sweden 2021
© Jan van den Brand
© Co-Authors: Yin Tat Lee, Yang P. Liu, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Aaron Sidford, Di Wang, Zhao Song

ISBN 978-91-7873-867-0
TRITA-EECS-AVL-2021:31

Printed by: Universitetsservice US-AB, Sweden 2021

Abstract

Dynamic algorithms are used to efficiently maintain solutions to problems whose input undergoes changes. This thesis studies dynamic algorithms that maintain solutions to linear algebra problems, and we explore their applications and implications for dynamic graphs and optimization problems.

Dynamic graph algorithms maintain properties of changing graphs, such as the distances in a graph that undergoes edge deletions and insertions. The main question is how to maintain the information without recomputing the solution from scratch whenever the graph changes. If maintaining the information without trivial recomputation is possible, the next natural question is how quickly the information can be maintained. This thesis makes progress on both questions: (i) We construct the first non-trivial fully dynamic graph algorithms for single-source shortest paths, diameter, and other problems. This answers open questions stated in, e.g., [Demetrescu-Italiano'04]. (ii) We obtain matching upper and conditional lower bounds for the complexity of maintaining reachability, maximum matching, directed cycle detection, and many other graph properties. This settles the complexity of these problems and answers an open problem stated in [Abboud-V.Williams'14].

We get these results by reducing the dynamic graph problems to dynamic linear algebra problems for which we develop new algorithms. At the same time, conditional lower bounds for the dynamic graph problems thus imply lower bounds for dynamic linear algebra problems as well.

We apply the developed techniques for dynamic linear algebra to algorithms for linear programs and obtain optimal (i.e. nearly-linear time) algorithms for dense instances of linear programs, Markov decision processes, linear L1 regression, and graph-specific special cases thereof such as bipartite matching, minimum-cost flow, and (negative weight) shortest paths. For bipartite matching on dense graphs, this is the first improvement since the classic algorithms by [Dinic'70; Hopcroft-Karp'71; Karzanov'73; Ibarra-Moran'81].

The results are obtained by using the fact that algorithms (i.e. interior point methods) for these problems are iterative and must repeatedly solve linear systems and other linear algebra problems. By using techniques from dynamic linear algebra (i.e. dynamic matrix algorithms), we are able to maintain the solution to these subproblems, reducing the time required per iteration. The construction of our algorithms relies on a joint analysis of the iterative algorithm and the dynamic matrix algorithms. On the one hand, we develop robust interior point methods which are able to handle relaxations and approximations to the linear algebra subroutines. On the other hand, we develop fast dynamic matrix algorithms that are able to maintain the solution to these relaxed subproblems efficiently.

Keywords: Dynamic Algorithm, Data Structure, Optimization, Linear Program, Bipartite Matching, Shortest Path, Maximum Flow, Minimum Cost Flow, Diameter

Sammanfattning

Dynamic graph algorithms maintain properties of changing graphs, such as the distances in a graph that undergoes edge deletions and insertions. The main question is how to maintain the information without recomputing the solution from scratch whenever the graph changes. If it is possible to maintain the information without trivial recomputation, the next natural question is how quickly the information can be maintained. This thesis makes progress on both questions: (i) We give the first non-trivial dynamic graph algorithms for single-source shortest paths, diameter, and other problems. This answers open questions stated in, e.g., [Demetrescu-Italiano'04]. (ii) We give matching upper and conditional lower bounds for the complexity of maintaining reachability, maximum matching, directed cycle detection, and many other graph properties. This settles the complexity of these problems and answers an open problem stated in [Abboud-V.Williams'14].

We achieve these results by reducing the dynamic graph problems to dynamic linear algebra problems, for which we develop new algorithms. At the same time, conditional lower bounds for the dynamic graph problems thus imply lower bounds for dynamic linear algebra problems as well.

We apply the developed techniques for dynamic linear algebra to algorithms for linear programs and obtain optimal (i.e. nearly-linear time) algorithms for dense instances of linear programs, MDPs, linear L1-regression, and graph-specific special cases such as bipartite matching, minimum-cost flow, and shortest paths (with negative weights). For bipartite matching in dense graphs, this is the first improvement since the classic algorithms of [Dinic'70; Hopcroft-Karp'71; Karzanov'73; Ibarra-Moran'81].

The results are obtained using the fact that algorithms (i.e. interior point methods) for these problems are iterative and must repeatedly solve linear systems and other linear algebra problems. By using techniques from dynamic linear algebra, we can maintain the solution to these subproblems, reducing the time required per iteration. The construction of our algorithms builds on a joint analysis of the iterative algorithm and the dynamic linear algebra algorithms. On the one hand, we develop robust interior point methods that can handle approximate solutions to the linear algebra subroutines. On the other hand, we develop fast dynamic linear algebra algorithms that can maintain the solution to these approximate subproblems efficiently.

Acknowledgement

My time at KTH was one of the most enjoyable stages of my life so far. Clearly, the greatest influence during that time was my advisor Danupon Nanongkai, and I am deeply grateful for his support. He provided me with a lot of freedom, e.g. I was able to select my own research questions, was allowed to select my own working hours, and he was always available online whenever I had questions. Even in the late evening or on weekends I could count on his help and advice. I would also like to express my sincere thanks to my major collaborators and unofficial co-advisors Thatchaphol Saranurak, Aaron Sidford, Yin Tat Lee, and Richard Peng. I would like to thank my other collaborators Aaron Bernstein, Joakim Blikstad, Maximilian Probst Gutenberg, Yang P. Liu, Sagnik Mukhopadhyay, Binghui Peng, Zhao Song, He Sun, Di Wang, and Omri Weinstein. I also thank Mikkel Thorup for inviting me to Copenhagen, the discussions we had, and the advice he has given while I was visiting. I thank Per Austrin for reading my thesis, helping with the Swedish summary, and for the enjoyable coffee and lunch breaks.

I want to thank my fellow PhD students at KTH: Mohit, for repeatedly pushing me out of my comfort zone, e.g. to apply for the Google PhD Fellowship. Andreas, for lending me an ear whenever I felt like complaining about our landlords, and also for taking care of my apartment whenever I was traveling. Joseph, for introducing me to web fiction and the discussions we had. I also thank him, Kilian and Stephan for being available whenever I needed a break.

Before coming to KTH, I studied at the Goethe University in Frankfurt, and I want to thank the faculty there for preparing me for my time as a PhD student. Thorsten Theobald, who was my liaison professor at the "Studienstiftung", offered me a lot of advice during my time at the Goethe University. He, Amin Coja-Oghlan, and Rudolf Mester also offered a lot of advice near the end of my time at the Goethe University regarding how to proceed after graduation. I am also deeply grateful to Amin for allowing me to finish my Master's thesis while already working in Sweden as a PhD student. I thank Ronja Düffel and Hartwig Bosse for their support and advice related to teaching; Hartwig's advice on writing has been very helpful during my PhD.

And finally, I thank my family and friends in Frankfurt. My friends always made time for me whenever I visited, even when the visits turned out to be unexpectedly frequent.

(Again, thanks to Danupon for allowing me this freedom to frequently visit Frankfurt.) I thank my parents, who supported me all my life and who have taken great effort to provide me with the opportunity to focus on my education. At last, thank you, Stefanie, for brightening my life, no matter the distance between us.

Contents

Contents v

I Thesis 1

1 Introduction  3
   1.1 Dynamic Algorithms  4
   1.2 Optimization  6

2 Publication List 11

3 Dynamic Linear Algebra  13
   3.1 Unifying Matrix Data Structures  13
   3.2 Dynamic Matrix Inverse  14

4 Dynamic Distances  21
   4.1 New Dynamic Algorithms  22
   4.2 From Dynamic Linear Algebra to Dynamic Distances  24

5 Convex and Combinatorial Optimization  27
   5.1 Algorithmic Results  28
   5.2 From Dynamic Linear Algebra to Optimization  31

II Included Papers 47

A Dynamic Matrix Inverse  49
   A.1 Introduction  51
   A.2 Overview of Our Algorithms  61
   A.3 Preliminaries  66
   A.4 Dynamic Matrix Inverse  71
   A.5 Conditional Lower Bounds  85
   A.6 Look-Ahead Setting  98


   A.7 Open Problems  108
   A.H Applications  109

B Unifying Matrix Data Structures  137
   B.1 Introduction  139
   B.2 Preliminaries  141
   B.3 Reducing Formulas to Matrix Inverse  143
   B.4 Applications  146
   B.5 Appendix  158

C Dynamic Approximate Shortest Paths and Beyond  163
   C.1 Introduction  166
   C.2 Technical Overview  173
   C.3 Preliminaries  179
   C.4 Algebraic Dynamic Short Hop Distances  180
   C.5 Results for All-Pairs-Distances  187
   C.6 Results for Diameter, Radius and Eccentricities  193
   C.7 Open Problems  200
   C.H Reduction from Distances to Polynomial Matrix Inverse  202
   C.I Approximate Diameter, Radius and Eccentricities  203

D A Deterministic Linear Program Solver  215
   D.1 Introduction  217
   D.2 Outline  220
   D.3 Preliminaries  227
   D.4 Projection Maintenance  228
   D.5 Central Path Method  234
   D.6 Open Problems  247
   D.G Appendix  248

E Solving Tall Dense Linear Programs  255
   E.1 Introduction  256
   E.2 Overview of Approach  260
   E.3 Preliminaries  268
   E.4 Algorithm  269
   E.5 Vector Maintenance  284
   E.6 Inverse Maintenance with Leverage Score Hints  288
   E.7 Acknowledgements  301
   E.H Gradient Maintenance  310

F Bipartite Matching in Nearly-linear Time  311
   F.1 Introduction  313
   F.2 Preliminaries  319
   F.3 Overview  321

   F.4 IPM  336
   F.5 Heavy Hitters  344
   F.6 Dual Solution Maintenance  352
   F.7 Gradient and Primal Solution Maintenance  360
   F.8 Minimum Weight Perfect Bipartite b-Matching  373

G Minimum Cost Flows, MDPs, and L1-Regression  407
   G.1 Introduction  409
   G.2 Preliminaries  415
   G.3 Overview of Approach  416
   G.4 IPM  426
   G.5 Maintaining Regularized Lewis-Weights  433
   G.6 Path Following  442
   G.7 Minimum Cost Flow and Applications  447
   G.8 General Linear Programs  448
   G.I Matrix Data Structures  462
   G.J Leverage Score  468
   G.K Graph Data Structures  485

Part I

Thesis


Chapter 1

Introduction

This thesis focuses on developing efficient algorithms via dynamic linear algebra – algorithmic techniques for efficiently maintaining properties of matrices and vector spaces. The results fall into two subareas: dynamic algorithms, and the improvements they yield for optimization algorithms.

Dynamic Algorithms This type of algorithm concerns solving the same problem several times for a slightly different input. This is useful for many iterative algorithms (e.g. Dijkstra's shortest path algorithm, linear program solvers, matching algorithms, etc.), as they often require dynamic algorithms as subroutines since their final solutions are often built iteratively from solutions of smaller subproblems. In addition to their application in iterative algorithms, dynamic algorithms are also useful when the underlying problem itself is dynamic. For example, a navigation system might initially compute the fastest way from point A to point B, but because of changing traffic conditions (e.g. traffic jams, or when the user takes a wrong turn) the initial solution might not stay optimal. In these cases, computing a new solution from scratch wastes time and resources, when one could instead use the information of the previously optimal solution as a prior. Dynamic algorithms are data structures that solve these tasks efficiently, where the input problem can change over time.

Optimization Algorithms This thesis also considers fast sequential algorithms for basic optimization problems, such as linear program solvers or maximum flow algorithms. These algorithms have proven extremely important in industry, with applications in, e.g., airline scheduling, network/traffic congestion minimization, and efficient manufacturing processes. Because of this, algorithms for these problems have been extensively studied for several decades. Yet, finding the most efficient algorithms for these problems remains a puzzling task. This thesis makes significant progress on this front by developing faster algorithms, exploiting the fact that many algorithms can use dynamic algorithms as subroutines since their final solutions are built iteratively from solutions of smaller subproblems.


1.1 Dynamic Algorithms

In the classical model of computation, we solve problems only once, i.e. given some input, we run an algorithm and, after receiving the answer, we are done. This model can be quite inefficient for many modern scenarios where we have some very large input that keeps changing. For example, routing on a road network under changing congestion and traffic conditions, load balancing on computer networks, or analyzing social networks. Here, running some algorithm over and over again whenever the input changes wastes time and resources. This leads to the central question of dynamic algorithms: How can we prevent repeated recomputation from scratch?

Besides large-scale scenarios where the input keeps changing, dynamic algorithms are also used as data structures – one of the most fundamental tools for efficient algorithm design. For example, a min-heap or balanced binary search tree is able to maintain the minimum element of a set of numbers that keeps changing, which can then be used to speed up algorithms for finding shortest paths. For both uses of dynamic algorithms, the two most central questions of the area are non-triviality and optimality.
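To make this data-structure view concrete, here is a toy dynamic algorithm in the above sense (a minimal Python sketch, not from the thesis; the class name and the lazy-deletion scheme are illustrative): it maintains the minimum of a changing set in logarithmic amortized time per operation instead of rescanning the whole set.

```python
import heapq

class DynamicMin:
    """Maintain min(S) for a multiset S under insertions and deletions.

    A binary heap with lazy deletions: each operation takes O(log n)
    amortized time, versus O(n) for recomputing the minimum from scratch
    after every change -- the trivial recomputation the text refers to.
    """

    def __init__(self):
        self.heap = []      # candidate minima, may contain stale entries
        self.pending = {}   # value -> number of deletions not yet applied

    def insert(self, x):
        heapq.heappush(self.heap, x)

    def delete(self, x):
        self.pending[x] = self.pending.get(x, 0) + 1

    def minimum(self):
        # Pop stale entries until the top of the heap is a live element.
        while self.heap and self.pending.get(self.heap[0], 0) > 0:
            self.pending[self.heap[0]] -= 1
            heapq.heappop(self.heap)
        return self.heap[0] if self.heap else None

s = DynamicMin()
for x in (5, 3, 8):
    s.insert(x)
s.delete(3)
print(s.minimum())  # 5
```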

Non-trivial dynamic algorithms The trivial way to handle a dynamic problem is to just recompute the solution from scratch whenever the input changes. Thus the first question to ask is whether there exists any way to maintain the solution more efficiently (i.e. in less time) than this trivial recomputation. For many problems there exist non-trivial dynamic algorithms (e.g. spanning trees [Wul17, NS17, NSW17], reachability [DI05, San04, HKN14a, BGS20], maximum matching [San07]), while for others the question remains open (e.g. maximum flow).

Optimal time complexity Once a non-trivial algorithm has been found, the next natural question is to ask for the optimal time complexity for maintaining the solution. Naturally, this means minimizing the update time complexity, i.e. the time required to update the solution after the input changes. An alternative goal to reducing the time complexity is to argue that further improvements are impossible. For the latter, there exist different techniques for arguing lower bounds on the time complexity. One tool is information-theoretic arguments [PD06, Lar12, CGL15], but these lower bounds tend to be rather weak in the sense that there is often a big gap between the lower bound and the current best upper bounds. Another approach is reductions, where one argues that if one could solve the dynamic problem faster than in time X, then some other problem could be solved in time less than f(X). Together with popular conjectures on the complexity of some algorithmic problems, these reductions then result in so-called conditional lower bounds on the time complexity.¹ [AW14, HKNS15]

¹This is similar to how reductions are used in complexity theory to rule out polynomial time algorithms assuming the conjecture P ≠ NP.

1.1.1 Contribution

This thesis makes progress on both of the previously stated questions in the area of dynamic algorithms. We construct the first non-trivial algorithms for many dynamic problems, and prove matching upper and conditional lower bounds for others.

Non-trivial dynamic algorithms (Chapter 4) We obtain the first non-trivial algorithm that is able to maintain single-source distances in dynamically changing graphs that undergo edge insertions and deletions. In this problem, one is given a graph and a fixed source node, and the dynamic algorithm must return the distances from the source node to every other node whenever an edge is inserted or deleted. It was a major open problem whether a non-trivial algorithm exists for this task, stated in, e.g., [DI04, Tho04].

A related open problem was maintaining extremal distances, such as the diameter (the largest distance) of a graph. We obtain the first non-trivial result for maintaining a (1.5 + ε)-approximate diameter of a graph that undergoes edge insertions and deletions. (The result can be extended to also obtain the first non-trivial results for other extremal distances.) Note that a naive solution for maintaining the diameter would be to just compute the distance between every pair of nodes and then pick the maximum, which would need Ω(n^2) time per update. There is a conditional lower bound that rules out faster algorithms when the result is (1.5 − ε)-approximate [AHR+19]. We show that this approximation ratio is tight, as our (1.5 + ε)-approximate dynamic algorithm has only subquadratic update time.

Optimal time complexity (Section 3.2) For many dynamic problems there exists a gap between the fastest upper bound and the existing lower bounds. These include graph problems like maintaining reachability, matching, or directed cycle detection under edge insertions and deletions. It also includes linear algebra problems such as maintaining the determinant, rank, or inverse of a matrix, or the product of several matrices, while the input matrices undergo changes to their entries. We close the gap for the aforementioned problems by improving the existing upper bounds and proving matching conditional lower bounds. Closing this gap was stated as an open question by Abboud and V.Williams [AW14].

Dynamic Linear Algebra (Chapter 3) All results for dynamic graphs presented in this thesis are based on dynamic linear algebra. These are algorithmic techniques for maintaining properties of matrices and vector spaces, i.e. dynamic algorithms for linear algebra problems. Dynamic matrix inverse in particular is a key subroutine for many of the upper bounds presented here. The task in the dynamic matrix inverse problem is to maintain the inverse of a matrix that undergoes changes to its entries. This problem and variants thereof have been studied since the 50s [SM50, Woo50, Kar84, Vai89b, San04, San05b, MS04, San07, LS15, SM10, BNS19, CLS19, LSZ19, JSWZ21]. We construct fast algorithms and matching conditional lower bounds for dynamic matrix inverse and show that the problem is equivalent to maintaining the determinant of a matrix, or the solution of a linear system. We also show that a wide range of dynamic linear algebra problems can be reduced to dynamic matrix inverse. Consider for example the problem of maintaining the value of some function f(A_1, ..., A_p) while the input matrices A_1, ..., A_p change over time. We show that, if the function f can be written as a formula involving only matrix operations like addition, subtraction, multiplication, and inversion, then the task of maintaining f(A_1, ..., A_p) can be reduced to the task of just maintaining the inverse of some matrix, i.e. the dynamic matrix inverse problem. Given the wide range of possible functions f, this allows us to effectively unify an entire subarea of data structures into a single problem. This insight allows us to use our dynamic algorithms for matrix inverse to also maintain other properties. Specifically in the optimization area, many algorithms are iterative and must repeatedly compute some expression involving matrices, i.e. some function f involving matrix operations. By maintaining the solution to these expressions via our reduction and dynamic algorithms, we are able to speed up these optimization algorithms. More details on the applications in optimization are given in Section 1.2 and Chapter 5.

1.2 Optimization

Linear programs are used to solve many problems from different areas of computer science, such as optimal transport in operations research, congestion minimization in network design, Markov decision processes in robotics, and ℓ1-regression or empirical risk minimization in machine learning and data science. As the economy and distribution networks become more and more globalized, and with increasingly massive data sets that must be analyzed, the size of linear programs grows as well, resulting in the need for fast algorithms.

Besides these practical considerations, ongoing research on linear program solvers is also motivated by theoretical questions: Linear programs capture some of the oldest and most fundamental combinatorial problems studied in computer science whose complexity is not yet well understood, such as matching, (negative weight) shortest paths, and maximum flow. The fastest algorithms for these combinatorial problems rely on continuous techniques that stem from analyzing linear programs [Mad13, LS14, CMSV17, LS20, BLN+20, KLS20, AMV20, BLL+21]. Thus a better understanding of the complexity of linear programs is a key avenue for gaining insights into these fundamental problems. The practical and theoretical considerations raise the questions:

How fast can we solve linear programs? What is the optimal time complexity?

Motivated by these questions and the need for fast algorithms, many different techniques have been developed for solving linear programs and their special cases: e.g. the simplex algorithm [Dan63], the ellipsoid algorithm [Kha79], central path based methods [Kar84], and cutting planes [Vai89a], to name just a few. Yet, despite decades of research, the question still remains open.

References             | Time Õ(·)       | m = O(n)  | Deterministic
[Kar84]                | m^3.5           | n^3.5     | yes
[Ren88]                | m^1.5 n^2       | n^3.5     | yes
[Vai87]                | mn^2 + m^1.5 n  | n^3       | yes
[Vai89a]               | mn^2 + n^3.373  | n^3.373   | yes
[Vai89b]               | m^1.35 n^1.15   | n^2.5     | yes
[LS14, LS15]           | mn^1.5          | n^2.5     |
[CLS19, LSZ19, JSWZ21] | m^2.373         | n^2.373   |
This thesis            | m^2.373         | n^2.373   | yes
This thesis            | mn + n^2.5      | n^2.5     |

Table 1.1: History of (weakly polynomial) linear program solvers. Complexities are stated for dense input matrices. Bold complexities are optimal for some parameter regime (i.e. m = O(n) or m = Ω(n^{1.5})).

1.2.1 Contribution

This thesis makes significant progress by obtaining nearly-linear time algorithms for many types of linear programs. As any algorithm must at least read the input, our algorithms settle the optimal time complexity (up to polylog factors) for a wide range of linear programs.²

The results are obtained by applying our dynamic linear algebra techniques to iterative algorithms for linear programs. For example, the central path method is an iterative framework/algorithm where in each iteration one must solve a linear system. This linear system changes only slightly from one iteration to the next, so it is natural to maintain the solution to this linear system via some dynamic algorithm [Kar84, Vai89b, LS15, CLS19, LSZ19, Bra20, JSWZ21, BLSS20, BLL+21]. However, the most efficient dynamic algorithms require further modification to the central path method, which is why the construction of our linear program solvers always consists of two steps: (i) we relax the exact requirements of the central path method for the linear systems (e.g. by allowing for approximation errors), and (ii) we develop dynamic algorithms that exploit the relaxed requirements to efficiently maintain the solution to these linear systems. Thus by tailoring techniques from linear program solvers and dynamic algorithms to one another, we obtain optimal algorithms for linear programs and their combinatorial special cases. A detailed discussion of our optimization results can be found in Chapter 5; here we give a brief outline of the results we obtain.

²Our results are weakly polynomial and we use Õ(·) to hide polylog factors and the bit-complexity of the linear program, cf. Remark 5.1.2. The question of a strongly polynomial algorithm still remains open.

Balanced Linear Programs Consider a linear program of the form min c^T x subject to A^T x = b, x ≥ 0, where A is an m × n matrix with m = Θ(n). We call this a balanced linear program because the dimensions n and m are roughly the same. Because of A^T x = b, any linear program solver must solve a linear system, yielding a conditional lower bound of Ω(n^ω) time.³ For balanced linear programs, there exists a randomized algorithm of matching complexity by Cohen, Lee, and Song [CLS19] that relies on a novel "stochastic central path method". Whenever such a new technique is developed, it raises the question of how powerful it is, and whether it is required to achieve the desired result. It was also stated as an open question in Song's PhD thesis [Son19] whether the stochastic techniques are required or whether the algorithm could be derandomized. We answer this open problem by obtaining a deterministic algorithm with the same time complexity, thus removing the dependency on the novel "stochastic central path method". In addition to the time complexity being optimal for balanced linear programs, this is also the first deterministic improvement in 30 years (see Table 1.1). The algorithm relies on our dynamic linear algebra techniques, which allow us to reduce the time per iteration of the central path method without any stochastic modifications of it.

Tall Linear Programs Consider a linear program of similar form as before but with constraints of the form A^T x = b, ℓ ≤ x ≤ u for some n × m matrix A and m-dimensional vectors ℓ, u. By refining the techniques of the balanced linear program solver and developing new dynamic algorithms, we obtain a linear program solver that runs in Õ(mn + n^{2.5}) time. For m = Ω(n^{1.5}), this upper bound is just Õ(mn), which matches the time required for merely reading the input matrix A when it is dense. So for tall and dense linear programs, our algorithm runs in nearly-linear time, settling the optimal time complexity for tall, dense linear programs.

Combinatorial Problems We refine the dynamic algorithms used for the previous tall dense linear programs to the special case where A is an edge-vertex incidence matrix. This covers many graph-theoretic special cases of linear programs such as (negative weight) shortest paths, maximum bipartite matching and its extension minimum weight perfect bipartite b-matching, and maximum flow and its extension minimum-cost flow. The complexity of our linear program solver reduces by a factor of n; specifically, we solve these problems in Õ(m + n^{1.5}) time when the graph has m edges and n vertices. For moderately dense graphs (m = Ω(n^{1.5})) this time complexity is again nearly-linear, settling the optimal complexity for these combinatorial problems on moderately dense graphs. This also constitutes the first run-time improvement for bipartite matching on (moderately) dense graphs over the classic algorithms of Dinic, Hopcroft, Karp, Karzanov [Din70, HK73, Kar73] and Ibarra, Moran [IM81] (see Table 1.2).

³Here O(n^ω) is the number of operations required for multiplying two n × n matrices, where the current best bound is ω ≤ 2.373 [Wil12, Gal14, AW21]. There is currently no linear system solver known that runs in O(n^{ω−ε}) time for constant ε > 0 for general linear systems. Faster algorithms are only known for special cases such as symmetric diagonally dominant systems [ST04, KMP10, KMP11, KOSZ13, LS13, CKM+14, KLP+16, KS16], or very sparse systems [PV21].

Year       | Authors                         | References           | Time Õ(·)
1969–1973  | Hopcroft, Karp, Dinic, Karzanov | [HK73, Din70, Kar73] | m√n
1981, 2004 | Ibarra, Moran, Mucha, Sankowski | [IM81, MS04]         | n^ω
2013       | Madry                           | [Mad13]              | m^{10/7}
2020       | Liu, Sidford                    | [LS20]               | m^{11/8+o(1)}
2020       | Kathuria, Liu, Sidford          | [KLS20]              | m^{4/3+o(1)}
2020       | This thesis                     |                      | m + n^{1.5}

Table 1.2: History of algorithms for maximum-cardinality bipartite matching. For a more comprehensive list, see [DP14].

Chapter 2

Publication List

At the time of writing this thesis, I have published 10 papers in peer reviewed conferences and one manuscript. To keep the length of the thesis at a manageable level, not all of these papers are included. Below are the papers that I want to highlight, as they tell the story of dynamic matrix algorithms and how to use them to solve other problems in the areas of dynamic graphs and optimization.

• Paper A [BNS19] (see page 49 of the full thesis): Dynamic Matrix Inverse: Improved Algorithms and Matching Conditional Lower Bounds. Jan van den Brand, Danupon Nanongkai, and Thatchaphol Saranurak. FOCS 2019.

• Paper B [Bra21] (see page 137 of the full thesis): Unifying Matrix Data Structures: Simplifying and Speeding up Iterative Algorithms. Jan van den Brand. SOSA 2021. Best Paper.

• Paper C [BN19] (see page 163 of the full thesis): Dynamic Approximate Shortest Paths and Beyond: Subquadratic and Worst-Case Update Time. Jan van den Brand and Danupon Nanongkai. FOCS 2019.

• Paper D [Bra20] (see page 215 of the full thesis): A Deterministic Linear Program Solver in Current Matrix Multiplication Time. Jan van den Brand. SODA 2020.

• Paper E [BLSS20] (see page 255 of the full thesis): Solving Tall Dense Linear Programs in Nearly Linear Time. Jan van den Brand, Yin Tat Lee, Aaron Sidford, and Zhao Song. STOC 2020. Invited to Special Issue.

• Paper F [BLN+20] (see page 311 of the full thesis): Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs. Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, and Di Wang. FOCS 2020. Invited to Special Issue.


• Paper G [BLL+21] (see page 407 of the full thesis): Minimum Cost Flows, MDPs, and L1-Regression in Nearly Linear Time for Dense Instances. Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, and Di Wang. STOC 2021.

The following is a list of other work, not included in this thesis:

• [BS19] Sensitive Distance and Reachability Oracles for Large Batch Updates. Jan van den Brand and Thatchaphol Saranurak. FOCS 2019.

• [BBMN21] Breaking the Quadratic Barrier for Matroid Intersection. Joakim Blikstad, Jan van den Brand, Sagnik Mukhopadhyay, and Danupon Nanongkai. STOC 2021.

• [BPSW21] Training (Overparameterized) Neural Networks in Near-Linear Time. Jan van den Brand, Binghui Peng, Zhao Song, and Omri Weinstein. ITCS 2021.

• [BBG+20] Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Manuscript.

The results of Paper A [BNS19] and Paper B [Bra21] on dynamic linear algebra are discussed in Chapter 3. Paper C [BN19] explores the implications of these dynamic algebra techniques for distance problems on dynamic graphs and is discussed in Chapter 4. The applications of dynamic algebra techniques for optimization algorithms (Papers D to G) are presented in Chapter 5.

For Papers A to D, I am the main contributor. Each of the Papers E to G consists of two parts: a new interior point method and new data structures to efficiently implement these methods. I am the main contributor for these new data structures. Paper D also consists of two such parts, but there I am the main contributor for both the interior point method and the data structures. The main contributor in [BPSW21] is Binghui Peng and in [BBMN21] it is Joakim Blikstad.

To reduce the length of this thesis, we omit some sections of Papers A to G. We omit most appendix sections and the sections of which I was not the main contributor (i.e. the interior point sections of Papers E to G). At last, we omit some of the data structure proofs in Paper E because we refine the data structures and analyze them again in Papers F and G.

Chapter 3

Dynamic Linear Algebra

3.1 Unifying Matrix Data Structures

Among dynamic algorithms (i.e. data structures) that maintain the solution to some linear algebra problem while the input changes over time, a wide range can be described via the following model:

Problem 3.1.1. Let f be some rational formula over matrices (i.e. a formula consisting only of matrix addition, subtraction, multiplication, and inversion). The task is to maintain f(M_1, ..., M_p) while the input matrices M_1, ..., M_p change over time.

For example, many interior point based linear program solvers must repeatedly compute the formula DA^T(ADA^T)^{-1}Ah, where the matrix D and vector h change over time. Pivoting based algorithms such as the famous simplex algorithm must repeatedly compute (A_B)^{-1}A_N, where the columns of some matrix A are split into the two matrices A_B and A_N, and in each iteration a pair of columns is exchanged between these two matrices. Even outside of optimization, one must often repeatedly compute some matrix formula for a changing input. For example, the "online linear system" problem from the symbolic computation and computer algebra area, which is required for computing fast matrix decompositions, falls into this type of iterative problem [SY15].

A data structure that solves Problem 3.1.1 for some formula f can be used to speed up any iterative algorithm that repeatedly evaluates f as a subroutine. In the past, one had to construct data structures that were tailored to the specific formula f that must be maintained. For example, different data structures were developed for the different linear program solvers in [CLS19, LSZ19, Bra20, JSWZ21] – each of these papers had to construct its very own data structure. At the same time, when the formula f grows long and complicated, so does the data structure and its analysis.

13 14 CHAPTER 3. DYNAMIC LINEAR ALGEBRA

Contributions In Paper B we show that Problem 3.1.1 can always be reduced to the special case f(M) = M^{-1}, a task that is generally referred to as dynamic matrix inverse or inverse maintenance. This unifies a wide range of dynamic algorithms for linear algebra problems and simplifies the area, as (i) we now need fewer distinct data structures, because data structures for just the inverse suffice, and (ii) it simplifies the proofs, because constructing data structures for f(M) = M^{-1} is easier than constructing data structures for longer, more complicated formulas. It also is a powerful tool for future algorithm design: If one develops a new iterative algorithm that repeatedly computes some matrix formula, then one no longer needs to construct a new data structure for that formula. Instead, one can apply the reduction and use an existing data structure for dynamic matrix inverse.

As a result, Paper B shows that the three data structures in [CLS19, LSZ19, Bra20] can be reproduced via a single matrix inverse data structure. Further, it allows us to speed up simple iterative algorithms to be competitive with more complicated ones; e.g. we manage to reproduce/simplify results for basic solutions [BM98], fast QR decompositions, and online linear systems [SY15]. In addition, the time per iteration of the simplex algorithm for linear programs reduces from O(n^2) [Bar68, BG69, Rei82, Gol77] to O(n^{1.529}) per iteration.¹

¹While theoretical bounds on the complexity of the simplex method are super-polynomial because of the large number of iterations [KM72, Kal92, HZ15], the simplex algorithm is observed to run in O(n) iterations on practical problem instances [Sha87], or O(n^3) iterations when adding small random noise to the input instance [ST04, DS05, KS06, Ver09, DH18]. Thus a polynomial speed-up of the time per iteration is non-negligible for those types of instances.

Techniques The reduction from some general formula to the special case of matrix inverse is based on the following observation. Let us for simplicity assume f(A, b) = A^{-1}b, and define

    N = ( A       b  )        N^{-1} = ( A^{-1}   A^{-1}b )
        ( 0 ⋯ 0   −1 ),                ( 0 ⋯ 0    −1      ).

Thus by being able to maintain N^{-1} (maintaining just a single column of N^{-1} already suffices) while N changes, we are able to maintain the value of f(A, b) while the inputs A and b change. We prove in Paper B that for any formula f consisting of the basic matrix operations, the construction of such a matrix N is possible, i.e. a matrix N where some blocks are exactly the inputs to f and some submatrix of the inverse N^{-1} is exactly the value of f.
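To make the construction concrete, the following is a small numerical sanity check (a minimal NumPy sketch, not from Paper B; the random test matrix is illustrative): it builds the augmented matrix N for f(A, b) = A^{-1}b and confirms that the last column of N^{-1} contains exactly the formula value, so a dynamic matrix inverse data structure for N maintains f.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # a well-conditioned input
b = rng.standard_normal(n)

# Augmented matrix N = [[A, b], [0 ... 0, -1]] from the reduction.
N = np.zeros((n + 1, n + 1))
N[:n, :n] = A
N[:n, n] = b
N[n, n] = -1.0

N_inv = np.linalg.inv(N)

# The top-right block of N^{-1} equals f(A, b) = A^{-1} b, so maintaining
# a single column of N^{-1} maintains the value of the formula f.
assert np.allclose(N_inv[:n, n], np.linalg.solve(A, b))
```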

3.2 Dynamic Matrix Inverse

Background For dynamic matrix inverse, the task is to maintain the inverse of an n × n matrix A, while A undergoes changes. There exist different models for these changes, such as element updates, column updates, or row updates, where entries, columns, or rows of the input matrix A change, respectively. When maintaining the inverse A^{-1}, commonly used variants are to maintain all n^2 entries of the inverse explicitly, or implicitly by providing query operations that return information about the inverse. Common query models are entry, row, or column queries, which return an entry, row, or column of the inverse respectively. Another common query model is to return A^{-1}v for any given vector v.

Dynamic matrix inverse is a key subroutine in many other algorithms, both in the static setting (e.g. interior point methods or the simplex algorithm) and the dynamic setting (maintaining the largest eigenvalue, rank, or determinant of a matrix, or maintaining reachability, distances, maximum matching size, or k-paths/cycles in a graph). Understanding the complexity of dynamic matrix inverse is the key to understanding these other problems.

Maintaining the inverse of a changing matrix was studied as early as the 1950s, when Sherman and Morrison described an algorithm with O(n^2) time per element update [SM50, Woo50]. Since then, the O(n^2) bound held strong until Sankowski described an algorithm that requires O(n^{1.447}) time per element update and element query [San04]. These rather restricted element queries are motivated by the fact that they already suffice to maintain the determinant and rank of a matrix within the same time complexity. Further, one can maintain st-reachability, the size of the maximum matching, and many other properties of a graph undergoing edge insertions and deletions within the same O(n^{1.447}) time complexity per update. Sankowski's result broke the long standing O(n^2) time barrier not just for dynamic matrix inverse, but also for these graph theoretic problems. While it may seem counterintuitive that such an algebraic algorithm forms the state-of-the-art algorithm for graph problems, Abboud and V.Williams have proven that the O(n^2) time bound can only be broken by exploiting algebraic techniques [AW14].

Outside the area of dynamic algorithms, e.g. in the area of optimization, dynamic matrix inverse has been studied under the name of inverse maintenance, where it is used as a subroutine of fast linear program solvers [Kar84, Vai89b, LS15]. Here the study goes back to the 80s, when Karmarkar sped up his linear program solver by using the techniques from Sherman and Morrison for dynamic matrix inverse/matrix maintenance [Kar84]. Refinements of this data structure, tailoring it to additional guarantees of the linear program solver, led to the current state-of-the-art linear program solvers [Vai89b, LS15, CLS19, LSZ19, Bra20, BLSS20, JSWZ21]; more on that in Chapter 5.
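For intuition, the Sherman–Morrison step behind the classic O(n^2) bound can be written out directly (a minimal NumPy sketch of the textbook identity, not Sankowski's algorithm): an element update A_ij ← A_ij + δ is the rank-1 change A + δ·e_i e_j^T, so the inverse can be patched in O(n^2) time instead of recomputed from scratch.

```python
import numpy as np

def element_update_inverse(A_inv, i, j, delta):
    """Return (A + delta * e_i e_j^T)^{-1} given A^{-1}, in O(n^2) time.

    Sherman-Morrison with u = delta * e_i and v = e_j:
        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u).
    """
    denom = 1.0 + delta * A_inv[j, i]     # 1 + v^T A^{-1} u
    if abs(denom) < 1e-12:
        raise ValueError("update would make the matrix singular")
    # Rank-1 correction built from column i and row j of the old inverse.
    return A_inv - (delta / denom) * np.outer(A_inv[:, i], A_inv[j, :])

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)
A_inv = np.linalg.inv(A)
A[2, 3] += 0.5                                   # element update to A
A_inv = element_update_inverse(A_inv, 2, 3, 0.5)
assert np.allclose(A_inv, np.linalg.inv(A))
```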

Contribution Sankowksi’s O(n1.447) time algorithm for dynamic matrix inverse relies on a technique called fast matrix multiplication which allows to multiply two n × n matrices in O(nω) operations, where the current best bound is ω ≤ 2.373 [Wil12, Gal14, AW21]. Further, Abboud and V.Williams have shown that every algorithm that beats O(n2) update time must use fast matrix multiplication [AW14]. Thus it is natural to ask for the exact relationship between dynamic matrix inverse and fast matrix multiplication, i.e. how fast can we solve dynamic matrix inverse given some matrix multiplication algorithm. 16 CHAPTER 3. DYNAMIC LINEAR ALGEBRA

Previous work was unable to answer this question: There is a lower bound by Abboud and V.Williams [AW14] which implies an Ω(n^{ω−1})-time lower bound, but it requires that the dynamic matrix inverse algorithm has O(n^{ω−ε}) preprocessing time for some constant ε > 0. There is no dynamic algorithm known with such small preprocessing time, and the lower bound does not rule out that faster algorithms could exist if the preprocessing time is larger. Another lower bound, proven by Henzinger et al. [HKNS15], does not require the restrictive assumption of small preprocessing time and implies an Ω(n)-time lower bound. However, this lower bound is independent of ω, so it does not give any insight into the relationship between dynamic matrix inverse and fast matrix multiplication. Finally, for both these lower bounds by Abboud and V.Williams [AW14] and Henzinger et al. [HKNS15], there is a gap to Sankowski's upper bound, so the question of the optimal time complexity and its dependency on ω remained open. Closing this gap between upper and lower bound was also explicitly stated as an open problem by Abboud and V.Williams [AW14].

We close this gap in Paper A by obtaining both improved upper and conditional lower bounds that share the same dependency on fast matrix multiplication. For any (future) algorithm for fast matrix multiplication, our conditional lower bound limits how fast dynamic matrix inverse could be solved when using that matrix multiplication algorithm. This complexity is matched by our upper bounds. Specifically, for the current best bounds on fast matrix multiplication, our algorithm has O(n^{1.407}) element update and element query time, and our conditional lower bound states that no improvement is possible except by improving fast matrix multiplication. The exact results are as follows:

Theorem 3.2.1 (Paper A). There exists a dynamic matrix inverse algorithm that preprocesses a given n × n matrix A in O(n^ω) time. The dynamic algorithm then supports element updates to A in O(n^{a+b} + n^{ω(1,a,b)−b} + n^{ω(1,1,a)−a}) time and element queries to A^{-1} in O(n^{a+b}) time for any 0 ≤ b ≤ a ≤ 1.²
For the current best upper bounds on matrix multiplication [Wil12, Gal14, GU18, AW21], this is O(n^{1.407}) update and query time. If ω = 2, then the update and query time is O(n^{1+1/4}).

Theorem 3.2.2 (Paper A). Assuming the hinted uMv hypothesis, for any constant 0 < a ≤ b < 1 and ε > 0 there is no dynamic matrix inverse algorithm with polynomial preprocessing time, O(n^{−ε}(n^{a+b} + n^{ω(1,a,b)−b} + n^{ω(1,1,a)−a})) update time, and O(n^{a+b−ε}) query time.

Note that both the upper and conditional lower bound allow for trade-offs between update and query time. By choosing smaller parameters a, b, Theorem 3.2.1 achieves smaller query time at the cost of larger update time. Further, the conditional lower bound matches this trade-off: Given an algorithm with small query time, Theorem 3.2.2 yields a lower bound on how large the update time must be, matching the upper bound of Theorem 3.2.1.

²Here O(n^{ω(a,b,c)}) is the complexity of multiplying an n^a × n^b matrix by an n^b × n^c matrix.

The stated bounds assume the current best bounds on ω(·, ·, ·) from [GU18].

Variant               | Prev. upper bound    | Prev. lower bound | New upper bound        | New lower bound
Element update        | O(n^1.447) [San04]   | Ω(n) [HKNS15]     | O(n^1.407) (Thm 3.2.1) | Ω(n^1.407) (Thm 3.2.2)
Element query         | O(n^1.447)           | Ω(n)              | O(n^1.407)             | Ω(n^1.407)
Same, with trade-off:
  update              | O(n^1.529) [San04]   | Ω(n^1.373) [AW14] | –                      | Ω(n^1.529) (Paper A)
  query               | O(n^0.529)           | Ω(n^1.373)        | –                      | Ω(n^0.529)
Column update         | O(n^2) [SM50, Woo50] | –                 | O(n^1.529) (Paper A)   | Ω(n^1.529) (Paper A)
Row query             | O(n)                 | –                 | O(n^1.529)             | Ω(n^1.529)

The same table, assuming ω = 2:

Variant               | Prev. upper bound    | Prev. lower bound | New upper bound | New lower bound
Element update        | O(n^{1+1/3}) [San04] | Ω(n) [HKNS15]     | O(n^{1+1/4})    | Ω(n^{1+1/4})
Element query         | O(n^{1+1/3})         | Ω(n)              | O(n^{1+1/4})    | Ω(n^{1+1/4})
Same, with trade-off:
  update              | O(n^1.5) [San04]     | Ω(n) [AW14]       | –               | Ω(n^1.5)
  query               | O(n^0.5)             | Ω(n)              | –               | Ω(n^0.5)
Column update         | O(n^2) [SM50, Woo50] | –                 | O(n^1.5)        | Ω(n^1.5)
Row query             | O(n)                 | –                 | O(n^1.5)        | Ω(n^1.5)

Figure 3.1: Our new upper and conditional lower bounds compared to the previous ones. The lower bounds hold for update or query time, i.e. it is not possible to improve both complexities. (In fact, our lower bounds predict a trade-off: Any improvement to query complexity must result in an increase in update time. This trade-off is tight with our upper bounds.)

[Figure 3.2 (diagram). Problems shown reducing to dynamic matrix inverse: division-free adjoint and determinant; polynomial inverse; strong connectivity; transitive closure; DAG path counting; k-cycle detection; st-distance; pseudoinverse; adjoint; inverse; interpolation polynomial; linear system; matrix product; determinant; rank; largest eigen-/singular value; k-path; cycle detection; perfect matching; bipartite maximum matching; maximum matching; triangle detection; subgraph triangle detection; counting ST-paths.]

Figure 3.2: Applications of dynamic matrix inverse for other dynamic problems. Each arrow reflects a reduction and all these problems reduce to dynamic matrix inverse. The yellow box contains algebraic applications for matrices over fields while the green box contains applications for matrices over rings. All remaining reductions are for graph problems.

These tight upper and lower bounds hold not only for dynamic matrix inverse, but also for many other dynamic linear algebra problems like determinant or rank, and dynamic graph problems like reachability, maximum matching, directed cycle detection, and many more. We thus settle the optimal complexity for a wide range of dynamic problems.

Besides this result for element updates and element queries, we also obtain algorithms (and matching conditional lower bounds) for other models of dynamic matrix inverse. For example, we obtain an algorithm for row updates to A and column queries to A^{-1} with O(n^{1.529}) update and query time. This can be used to solve dynamic bipartite matching in O(n^{1.529}) update time, where in each update a node on the left side of the graph is replaced. This reflects use-cases such as assigning users to servers, where users (the left side of the graph) keep logging in and out, while the servers are fixed. Another application would be to maintain the solution of a linear system where each update replaces any one of the linear constraints.

Upper Bound Techniques The high-level idea of our dynamic matrix inverse algorithm is to reduce the problem onto itself. Consider some matrix A and let A′ be the same matrix after some updates. Then (if both matrices are non-singular) there exists a matrix T such that A′ = AT, i.e. T is a linear transformation that reflects how A changed to A′. Further, the new inverse is given by A′^{-1} = T^{-1}A^{-1}. We can assume that the old inverse of A is known, because we can compute it during the initialization/preprocessing of our data structure. Thus the task of obtaining the new inverse A′^{-1} reduces to the task of obtaining the inverse of the transformation, T^{-1}. Note that further updates to A′ will result in updates to T. More accurately, one can prove that an element update to A′ results in a column update to T. Also, in order to obtain an entry of A′^{-1}, it suffices to know only a row of T^{-1}. Thus we have reduced the element-update, element-query variant of dynamic matrix inverse to the column-update, row-query variant. While the latter version seems harder, note that initially, before any updates, T must be the identity matrix because A′ = A. Thus after k updates to A′, the matrix T differs from the identity matrix in at most k columns, because an element update to A′ corresponds to a column update to T. Inverting T is quite easy for small k, because then T still shares a lot of structure with the identity matrix, which is easy to invert. By constructing faster algorithms for column-update, row-query dynamic matrix inverse, we are able to also improve the upper bounds for the element-update, element-query variant.
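The batching-and-restart principle can be illustrated with a small sketch (under simplifying assumptions; this is not the algorithm of Paper A, which additionally exploits fast rectangular matrix multiplication, and it does the equivalent bookkeeping via the Woodbury identity rather than via the transformation matrix T):

```python
import numpy as np

class LazyInverse:
    """Answer entry queries to A^{-1} while A undergoes element updates.

    After k buffered updates, A' = A + sum_t delta_t e_{i_t} e_{j_t}^T = A + U V^T.
    Woodbury: A'^{-1} = A^{-1} - (A^{-1}U) (I_k + V^T A^{-1}U)^{-1} (V^T A^{-1}).
    Since every update vector is a scaled unit vector, the cached blocks grow
    by one column/row in O(n) time per update, and a query only touches
    k-dimensional objects plus one k x k solve. Once k exceeds `threshold`,
    the structure restarts, i.e. recomputes A^{-1} from scratch -- the
    bottleneck the text refers to.
    """

    def __init__(self, A, threshold=8):
        self.A = np.array(A, dtype=float)
        self.threshold = threshold
        self._restart()

    def _restart(self):
        n = self.A.shape[0]
        self.A_inv = np.linalg.inv(self.A)  # expensive, done rarely
        self.AU = np.zeros((n, 0))          # columns: A^{-1} u_t = delta_t * A^{-1} e_{i_t}
        self.VA = np.zeros((0, n))          # rows:    v_t^T A^{-1} = row j_t of A^{-1}
        self.js = []                        # column indices j_t of the updates

    def update(self, i, j, delta):
        self.A[i, j] += delta
        self.AU = np.column_stack([self.AU, delta * self.A_inv[:, i]])
        self.VA = np.vstack([self.VA, self.A_inv[j, :]])
        self.js.append(j)
        if len(self.js) > self.threshold:
            self._restart()

    def query(self, i, j):
        if not self.js:
            return self.A_inv[i, j]
        k = len(self.js)
        cap = np.eye(k) + self.AU[self.js, :]  # I_k + V^T A^{-1} U
        return self.A_inv[i, j] - self.AU[i, :] @ np.linalg.solve(cap, self.VA[:, j])

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)
d = LazyInverse(A, threshold=3)
for (i, j, delta) in [(0, 1, 0.3), (2, 2, -0.5)]:
    A[i, j] += delta
    d.update(i, j, delta)
assert np.isclose(d.query(4, 5), np.linalg.inv(A)[4, 5])
```

A larger restart threshold makes the rare rebuilds cheaper per update but makes each query more expensive; balancing the two is exactly the trade-off that the conditional lower bounds below formalize.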

Conditional Lower Bounds As mentioned in the previous paragraph, maintaining T is easy as long as k, the number of updates so far, is small. So once k grows too large, it becomes efficient to just restart the algorithm and set A ← A′. This restart is the main bottleneck of the dynamic matrix inverse algorithms, as we have to explicitly recompute A^{-1}. Our lower bounds are based on conjectures that reflect this trade-off for when to restart the algorithm. The simplest conjecture can be phrased as follows: The problem consists of 3 phases. In each phase, the algorithm is allowed to perform some computation, and once the algorithm is done, the next phase starts.

1. The algorithm receives an n × n^t matrix M for some constant 0 < t < 1.
2. The algorithm receives an n^t × n matrix V.
3. The algorithm receives an index i ∈ {1, ..., n} and must return the i-th column of MV.

[Figure (diagram): M is an n × n^t matrix, V is an n^t × n matrix, and the query asks for the i-th column of the product MV.]

The difficulty here is that we do not know which part of the product MV must be returned in the last phase, so it is not clear which information to precompute in Phase 2. The trivial solutions for this problem are thus to (a) compute MV during Phase 2 (which takes O(n^{ω(1,t,1)}) time), or (b) perform no computation in Phase 2 and compute the matrix-vector product of M and the i-th column of V in Phase 3 in O(n^{1+t}) time. We conjecture that no algorithm exists that needs polynomial preprocessing time in Phase 1, O(n^{ω(1,t,1)−ε}) time in Phase 2, and O(n^{1+t−ε}) time in Phase 3, for some constant ε > 0.
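In code, the two baseline strategies (a) and (b) look as follows (a minimal NumPy sketch; the function names and the choice t = 1/2 are illustrative, not from the thesis):

```python
import numpy as np

def phase2_precompute(M, V):
    """Strategy (a): compute all of MV in Phase 2, in O(n^{omega(1,t,1)})
    time; Phase 3 is then a mere column lookup."""
    P = M @ V
    return lambda i: P[:, i]

def phase2_lazy(M, V):
    """Strategy (b): store the inputs in Phase 2; Phase 3 multiplies M with
    the i-th column of V in O(n^{1+t}) time."""
    return lambda i: M @ V[:, i]

rng = np.random.default_rng(2)
n, t = 64, 0.5
M = rng.standard_normal((n, round(n**t)))
V = rng.standard_normal((round(n**t), n))
i = 7
assert np.allclose(phase2_precompute(M, V)(i), phase2_lazy(M, V)(i))
```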

Chapter 4

Dynamic Distances

This chapter explores the impact of dynamic linear algebra (specifically dynamic matrix inverse from Section 3.2) on dynamic distance problems. Among dynamic graph problems, maintaining distances is one of the most extensively studied areas. For example, map services are not just able to tell us the fastest route between two points, they can even adapt the solution based on current traffic conditions and congestion. This is a dynamic graph problem, as the edge weights of the underlying road network are constantly changing. Other examples include routing in computer networks [NST00] or planning of tasks [KLLF04] in changing environments. In theoretical computer science, maintaining distance information has a rich history going back to the 80s [ES81, AIMN91, KS98, Kin99, DI02, Tho04, DI04, San05b, Tho05, DI06, BHS07, Ber09, RZ11, BR11, RZ12, ACG12, HKN16, ACT14, HKN14a, HKN14b, ACD+16, BC16, Ber16, BC17, ACK17, HKN18, BN19, BGW20, GW20b, GW20a, BGS20, GW20c, GWW20, CS21, CZ21].

Here one studies graphs that undergo edge insertions and deletions, and the task is to maintain distance information such as single-source shortest paths, all-pairs shortest paths, st-shortest path, or extremal distances like diameter, radius, or eccentricities. In this section, we state the central open problems for each of these dynamic distance problems and the progress this thesis (Paper C) makes on each of them. These results are all based on combining techniques from several distinct areas of computer science, which will be outlined in Section 4.2: we combine graph-theoretic techniques with techniques from the symbolic computation area and the dynamic linear algebra techniques from Paper A [BNS19] (previously discussed in Section 3.2).



Figure 4.1: A worst-case bound of T means every update needs at most T time, whereas amortized only bounds the average time per update.

4.1 New Dynamic Algorithms

Single-Source The main question in the area of dynamic algorithms is whether one can find any dynamic algorithm for a given problem. That is, does there exist a dynamic algorithm that is faster than just trivially recomputing the solution from scratch whenever the input changes? In the case of single-source shortest paths, the question is whether there exists a dynamic algorithm that is faster than just running Dijkstra's algorithm whenever the graph changes. The only progress on this question so far was in the so-called partially-dynamic setting, where edges are only inserted [CZ21, GWW20], or only deleted [ES81, HKN14a, HKN14b, BC16, BC17, HKN18, BGW20, GW20b, GW20a, BGS20, CS21]. Despite four decades of research in dynamic distances, the fully-dynamic setting (with both edge insertions and deletions) remained elusive and was repeatedly stated as an open problem in, e.g., [DI04, Tho04].

Is there any fully-dynamic algorithm for single-source shortest paths that can beat trivially running Dijkstra's algorithm?

This thesis answers the open problem by maintaining (1 + ε)-approximate single-source distances in Õ(n^{1.823}/ε^2 · log W) time per update¹, which beats Dijkstra's algorithm on dense graphs. Note that Abboud and V.Williams prove a conditional lower bound [AW14], ruling out the existence of a dynamic algorithm that maintains the exact distances in weighted graphs. Thus (1 + ε)-approximation is required.

Theorem 4.1.1. There is a (1 + ε)-approximate algorithm for explicitly maintaining single-source distances for directed weighted graphs with integer weights in {1, ..., W} within Õ(n^{1.823}/ε^2 · log W) worst-case update time after the Õ(n^{2.621})-time preprocessing.

¹Here W is the largest integer weight on any edge.

All Pairs When analyzing the update time complexity of dynamic algorithms, we distinguish between worst-case and amortized complexity. A worst-case bound T on the update time means that every update takes at most T time, whereas an amortized bound T means that for all k ≥ 1, the first k updates together take at most k·T time. Thus, when given an amortized time bound, an update may require a lot more time than T (see Figure 4.1). That is why for time-critical applications, it is important to have good worst-case bounds.

When maintaining the distances between every pair of nodes (i.e. all-pairs shortest paths), there exist fast dynamic algorithms with Õ(n^2) amortized update time by Demetrescu and Italiano [DI04]. This is optimal, as changing a single edge may change the distance of Ω(n^2) many pairs, so just writing down the output (or how the output changes) takes Ω(n^2) time. However, for worst-case bounds, the fastest algorithm on weighted graphs needs O(n^{2+2/3}) time and O(n^{2.5}) on unweighted graphs [ACK17, GW20c], resulting in a large gap between worst-case and amortized bounds. This raises the question whether worst-case is strictly harder than amortized, or whether similar optimal Õ(n^2) time bounds can be achieved in the worst-case setting. We give an affirmative answer for the approximate case² on unweighted undirected graphs, where our algorithm maintains (1 + ε)-approximate distances in Õ(n^2/ε^{ω+1}) worst-case update time. For weighted directed graphs, our algorithm is almost optimal with Õ(n^{2.045}/ε^2 · log W) worst-case time per update.³

Theorem 4.1.2. There is a (1 + ε)-approximation algorithm for maintaining all-pairs distances explicitly in
(i) Õ(n^2/ε^{ω+1}) worst-case update time after Õ(n^{2.53})-time preprocessing for undirected unweighted graphs, and
(ii) Õ(n^{2.045}/ε^2 · log W) worst-case update time after O(n^{2.873})-time preprocessing for directed weighted graphs with integer weights in {1, ..., W}.

Extremal Distances Extremal distances relate to the largest distances in a graph. More precisely, the eccentricity of a node v is the largest distance from v to any other node. The diameter of a graph is the largest eccentricity and the radius is the smallest eccentricity. These extremal distances can be maintained trivially in $\tilde{O}(n^2)$ amortized time per update (or $\tilde{O}(n^2/\varepsilon^{\omega+1})$ worst-case time when allowing a $(1+\varepsilon)$-approximation) by running a dynamic all-pairs distance algorithm from the previous paragraph. Ancona et al. [AHR+19] have shown that this quadratic time bound is optimal (assuming the so-called SETH conjecture) when maintaining a $(1.5-\varepsilon)$-approximation of the diameter or radius, or a $(5/3-\varepsilon)$-approximation of the eccentricities. We show that these approximation barriers are tight by obtaining dynamic algorithms with subquadratic worst-case update time for nearly $(1.5+\varepsilon)$-approximate diameter and radius, and nearly $(5/3+\varepsilon)$-approximate eccentricities. Previously, such algorithms had only amortized complexity bounds and worked only in the partially dynamic setting [AHR+19] (i.e., when supporting only edge insertions or only edge deletions).

²For exact all-pairs distances, the question remains open.
³The complexity for the weighted case depends on the number of operations required to multiply an $n \times \sqrt{n}$ matrix by a $\sqrt{n} \times n$ matrix. The current best bound is $O(n^{2.045})$ by Le Gall and Urrutia [GU18]. A future improvement to $\tilde{O}(n^2)$ time for this algebraic problem would lead to an optimal $\tilde{O}(n^2/\varepsilon^2 \log W)$-time all-pairs shortest paths algorithm.

Theorem 4.1.3. We write diam(G), radius(G) for the diameter and radius of graph G respectively, and let ecc(v, G) be the eccentricity of node v in G. There exist algorithms that maintain the following values with the following time complexities for a dynamic graph G.

1. $\tilde{D} \in \left[\left(\tfrac{2}{3} - \varepsilon\right) \mathrm{diam}(G) - 1/3,\ \mathrm{diam}(G)\right]$ in $\tilde{O}(n^{1.779}/\varepsilon^{1+\omega})$ worst-case update time after $O(n^{2.624})$-time preprocessing,

2. $\tilde{R} \in \left[\left(\tfrac{2}{3} - \varepsilon\right) \mathrm{radius}(G) - 2/3,\ \mathrm{radius}(G)\right]$ in $\tilde{O}(n^{1.779}/\varepsilon^{1+\omega})$ worst-case update time after $O(n^{2.624})$-time preprocessing, and

3. $\widetilde{\mathrm{ecc}}(v) \in \left[\left(\tfrac{3}{5} - \varepsilon\right) \mathrm{ecc}(v, G) - 4/7,\ \mathrm{ecc}(v, G)\right]$ for all nodes v in $\tilde{O}(n^{1.823}/\varepsilon^{1+\omega})$ worst-case update time after $O(n^{2.621})$-time preprocessing.

The algorithm for diameter works for directed unweighted graphs, while the others work for undirected unweighted graphs.

4.2 From Dynamic Linear Algebra to Dynamic Distances

All the previously mentioned results for dynamic distances stem from a combination of techniques from graph-theoretic algorithms, symbolic computation, and dynamic linear algebra. A detailed proof and an overview of the techniques can be found in Paper C. Here we give a brief overview of the ideas used from the previously mentioned areas.

The first step of all our results is to reduce the distance problem on graphs to algebraic problems using the following reduction (see [San05b, San05a, BS19, BNS19] or Paper C for a more detailed statement). Let G = (V, E) be an n-node graph, assume for simplicity it is unweighted, and let A be its adjacency matrix. Consider the polynomial matrix (i.e., a matrix with polynomials as entries) $M := I - X \cdot A$. If we consider the entries to be polynomials modulo $X^k$ for some $1 \le k \le n$, then $M^{-1} = \sum_{i=0}^{k-1} X^i A^i$. (This can be seen by multiplying both sides by $I - X \cdot A$ and expanding the product.) Note that for all vertices $u, v \in V$ the entry $(A^i)_{u,v}$ is exactly the number of walks from u to v that consist of i steps. Hence the smallest degree of any non-zero monomial in the polynomial $(M^{-1})_{u,v}$ is the distance from u to v (if the distance is less than k). It thus suffices to build a data structure for the above polynomial matrix M that can handle updates to the entries of M and can return entries of its inverse $M^{-1}$. Updating an edge in G corresponds to changing an entry of M, so we can solve the dynamic graph problem via an algorithm for dynamic matrix inverse on polynomial matrices.

Polynomial matrices are studied in the area of symbolic computation, and that area has developed techniques for handling such matrices efficiently, e.g., polynomial matrix inversion [ZLS15] or multiplying polynomial matrices [ZLS12]. By combining these techniques with our dynamic matrix inverse algorithms (such as Section 3.2, Paper A), we are able to efficiently maintain the inverse of polynomial matrices.
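As a sanity check of this reduction, the following sketch (illustration only, not the thesis data structure: it recomputes the truncated power series from scratch rather than maintaining it under updates) recovers distances from the coefficients of $(I - X \cdot A)^{-1} \bmod X^k$. Since the coefficient of $X^i$ is $A^i$, it suffices to track the support of each power.

```python
import numpy as np

def distances_via_powers(A, k):
    """dist(u, v) = smallest i < k with (A^i)_{u,v} != 0, i.e. the smallest
    degree of a non-zero monomial in ((I - X*A)^{-1})_{u,v} mod X^k."""
    n = A.shape[0]
    dist = np.full((n, n), np.inf)
    support = np.eye(n, dtype=bool)          # support of A^0 = I
    for i in range(k):
        newly_reached = support & np.isinf(dist)
        dist[newly_reached] = i              # first non-zero coefficient
        support = (support.astype(np.int64) @ A) > 0   # support of A^{i+1}
    return dist

# directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0
A = np.zeros((4, 4), dtype=np.int64)
for u in range(4):
    A[u, (u + 1) % 4] = 1
print(distances_via_powers(A, k=4))  # e.g. dist(0, 3) = 3 and dist(3, 0) = 1
```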

Unfortunately, the dynamic matrix inverse algorithms we construct for polynomial matrices are quite slow for large k. Recall that the parameter k − 1 bounds the largest degree of any polynomial we may encounter in the computations of our algorithms, because we perform computations modulo $X^k$. At the same time, k bounds the largest distance we can detect via our reduction. The naive choice of k = n (as every distance in a graph is at most n − 1) would result in a very slow algorithm, because of the large-degree polynomials involved in each step of the algorithm. Here graph-theoretic ideas such as hitting sets, commonly used by other graph algorithms (see, e.g., [UY91, Zwi02]), allow us to bound k by a small polynomial in n (on the order of $n^{1/4}$ to $n^{1/2}$, depending on whether we want to solve single-source shortest paths, all-pairs shortest paths, or diameter, etc.). The high-level idea of hitting sets is to decompose any long path in the graph into several shorter segments, each of which can be detected with a smaller value of k; a sketch of the sampling argument follows below. By combining the dynamic algebraic techniques with these graph-theoretic ideas, we are able to solve the previously mentioned open problems on dynamic single-source and all-pairs shortest paths, as well as extremal distances.
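The sampling step behind hitting sets is elementary. The following sketch (a standard argument, not specific to this thesis; the constant c and the function name are illustrative) samples a set S that, with high probability, contains a vertex of every fixed path on k vertices, so every long shortest path decomposes into hub-to-hub segments of length less than k.

```python
import math
import random

def sample_hitting_set(n, k, c=3, rng=random):
    """Include each vertex independently with probability ~ c*ln(n)/k."""
    p = min(1.0, c * math.log(max(n, 2)) / k)
    return {v for v in range(n) if rng.random() < p}

n, k = 1000, 32
S = sample_hitting_set(n, k)
# A fixed set of k vertices avoids S with probability (1 - p)^k <= n^{-c},
# so a union bound over O(n^2) vertex pairs leaves failure prob. O(n^{2-c}).
print(len(S), "hubs; expected about", round(n * 3 * math.log(n) / k))
```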

Chapter 5

Convex and Combinatorial Optimization

Consider a linear program of the form $\min c^\top x$ subject to $A^\top x = b$, $x \ge 0$ for an $m \times n$ matrix A ($m \ge n$), i.e., m non-negative variables and n linear constraints. For a long time it was not clear whether there exists any polynomial-time algorithm for solving linear programs, until Khachiyan developed the ellipsoid method [Kha79]. While a breakthrough from a theoretical perspective, this algorithm is very slow in practice. Later, Karmarkar constructed an algorithm based on the central path method which is fast in both theory and practice [Kar84]. Since then, there have been repeated improvements to the complexity [Ren88, Vai87, Vai89b, Vai89a, LS14, LS15], but pinning down the optimal complexity remained an open problem. It is known that solving linear programs is at least as hard as solving linear systems of the form $A^\top x = b$, for which the current fastest algorithms¹ run in $\tilde{O}(\mathrm{nnz}(A) + n^\omega)$ time [NN13, LMP13, CLM+15], where $n^\omega$ is the time required to multiply two $n \times n$ matrices and $\mathrm{nnz}(A)$ is the number of non-zero entries of A (i.e., the size of the input). This leads to the question:

Is solving linear programs as easy as solving linear systems?

There are two natural regimes for this question: the first is the case m = O(n), in which solving a linear system needs only $O(n^\omega)$ time, and the second is the case $m \gg n$, in which solving a linear system needs only $\tilde{O}(\mathrm{nnz}(A))$ time. In this thesis, we obtain optimal algorithms for both regimes, assuming the constraint matrix A is dense (i.e., $\mathrm{nnz}(A) = \Omega(mn)$), settling the optimal complexity for dense linear programs. This also implies optimal algorithms for other problems that can be represented by linear programs, such as linear $\ell_1$-regression, Markov decision processes, bipartite matching, and min-cost flows, if the problem instances are dense.

¹Excluding special cases such as Laplacian systems [ST04] or very sparse systems [PV21].



Figure 5.1: Development of the matrix multiplication exponent ω (dashed line) and the balanced linear program exponent (continuous line). Improvements on ω stem from [Str69, Pan78, BCRL79, Sch81, Rom82, CW82, Str87, CW87, Sto10, Wil12, Gal14, AW21]. For improvements of linear program solvers, see Table 1.1.

These results stem from an intricate interplay between central path methods and dynamic algorithms that were tailored to perfectly complement each other. Here we first explain the previously mentioned results in more detail, and at the end we outline the interaction between these two areas and how they lead to our results.

5.1 Algorithmic Results

Balanced Linear Programs (Paper D) If we assume that the two dimensions m and n of the linear program are balanced, i.e., m = O(n), then the conditional lower bound for solving linear programs is $\Omega(n^\omega)$ time. Figure 5.1 shows improvements to ω over time versus improvements for solving balanced linear programs, highlighting the gap between the lower bound and the best upper bounds. As can be seen in Figure 5.1, $\tilde{O}(n^{2.5})$ time was a long-standing barrier for solving linear programs, even though ω ≤ 2.5 had long been proven. This barrier was finally broken by Cohen, Lee and Song [CLS19], who invented a new type of randomized central path method which they called the "stochastic central path method". Their algorithm runs in $\tilde{O}(n^\omega + n^{2+1/6})$ time, which matches the current² lower bounds.

²The current best bound on ω is 2.373 [Wil12, Gal14, AW21]. There exist lower bounds of 2.168 > 2 + 1/6 for all currently known types of algorithms for matrix multiplication [CVZ19, Alm19, AW18a, AW18b]. However, in the future one might discover a new type of algorithm with ω < 2 + 1/6.

Whenever a new method is developed that breaks some long-standing barrier, it raises natural questions: How powerful is the new method? Is the new method really needed? And, especially in regard to future research, should we from now on always use this new method instead of the old one? It was also stated as an open question in Song's PhD thesis [Son19] whether the new randomization techniques are really required, or whether the algorithm can be derandomized. We show that the new stochastic central path method is not required, by constructing a deterministic linear program solver that runs in exactly the same $\tilde{O}(n^\omega + n^{2+1/6})$ time complexity.

Theorem 5.1.1 (Paper D). Let $\min_{A^\top x = b,\, x \ge 0} c^\top x$ be a linear program for an $m \times n$ matrix A with rank(A) = n. Let R be a bound on $\|x\|_1$ for all $x \ge 0$ with $A^\top x = b$. Then for any $0 < \delta \le 1$ we can compute $x \ge 0$ such that

$$c^\top x \le \min_{A^\top x = b,\, x \ge 0} c^\top x + \delta \|c\|_\infty R \quad\text{and}\quad \|A^\top x - b\|_1 \le \delta \Big( R \sum_{i,j} |A_{i,j}| + \|b\|_1 \Big)$$

in time $O((m^\omega + m^{2+1/6}) \log^2(m) \log(m/\delta))$.

Remark 5.1.2. For integral A, b, c the parameter $\delta = 2^{-O(L)}$ is enough to round the approximate solution of Theorem 5.1.1 to an exact solution. Here $L = \log(1 + \det_{\max} + \|c\|_\infty + \|b\|_\infty)$ is the bit-complexity, where $\det_{\max}$ is the largest determinant of any square submatrix of A. For many combinatorial problems $L = O(\log(n + \|b\|_\infty + \|c\|_\infty))$. It is an open question whether there exists a polynomial-time algorithm for solving linear programs without such a dependence on the bit-length of the input. There exist methods to remove the complexity dependence on the vectors b, c while increasing the dependence on m [Tar86, DNV20]. There are also other algorithms whose complexity depends only on the constraint matrix, e.g., [VY96, DHNV20].

Tall Linear Programs (Papers E, G) We previously discussed the case where m = O(n). However, for many linear programs m ≫ n, possibly as large as $m = \Omega(n^2)$. For example, when representing bipartite matching, m is the number of edges and n the number of nodes of the bipartite graph. Especially in practical linear programs, m may be much larger than n; e.g., when solving the flight crew scheduling problem for American Airlines, Bixby et al. [BGL+92] used a linear program with m = 13 million and n = 837. So for now, let us consider m ≫ n and focus on the case where the input matrix is dense. In that case, no algorithm can run faster than O(mn) time, as that is the time required just to read the input. The question is whether linear programs can be solved within the same time complexity, which would settle the optimal time complexity.

We settle the optimal time complexity for tall and dense linear programs by obtaining such a nearly-linear time algorithm. Building on and refining the techniques from my previous deterministic solver for balanced linear programs, we show in Paper E that linear programs can be solved in $\tilde{O}(mn + n^3)$ time, which is nearly linear for $m = \Omega(n^2)$. We improve the upper bound to $\tilde{O}(mn + n^{2.5})$ time in Paper G and generalize the type of linear program we can solve to the form $\min c^\top x$ subject to $A^\top x = b$, $\ell \le x \le u$ for $\ell, u \in \mathbb{R}^m$. For comparison, the previous fastest algorithm (for m ≫ n) runs in $\tilde{O}(\sqrt{n}\,\mathrm{nnz}(A) + n^{2.5}) = \tilde{O}(mn^{1.5})$ time [LS14, LS15], which is a $\sqrt{n}$ factor slower than the desired nearly-linear time bound.

Theorem 5.1.3 (Paper G). Let $A \in \mathbb{R}^{m \times n}$, $c, \ell, u \in \mathbb{R}^m$, and $b \in \mathbb{R}^n$. Assume that there is a point x satisfying $A^\top x = b$ and $\ell_i \le x_i \le u_i$ for all $i \in [m]$. Let $W \stackrel{\mathrm{def}}{=} \max\big(\|c\|_\infty, \|A\|_\infty, \|b\|_\infty, \|u\|_\infty, \|\ell\|_\infty, \tfrac{\max_i(u_i - \ell_i)}{\min_i(u_i - \ell_i)}\big)$. For any $\delta > 0$ there is an algorithm running in time $\tilde{O}((mn + n^{2.5}) \log(W/\delta))$ that with high probability computes a vector $x^{(\mathrm{final})}$ satisfying

$$\|A^\top x^{(\mathrm{final})} - b\|_\infty \le \delta, \qquad \ell_i \le x_i^{(\mathrm{final})} \le u_i \ \forall i, \qquad c^\top x^{(\mathrm{final})} \le \min_{\substack{A^\top x = b \\ \ell_i \le x_i \le u_i \forall i}} c^\top x + \delta.$$

The additional constraints of the form $\ell \le x \le u$ allow our algorithm to be very efficient on a wider class of linear programs. For comparison, representing such constraints via the matrix A would result in an $m \times m$ matrix, i.e., the matrix would no longer be tall. By allowing $\ell \le x \le u$ constraints, our matrix stays of the smaller size $m \times n$, which allows us to solve various other problems in nearly linear time, such as

• Discounted Markov decision processes with |S| states and |A| actions in $\tilde{O}(|S|^2|A| + |S|^{2.5})$ time (note that the input size of the state-action-state transitions is $\Omega(|S|^2|A|)$).

• Linear $\ell_1$-regression of m data points in n dimensions in $\tilde{O}(mn + n^{2.5})$ time (here the input size is $\Omega(mn)$).

Bipartite Matching and Min-cost Flows (Papers F, G) Many graph-theoretic problems can be represented as linear programs. Here we consider the problems which, as linear programs, have the form $\min c^\top x$ subject to $A^\top x = b$, $\ell \le x \le u$, where A is the edge-vertex incidence matrix of the underlying graph. This includes bipartite matching, (negative weight) shortest paths, transshipment, and min-cost/maximum flow. These are some of the oldest problems in computer science; e.g., bipartite matching has been studied under the names optimal transport and assignment problem, and algorithms for these problems were invented as early as the 1830s [Oll10], because of their many applications in economics, scheduling, and data analysis. Yet, despite centuries of research, the optimal complexity for these problems remained elusive.

While for sparse graphs there have been recent improvements [Mad13, LS20, KLS20] (see Table 1.2), for moderately dense graphs the fastest algorithms have not been improved since Hopcroft-Karp'73 [HK73]. By improving the linear program solvers for tall matrices and tailoring the internally used dynamic algorithms to the special graph structure of the constraint matrix A, we obtain $\tilde{O}(m + n^{1.5})$-time algorithms for the aforementioned graph problems. Note that for moderately dense graphs with $m = \Omega(n^{1.5})$ edges, this constitutes a nearly linear time algorithm, settling the complexity question for these graph problems on moderately dense graphs. We initially prove the result for bipartite matching, negative weight shortest paths, and transshipment in Paper F. This is extended to min-cost/maximum flow in Paper G.

Theorem 5.1.4 (Paper G). Let G be a directed graph with n vertices, m edges, and edge-vertex incidence matrix A. Let $c \in \{-W, \dots, W\}^E$ be integer edge weights, $b \in \{1, \dots, W\}^V$ be vertex demands, and $\ell, u \in \{-W, \dots, W\}^E$ be edge capacities. Then we can solve

$$\min c^\top x \quad\text{subject to}\quad A^\top x = b,\ \ell \le x \le u$$

in $O(m \log W + n^{1.5} \log^2 W)$ time.

5.2 From Dynamic Linear Algebra to Optimization

Here we give a high-level description of how we obtain the previously mentioned results for linear programs and special cases thereof, like bipartite matching or min-cost flows, by using techniques from dynamic algorithms. The idea behind the so-called central path method is to first construct some feasible solution x (i.e., a vector satisfying the constraints $A^\top x = b$, $\ell \le x \le u$) and to then repeatedly improve the solution.³ This improvement is performed via an iterative algorithm, where in each step the vector x is updated by a formula of the form

$$x \leftarrow x + (I - DA(A^\top DA)^{-1}A^\top)g \tag{5.1}$$

where $A \in \mathbb{R}^{m \times n}$ is the constraint matrix of the linear program. Here $g \in \mathbb{R}^m$ is a vector and $D \in \mathbb{R}^{m \times m}$ is a diagonal matrix, both of which change in each iteration. The main bottleneck of linear program solvers is the computation of the expression (5.1). Our algorithmic results from Section 5.1 (Theorems 5.1.1, 5.1.3 and 5.1.4) are obtained by constructing dynamic algorithms that efficiently maintain x and (5.1), greatly reducing the time required in each iteration of the linear program solver.

³Technically our algorithms are primal-dual, i.e., they not only construct a solution x but also a solution y to the dual problem. For simplicity we ignore the dual here and focus only on the primal.
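For concreteness, the following dense sketch performs one update of the form (5.1) on random placeholder data (illustration only; the solvers discussed here never form the inverse explicitly). It also checks the key property that the step direction lies in the kernel of $A^\top$, so the invariant $A^\top x = b$ is preserved by every iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))              # constraint matrix, m >= n
D = np.diag(rng.uniform(0.5, 2.0, size=m))   # positive diagonal, changes per step
g = rng.standard_normal(m)                   # direction vector, changes per step
x = rng.standard_normal(m)                   # current (primal) iterate

M = np.linalg.inv(A.T @ D @ A)               # the n x n system solved each step
step = g - D @ A @ (M @ (A.T @ g))           # (I - D A (A^T D A)^{-1} A^T) g
x = x + step                                 # the update (5.1)

print(np.allclose(A.T @ step, 0))            # True: A^T x is unchanged
```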

Balanced Linear Programs Assume for simplicity m = O(n). As mentioned before, there was a long-standing $\Omega(m^{2.5})$ barrier for linear program solvers. This is because the central path method needs $\sqrt{m}$ iterations, and in each iteration one must compute the matrix-vector product in (5.1).

It is unknown how to multiply matrices and vectors faster than in $O(m^2)$ time, which results in the $\Omega(m^{2.5})$ barrier for linear programs. We circumvent this $\Omega(m^{2.5})$ time barrier by maintaining the term added to x in (5.1) under updates to D and g:

$$DA(A^\top DA)^{-1}A^\top g \tag{5.2}$$

This can be done via the reduction from Section 3.1 together with the dynamic matrix inverse algorithms from Section 3.2. The resulting dynamic algorithm for maintaining (5.2) is only efficient if there are not too many changes to D and g. This is because, if the vector g could change arbitrarily in every entry, then our data structure would effectively compute a matrix-vector product for some arbitrary vector, which requires $\Omega(m^2)$ time.

To guarantee that D and g do not change in too many entries from one iteration to the next, we modify the central path method by making it more robust, i.e., more resilient against approximation errors. In the classic central path method, the matrix D and vector g depend on the current solution x; more accurately, $D_{i,i}$ and $g_i$ depend on $x_i$, but not on the other coordinates. It has been known since the first central path method by Karmarkar [Kar84] that D can also be defined with respect to some approximation $\bar{x}$ with $(1-\varepsilon)x_i \le \bar{x}_i \le (1+\varepsilon)x_i$ for all i. We show in Paper D that the same is true for the vector g, i.e., $g_i$ can be defined with respect to $\bar{x}_i$. Note that we only need to change an entry $\bar{x}_i$ (and thus $D_{i,i}$ and $g_i$) if $x_i$ changed enough for $\bar{x}_i$ to no longer be a valid approximation. One can prove that throughout the entire run of the algorithm, one must perform only $\tilde{O}(m)$ changes to $\bar{x}$, i.e., on average, one must change only $\tilde{O}(\sqrt{m})$ entries of $\bar{x}$ (and thus of D and g) in each of the $\tilde{O}(\sqrt{m})$ iterations. These (on average) sparse changes to D and g allow us to maintain (5.2) efficiently via our dynamic algorithms.
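The lazy bookkeeping of the approximation $\bar{x}$ can be sketched in a few lines (hypothetical helper name and parameters; the actual solver couples this bookkeeping with the dynamic matrix data structures of Chapter 3):

```python
import numpy as np

def refresh_approximation(x, xbar, eps):
    """Return the coordinates whose (1 +/- eps)-approximation broke,
    and the repaired xbar; only these entries of D and g must change."""
    broken = np.where(np.abs(xbar - x) > eps * np.abs(x))[0]
    xbar = xbar.copy()
    xbar[broken] = x[broken]
    return broken, xbar

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 2.0, size=10)
xbar = x.copy()
x = x * rng.uniform(0.95, 1.05, size=10)  # one iteration perturbs x slightly
broken, xbar = refresh_approximation(x, xbar, eps=0.03)
print(broken)  # typically only a few coordinates need refreshing per iteration
```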

Tall Linear Programs To obtain a fast algorithm for tall linear programs, we further improve the robust central path method. The previous central path method requires $\tilde{O}(\sqrt{m})$ iterations, which we improve to only $\tilde{O}(\sqrt{n})$ iterations in Paper E using techniques from [LS14]. (For tall linear programs with m ≫ n this is an improvement.) Further, we show that the inverse $(A^\top DA)^{-1}$ in (5.2) can be replaced by any spectral approximation, allowing for a further speed-up by constructing a dynamic algorithm that maintains the inverse only approximately instead of exactly. Overall, we develop three data structures to maintain (5.2):

1. The first data structure maintains $A^\top g$, which can be done efficiently thanks to g changing in (on average) only $\tilde{O}(m/\sqrt{n})$ entries per iteration.

2. The second data structure maintains a spectral approximation of $(A^\top DA)^{-1}$. This can be done efficiently because of the allowed spectral approximation.

3. The third data structure is a dynamic algorithm that returns only the large coordinates of the matrix-vector product DAv. This is required

because, as mentioned before, our central path method only requires knowledge of some $\bar{x} \approx x$. So for $v = (A^\top DA)^{-1}A^\top g$ we only need to know the coordinates $(DAv)_i$ that are large enough for $\bar{x}_i$ to no longer be a valid approximation of $x_i$. Since the dynamic algorithm must only return large coordinates and not the entire matrix-vector product DAv, this problem can be solved efficiently.

At last, we need a fourth data structure that is able to efficiently maintain the so-called leverage scores of the matrix $\sqrt{D}A$. These values are required in order to reduce the number of iterations of the central path method from $\tilde{O}(\sqrt{m})$ to $\tilde{O}(\sqrt{n})$. We develop such a data structure in Paper E by reducing it to the data structures 2 and 3 from the list above. If the linear program also has constraints of the form $\ell \le x \le u$ (instead of just $x \ge 0$), then besides leverage scores we also need a data structure that maintains so-called Lewis weights. We construct such a data structure in Paper G by reducing the task of maintaining Lewis weights to maintaining leverage scores. Thus, in summary, solving linear programs reduces to only two tasks: (i) detecting large entries in the matrix-vector product DAv, and (ii) maintaining a spectral approximation of the inverse $(A^\top DA)^{-1}$.
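For reference, leverage scores have a simple static definition, computed naively below with dense algebra (the thesis maintains them dynamically and only approximately; the function name and test data here are illustrative):

```python
import numpy as np

def leverage_scores(A, d):
    """sigma_i = i-th diagonal entry of the orthogonal projection onto the
    column space of sqrt(D) A; the scores sum to rank(A)."""
    B = np.sqrt(d)[:, None] * A                 # sqrt(D) A via row scaling
    P = B @ np.linalg.inv(B.T @ B) @ B.T        # projection matrix
    return np.diag(P)

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 4))
d = rng.uniform(0.5, 2.0, size=20)
sigma = leverage_scores(A, d)
print(np.isclose(sigma.sum(), 4.0))             # True: sum equals rank(A) = 4
```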

Bipartite Matching and Min-cost Flows We previously outlined how solving linear programs reduces to only two data structure tasks. So in order to obtain fast algorithms for special cases of linear programs, such as bipartite matching and min-cost flows, all we have to do is construct two efficient data structures. If the constraint matrix A of the linear program is an edge-vertex incidence matrix, then the two tasks (i) and (ii) are easy to solve: The matrix $A^\top DA$ is a Laplacian matrix, and in [BBG+20] (not included in this thesis) we constructed a dynamic spectral sparsifier, i.e., a dynamic algorithm that returns a sparse Laplacian matrix H that is a spectral approximation of $A^\top DA$, while supporting updates to the matrix D. It is known that one can solve Laplacian linear systems in time proportional to the number of non-zero entries [ST04, KMP10, KMP11, KOSZ13, LS13, CKM+14, KLP+16, KS16], so solving a system in H can be done efficiently because of the sparsity of H. This solves task (ii).

For finding the large entries in DAv (task (i)), we decompose the graph G represented by DA into so-called expander graphs. This corresponds to splitting the rows of DA into smaller matrices $D_1A_1, \dots, D_kA_k$, such that each $A_k^\top D_k A_k$ (which is the Laplacian of a subgraph of G) is a matrix with the spectral property that its largest and smallest non-zero eigenvalues are only some polylogarithmic factor apart. We then solve the task of finding large $(DAv)_i$ by solving the task of finding large $(D_kA_kv)_i$ for each k. In Paper F we construct a dynamic algorithm for this task by exploiting the additional spectral properties of the matrices $D_kA_k$.
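The fact driving task (ii) is easy to verify directly: for an edge-vertex incidence matrix A and a positive diagonal D, the matrix $A^\top DA$ is exactly the weighted Laplacian of the underlying graph, which is what makes dynamic spectral sparsifiers and fast Laplacian solvers applicable. A small check on an illustrative example graph:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]     # a toy graph on 4 vertices
n, m = 4, len(edges)
A = np.zeros((m, n))
for e, (u, v) in enumerate(edges):           # row e of the incidence matrix
    A[e, u], A[e, v] = 1.0, -1.0

D = np.diag([1.0, 2.0, 3.0, 4.0])            # positive edge weights (from D)
L = A.T @ D @ A
print(L)                                     # off-diagonals are -w(u, v)
print(np.allclose(L.sum(axis=1), 0))         # True: Laplacian rows sum to 0
```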

Bibliography

[ACD+16] Ittai Abraham, Shiri Chechik, Daniel Delling, Andrew V. Goldberg, and Renato F. Werneck. On dynamic approximate shortest paths for planar graphs with worst-case costs. In SODA, pages 740–753. SIAM, 2016.

[ACG12] Ittai Abraham, Shiri Chechik, and Cyril Gavoille. Fully dynamic ap- proximate distance oracles for planar graphs via forbidden-set distance labels. In STOC, pages 1199–1218. ACM, 2012.

[ACK17] Ittai Abraham, Shiri Chechik, and Sebastian Krinninger. Fully dy- namic all-pairs shortest paths with worst-case update-time revisited. In SODA, pages 440–452. SIAM, 2017.

[ACT14] Ittai Abraham, Shiri Chechik, and Kunal Talwar. Fully dynamic all-pairs shortest paths: Breaking the o(n) barrier. In APPROX- RANDOM, volume 28 of LIPIcs, pages 1–16. Schloss Dagstuhl - Leibniz- Zentrum fuer Informatik, 2014.

[AHR+19] Bertie Ancona, Monika Henzinger, Liam Roditty, Virginia Vassilevska Williams, and Nicole Wein. Algorithms and hardness for diameter in dynamic graphs. In ICALP, volume 132 of LIPIcs, pages 13:1–13:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.

[AIMN91] Giorgio Ausiello, Giuseppe F. Italiano, Alberto Marchetti-Spaccamela, and Umberto Nanni. Incremental algorithms for minimal length paths. J. Algorithms, 12(4):615–638, 1991. Announced at SODA’90.

[Alm19] Josh Alman. Limits on the universal method for matrix multiplication. In CCC, volume 137 of LIPIcs, pages 12:1–12:24. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2019.

[AMV20] Kyriakos Axiotis, Aleksander Madry, and Adrian Vladu. Circulation control for faster minimum cost flow in unit-capacity graphs. In FOCS, pages 93–104. IEEE, 2020.


[AW14] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In FOCS, pages 434–443. IEEE Computer Society, 2014.

[AW18a] Josh Alman and Virginia Vassilevska Williams. Further limitations of the known approaches for matrix multiplication. In ITCS, volume 94 of LIPIcs, pages 25:1–25:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018.

[AW18b] Josh Alman and Virginia Vassilevska Williams. Limits on all known (and some unknown) approaches to matrix multiplication. In FOCS, pages 580–591. IEEE Computer Society, 2018.

[AW21] Josh Alman and Virginia Vassilevska Williams. A refined laser method and faster matrix multiplication. In SODA, pages 522–539. SIAM, 2021.

[Bar68] Richard H. Bartels. A numerical investigation of the simplex method. Technical report, Stanford University Department of Computer Science, 1968.

[BBG+20] Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Fully-dynamic graph sparsifiers against an adaptive adversary. CoRR, abs/2004.08432, 2020.

[BBMN21] Joakim Blikstad, Jan van den Brand, Sagnik Mukhopadhyay, and Danupon Nanongkai. Breaking the quadratic barrier for matroid intersection. In STOC. ACM, 2021.

[BC16] Aaron Bernstein and Shiri Chechik. Deterministic decremental single source shortest paths: beyond the o(mn) bound. In STOC, pages 389–397. ACM, 2016.

[BC17] Aaron Bernstein and Shiri Chechik. Deterministic partially dynamic single source shortest paths for sparse graphs. In SODA, pages 453–469. SIAM, 2017.

[BCRL79] Dario Bini, Milvio Capovani, Francesco Romani, and Grazia Lotti. $O(n^{2.7799})$ complexity for $n \times n$ approximate matrix multiplication. Inf. Process. Lett., 8(5):234–235, 1979.

[Ber09] Aaron Bernstein. Fully dynamic (2 + epsilon) approximate all-pairs shortest paths with fast query and close to linear update time. In FOCS, pages 693–702. IEEE Computer Society, 2009.

[Ber16] Aaron Bernstein. Maintaining shortest paths under deletions in weighted directed graphs. SIAM J. Comput., 45(2):548–574, 2016. Announced at STOC'13.

[BG69] Richard H. Bartels and Gene H. Golub. The simplex method of linear programming using LU decomposition. Commun. ACM, 12(5):266–268, 1969.

[BGL+92] Robert E. Bixby, John W. Gregory, Irvin J. Lustig, Roy E. Marsten, and David F. Shanno. Very large-scale linear programming: A case study in combining interior point and simplex methods. Oper. Res., 40(5):885–897, 1992.

[BGS20] Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak. Deterministic decremental reachability, SCC, and shortest paths via directed expanders and congestion balancing. In FOCS, pages 1123–1134. IEEE, 2020.

[BGW20] Aaron Bernstein, Maximilian Probst Gutenberg, and Christian Wulff-Nilsen. Near-optimal decremental SSSP in dense weighted digraphs. In FOCS, pages 1112–1122. IEEE, 2020.

[BHS07] Surender Baswana, Ramesh Hariharan, and Sandeep Sen. Improved decremental algorithms for maintaining transitive closure and all-pairs shortest paths. J. Algorithms, 62(2):74–92, 2007. Announced at STOC'02.

[BLL+21] Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, and Di Wang. Minimum cost flows, MDPs, and $\ell_1$-regression in nearly linear time for dense instances. In STOC. ACM, 2021.

[BLN+20] Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, and Di Wang. Bipartite matching in nearly-linear time on moderately dense graphs. In FOCS, pages 919–930. IEEE, 2020.

[BLSS20] Jan van den Brand, Yin Tat Lee, Aaron Sidford, and Zhao Song. Solving tall dense linear programs in nearly linear time. In STOC, pages 775–788. ACM, 2020.

[BM98] Peter A. Beling and Nimrod Megiddo. Using fast matrix multiplication to find basic solutions. Theor. Comput. Sci., 205(1-2):307–316, 1998.

[BN19] Jan van den Brand and Danupon Nanongkai. Dynamic approximate shortest paths and beyond: Subquadratic and worst-case update time. In FOCS, pages 436–455. IEEE Computer Society, 2019.

[BNS19] Jan van den Brand, Danupon Nanongkai, and Thatchaphol Saranurak. Dynamic matrix inverse: Improved algorithms and matching conditional lower bounds. In FOCS, pages 456–480. IEEE Computer Society, 2019.

[BPSW21] Jan van den Brand, Binghui Peng, Zhao Song, and Omri Weinstein. Training (overparametrized) neural networks in near-linear time. In ITCS, volume 185 of LIPIcs, pages 63:1–63:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.

[BR11] Aaron Bernstein and Liam Roditty. Improved dynamic algorithms for maintaining approximate shortest paths under deletions. In SODA, pages 1355–1365. SIAM, 2011.

[Bra20] Jan van den Brand. A deterministic linear program solver in current matrix multiplication time. In SODA, pages 259–278. SIAM, 2020.

[Bra21] Jan van den Brand. Unifying matrix data structures: Simplifying and speeding up iterative algorithms. In SOSA, pages 1–13. SIAM, 2021.

[BS19] Jan van den Brand and Thatchaphol Saranurak. Sensitive distance and reachability oracles for large batch updates. In FOCS, pages 424–435. IEEE Computer Society, 2019.

[CGL15] Raphaël Clifford, Allan Grønlund, and Kasper Green Larsen. New unconditional hardness results for dynamic and online problems. In FOCS, pages 1089–1107. IEEE Computer Society, 2015.

[CKM+14] Michael B. Cohen, Rasmus Kyng, Gary L. Miller, Jakub W. Pachocki, Richard Peng, Anup B. Rao, and Shen Chen Xu. Solving SDD linear systems in nearly $m \log^{1/2} n$ time. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC), pages 343–352, 2014.

[CLM+15] Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, and Aaron Sidford. Uniform sampling for matrix ap- proximation. In ITCS, pages 181–190. ACM, 2015.

[CLS19] Michael B. Cohen, Yin Tat Lee, and Zhao Song. Solving linear programs in the current matrix multiplication time. In STOC, pages 938–942. ACM, 2019.

[CMSV17] Michael B. Cohen, Aleksander Madry, Piotr Sankowski, and Adrian Vladu. Negative-weight shortest paths and unit capacity minimum cost flow in $\tilde{O}(m^{10/7} \log W)$ time (extended abstract). In SODA, pages 752–771. SIAM, 2017.

[CS21] Julia Chuzhoy and Thatchaphol Saranurak. Deterministic algorithms for decremental shortest paths via layered core decomposition. In SODA, pages 2478–2496. SIAM, 2021.

[CVZ19] Matthias Christandl, Péter Vrana, and Jeroen Zuiddam. Barriers for fast matrix multiplication from irreversibility. In Computational Complexity Conference, volume 137 of LIPIcs, pages 26:1–26:17. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.

[CW82] Don Coppersmith and Shmuel Winograd. On the asymptotic complexity of matrix multiplication. SIAM J. Comput., 11(3):472–492, 1982.

[CW87] Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. In STOC, pages 1–6. ACM, 1987.

[CZ21] Shiri Chechik and Tianyi Zhang. Incremental single source shortest paths in sparse digraphs. In SODA, pages 2463–2477. SIAM, 2021.

[Dan63] G. B. Dantzig. Linear programming and extensions. Princeton University Press, 1963.

[DH18] Daniel Dadush and Sophie Huiberts. A friendly smoothed analysis of the simplex method. In STOC, pages 390–403. ACM, 2018.

[DHNV20] Daniel Dadush, Sophie Huiberts, Bento Natura, and László A. Végh. A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix. In STOC, pages 761–774. ACM, 2020.

[DI02] Camil Demetrescu and Giuseppe F. Italiano. Improved bounds and new trade-offs for dynamic all pairs shortest paths. In ICALP, volume 2380 of Lecture Notes in Computer Science, pages 633–643. Springer, 2002.

[DI04] Camil Demetrescu and Giuseppe F. Italiano. A new approach to dynamic all pairs shortest paths. J. ACM, 51(6):968–992, 2004. Announced at STOC'03.

[DI05] Camil Demetrescu and Giuseppe F. Italiano. Trade-offs for fully dynamic transitive closure on dags: breaking through the $O(n^2)$ barrier. J. ACM, 52(2):147–156, 2005. Announced at FOCS'00.

[DI06] Camil Demetrescu and Giuseppe F. Italiano. Fully dynamic all pairs shortest paths with real edge weights. J. Comput. Syst. Sci., 72(5):813–837, 2006. Announced at FOCS'01.

[Din70] Efim A. Dinic. Algorithm for solution of a problem of maximum flow in networks with power estimation. In Soviet Math. Doklady, volume 11, pages 1277–1280, 1970.

[DNV20] Daniel Dadush, Bento Natura, and László A. Végh. Revisiting Tardos's framework for linear programming: Faster exact solutions using approximate solvers. In FOCS, pages 931–942. IEEE, 2020.

[DP14] Ran Duan and Seth Pettie. Linear-time approximation for maximum weight matching. J. ACM, 61(1):1:1–1:23, 2014.

[DS05] Amit Deshpande and Daniel A. Spielman. Improved smoothed analysis of the shadow vertex simplex method. In FOCS, pages 349–356. IEEE Computer Society, 2005.

[ES81] Shimon Even and Yossi Shiloach. An on-line edge-deletion problem. J. ACM, 28(1):1–4, 1981.

[Gal14] François Le Gall. Powers of tensors and fast matrix multiplication. In ISSAC, pages 296–303. ACM, 2014.

[Gol77] Donald Goldfarb. On the Bartels-Golub decomposition for linear programming bases. Math. Program., 13(1):272–279, 1977.

[GU18] François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the coppersmith-winograd tensor. In SODA, pages 1029–1046. SIAM, 2018.

[GW20a] Maximilian Probst Gutenberg and Christian Wulff-Nilsen. Decremental SSSP in weighted digraphs: Faster and against an adaptive adversary. In SODA, pages 2542–2561. SIAM, 2020.

[GW20b] Maximilian Probst Gutenberg and Christian Wulff-Nilsen. Deterministic algorithms for decremental approximate shortest paths: Faster and simpler. In SODA, pages 2522–2541. SIAM, 2020.

[GW20c] Maximilian Probst Gutenberg and Christian Wulff-Nilsen. Fully-dynamic all-pairs shortest paths: Improved worst-case time and space bounds. In SODA, pages 2562–2574. SIAM, 2020.

[GWW20] Maximilian Probst Gutenberg, Virginia Vassilevska Williams, and Nicole Wein. New algorithms and hardness for incremental single-source shortest paths in directed graphs. In STOC, pages 153–166. ACM, 2020.

[HK73] John E. Hopcroft and Richard M. Karp. An $n^{5/2}$ algorithm for maximum matchings in bipartite graphs. SIAM J. Comput., 2(4):225–231, 1973. Announced at FOCS'71.

[HKN14a] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Sublinear-time decremental algorithms for single-source reachability and shortest paths on directed graphs. In STOC, pages 674–683. ACM, 2014.

[HKN14b] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. A subquadratic-time algorithm for decremental single-source shortest paths. In SODA, pages 1053–1072. SIAM, 2014.

[HKN16] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Dynamic approximate all-pairs shortest paths: Breaking the O(mn) barrier and derandomization. SIAM J. Comput., 45(3):947–1006, 2016. Announced at FOCS’13.

[HKN18] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Decremental single-source shortest paths on undirected graphs in near- linear total update time. J. ACM, 65(6):36:1–36:40, 2018. Announced at FOCS’14 and ICALP’15.

[HKNS15] Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and strengthening hardness for dy- namic problems via the online matrix-vector multiplication conjecture. In STOC, pages 21–30. ACM, 2015.

[HZ15] Thomas Dueholm Hansen and Uri Zwick. An improved version of the random-facet pivoting rule for the simplex algorithm. In STOC, pages 209–218. ACM, 2015.

[IM81] Oscar H. Ibarra and Shlomo Moran. Deterministic and probabilistic algorithms for maximum bipartite matching via fast matrix multiplica- tion. Inf. Process. Lett., 13(1):12–15, 1981.

[JSWZ21] Shunhua Jiang, Zhao Song, Omri Weinstein, and Hengjie Zhang. A faster algorithm for solving general lp. In STOC. ACM, 2021.

[Kal92] Gil Kalai. A subexponential randomized simplex algorithm (extended abstract). In STOC, pages 475–482. ACM, 1992.

[Kar73] Alexander V Karzanov. On finding maximum flows in networks with special structure and some applications. Matematicheskie Voprosy Up- ravleniya Proizvodstvom, 5:81–94, 1973.

[Kar84] Narendra Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4):373–396, 1984. Announced at STOC’84.

[Kha79] Leonid G Khachiyan. A polynomial algorithm in linear programming. In Doklady Academii Nauk SSSR, volume 244, pages 1093–1096, 1979.

[Kin99] Valerie King. Fully dynamic algorithms for maintaining all-pairs short- est paths and transitive closure in digraphs. In FOCS, pages 81–91. IEEE Computer Society, 1999.

[KLLF04] Sven Koenig, Maxim Likhachev, Yaxin Liu, and David Furcy. Incremental heuristic search in AI. AI Mag., 25(2):99–112, 2004.

[KLP+16] Rasmus Kyng, Yin Tat Lee, Richard Peng, Sushant Sachdeva, and Daniel A. Spielman. Sparsified cholesky and multigrid solvers for con- nection laplacians. In STOC’16: Proceedings of the 48th Annual ACM Symposium on Theory of Computing, 2016.

[KLS20] Tarun Kathuria, Yang P. Liu, and Aaron Sidford. Unit capacity maxflow in almost $O(m^{4/3})$ time. In FOCS, pages 119–130. IEEE, 2020.

[KM72] Victor Klee and George J Minty. How good is the simplex algorithm. Inequalities, 3(3):159–175, 1972.

[KMP10] Ioannis Koutis, Gary L. Miller, and Richard Peng. Approaching op- timality for solving SDD systems. In Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 235–244, 2010.

[KMP11] Ioannis Koutis, Gary L. Miller, and Richard Peng. A nearly m log n- time solver for SDD linear systems. In Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 590–598, 2011.

[KOSZ13] Jonathan A. Kelner, Lorenzo Orecchia, Aaron Sidford, and Zeyuan Allen Zhu. A simple, combinatorial algorithm for solving SDD systems in nearly-linear time. In STOC, pages 911–920. ACM, 2013.

[KS98] Philip N. Klein and Sairam Subramanian. A fully dynamic approx- imation scheme for shortest paths in planar graphs. Algorithmica, 22(3):235–249, 1998.

[KS06] Jonathan A. Kelner and Daniel A. Spielman. A randomized polynomial- time simplex algorithm for linear programming. In STOC, pages 51–60. ACM, 2006.

[KS16] Rasmus Kyng and Sushant Sachdeva. Approximate gaussian elimina- tion for laplacians - fast, sparse, and simple. In FOCS, pages 573–582. IEEE Computer Society, 2016.

[Lar12] Kasper Green Larsen. Higher cell probe lower bounds for evaluating polynomials. In FOCS, pages 293–301. IEEE Computer Society, 2012.

[LMP13] Mu Li, Gary L. Miller, and Richard Peng. Iterative row sampling. In FOCS, pages 127–136. IEEE Computer Society, 2013.

[LS13] Yin Tat Lee and Aaron Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pages 147–156. IEEE, 2013.

[LS14] Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving linear programs in $\tilde{O}(\sqrt{\mathrm{rank}})$ iterations and faster algorithms for maximum flow. In FOCS, pages 424–433. IEEE Computer Society, 2014.

[LS15] Yin Tat Lee and Aaron Sidford. Efficient inverse maintenance and faster algorithms for linear programming. In FOCS, pages 230–249. IEEE Computer Society, 2015.

[LS20] Yang P. Liu and Aaron Sidford. Faster energy maximization for faster maximum flow. In STOC, pages 803–814. ACM, 2020.

[LSZ19] Yin Tat Lee, Zhao Song, and Qiuyi Zhang. Solving empirical risk minimization in the current matrix multiplication time. In COLT, volume 99 of Proceedings of Machine Learning Research, pages 2140–2157. PMLR, 2019.

[Mad13] Aleksander Madry. Navigating central path with electrical flows: From flows to matchings, and back. In FOCS, pages 253–262. IEEE Computer Society, 2013.

[MS04] Marcin Mucha and Piotr Sankowski. Maximum matchings via gaussian elimination. In FOCS, pages 248–255. IEEE Computer Society, 2004.

[NN13] Jelani Nelson and Huy L. Nguyen. OSNAP: faster numerical linear algebra algorithms via sparser subspace embeddings. In FOCS, pages 117–126. IEEE Computer Society, 2013.

[NS17] Danupon Nanongkai and Thatchaphol Saranurak. Dynamic spanning forest with worst-case update time: adaptive, las vegas, and $O(n^{1/2-\varepsilon})$-time. In STOC, pages 1122–1129. ACM, 2017.

[NST00] Paolo Narváez, Kai-Yeung Siu, and Hong-Yi Tzeng. New dynamic algorithms for shortest path tree computation. IEEE/ACM Trans. Netw., 8(6):734–746, 2000.

[NSW17] Danupon Nanongkai, Thatchaphol Saranurak, and Christian Wulff-Nilsen. Dynamic minimum spanning forest with subpolynomial worst-case update time. In FOCS, pages 950–961. IEEE Computer Society, 2017.

[Oll10] François Ollivier. Jacobi's bound and normal forms computations. A historical survey, 2010.

[Pan78] Victor Y. Pan. Strassen's algorithm is not optimal: Trilinear technique of aggregating, uniting and canceling for constructing fast algorithms for matrix operations. In FOCS, pages 166–176. IEEE Computer Society, 1978.

[PD06] Mihai Patrascu and Erik D. Demaine. Logarithmic lower bounds in the cell-probe model. SIAM J. Comput., 35(4):932–963, 2006.

[PV21] Richard Peng and Santosh S. Vempala. Solving sparse linear systems faster than matrix multiplication. In SODA, pages 504–521. SIAM, 2021.

[Rei82] John K. Reid. A sparsity-exploiting variant of the bartels - golub decom- position for linear programming bases. Math. Program., 24(1):55–69, 1982.

[Ren88] James Renegar. A polynomial-time algorithm, based on newton’s method, for linear programming. Math. Program., 40(1-3):59–93, 1988.

[Rom82] Francesco Romani. Some properties of disjoint sums of tensors related to matrix multiplication. SIAM J. Comput., 11(2):263–267, 1982.

[RZ11] Liam Roditty and Uri Zwick. On dynamic shortest paths problems. Algorithmica, 61(2):389–401, 2011. announced at ESA’04.

[RZ12] Liam Roditty and Uri Zwick. Dynamic approximate all-pairs shortest paths in undirected graphs. SIAM J. Comput., 41(3):670–683, 2012. Announced at FOCS’04.

[San04] Piotr Sankowski. Dynamic transitive closure via dynamic matrix inverse (extended abstract). In FOCS, pages 509–517. IEEE Computer Society, 2004.

[San05a] Piotr Sankowski. Shortest paths in matrix multiplication time. In ESA, volume 3669 of Lecture Notes in Computer Science, pages 770– 778. Springer, 2005.

[San05b] Piotr Sankowski. Subquadratic algorithm for dynamic shortest dis- tances. In COCOON, volume 3595 of Lecture Notes in Computer Sci- ence, pages 461–470. Springer, 2005.

[San07] Piotr Sankowski. Faster dynamic matchings and vertex connectivity. In SODA, pages 118–126. SIAM, 2007.

[Sch81] Arnold Schönhage. Partial and total matrix multiplication. SIAM J. Comput., 10(3):434–455, 1981.

[Sha87] Ron Shamir. The efficiency of the simplex method: a survey. Manage- ment science, 33(3):301–334, 1987.

[SM50] Jack Sherman and Winifred J. Morrison. Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. The Annals of Mathematical Statistics, 21(1):124–127, 1950.

[SM10] Piotr Sankowski and Marcin Mucha. Fast dynamic transitive closure with lookahead. Algorithmica, 56(2):180–197, 2010.

[Son19] Zhao Song. Matrix theory: optimization, concentration, and algorithms. PhD thesis, University of Texas, 2019.

[ST04] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algo- rithms: Why the simplex algorithm usually takes polynomial time. J. ACM, 51(3):385–463, 2004.

[Sto10] Andrew James Stothers. On the complexity of matrix multiplication. PhD thesis, University of Edinburgh, 2010.

[Str69] Volker Strassen. Gaussian elimination is not optimal. Numerische math- ematik, 13(4):354–356, 1969.

[Str87] Volker Strassen. Relative bilinear complexity and matrix multiplication. Journal für die reine und angewandte Mathematik, 1987(375-376):406–443, 1987.

[SY15] Arne Storjohann and Shiyun Yang. A relaxed algorithm for online matrix inversion. In ISSAC, pages 339–346. ACM, 2015.

[Tar86] Éva Tardos. A strongly polynomial algorithm to solve combinatorial linear programs. Oper. Res., 34(2):250–256, 1986.

[Tho04] Mikkel Thorup. Fully-dynamic all-pairs shortest paths: Faster and allowing negative cycles. In SWAT, volume 3111 of Lecture Notes in Computer Science, pages 384–396. Springer, 2004.

[Tho05] Mikkel Thorup. Worst-case update times for fully-dynamic all-pairs shortest paths. In STOC, pages 112–119. ACM, 2005.

[UY91] Jeffrey D. Ullman and Mihalis Yannakakis. High-probability parallel transitive-closure algorithms. SIAM J. Comput., 20(1):100–125, 1991. Announced at SPAA’90.

[Vai87] Pravin M. Vaidya. An algorithm for linear programming which requires $O(((m+n)n^2 + (m+n)^{1.5}n)L)$ arithmetic operations. In STOC, pages 29–38. ACM, 1987.

[Vai89a] Pravin M. Vaidya. A new algorithm for minimizing convex functions over convex sets (extended abstract). In FOCS, pages 338–343. IEEE Computer Society, 1989.

[Vai89b] Pravin M. Vaidya. Speeding-up linear programming using fast matrix multiplication (extended abstract). In FOCS, pages 332–337. IEEE Computer Society, 1989.

[Ver09] Roman Vershynin. Beyond Hirsch conjecture: Walks on random polytopes and smoothed complexity of the simplex method. SIAM J. Comput., 39(2):646–678, 2009.

[VY96] Stephen A. Vavasis and Yinyu Ye. A primal-dual interior point method whose running time depends only on the constraint matrix. Math. Program., 74:79–120, 1996.

[Wil12] Virginia Vassilevska Williams. Multiplying matrices faster than coppersmith-winograd. In STOC, pages 887–898. ACM, 2012.

[Woo50] Max A. Woodbury. Inverting modified matrices. Memorandum report, 42(106):336, 1950.

[Wul17] Christian Wulff-Nilsen. Fully-dynamic minimum spanning forest with improved worst-case update time. In STOC, pages 1130–1143. ACM, 2017.

[ZLS12] Wei Zhou, George Labahn, and Arne Storjohann. Computing minimal nullspace bases. In ISSAC, pages 366–373. ACM, 2012.

[ZLS15] Wei Zhou, George Labahn, and Arne Storjohann. A deterministic algorithm for inverting a polynomial matrix. J. Complex., 31(2):162–173, 2015.

[Zwi02] Uri Zwick. All pairs shortest paths using bridging sets and rectangular matrix multiplication. J. ACM, 49(3):289–317, 2002.

Part II

Included Papers
