INTERIOR-POINT SOLVERS

HANDE Y. BENSON

Abstract. We present an overview of available software for solving linear programming problems using interior-point methods. Some of the codes discussed include primal and dual simplex solvers as well, but we focus the discussion on the implementation of the interior-point solver. For each solver, we present types of problems solved, available distribution modes, input formats and modeling languages, as well as algorithmic details, including problem formulation, use of higher-order corrections, presolve techniques, ordering heuristics for symbolic Cholesky factorization, and the specifics of numerical factorization.

We present an overview of available software for solving linear programming problems using interior-point methods. We consider both open-source and proprietary commercial codes, including BPMPD ([37], [36]), CLP [16], FortMP [15], GIPALS32 [45], GLPK [34], HOPDM ([24], [28]), IBM ILOG CPLEX [30], LINDO [31], LIPSOL [49], LOQO [48], Microsoft Solver Foundation [38], MOSEK [2], PCx [13], SAS/OR [44], and Xpress Optimizer [43]. Some of the codes discussed include primal and dual simplex solvers as well, but we focus the discussion on the implementation of the interior-point solver. For each solver, we present types of problems solved, available distribution modes, input formats and modeling languages, as well as algorithmic details, including problem formulation, use of higher-order corrections, presolve techniques, ordering heuristics for symbolic Cholesky factorization, and the specifics of numerical factorization. Many of the solvers allow user control over some algorithmic details, such as the type of presolve techniques applied to the problem and the choice of ordering heuristic. As the following discussion will show, the solvers all tend to have similar characteristics, and performance differences are generally due to subtle changes in the presolve phase, the implementation of the routines, and other special considerations, such as scaling, to promote numerical stability. We start with a short discussion of input formats and modeling languages available for LPs. Then, we will present details of a wide range of interior-point codes.

1. Input Formats and Modeling Languages for Linear Programming

For a standard LP of the form

        maximize    c^T x
(1)     subject to  Ax = b
                    x >= 0,

it suffices to pass the matrix A and the vectors b and c to the solver for a full description of the problem. Of course, the general form of an LP can incorporate inequality constraints as well as bounds on variables, but such deviations from (1)

Date: October 3, 2009.
Key words and phrases: interior-point methods, linear programming, optimization software.

require either the user to include slack variables to convert the problem to (1) or the modeling mechanism to accommodate the specification of additional data vectors, such as upper and lower bounds on the variables and indicator arrays for constraint types. We will discuss two types of modeling mechanisms here: input formats, which provide a specifically formatted means of entering problem data that can then be interpreted by the solver, and modeling languages, which allow for a flexible algebraic expression of the model as well as the data and are accompanied by full-fledged modeling systems. Many of the solvers we will discuss can also be used as callable libraries, so the model can be expressed in a variety of programming languages, such as C, C++, and Java, and various routines implemented in the solver library can be called to load and solve the LP.

1.1. MPS format. MPS, or Mathematical Programming System, is the oldest input format for LPs, and it is the most widely accepted input format among both academic and commercial solvers. We refer the reader to [41] for a detailed description and provide a brief overview here. MPS uses ASCII text files to store the problem and is modeled after punch cards. Header cards specify which component of (1) is to be introduced next. The ROWS card specifies the names and types of objective functions and constraints, the COLUMNS card specifies variable names and the entries of c and A, the RHS card specifies the elements of the right-hand side vector b, and the BOUNDS card specifies any upper and lower bounds on x. The format is column-oriented; e.g., rather than using the ROWS card to enter each of the variable coefficients appearing in a constraint, the coefficients of a single variable as it appears in the objective and any constraints are listed together in the COLUMNS card.
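As an illustration, the LP "maximize x1 + 2 x2 subject to x1 + x2 <= 4, x >= 0" could be encoded as follows (a hypothetical example with arbitrary names; since the default bounds are x >= 0, no BOUNDS card is needed):

```
NAME          EXAMPLE
ROWS
 N  PROFIT
 L  LIMIT
COLUMNS
    X1        PROFIT    1.0            LIMIT    1.0
    X2        PROFIT    2.0            LIMIT    1.0
RHS
    RHS       LIMIT     4.0
ENDATA
```

Here N marks the objective row and L a less-than-or-equal constraint; each COLUMNS line lists one variable's coefficients in the objective and a constraint.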
All model components are assigned names of eight characters or less, and the information is presented in fields that begin in columns 2, 5, 15, 25, 40, and 50. Each data element has to be individually listed in the file, and at most two elements can be specified per line. Because of its punch card format, MPS files may be hard for a user to read, but the format is very easy for a solver or modeling environment to read and write. With additional card types, the format has been extended to QPs and MIPs, and free-format variations exist as well.

1.2. LP format. The LP input format is an alternative to the MPS format that allows for entering algebraic expressions to describe the problem. While it may be more natural to read a model in LP format, it is nonetheless not a standardized format, and different solvers/parsers may interpret the problem components differently. Two common variants are the CPLEX LP format and the Xpress LP format. As in the MPS format, the model is expressed in an ASCII text file, and there are specified sections to express each type of model component. Each section starts with a header, and there are Objective, Constraints, Bounds, and Variable Type sections available for LPs. Within the Constraints section, for example, each constraint is given a name, followed by a colon, and then the constraint expression, and each constraint is written on a separate line. Unlike the MPS format, there are no fixed field widths for separating problem components. For further details of the LP format, we refer the reader to [10].

1.3. AMPL. AMPL [17] is a modeling language for algebraically expressing a wide range of optimization problems. Originally developed at Bell Laboratories, it has become the standard for formulating highly complex problems. AMPL is more than just a compiler for the optimization model, as it also includes pre- and post-processing routines, as well as first- and second-order differentiation capabilities for NLPs.
The indexing and looping constructs in AMPL allow for a concise expression of large-scale LPs, and the processor allows for the separation of the model and data into multiple files. Many LP solvers have AMPL interfaces, and a tool for converting MPS format to AMPL, m2a, is also available.
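As an illustration of these indexing constructs (the file and symbol names here are hypothetical, not from the AMPL distribution), an LP with a single resource constraint can be written as:

```
set J;                         # index set of decision variables
param c {J};                   # objective coefficients
param a {J};                   # constraint coefficients
param b;                       # right-hand side
var x {J} >= 0;
maximize Profit: sum {j in J} c[j] * x[j];
subject to Resource: sum {j in J} a[j] * x[j] <= b;
```

The data (the members of J and the values of c, a, and b) would be supplied in a separate data file, so the same model can be reused at any scale.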

1.4. GAMS. GAMS [7], or the General Algebraic Modeling System, is another widely used modeling system for optimization. Originally funded by The World Bank, it is the oldest modelling language for mathematical programming, and it is currently developed and distributed by the GAMS Development Corporation. It is similar to AMPL in terms of capabilities, and the GAMS-AMPL service available through the NEOS Server [12] allows for the solution of models formulated in GAMS by AMPL solvers.

1.5. Other Input Formats and Modeling Languages. Almost every LP solver to be discussed accepts models expressed in one or more of the four input formats and modeling languages mentioned above. There are several other means of communicating LPs to the solvers, and we will briefly mention them here:
• LPP, or LP-Plain, is a variant of the LP input format.
• OPF is a variant of the CPLEX LP input format.
• MPL, or Mathematical Programming Language, is an algebraic modeling language and modeling system.
• OML, or Optimization Modeling Language, is the input format used by Microsoft Solver Foundation.

2. Interior-Point LP Solvers

In this section, we will provide an overview of some of the most widely used LP solvers. The solvers are presented in alphabetical order.
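Most of the solvers below share the same primal-dual core: damped Newton steps on the perturbed optimality conditions of (1), with the barrier parameter driven to zero. As a point of reference, here is a minimal pure-Python sketch of a basic infeasible primal-dual path-following method. It is illustrative only: it uses a fixed centering parameter rather than Mehrotra's adaptive predictor-corrector, and it solves the full unreduced Newton system densely instead of forming normal equations; all names are ours, not any solver's.

```python
def solve_dense(M, rhs):
    """Solve M z = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    T = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(T[i][k]))
        T[k], T[p] = T[p], T[k]
        for i in range(k + 1, n):
            f = T[i][k] / T[k][k]
            for j in range(k, n + 1):
                T[i][j] -= f * T[k][j]
    z = [0.0] * n
    for i in range(n - 1, -1, -1):
        z[i] = (T[i][n] - sum(T[i][j] * z[j] for j in range(i + 1, n))) / T[i][i]
    return z

def ip_solve(c, A, b, sigma=0.1, iters=100, tol=1e-9):
    """Maximize c^T x subject to A x = b, x >= 0, as in (1).

    Dual: A^T y - s = c with s >= 0; complementarity: x_j s_j = 0.
    """
    m, n = len(A), len(c)
    x, y, s = [1.0] * n, [0.0] * m, [1.0] * n   # infeasible interior start
    for _ in range(iters):
        mu = sum(xj * sj for xj, sj in zip(x, s)) / n
        rp = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        rd = [c[j] - sum(A[i][j] * y[i] for i in range(m)) + s[j] for j in range(n)]
        if mu < tol and max(abs(v) for v in rp + rd) < tol:
            break
        # Assemble the full Newton (KKT) system in (dx, dy, ds).
        N = m + 2 * n
        M = [[0.0] * N for _ in range(N)]
        r = [0.0] * N
        for i in range(m):                    # A dx = rp
            for j in range(n):
                M[i][j] = A[i][j]
            r[i] = rp[i]
        for j in range(n):                    # A^T dy - ds = rd
            for i in range(m):
                M[m + j][n + i] = A[i][j]
            M[m + j][n + m + j] = -1.0
            r[m + j] = rd[j]
        for j in range(n):                    # S dx + X ds = sigma*mu*e - XSe
            M[m + n + j][j] = s[j]
            M[m + n + j][n + m + j] = x[j]
            r[m + n + j] = sigma * mu - x[j] * s[j]
        d = solve_dense(M, r)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        a = 1.0                               # damped step keeps x, s > 0
        for j in range(n):
            if dx[j] < 0.0:
                a = min(a, -0.995 * x[j] / dx[j])
            if ds[j] < 0.0:
                a = min(a, -0.995 * s[j] / ds[j])
        x = [v + a * w for v, w in zip(x, dx)]
        y = [v + a * w for v, w in zip(y, dy)]
        s = [v + a * w for v, w in zip(s, ds)]
    return x
```

For example, `ip_solve([1.0, 2.0, 0.0], [[1.0, 1.0, 1.0]], [4.0])` maximizes x1 + 2 x2 subject to x1 + x2 + x3 = 4 and returns a point with objective value near 8. Production codes instead eliminate dx and ds and factor the much smaller normal-equations or augmented-system matrix, which is where the ordering and supernodal factorization techniques discussed below come in.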

2.1. BPMPD. BPMPD ([37], [36]) is a solver developed by Cs. Mészáros for solving LPs and convex QPs. It is written in C and is available both as an executable and as a callable library. It accepts input in LP and MPS formats for LP and in QPS format for convex QP. The solver can also be called from AMPL and is available for use on the NEOS Server [12]. There is a MEX interface [40] to use the callable library from within Matlab. BPMPD implements an infeasible interior-point method as described in [8] and [39], and uses Mehrotra's [35] predictor-corrector framework and barrier parameter updates, along with higher-order corrections [25]. The strength of the solver is in its extensive linear algebra routines, in particular for presolve and Cholesky factorization. During presolve, BPMPD removes empty rows and columns, singleton rows, and redundant variable bounds and constraints, eliminates free or implied free variables, substitutes duplicated variables, and sparsifies the constraint matrix. A heuristic is used to decide between the normal equations and augmented system approaches for solving for the step directions, and minimum degree and minimum local-fill options are available for symbolic ordering. For the numerical factorization, a left-looking supernodal factorization algorithm is implemented, and a supernodal backsolve is used. During both factorization and backsolve, loop unrolling is used.

2.2. CLP. CLP [16] (COIN Linear Program code) is an open-source solver distributed by the COIN-OR project [33]. It is implemented in C++ and is available as an executable and a callable library. It can also be used over the NEOS Server [12] to solve LPs written in MPS format. While mainly a simplex solver, CLP also implements a barrier method for handling LPs and convex QPs. We focus here on the barrier code. CLP implements Mehrotra's [35] predictor-corrector algorithm and uses higher-order corrections [25]. The distribution contains native code for sparse Cholesky factorization, but it is recommended that a third-party factorization code be used. There are interfaces for WSSMP [29], which provides parallel-enabled ordering and factorization, TAUCS [46], which uses third-party ordering and implements parallel-enabled factorization, and AMD [1], which provides an approximate minimum degree ordering and can be used with CLP's native code for factorization.

2.3. FortMP. FortMP [15] is a commercial solver distributed by OptiRisk Systems. It can handle LPs and convex QPs, as well as mixed-integer linear and quadratic problems. The code is written in Fortran 77 and accepts models written in AMPL as well as input in MPS format. It is available for use on the NEOS Server [12]. FortMP is primarily a simplex solver, and an interior-point method implementation, referred to as FortMP IPM, is included for large-scale LPs and convex QPs. FortMP IPM is an infeasible primal-dual interior-point method. There are three alternatives implemented for the step direction calculations: an affine scaling algorithm, a barrier algorithm, and a predictor-corrector algorithm. According to the documentation, the latter is recommended for most LPs, whereas the affine scaling algorithm is to be preferred for extremely sparse problems, and the barrier algorithm is a backup in case of failure. For barrier parameter updates, the user can choose between a multiple of the current duality gap and the updating scheme of [35]. The code includes native routines for Cholesky factorization and allows for the use of supernodes.

2.4. GIPALS32. GIPALS32 [45] is a commercial solver distributed by Optimalon Software. It is a Windows-based callable library and includes an application programming interface (API). GIPALS32 is the solver library for the linear programming environment GIPALS, which uses a graphical user interface that includes tables and forms for easily entering problem data and can read and write MPS files. GIPALS32 implements Mehrotra's [35] predictor-corrector algorithm and uses higher-order corrections [25].

2.5. GLPK. GLPK (GNU Linear Programming Kit) [34] is a toolkit for LP and MIP, distributed by the Free Software Foundation, Inc. as a part of the GNU Project under the GNU General Public License. It is written in ANSI C and is available as a callable library. The solver routine within the toolkit is glpsol. GLPK includes both simplex and interior-point codes, and accepts input in the GNU MathProg modeling language, a subset of the AMPL language, and in GAMS as CoinGLPK. It is available for use on the NEOS Server [12].

GLPK implements an infeasible primal-dual interior-point method with Mehrotra's [35] predictor-corrector algorithm. For symbolic Cholesky factorization, it has multiple ordering options, including quotient minimum degree and approximate minimum degree, and includes interfaces for the third-party ordering codes AMD [1] and COLAMD [14]. One should note, however, that the simplex methods implemented within GLPK are more mature than the interior-point method, which does not include some important features such as dense column processing and scaling to provide numerical stability.

2.6. HOPDM. HOPDM ([24], [28]) is a package for solving large-scale linear, convex quadratic, and convex nonlinear problems, written by J. Gondzio and distributed by the University of Edinburgh. It is written in C and is available as an executable and a callable library. It accepts models written in MPS format and can be called from AMPL. HOPDM implements an infeasible primal-dual interior-point method. It uses multiple centrality correctors [25], and the number of correctors is chosen automatically to reduce the overall solution time. HOPDM includes an extensive presolve routine ([22], [26]) and chooses between the normal equations and the augmented system approach in order to improve the efficiency of the factorization method [23]. Symbolic Cholesky factorization approaches include multiple minimum degree ordering [19] and the quotient tree minimum degree ordering [18]. New versions also include iterative solvers for the Newton system. The algorithm is also capable of warm-starting [27] after a data perturbation in the objective, right-hand side, or the constraint matrix.
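The choice between the two linearizations can be made concrete. Writing X = diag(x), S = diag(s), and Θ = XS⁻¹ at the current iterate, each interior-point iteration solves either the symmetric indefinite "augmented system" or the smaller, positive definite "normal equations" obtained by eliminating Δx (generic notation, not HOPDM's internals):

```
\begin{bmatrix} -\Theta^{-1} & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} r \\ h \end{bmatrix}
\quad\Longrightarrow\quad
(A \Theta A^T)\,\Delta y = h + A \Theta r,
\qquad
\Delta x = \Theta\,(A^T \Delta y - r).
```

The normal equations are cheaper to factor when A Θ Aᵀ stays sparse, but a single dense column of A makes that product dense, which is one reason several of the solvers described here treat dense columns specially.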

2.7. IBM ILOG CPLEX. CPLEX [30] is a package for solving LPs, convex QPs, problems with convex quadratic constraints, and integer and mixed-integer programming problems. It is currently distributed as an executable and a callable library by ILOG, an IBM company. It accepts models written in the MPS and CPLEX LP input formats and can be called from ILOG OPL Development Studio, AMPL, GAMS, MPL, and Matlab. A parallel implementation is also available. CPLEX implements three methods: a primal simplex method, a dual simplex method, and a primal-dual logarithmic barrier method. We will focus on the barrier method here, which is a primal-dual interior-point method using the predictor-corrector framework [35]. It uses multiple centrality correctors [25], and the maximum number of correctors can be chosen by the user or automatically determined by the algorithm. For symbolic Cholesky factorization, there are three options: approximate minimum degree, approximate minimum local fill, and nested dissection. CPLEX can automatically choose the best ordering heuristic to use, or this can be user-specified. The numerical factorization is performed out-of-core to improve memory usage and efficiency, and special handling of dense columns is available.
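The effect of the ordering heuristic on fill-in can be seen on a tiny example. The sketch below (illustrative only, not CPLEX's implementation) factors a 5×5 "arrowhead" matrix, which has one dense row and column; eliminating the dense node first fills the factor completely, while eliminating it last produces no fill at all:

```python
import math

def cholesky(A):
    """Dense Cholesky: return lower-triangular L with A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(d)
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

def nnz(L, tol=1e-12):
    """Count the structurally nonzero entries of the factor."""
    return sum(1 for row in L for v in row if abs(v) > tol)

def arrowhead(n, dense_first):
    """SPD 'arrowhead' matrix: a diagonal plus one dense row/column."""
    A = [[5.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    k = 0 if dense_first else n - 1   # ordering = position of the dense node
    for j in range(n):
        if j != k:
            A[k][j] = A[j][k] = 1.0
    return A
```

Here `nnz(cholesky(arrowhead(5, True)))` is 15 (a completely dense lower triangle), while `nnz(cholesky(arrowhead(5, False)))` is 9 (just the diagonal plus the dense row). Minimum degree, minimum local fill, and nested dissection are heuristics for finding such fill-reducing orderings in general sparse matrices, and the dense-column handling mentioned above addresses exactly the arrowhead pattern, which arises whenever A has a dense column.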

2.8. LINDO Systems. LINDO Systems, Inc. distributes three software packages [31] for mathematical programming: LINDO API, the callable library of solvers; LINGO, the modeling system for building and solving problems; and What's Best, the spreadsheet add-in. These packages can model and solve a variety of different types of optimization problems, including LPs, QPs, and NLPs. For LPs, the LINDO API can handle problems written in the MPS input format and in LINDO format, and What's Best accepts spreadsheet models developed using Microsoft Excel. Besides the interior-point method, the packages also contain implementations of the primal and dual simplex methods. A parallel implementation is available.

2.9. LIPSOL. LIPSOL [49], or Linear programming Interior-Point SOLvers, is a Matlab-based package for solving LPs, written and distributed by Y. Zhang. It takes advantage of the sparse matrix operations available in Matlab and also uses Fortran and C routines linked via MEX. Besides problem data stored in a MAT file, LIPSOL can also accept LPs written in the MPS and LPP input formats. LIPSOL implements a primal-dual infeasible interior-point method and uses Mehrotra's predictor-corrector framework [35]. The barrier parameter updates are a modified version of those in [35]. Instead of using Matlab's built-in sparse Cholesky factorization code, which would need to be modified for numerical stability, LIPSOL uses two Fortran codes: the multiple minimum degree ordering package [32] and the sparse Cholesky factorization package [42]. For efficiency, dense columns are split; to ensure numerical stability, a preconditioned conjugate gradient approach is then used.

2.10. LOQO. LOQO [48] is a solver for LPs, QPs, and NLPs, written by R.J. Vanderbei and distributed by Princeton University. It is written in C and is available as an executable or a callable library. It accepts LPs written in the MPS input format and can be called from AMPL and Matlab, via a MEX interface. It is available for use over the NEOS Server [12]. Despite its current focus as an NLP code, LOQO was originally developed as an infeasible interior-point method for LPs and convex QPs. It implements an infeasible interior-point method and uses Mehrotra's predictor-corrector framework [35]. Free variables and equality constraints are split, which allows for the use of primal or dual orderings in the symbolic Cholesky factorization. Furthermore, priorities, or tiers, are assigned to the columns of the matrix [6] by constraint and variable types, and within each tier, minimum local-fill and multiple minimum degree orderings [47] are available. For the numerical factorization, supernodes are used with loop unrolling to improve efficiency.

2.11. Microsoft Solver Foundation. Microsoft Solver Foundation [38] is an integrated modeling and solver environment for LPs, convex QPs, second-order cone programming problems, and integer and mixed-integer programming problems, as well as constraint programming and unconstrained nonlinear programming problems. The models can be expressed in OML format. For linear programming, the environment contains both a simplex and an interior-point solver, and we focus here on the latter. Microsoft Solver Foundation implements a primal-dual interior-point method that solves a homogeneous self-dual model. Each equality constraint is split into a range constraint with equal bounds, and free variables are handled as inequality constraints; that is, a free variable xj is replaced by the requirement that xj + sj ≥ 0 with sj ≥ 0. Any numerical issues arising from such a treatment of equality constraints and free variables are further alleviated by the regularization scheme of [5]. The normal equations approach is used when solving the Newton system. The default ordering method for Cholesky factorization is the approximate minimum degree ordering, but an option to switch to the approximate minimum fill ordering is also available. For the numerical factorization, a left-looking supernodal factorization algorithm is implemented, and it is multicore-enabled.

2.12. MOSEK. MOSEK [2] is a package for solving LPs, conic QPs, general convex NLPs, and mixed-integer problems, distributed by MOSEK ApS. It accepts LPs written in the MPS, LP, and OPF input formats and can be called from AMPL and Matlab. Interfaces for use with programs written in C/C++, C#, .NET, Java, Delphi, and Python are provided. It is also available for use over the NEOS Server [12]. Additionally, there is a parallel implementation of MOSEK [3]. MOSEK implements the homogeneous self-dual algorithm described in [4].
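The homogeneous self-dual model of [4] embeds the primal-dual pair into a single feasibility problem. For a minimization LP, a standard statement (generic notation, not any solver's internals) is: find (x, y, s, τ, κ) with x, s ≥ 0 and τ, κ ≥ 0 such that

```
A x - b \tau = 0,
\qquad
A^T y + s - c \tau = 0,
\qquad
-c^T x + b^T y - \kappa = 0.
```

Any solution with τ > 0 yields the optimal pair (x/τ, y/τ, s/τ), while κ > 0 certifies primal or dual infeasibility, so the algorithm detects infeasible problems without a separate phase.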
Prior to solving the LP, a presolve phase is used to check for and remove redundancies and linear dependence, eliminate fixed variables, substitute out free variables, and reduce the problem size. A dualizer routine uses heuristics to decide whether to solve the primal or the dual LP. Scaling is performed by multiplying the necessary constraints and variables by constants to improve numerical stability. For the step direction calculations, Mehrotra's predictor-corrector framework [35] is used along with higher-order corrections [25]. The solution of the Newton system is done via the normal equations approach. The default ordering method for Cholesky factorization is the approximate minimum degree heuristic [1], but implementations of multiple minimum degree and minimum local-fill are also provided. For the numerical factorization, supernodes are split into blocks to improve cache usage, and up to 16-way loop unrolling is available to further increase efficiency. A push Cholesky framework, where an iteration consists of a particular column performing all of its updates on all remaining columns to the right, is used.

2.13. PCx. PCx [13] is an LP solver distributed by the Optimization Technology Center at Argonne National Laboratory and Northwestern University. The code is written in C and incorporates a modified version of Ng and Peyton's sparse Cholesky factorization package, written in Fortran 77. It accepts LPs written in the MPS input format and can be called from AMPL and Matlab. Additionally, a graphical user interface (PCxGUI) written in Java is available. PCx is available for use over the NEOS Server [12]. A parallel implementation, pPCx [9], also exists. PCx implements Mehrotra's predictor-corrector algorithm [35] and also uses higher-order corrections [25].
The scaling technique of Curtis and Reid [11], which minimizes the deviations of the nonzero elements of the coefficient matrix A from 1.0, is applied prior to solution to improve the numerical stability of the code. A presolver removes redundancies, empty rows and columns, singleton rows, and fixed variables, eliminates singleton columns, and substitutes duplicated variables. For solving the Newton system, PCx defaults to the normal equations approach and uses Ng and Peyton's sparse Cholesky factorization package [42], modified to handle small pivot elements. The ordering scheme is multiple minimum degree as described in [19] and [20]. For the numerical factorization, supernodes are split into blocks to make better use of hierarchical memory, and loop unrolling further increases efficiency.

2.14. SAS/OR. SAS/OR software [44] is a commercial package which includes codes for mathematical programming, project and resource scheduling, and discrete event simulation. It is distributed by SAS Institute, Inc. Models are expressed in the OptModel language, also distributed by SAS, and the MPS input format can also be used for LPs. The optimization routines include solvers for LPs, QPs,

NLPs, and mixed-integer problems. There are three options for solving LPs: primal simplex, dual simplex, and interior-point methods. We will focus on the interior-point method here. SAS/OR incorporates an infeasible primal-dual interior-point method, as described in [8], and uses Mehrotra's [35] predictor-corrector framework. A presolve routine is used to check for and remove redundancies, handle singleton rows and columns, eliminate fixed variables, and reduce the problem size. The minimum degree heuristic is used for symbolic Cholesky factorization. For the numerical factorization, the left-looking, or pull, Cholesky [21], where an iteration consists of a particular column receiving all of its updates from all applicable columns to the left, is used.

2.15. Xpress Optimizer. Xpress Optimizer [43] is a commercial solver for LPs and QPs, distributed as a part of the Xpress-MP suite by FICO/Dash Optimization. It can be used via Console Xpress, which is the graphical user interface of the suite, or as a callable library with C, C++, Java, Fortran, VB6, and .NET programming interfaces. It accepts models written in the MPS and LP input formats and can also be called from AMPL or GAMS. Another tool from the Xpress-MP suite, Xpress-Mosel, can be used to take advantage of object-oriented construction for building advanced models. Xpress Optimizer, as well as the other tools of Xpress-MP, are available for use over the NEOS Server [12]. There is also a parallel implementation. There are three solvers implemented in the Xpress Optimizer: primal simplex, dual simplex, and a barrier interior-point method. We will focus on the interior-point method here. Xpress Optimizer implements the homogeneous self-dual algorithm described in [4] and uses Mehrotra's predictor-corrector framework [35] with Gondzio's higher-order corrections [25]. A presolve stage determines redundant constraints and may tighten variable bounds as necessary.
By default, the code determines whether to solve the primal or the dual LP using the barrier method. For sparse Cholesky factorization, the user can choose among three options for finding an ordering: minimum degree, minimum local fill, or nested dissection. For the numerical factorization, the user is given a choice between pull Cholesky, where an iteration consists of a particular column receiving all of its updates from all applicable columns to the left, and push Cholesky, where an iteration consists of a particular column performing all of its updates on all applicable columns to the right. The default is to use the pull Cholesky.
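The pull/push distinction can be sketched in a few lines of Python. The sketch is dense and illustrative only; the actual implementations operate on sparse supernodal structures:

```python
def chol_left(A):
    """Pull (left-looking) Cholesky: column j receives its updates
    from all finished columns k < j before it is scaled."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        col = [A[i][j] for i in range(n)]
        for k in range(j):                  # pull updates from the left
            for i in range(j, n):
                col[i] -= L[i][k] * L[j][k]
        L[j][j] = col[j] ** 0.5
        for i in range(j + 1, n):
            L[i][j] = col[i] / L[j][j]
    return L

def chol_right(A):
    """Push (right-looking) Cholesky: as soon as column k is finished,
    it pushes its rank-1 update onto all remaining columns j > k."""
    n = len(A)
    A = [row[:] for row in A]               # work on a copy, in place
    for k in range(n):
        A[k][k] = A[k][k] ** 0.5
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]
        for j in range(k + 1, n):           # push updates to the right
            for i in range(j, n):
                A[i][j] -= A[i][k] * A[j][k]
    for i in range(n):                      # clear the unused upper triangle
        for j in range(i + 1, n):
            A[i][j] = 0.0
    return A
```

Both return the same factor L with A = L Lᵀ; they differ only in when the updates are applied, which changes the memory access pattern and hence cache behavior on large sparse problems.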

3. Conclusion

We have presented an overview of available interior-point codes for linear programming. This list is not exhaustive, as there are numerous codes for conic optimization, such as SDPT3 and SeDuMi, and for nonlinear optimization, such as IPOPT and KNITRO, for which linear programming problems constitute a special case. We have chosen to focus here on solvers that are either primarily for LP (and, by natural extension, QP) or have special features for handling LPs. Many of the solvers share common features. For example, the use of the predictor-corrector framework and even higher-order corrections is quite common, and most solvers implement a minimum degree ordering or a close variation for symbolic Cholesky factorization. The presolve phase also plays an important role, and many of the solvers balance the loss of computing time during presolve with possible gains in efficiency and numerical stability.

References
[1] P. Amestoy, T. A. Davis, and I. S. Duff. Algorithm 837: AMD, an approximate minimum degree ordering algorithm. ACM Transactions on Mathematical Software, 30(3):381–388, September 2004.
[2] E. Andersen and K. Andersen. The MOSEK optimization software. EKA Consulting ApS, Denmark.
[3] E.D. Andersen and K.D. Andersen. A parallel interior-point based linear programming solver for shared-memory multiprocessor computers: A case study based on the XPRESS LP solver. Technical report, CORE, UCL, Belgium, 1997.
[4] E.D. Andersen and K.D. Andersen. The MOSEK interior-point optimizer for linear programming: An implementation of the homogeneous algorithm. In High Performance Optimization, H. Frenk et al. (eds.), pages 197–232, 2000.
[5] M.F. Anjos and S. Burer. On handling free variables in interior-point methods for conic linear optimization. SIAM Journal on Optimization, 18(4):1310–1325, 2007.
[6] A.J. Berger, J.M. Mulvey, E. Rothberg, and R.J. Vanderbei. Solving multistage stochastic programs using tree dissection. Technical Report SOR-95-07, Statistics and Operations Research, Princeton University, 1995.
[7] A. Brooke, D. Kendrick, and A. Meeraus. GAMS: A User's Guide. Scientific Press, 1988.
[8] T.J. Carpenter, I.J. Lustig, J.M. Mulvey, and D.F. Shanno. Higher order predictor-corrector interior point methods with application to quadratic objectives. SIAM Journal on Optimization, 3:696–725, 1993.
[9] T.F. Coleman, J. Czyzyk, C. Sun, M. Wagner, and S.J. Wright. pPCx: Parallel software for linear programming. In Proceedings of the Eighth SIAM Conference on Parallel Processing in Scientific Computing, March 1997.
[10] W. Cook. LP format: BNF definition. http://www2.isye.gatech.edu/~wcook/qsopt/hlp/ff_lp_bnf.htm.
[11] A. R. Curtis and J. K. Reid. On the automatic scaling of matrices for Gaussian elimination. J. Inst. Maths Applics, 10:118–124, 1972.
[12] J. Czyzyk, M. Mesnier, and J. Moré. The NEOS Server. IEEE Computational Science and Engineering, 5:68–75, 1998.
[13] Joseph Czyzyk, Sanjay Mehrotra, Michael Wagner, and Stephen J. Wright. PCx: An interior-point code for linear programming. Optimization Methods and Software, 11(1):397–430, 1999.
[14] T. A. Davis, J. R. Gilbert, S. Larimore, and E. Ng. Algorithm 836: COLAMD, a column approximate minimum degree ordering algorithm. ACM Transactions on Mathematical Software, 30(3):377–380, September 2004.
[15] E.F.D. Ellison, M. Hajian, H. Jones, R. Levkovitz, I. Maros, G. Mitra, and D. Sayers. FortMP manual. Technical report, OptiRisk Systems, April 2008.
[16] J. Forrest, D. de la Nuez, and R. Lougee-Heimer. CLP user's guide. Technical report, IBM Corporation, 2004.
[17] R. Fourer, D.M. Gay, and B.W. Kernighan. AMPL: A Modeling Language for Mathematical Programming. Scientific Press, 1993.
[18] A. George and J. Liu. Computer Solution of Large Sparse Positive Definite Systems. Prentice-Hall, 1981.
[19] A. George and J.W.H. Liu. The evolution of the minimum degree ordering algorithm. SIAM Review, 31:1–19, 1989.
[20] J.A. George, J.W.H. Liu, and E. Ng. User guide for SPARSPAK, Waterloo sparse linear equations package. Technical Report 78-30 (revised 1980), Department of Computer Science, University of Waterloo, 1980.
[21] J.A. George, J.W.H. Liu, and B.W. Peyton. Computer solution of positive definite systems. Unpublished book available from the authors.
[22] J. Gondzio. Splitting dense columns of constraint matrix in interior point methods for large scale linear programming. Optimization, 24:285–297, 1992.
[23] J. Gondzio. Implementing Cholesky factorization for interior point methods of linear programming. Optimization, 27:121–140, 1993.
[24] J. Gondzio. HOPDM (version 2.12) - a fast LP solver based on a primal-dual interior point method. European Journal of Operational Research, 85(1):221–225, 1995.
[25] J. Gondzio. Multiple centrality corrections in a primal-dual method for linear programming. Computational Optimization and Applications, 6:137–156, 1996.

[26] J. Gondzio. Presolve analysis of linear programs prior to applying an interior point method. ORSA Journal on Computing, 9(1):73–91, January 2001.
[27] J. Gondzio and A. Grothey. Reoptimization with the primal-dual interior point method. SIAM Journal on Optimization, 13(3):842–864, 2003.
[28] J. Gondzio and M. Makowski. HOPDM - modular solver for LP problems, user's guide to version 2.12. Working Paper WP-95-50, International Institute for Applied Systems Analysis, Laxenburg, Austria, 1995.
[29] Anshul Gupta, Mahesh Joshi, and Vipin Kumar. WSSMP: A high-performance serial and parallel symmetric sparse linear solver, volume 1541/1998. Springer-Verlag, November 2006.
[30] IBM/ILOG. CPLEX 11.0 reference manual. http://www.ilog.com/products/cplex/.
[31] LINDO Systems, Inc. LINDO 6.0 user manual. http://www.lindo.com.
[32] J.W. Liu, E.G. Ng, and B.W. Peyton. On finding supernodes for sparse matrix computations. SIAM J. Matrix Anal. and Appl., 1:242–252, 1993.
[33] Robin Lougee-Heimer. The Common Optimization INterface for Operations Research. IBM Journal of Research and Development, 47(1):57–66, January 2003.
[34] A. Makhorin. GLPK (GNU Linear Programming Kit). http://www.gnu.org/software/glpk/glpk.html.
[35] S. Mehrotra. On the implementation of a (primal-dual) interior point method. SIAM Journal on Optimization, 2:575–601, 1992.
[36] Cs. Mészáros. The efficient implementation of interior point methods for linear programming and their applications. Ph.D. thesis, Eötvös Loránd University of Sciences, 1996.
[37] Cs. Mészáros. Fast Cholesky factorization for interior point methods of linear programming. Computers and Mathematics with Applications, 31(4/5):49–51, 1996.
[38] Microsoft Corporation. Microsoft Solver Foundation. http://www.solverfoundation.com.
[39] R.D.C. Monteiro and I. Adler. Interior path following primal-dual algorithms. Part II: Convex quadratic programming. Mathematical Programming, 44:43–66, 1989.
[40] C.E. Murillo-Sánchez. A Matlab MEX interface for the BPMPD interior point solver. http://www.pserc.cornell.edu/bpmpd/.
[41] J.L. Nazareth. Computer Solutions of Linear Programs. Oxford University Press, Oxford, 1987.
[42] E.G. Ng and B.W. Peyton. Block sparse Cholesky algorithms on advanced uniprocessor computers. SIAM J. Sci. Comput., 14:1034–1056, 1993.
[43] Dash Optimization. Xpress Optimizer reference manual. Release 13, 2001.
[44] SAS Institute, Inc. SAS/OR software. http://www.sas.com/technologies/analytics/optimization/or/.
[45] Optimalon Software. GIPALS32. http://www.optimalon.com/product_gipals32.htm.
[46] S. Toledo, D. Chen, and V. Rotkin. TAUCS: A library of sparse linear solvers, 2003. http://www.tau.ac.il/~stoledo/taucs.
[47] R.J. Vanderbei. A comparison between the minimum-local-fill and minimum-degree algorithms. Technical report, AT&T Bell Laboratories, 1990.
[48] R.J. Vanderbei. LOQO user's manual, version 3.10. Optimization Methods and Software, 12:485–514, 1999.
[49] Y. Zhang. Solving large-scale linear programs by interior-point methods under the Matlab environment. Optimization Methods and Software, 10(1):1–31, 1998.

Hande Y. Benson, Drexel University, Philadelphia, PA