
Bibliography

[1] J. Abadie, On the Kuhn-Tucker theorem, in Nonlinear Programming, J. Abadie, ed., North–Holland, Amsterdam, 1967, pp. 19–36.

[2] , The GRG method for nonlinear programming, in Design and Implementation of Optimization Software, H. Greenberg, ed., Sijthoff and Noordhoff, The Netherlands, 1978, pp. 335–362.

[3] J. Abadie and J. Carpentier, Généralisation de la méthode du gradient réduit de Wolfe au cas de contraintes non-linéaires, Note HRR 6678, Électricité de France, Paris, 1965.

[4] C. Ablow and G. Brigham, An analog solution of programming problems, Operations Research, 3 (1955), pp. 388–394.

[5] I. Adler, The expected number of pivots needed to solve parametric linear programs and the efficiency of the self-dual simplex method, manuscript, Department of Industrial Engineering and Operations Research, University of California, Berkeley, CA, 1983.

[6] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, Network Flows: Theory, Algorithms, and Applications, Prentice–Hall, Englewood Cliffs, NJ, 1993.

[7] M. Aizerman, E. Braverman, and L. Rozonoer, Theoretical foundation of the potential function method in pattern recognition learning, Automation and Remote Control, 25 (1964), pp. 821–837.

[8] M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA Journal of Numerical Analysis, 5 (1985), pp. 121–124.

[9] E. D. Andersen and K. D. Andersen, Presolving in linear programming, Mathematical Programming, 71 (1995), pp. 221–245.

[10] E. D. Andersen, J. Gondzio, C. Meszaros, and X. Xu, Implementation of interior point methods for large scale linear programming, in Interior Point Methods in Mathematical Programming, T. Terlaky, ed., Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996, pp. 189–252.


[11] K. M. Anstreicher, A monotonic projective algorithm for fractional linear programming, Algorithmica, 1 (1986), pp. 483–498.

[12] K. M. Anstreicher and R. A. Bosch, A new infinity-norm path following algorithm for linear programming, SIAM Journal on Optimization, 5 (1995), pp. 236–246.

[13] M. Avriel, Nonlinear Programming: Analysis and Methods, Prentice–Hall, Englewood Cliffs, NJ, 1976. Reprinted by Dover Publications, Mineola, New York, 2003.

[14] E. R. Barnes, A variation on Karmarkar’s algorithm for solving linear programming problems, Mathematical Programming, 36 (1986), pp. 174–182.

[15] C. Barnhart, E. L. Johnson, G. L. Nemhauser, and P. H. Vance, Crew scheduling, in Handbook of Transportation Science, R. W. Hall, ed., Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999, pp. 493–521.

[16] R. H. Bartels and G. H. Golub, The simplex method of linear programming using LU decomposition, Communications of the ACM, 12 (1969), pp. 266–268.

[17] J. Barutt and T. Hull, Airline crew scheduling: Supercomputers and algorithms, SIAM News, 23 (1990), p. 1.

[18] M. S. Bazaraa, J. J. Jarvis, and H. D. Sherali, Linear Programming and Network Flows, Wiley, New York, 1990.

[19] E. Beale, An alternative method for linear programming, Proceedings of the Cambridge Philosophical Society, 50 (1954), pp. 513–523.

[20] , Cycling in the dual simplex algorithm, Naval Research Logistics Quarterly, 2 (1955), pp. 269–275.

[21] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982. Reprinted by Athena Scientific, Belmont, MA, 1996.

[22] D. P. Bertsekas and P. Tseng, The relax codes for linear minimum cost network flow problems, Annals of Operations Research, 13 (1988), pp. 125–190.

[23] N. Bićanić and K. Johnson, Who was “Raphson”?, International Journal for Numerical Methods in Engineering, 14 (1979), pp. 148–152.

[24] R. E. Bixby, Implementing the simplex method: The initial basis, ORSA Journal on Computing, 4 (1993), pp. 267–284.

[25] R. E. Bixby, J. W. Gregory, I. J. Lustig, R. E. Marsten, and D. F. Shanno, Very large-scale linear programming: A case study in combining interior point and simplex methods, Operations Research, 40 (1992), pp. 885–897.

[26] R. G. Bland, New finite pivoting rules for the simplex method, Mathematics of Operations Research, 2 (1977), pp. 103–107.

[27] R. G. Bland, D. Goldfarb, and M. J. Todd, The ellipsoid method: A survey, Operations Research, 29 (1981), pp. 1039–1091.


[28] G. A. Bliss, Calculus of Variations, Open Court, Chicago, 1925.

[29] P. T. Boggs and J. W. Tolle, Sequential quadratic programming, Acta Numerica, 4 (1995), pp. 1–52.

[30] O. Bolza, Lectures on the Calculus of Variations, University of Chicago Press, Chicago, 1904. Reprinted by Scholarly Publishing Office, University of Michigan Library, Ann Arbor, MI, 2005.

[31] K.-H. Borgwardt, The average number of pivot steps required by the simplex method is polynomial, Zeitschrift für Operations Research, 26 (1982), pp. 157–177.

[32] , Some distribution-independent results about the asymptotic order of the average number of pivot steps of the simplex method, Mathematics of Operations Research, 7 (1982), pp. 441–462.

[33] , The Simplex Method, Springer-Verlag, New York, 1987.

[34] B. Boser, I. M. Guyon, and V. Vapnik, A training algorithm for optimal margin classifiers, in Proceedings of the Fifth Annual Workshop on Computational Learning Theory, ACM, New York, 1992, pp. 144–152.

[35] A. Brearly, G. Mitra, and H. Williams, Analysis of mathematical programming problems prior to applying the simplex method, Mathematical Programming, 8 (1975), pp. 54–83.

[36] R. C. Buck, Advanced Calculus (third edition), McGraw–Hill, New York, 1978. Reprinted by Waveland Press, Long Grove, IL, 2003.

[37] C. J. Burges, A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery, 2 (1998), pp. 121–167.

[38] R. H. Byrd, M. E. Hribar, and J. Nocedal, An interior point algorithm for large-scale nonlinear programming, SIAM Journal on Optimization, 9 (1999), pp. 877–900.

[39] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu, A limited memory algorithm for bound constrained optimization, SIAM Journal on Scientific Computing, 16 (1995), pp. 1190–1208.

[40] R. H. Byrd, J. Nocedal, and Y.-X. Yuan, Global convergence of a class of quasi-Newton methods on convex problems, SIAM Journal on Numerical Analysis, 24 (1987), pp. 1171–1190.

[41] A. Cauchy, Mémoire sur la détermination des orbites des planètes et des comètes, Compte Rendu des Séances de L’Académie des Sciences, XXV (1847), pp. 401–413.

[42] , Mémoire sur les maxima et minima conditionnels, Compte Rendu des Séances de L’Académie des Sciences, XXIV (1847), pp. 757–763.


[43] A. Cauchy, Méthode générale pour la résolution des systèmes d’équations simultanées, Compte Rendu des Séances de L’Académie des Sciences, XXV (1847), pp. 536–538.

[44] A. Charnes, Optimality and degeneracy in linear programming, Econometrica, 20 (1952), pp. 160–170.

[45] K. Chen, Matrix Preconditioning Techniques and Applications, Cambridge University Press, Cambridge, UK, 2005.

[46] J. Cheriyan and S. N. Maheshwari, Analysis of preflow push algorithms for maximum network flow, SIAM Journal on Computing, 18 (1989), pp. 1057–1086.

[47] V. Chvátal, Linear Programming, W. H. Freeman and Company, New York, 1983.

[48] T. F. Coleman, A superlinear penalty function method to solve the nonlinear programming problem, Ph.D. thesis, University of Waterloo, Waterloo, Ontario, Canada, 1979.

[49] P. Concus, G. H. Golub, and D. P. O’Leary, A generalized conjugate gradient method for the numerical solution of elliptic partial differential equations, in Sparse Matrix Computations, J. Bunch and D. Rose, eds., Academic Press, New York, 1976, pp. 309–332.

[50] A. R. Conn, Constrained optimization using a nondifferentiable penalty function, SIAM Journal on Numerical Analysis, 10 (1973), pp. 760–784.

[51] A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-Region Methods, SIAM, Philadelphia, 2000.

[52] A. R. Conn, N. I. M. Gould, and P. L. Toint, LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization, Springer-Verlag, Berlin, 1992.

[53] S. D. Conte and C. W. de Boor, Elementary Numerical Analysis: An Algorithmic Approach, McGraw–Hill, New York, 1980.

[54] R. Courant and D. Hilbert, Methods of Mathematical Physics, vol. I, Interscience, New York, 1953. Reprinted by Wiley-Interscience, New York, 1989.

[55] R. Courant, Variational methods for the solution of problems of equilibrium and vibrations, Bulletin of the American Mathematical Society, 49 (1943), pp. 1–23.

[56] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, UK, 2000.

[57] J. K. Cullum and R. A. Willoughby, Lanczos Algorithms for Large Symmetric Eigenvalue Computations. Vol. 1: Theory, Birkhäuser, Boston, 1985. Reprinted by SIAM, Philadelphia, 2002.

[58] A. Curtis, M. J. Powell, and J. K. Reid, On the estimation of sparse Jacobian matrices, Journal of the Institute of Mathematics and Its Applications, 13 (1974), pp. 117–119.


[59] G. B. Dantzig, Computational algorithm of the revised simplex method, Report RM 1266, The Rand Corporation, Santa Monica, CA, 1953.

[60] , Linear Programming and Extensions, Princeton University Press, Princeton, NJ, 1963. Reprinted by Princeton University Press, 1998.

[61] , Making progress during a stall in the simplex algorithm, Linear Algebra and Its Applications, 114/115 (1989), pp. 251–259.

[62] G. B. Dantzig and W. Orchard-Hays, The product form for the inverse in the simplex method, Mathematical Tables and Other Aids to Computation, 8 (1954), pp. 64–67.

[63] G. B. Dantzig, A. Orden, and P. Wolfe, The generalized simplex method for minimizing a linear form under linear inequality restraints, Pacific Journal of Mathematics, 5 (1955), pp. 183–195.

[64] G. B. Dantzig and P. Wolfe, The decomposition principle for linear programs, Operations Research, 8 (1960), pp. 101–111.

[65] W. C. Davidon, Variable metric method for minimization, SIAM Journal on Optimization, 1 (1991), pp. 1–17.

[66] T. A. Davis, Direct Methods for Sparse Linear Systems, SIAM, Philadelphia, 2006.

[67] R. S. Dembo, S. C. Eisenstat, and T. Steihaug, Inexact Newton methods, SIAM Journal on Numerical Analysis, 19 (1982), pp. 400–408.

[68] D. den Hertog, Interior point approach to linear, quadratic and convex programming, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1994.

[69] J. E. Dennis, Jr. and J. J. Moré, A characterization of superlinear convergence and its application to quasi-Newton methods, Mathematics of Computation, 28 (1974), pp. 549–560.

[70] , Quasi-Newton methods, motivation and theory, SIAM Review, 19 (1977), pp. 46–89.

[71] J. E. Dennis, Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice–Hall, Englewood Cliffs, NJ, 1983. Reprinted by SIAM, Philadelphia, 1996.

[72] , A view of unconstrained optimization, in Optimization, G. Nemhauser, A. Rinnooy Kan, and M. J. Todd, eds., Elsevier, Amsterdam, 1989, pp. 1–72.

[73] E. W. Dijkstra, A note on two problems in connexion with graphs, Numerische Mathematik, 1 (1959), pp. 269–271.

[74] I. Dikin, Iterative solution of problems of linear and quadratic programming, Doklady Akademiia Nauk SSSR, 174 (1967), pp. 747–748.


[75] A. S. Drud, CONOPT—a large-scale GRG code, ORSA Journal on Computing, 6 (1992), pp. 207–216.

[76] J. Edmonds and R. M. Karp, Theoretical improvements in algorithmic efficiency for network flow problems, Journal of the ACM, 19 (1972), pp. 248–264.

[77] K. Eisemann, The trim problem, Management Science, 3 (1957), pp. 279–284.

[78] J. Farkas, Über die Theorie der einfachen Ungleichungen, Journal für die reine und angewandte Mathematik, 124 (1901), pp. 1–27.

[79] A. V. Fiacco, Historical Survey of Sequential Unconstrained Methods for Solving Constrained Minimization Problems, Technical Paper RAC–TP–267, Research Analysis Corporation, 1967.

[80] A. V. Fiacco and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley and Sons, New York, 1968. Reprinted by SIAM, Philadelphia, 1990.

[81] R. Fletcher, Practical Methods of Optimization (second edition), Wiley, New York, 2000.

[82] R. Fletcher, N. I. M. Gould, S. Leyffer, P. L. Toint, and A. Wächter, Global convergence of a trust-region SQP-filter algorithm for general nonlinear programming, SIAM Journal on Optimization, 13 (2002), pp. 635–659.

[83] R. Fletcher and S. Leyffer, Nonlinear programming without a penalty function, Mathematical Programming, 91 (2002), pp. 239–270.

[84] R. Fletcher, S. Leyffer, and P. L. Toint, A brief history of filter methods, SIAG/OPT Views and News, 18 (2007), pp. 2–12.

[85] R. Fletcher and M. J. Powell, A rapidly convergent descent method for minimization, Computer Journal, 6 (1963), pp. 163–168.

[86] R. Fletcher and C. Reeves, Function minimization by conjugate gradients, Computer Journal, 7 (1964), pp. 149–154.

[87] C. A. Floudas and P. M. Pardalos, eds., Recent Advances in Global Optimization, Princeton University Press, Princeton, NJ, 1992. Reprinted by Princeton University Press, 2007.

[88] L. Ford, Jr. and D. Fulkerson, A suggested computation for maximal multi-commodity network flows, Management Science, 5 (1958), pp. 97–101.

[89] , Flows in Networks, Princeton University Press, Princeton, NJ, 1962.

[90] J. J. Forrest and D. Goldfarb, Steepest-edge simplex algorithms for linear programming, Mathematical Programming, 57 (1992), pp. 341–374.

[91] J. J. Forrest and J. Tomlin, Updating triangular factors of the basis to maintain sparsity in the product-form simplex method, Mathematical Programming, 2 (1972), pp. 263–278.


[92] R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL: A Modeling Language for Mathematical Programming (second edition), Thomson/Brooks/Cole, Pacific Grove, CA, 2003.

[93] M. Fredman and R. E. Tarjan, Fibonacci heaps and their uses in improved network optimization algorithms, Journal of the ACM, 34 (1987), pp. 596–615.

[94] K. Frisch, The Logarithmic Potential Method of Convex Programming, Memoran- dum 13, University Institute of Economics, Oslo, Norway, 1955.

[95] P. Gács and L. Lovász, Khachiyan’s algorithm for linear programming, Mathe- matical Programming Study, 14 (1981), pp. 61–68.

[96] T. Gal, Postoptimal Analysis, Parametric Programming and Related Topics, McGraw–Hill, New York, 1979.

[97] D. Gale, H. W. Kuhn, and A. W. Tucker, Linear programming and the theory of games, in Activity Analysis of Production and Allocation, T. Koopmans, ed., Wiley, New York, 1951, pp. 317–329.

[98] S. I. Gass and T. Saaty, The computational algorithm for the parametric objective function, Naval Research Logistics Quarterly, 2 (1955), pp. 39–45.

[99] C. F. Gauss, Theoria Motus Corporum Cœlestium in Sectionibus Conicis Solem Ambientium, Dover Press, New York, 1809.

[100] , Bestimmung des kleinsten Werthes der Summe x₁² + x₂² + ··· + xₙ² = R² für m gegebene Ungleichungen u ≥ 0, in Werke, vol. X (part II), Gedruckt in der Dieterichschen Universitätsdruckerei (W. F. Kaestner), Göttingen, 1850–51, pp. 473–482.

[101] D. M. Gay, Computing optimal locally constrained steps, SIAM Journal on Scientific and Statistical Computing, 2 (1981), pp. 186–197.

[102] , A variant of Karmarkar’s algorithm for problems in standard form, Mathematical Programming, 37 (1987), pp. 81–90.

[103] I. Gelfand and S. Fomin, Calculus of Variations, Prentice–Hall, Englewood Cliffs, NJ, 1963. Reprinted by Dover Publications, New York, 2000.

[104] A. George and J. Liu, Computer Solution of Large Sparse Positive Definite Systems, Prentice–Hall, Englewood Cliffs, NJ, 1981.

[105] J. C. Gilbert and C. Lemaréchal, Some numerical experiments with variable storage quasi-Newton algorithms, Mathematical Programming, 45 (1989), pp. 407–436.

[106] J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM Journal on Optimization, 2 (1992), pp. 21–42.


[107] P. E. Gill and W. Murray, Quasi-Newton methods for unconstrained optimization, Journal of the Institute for Mathematics and Its Applications, 9 (1972), pp. 91–108.

[108] , Newton-type methods for unconstrained and linearly constrained optimization, Mathematical Programming, 28 (1974), pp. 311–350.

[109] , Safeguarded Steplength Algorithms for Optimization Using Descent Methods, Report NAC 37, National Physical Laboratory, Teddington, England, 1974.

[110] P. E. Gill, W. Murray, M. A. Saunders, J. A. Tomlin, and M. H. Wright, On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method, Mathematical Programming, 36 (1986), pp. 183–209.

[111] P. E. Gill, W. Murray, and M. H. Wright, Practical Optimization, Academic Press, New York, 1981.

[112] , Numerical Linear Algebra and Optimization, vol. 1, Addison–Wesley, Redwood City, CA, 1991.

[113] P. C. Gilmore and R. E. Gomory, A linear programming approach to the cutting stock problem, Operations Research, 9 (1961), pp. 849–859.

[114] , A linear programming approach to the cutting stock problem—Part II, Operations Research, 11 (1963), pp. 863–888.

[115] , Multistage cutting stock problems of two and more dimensions, Operations Research, 13 (1965), pp. 94–120.

[116] A. Goldberg, A New Max-Flow Algorithm, Technical Report MIT LCS TM–291, Laboratory for Computer Science, MIT, Cambridge, MA, 1985.

[117] A. Goldberg and R. E. Tarjan, A new approach to the maximum flow problem, Journal of the ACM, 35 (1988), pp. 921–940.

[118] D. Goldfarb and J. K. Reid, A practical steepest-edge simplex algorithm, Mathematical Programming, 12 (1977), pp. 361–371.

[119] A. J. Goldman and A. W. Tucker, Theory of linear programming, in Linear Inequalities and Related Systems, H. W. Kuhn and A. W. Tucker, eds., Princeton University Press, Princeton, NJ, 1956, pp. 53–97.

[120] H. Goldstine, A History of the Calculus of Variations from the 17th through the 19th Century, Springer-Verlag, New York, 1980.

[121] G. H. Golub and C. Van Loan, Matrix Computations (third edition), The Johns Hopkins University Press, Baltimore, 1996.

[122] J. Gondzio, Multiple centrality corrections in a primal-dual method for linear programming, Computational Optimization and Applications, 6 (1996), pp. 137–156.

[123] C. C. Gonzaga, Path-following methods for linear programming, SIAM Review, 34 (1992), pp. 167–224.


[124] R. Gopalan and K. T. Talluri, Mathematical models in airline schedule planning: A survey, Annals of Operations Research, 76 (1998), pp. 155–185.

[125] F. J. Gould and J. W. Tolle, A necessary and sufficient qualification for constrained optimization, SIAM Journal on Applied Mathematics, 20 (1971), pp. 164–172.

[126] J. Gregory and C. Lin, Constrained Optimization in the Calculus of Variations and Optimal Control Theory, Van Nostrand Reinhold, New York, 1992. Reprinted by Springer-Verlag, New York, 2007.

[127] A. Griewank, Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, SIAM, Philadelphia, 2000.

[128] A. Griewank and G. F. Corliss, eds., Automatic Differentiation of Algorithms: Theory, Implementation, and Application, Proceedings of the First SIAM Workshop (Breckenridge, CO, January 6–8, 1991), SIAM, Philadelphia, 1991.

[129] A. Griewank and P. L. Toint, Partitioned variable metric updates for large structured optimization problems, Numerische Mathematik, 39 (1982), pp. 119–137.

[130] I. Griva and R. J. Vanderbei, Case studies in optimization: Catenary problem, Optimization and Engineering, 6 (2005), pp. 463–482.

[131] M. Guignard, Generalized Kuhn–Tucker conditions for mathematical programming problems in a Banach space, SIAM Journal on Control, 7 (1969), pp. 232–241.

[132] O. Güler and Y. Ye, Convergence behavior of interior-point algorithms, Mathematical Programming, 60 (1993), pp. 215–228.

[133] W. W. Hager, Updating the inverse of a matrix, SIAM Review, 31 (1989), pp. 221–239.

[134] M. Haimovich, The Simplex Method Is Very Good!—On the Expected Number of Pivot Steps and Related Properties of Random Linear Programs, preprint, Columbia University, New York, 1983.

[135] E. Hansen and G. W. Walster, Global Optimization Using Interval Analysis (second edition), CRC Press, Boca Raton, FL, 2003.

[136] P. M. Harris, Pivot selection methods of the Devex LP code, Mathematical Programming, 5 (1973), pp. 1–28.

[137] M. R. Hestenes, Calculus of Variations and Optimal Control Theory, John Wiley & Sons, New York, 1966.

[138] , Multiplier and gradient methods, Journal of Optimization Theory and Applications, 4 (1969), pp. 303–320.

[139] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49 (1952), pp. 409–436.


[140] N. Higham, Is fast matrix multiplication of practical use?, SIAM News, 23 (1990), pp. 12–14.

[141] J. Ho, T. Lee, and R. Sundarraj, Decomposition of linear programs using parallel computation, Mathematical Programming, 42 (1988), pp. 391–405.

[142] A. Hoffman, M. Mannos, D. Sokolowsky, and N. Wiegmann, Computational experience in solving linear programs, J. Soc. Indust. Appl. Math., 1 (1953), pp. 17–33.

[143] A. J. Hoffman, Cycling in the Simplex Algorithm, Report 2974, National Bureau of Standards, Gaithersburg, MD, 1953.

[144] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991. Reprinted by Cambridge University Press, 1994.

[145] R. Horst, P. M. Pardalos, and N. V. Thoai, Introduction to Global Optimization (second edition), Kluwer, Dordrecht, The Netherlands, 2000.

[146] F. John, Extremum problems with inequalities as subsidiary conditions, in Studies and Essays Presented to R. Courant on his 60th Birthday, K. Friedrichs, O. E. Neugebauer, and J. Stoker, eds., Wiley-Interscience, New York, 1948, pp. 187–204.

[147] C. A. Johnson and A. Sofer, A primal-dual method for large-scale image reconstruction in emission tomography, SIAM Journal on Optimization, 11 (2001), pp. 691–715.

[148] K. L. Jones, I. J. Lustig, J. M. Farvolden, and W. B. Powell, Multicommodity network flows: The impact of formulation on decomposition, Mathematical Programming, 62 (1993), pp. 95–117.

[149] A. R. Kan and G. Timmer, Global optimization, in Optimization, G. Nemhauser, A. R. Kan, and M. J. Todd, eds., Elsevier, Amsterdam, 1989, pp. 631–659.

[150] L. Kantorovich, Mathematical methods of organizing and planning production, Leningrad, 1939 (in Russian). English translation in Management Science, 6 (1959/1960), pp. 366–422.

[151] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), pp. 373–395.

[152] L. G. Khachiyan, A polynomial algorithm in linear programming, Doklady Akademii Nauk SSSR, 244 (1979), pp. 1093–1096.

[153] V. Klee and P. Kleinschmidt, Geometry of the Gass-Saaty parametric cost LP algorithm, Discrete and Computational Geometry, 5 (1990), pp. 13–26.

[154] V. Klee and G. J. Minty, How good is the simplex algorithm?, in Inequalities, III, O. Shisha, ed., Academic Press, New York, 1972, pp. 159–175.


[155] M. Kojima, S. Mizuno, and A. Yoshise, A primal-dual interior point algorithm for linear programming, in Progress in Mathematical Programming: Interior Point and Related Methods, N. Megiddo, ed., Springer-Verlag, New York, 1989, pp. 29–47.

[156] T. G. Kolda, R. M. Lewis, and V. Torczon, Optimization by direct search: New perspectives on some classical and modern methods, SIAM Review, 45 (2003), pp. 385–482.

[157] H. W. Kuhn, Nonlinear programming: A historical note, in History of Mathematical Programming, J. Lenstra, A. Rinnooy Kan, and A. Schrijver, eds., North–Holland, Amsterdam, 1991, pp. 82–96.

[158] H. W. Kuhn and A. W. Tucker, Nonlinear programming, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, J. Neyman, ed., University of California Press, Berkeley, CA, 1951, pp. 481–492.

[159] J. L. Lagrange, Oeuvres de Lagrange, vols. XI and XII, Gauthier-Villars, Paris, 1888–1889.

[160] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, Journal of Research of the National Bureau of Standards, 45 (1950), pp. 255–282.

[161] K. Lange and R. Carson, Reconstruction algorithms for emission and transmission tomography, Journal of Computer Assisted Tomography, 8 (1984), pp. 306–316.

[162] L. Lasdon, A. Waren, A. Jain, and M. Ratner, Design and testing of a GRG code for nonlinear optimization, ACM Transactions on Mathematical Software, 4 (1978), pp. 34–50.

[163] E. K. Lee and J. O. Deasy, Optimization in intensity modulated radiation therapy, SIAG/OPT Views and News, 17 (2006), pp. 20–32.

[164] C. Lemaréchal, An extension of Davidon methods to non-differentiable problems, Mathematical Programming Study, 3 (1975), pp. 95–109.

[165] C. Lemke, The dual method of solving the linear programming problem, Naval Research Logistics Quarterly, 1 (1954), pp. 36–47.

[166] K. Levenberg, A method for the solution of certain problems in least squares, Quarterly of Applied Mathematics, 2 (1944), pp. 164–168.

[167] L. Liberti and N. Maculan, eds., Global Optimization: From Theory to Implementation (Nonconvex Optimization and Its Applications), Springer-Verlag, New York, 2006.

[168] D. Liu and J. Nocedal, On the limited memory BFGS method for large scale optimization, Mathematical Programming, 45 (1989), pp. 503–528.

[169] D. G. Luenberger, Linear and Nonlinear Programming (second edition), Springer-Verlag, New York, 2003.


[170] I. J. Lustig, R. E. Marsten, and D. F. Shanno, The interaction of algorithms and architectures for interior point methods, in Advances in Optimization and Parallel Computing, P. Pardalos, ed., North–Holland, Amsterdam, 1992, pp. 190–205.

[171] , Computational experience with a globally convergent primal-dual predictor-corrector algorithm for linear programming, Mathematical Programming, 66 (1994), pp. 123–135.

[172] , Interior point methods for linear programming: Computational state of the art, ORSA Journal on Computing, 6 (1994), pp. 1–14.

[173] J. N. Lyness and C. B. Moler, Numerical differentiation of analytic functions, SIAM Journal on Numerical Analysis, 4 (1967), pp. 202–210.

[174] T. L. Magnanti and J. B. Orlin, Parametric linear programming and anti-cycling pivoting rules, Mathematical Programming, 41 (1988), pp. 317–325.

[175] O. L. Mangasarian, Nonlinear Programming, McGraw–Hill, New York, 1969. Reprinted by SIAM, Philadelphia, 1994.

[176] A. Manne, Programming of economic lot sizes, Management Science, 4 (1958), pp. 115–135.

[177] H. M. Markowitz, The elimination form of the inverse and its application to linear programming, Management Science, 3 (1957), pp. 255–269.

[178] H. M. Markowitz and G. P. Todd, Mean-Variance Analysis in Portfolio Choice and Capital Markets (revised reissue of 1987 edition), John Wiley and Sons, New York, 2000.

[179] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, SIAM Journal on Applied Mathematics, 11 (1963), pp. 431–441.

[180] A. Mayer, Begründung der Lagrange’schen Multiplicatorenmethode in der Variationsrechnung, Mathematische Annalen, 26 (1886), pp. 74–82.

[181] D. Mayne and N. Maratos, A first-order, exact penalty function algorithm for equality constrained optimization problems, Mathematical Programming, 16 (1979), pp. 303–324.

[182] N. Megiddo, Pathways to the optimal set in linear programming, in Progress in Mathematical Programming: Interior Point and Related Methods, N. Megiddo, ed., Springer-Verlag, New York, 1989, pp. 131–158.

[183] S. Mehrotra, On the implementation of a primal-dual interior point method, SIAM Journal on Optimization, 2 (1992), pp. 575–601.

[184] J. Meijerink and H. V. D. Vorst, An iterative solution method for linear equation systems of which the coefficient matrix is a symmetric M-matrix, Mathematics of Computation, 31 (1977), pp. 148–162.


[185] S. Mizuno, M. J. Todd, and Y. Ye, On adaptive-step primal-dual interior point algorithms for linear programming, Mathematics of Operations Research, 18 (1993), pp. 964–981.

[186] R. Monteiro and I. Adler, Interior path following primal-dual algorithms: Part I: Linear programming, Mathematical Programming, 44 (1989), pp. 27–41.

[187] J. J. Moré, Recent developments in algorithms and software for trust region methods, in Mathematical Programming: The State of the Art (Bonn, 1982), A. Bachem, M. Grötschel, and B. Korte, eds., Springer, Berlin, 1983, pp. 258–287.

[188] J. J. Moré and D. C. Sorensen, Computing a trust region step, SIAM Journal on Scientific and Statistical Computing, 4 (1983), pp. 553–572.

[189] J. J. Moré and D. Thuente, Line search algorithms with guaranteed sufficient decrease, ACM Transactions on Mathematical Software, 20 (1994), pp. 286–307.

[190] W. Murray, Analytical expressions for the eigenvalues and eigenvectors of the Hessian matrices of barrier and penalty functions, Journal of Optimization Theory and Applications, 7 (1971), pp. 189–196.

[191] K. G. Murty, Linear Programming, Wiley, New York, 1983.

[192] , Network Programming, Prentice–Hall, Englewood Cliffs, NJ, 1992. Reprinted by Prentice–Hall, 1998.

[193] S. G. Nash, Newton-type minimization via the Lanczos method, SIAM Journal on Numerical Analysis, 21 (1984), pp. 770–788.

[194] , Preconditioning of truncated-Newton methods, SIAM Journal on Scientific and Statistical Computing, 6 (1985), pp. 599–616.

[195] , A survey of truncated-Newton methods, Journal of Computational and Applied Mathematics, 124 (2000), pp. 45–59.

[196] S. G. Nash and J. Nocedal, A numerical study of the limited memory BFGS method and the truncated-Newton method for large scale optimization, SIAM Journal on Optimization, 1 (1991), pp. 358–372.

[197] S. G. Nash and A. Sofer, Assessing a search direction within a truncated-Newton method, Operations Research Letters, 9 (1990), pp. 219–221.

[198] , A general-purpose parallel algorithm for unconstrained optimization, SIAM Journal on Optimization, 4 (1991), pp. 530–547.

[199] , Algorithm 711: BTN: Software for parallel unconstrained optimization, ACM Transactions on Mathematical Software, 18 (1992), pp. 414–448.

[200] , A barrier method for large-scale constrained optimization, ORSA Journal on Computing, 5 (1993), pp. 40–53.


[201] J. Nazareth, Computer Solution of Linear Programs, Oxford University Press, New York, 1987.

[202] J. Nelder and R. Mead, A simplex method for function minimization, Computer Journal, 7 (1965), pp. 308–313.

[203] G. L. Nemhauser and L. A. Wolsey, Integer and Combinatorial Optimization, Wiley, New York, 1988. Reprinted by Wiley-Interscience, 1999.

[204] A. S. Nemirovskii, Interior point polynomial time methods in convex programming, Lecture Notes, Faculty of Industrial Engineering and Management, Technion—The Israel Institute of Technology, Haifa, Israel, 1994.

[205] Y. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming, SIAM, Philadelphia, 1994.

[206] Y. E. Nesterov and M. J. Todd, Primal-dual interior-point methods for self-scaled cones, SIAM Journal on Optimization, 8 (1998), pp. 324–364.

[207] J. Nocedal, Theory of algorithms for unconstrained optimization, Acta Numerica, 1 (1992), pp. 199–242.

[208] W. Orchard-Hays, Background, Development and Extensions of the Revised Simplex Method, Report RM 1433, The Rand Corporation, Santa Monica, CA, 1954.

[209] J. B. Orlin, Genuinely Polynomial Simplex and Non-Simplex Algorithms for the Minimum Cost Flow Problem, Technical Report 1615–84, Sloan School of Management, MIT, Cambridge, MA, 1984.

[210] J. B. Orlin, S. A. Plotkin, and É. Tardos, Polynomial dual network simplex algorithms, Mathematical Programming, 60 (1993), pp. 255–276.

[211] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970. Reprinted by SIAM, Philadelphia, 2000.

[212] G. Ostrovskii, Y. Wolin, and W. Borisov, Über die Berechnung von Ableitun- gen, Wissenschaftliche Zeitschrift der Technischen Hochschule für Chemie, Leuna-Merseburg, 13 (1971), pp. 382–384.

[213] C. C. Paige and M. A. Saunders, Solution of sparse indefinite systems of linear equations, SIAM Journal on Numerical Analysis, 12 (1975), pp. 617–629.

[214] E. R. Panier and A. L. Tits, On combining feasibility, descent and superlinear convergence in inequality constrained optimization, Mathematical Programming, 59 (1993), pp. 261–276.

[215] C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice–Hall, Englewood Cliffs, NJ, 1982. Reprinted by Dover Publications, Mineola, NY, 1998.


[216] B. N. Parlett, The Symmetric Eigenvalue Problem, Prentice–Hall, Englewood Cliffs, NJ, 1980. Reprinted by SIAM, Philadelphia, 1998.

[217] A. Perry, A Class of Conjugate-Gradient Algorithms with a Two-Step Variable-Metric Memory, Discussion paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University, Evanston, IL, 1977.

[218] T. Pietrzykowski, An exact potential method for constrained maxima, SIAM Journal on Numerical Analysis, 6 (1969), pp. 299–304.

[219] R. Polyak, Modified barrier functions (theory and methods), Mathematical Programming, 54 (1992), pp. 177–222.

[220] M. J. D. Powell, A method for nonlinear constraints in minimization problems, in Optimization, R. Fletcher, ed., Academic Press, New York, 1969, pp. 283–298.

[221] , A new algorithm for unconstrained optimization, in Nonlinear Programming, J. Rosen, O. Mangasarian, and K. Ritter, eds., Academic Press, New York, 1970, pp. 31–65.

[222] , Convergence properties of a class of minimization algorithms, in Nonlinear Programming 2, O. Mangasarian, R. Meyer, and S. Robinson, eds., Academic Press, New York, 1975, pp. 1–27.

[223] , Restart procedures for the conjugate gradient method, Mathematical Programming, 12 (1977), pp. 241–254.

[224] , Problems related to unconstrained optimization, in Numerical Methods for Unconstrained Optimization, W. Murray, ed., Academic Press, London, New York, 1972, pp. 29–55.

[225] M. J. D. Powell and P. L. Toint, On the estimation of sparse Hessian matrices, SIAM Journal on Numerical Analysis, 16 (1979), pp. 1060–1074.

[226] J. Renegar, A polynomial-time algorithm based on Newton’s method for linear programming, Mathematical Programming, 40 (1988), pp. 59–93.

[227] R. T. Rockafellar, Conjugate Duality and Optimization, SIAM, Philadelphia, 1974.

[228] C. Roos and J.-P. Vial, A polynomial method of approximate centers for linear programming, Mathematical Programming, 54 (1992), pp. 295–306.

[229] C. Roos, J.-P. Vial, and T. Terlaky, Interior Point Methods for Linear Optimization (second edition), Springer-Verlag, New York, 2005.

[230] J. B. Rosen, The gradient projection method for nonlinear programming, Part I. Linear constraints, J. Soc. Indust. Appl. Math., 8 (1960), pp. 181–217.

[231] A. Ruszczynski and R. J. Vanderbei, Frontiers of stochastically nondominated portfolios, Econometrica, 71 (2003), pp. 1287–1297.


[232] Y. Saad, Krylov subspace methods for solving large unsymmetric linear systems, Mathematics of Computation, 37 (1981), pp. 105–126.

[233] , Iterative Methods for Sparse Linear Systems (second edition), SIAM, Philadelphia, 2003.

[234] T. Sauer, Numerical Analysis, Addison–Wesley, Boston, 2006.

[235] R. B. Schnabel and E. Eskow, A new modified Cholesky factorization, SIAM Journal on Scientific and Statistical Computing, 11 (1990), pp. 1136–1158.

[236] B. Schölkopf, C. J. Burges, and A. J. Smola, eds., Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, MA, 1999.

[237] R. Schrader, Ellipsoid methods, in Modern Applied Mathematics—Optimization and Operations Research, B. Korte, ed., North–Holland, Amsterdam, 1982, pp. 265–311.

[238] , The ellipsoid method and its implications, OR Spektrum, 5 (1983), pp. 1–13.

[239] A. Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons, New York, 1986. Reprinted by John Wiley & Sons, 1998.

[240] D. F. Shanno, Conjugate-gradient methods with inexact searches, Mathematics of Operations Research, 3 (1978), pp. 244–256.

[241] D. M. Shepard, M. C. Ferris, G. H. Olivera, and T. R. Mackie, Optimizing the delivery of radiation therapy to cancer patients, SIAM Review, 41 (1999), pp. 721–744.

[242] L. A. Shepp and Y. Vardi, Maximum likelihood reconstruction for emission tomography, IEEE Transactions on Medical Imaging, 1 (1982), pp. 113–122.

[243] N. Shor, On the structure of algorithms for the numerical solution of optimal planning and design problems, Ph.D. thesis, Cybernetics Institute, Academy of Sciences of the Ukrainian SSR, Kiev, 1964.

[244] T. Simpson, Essays on Several Curious and Useful Subjects in Speculative and Mix’d Mathematicks, Illustrated by a Variety of Examples, London, 1740.

[245] R. Skeel, Scaling for numerical stability in Gaussian elimination, Journal of the ACM, 26 (1979), pp. 494–526.

[246] D. C. Sorensen, Newton’s method with a model trust region modification, SIAM Journal on Numerical Analysis, 19 (1982), pp. 409–426.

[247] D. A. Spielman and S.-H. Teng, Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time, Journal of the ACM, 51 (2004), pp. 385–463.

[248] W. Squire and G. Trapp, Using complex variables to estimate derivatives of real functions, SIAM Review, 40 (1998), pp. 110–112.


[249] T. Steihaug, The conjugate gradient method and trust regions in large scale optimization, SIAM Journal on Numerical Analysis, 20 (1983), pp. 626–637.

[250] U. H. Suhl and L. Suhl, Computing sparse LU factorizations for large-scale linear programming bases, ORSA Journal on Computing, 2 (1990), pp. 325–335.

[251] A. Świetanowski, A new steepest edge approximation for the simplex method for linear programming, Computational Optimization and Applications, 10 (1998), pp. 271–281.

[252] É. Tardos, A strongly polynomial minimum cost circulation algorithm, Combinatorica, 5 (1985), pp. 247–255.

[253] S. Thomas, Sequential Estimation Techniques for Quasi-Newton Algorithms, Ph.D. thesis, Cornell University, Ithaca, New York, 1975.

[254] M. J. Todd, Semidefinite optimization, Acta Numerica, 10 (2001), pp. 515–560.

[255] M. J. Todd and B. Burrell, An extension of Karmarkar’s algorithm for linear programming using dual variables, Algorithmica, 1 (1986), pp. 409–424.

[256] P. L. Toint, Towards an efficient sparsity exploiting Newton method for minimization, in Sparse Matrices and Their Uses, I. Duff, ed., Academic Press, New York, 1981, pp. 57–87.

[257] L. N. Trefethen and D. Bau, III, Numerical Linear Algebra, SIAM, Philadelphia, 1997.

[258] A. W. Tucker, Dual systems of homogeneous linear equations, Annals of Mathematics Studies, 38 (1956), pp. 3–18.

[259] A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, 42 (1936), pp. 230–265.

[260] R. J. Vanderbei, Linear Programming: Foundations and Extensions (third edition), Springer, Berlin, 2007.

[261] R. J. Vanderbei, M. Meketon, and B. Freedman, A modification of Karmarkar’s linear programming algorithm, Algorithmica, 1 (1986), pp. 395–407.

[262] R. J. Vanderbei and D. F. Shanno, An interior-point algorithm for nonconvex nonlinear programming, Computational Optimization and Applications, 13 (1999), pp. 231–252.

[263] V. Vapnik, The Nature of Statistical Learning Theory (second edition), Springer-Verlag, New York, 1998.

[264] J. von Neumann, Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes, 1937. English translation in The Review of Economic Studies, 13 (1945/1946), pp. 1–9.


[265] J. von Neumann, Discussion of a maximum problem, in John von Neumann (Collected Works), A. Taub, ed., vol. VI, Pergamon Press, Oxford, 1963, pp. 89–95.

[266] A. Wächter and L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Mathematical Programming, 106 (2006), pp. 25–57.

[267] R. Wengert, A simple automatic derivative evaluation program, Communications of the ACM, 7 (1964), pp. 463–464.

[268] D. Whiteside, ed., The Mathematical Papers of Isaac Newton, vols. 1–7, Cambridge University Press, Cambridge, UK, 1967–1976.

[269] R. Wilson, A Simplicial Algorithm for Concave Programming, Ph.D. thesis, Harvard University, Cambridge, MA, 1963.

[270] P. Wolfe, Convergence conditions for ascent methods, SIAM Review, 11 (1969), pp. 226–235.

[271] H. Wolkowicz, R. Saigal, and L. Vandenberghe, eds., Handbook of Semidefinite Programming—Theory, Algorithms, and Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000.

[272] L. A. Wolsey, Integer Programming, Wiley-Interscience, New York, 1998.

[273] M. H. Wright, Interior methods for constrained optimization, Acta Numerica, 1 (1992), pp. 341–407.

[274] , Some properties of the Hessian of the logarithmic barrier function, Mathematical Programming, 67 (1994), pp. 265–295.

[275] S. J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997.

[276] X. Xu, P. Hung, and Y. Ye, A simplified homogeneous self-dual linear programming algorithm and its implementation, Annals of Operations Research, 62 (1996), pp. 151–171.

[277] H. Yamashita and H. Yabe, Superlinear and quadratic convergence of some primal-dual interior point methods for constrained optimization, Mathematical Programming, 75 (1996), pp. 377–397.

[278] , Quadratic convergence of a primal-dual interior point method for degenerate nonlinear optimization problems, Computational Optimization and Applications, 31 (2005), pp. 123–143.

[279] Y. Ye, Interior-Point Algorithms: Theory and Analysis, John Wiley and Sons, New York, 1997.

[280] Y. Ye and M. Kojima, Recovering optimal dual solutions in Karmarkar’s polynomial algorithm for linear programming, Mathematical Programming, 39 (1987), pp. 305–317.


[281] Y. Ye, M. J. Todd, and S. Mizuno, An O(√nL)-iteration homogeneous and self-dual linear programming algorithm, Mathematics of Operations Research, 19 (1994), pp. 53–67.

[282] T. J. Ypma, Historical development of the Newton–Raphson method, SIAM Review, 37 (1995), pp. 531–551.

[283] D. Yudin and A. S. Nemirovskii, Informational complexity and efficient methods for the solution of convex extremal problems, Ekonomika i Matematicheskie Metody, 12 (1976), pp. 357–369.

[284] N. Zadeh, A bad network problem for the simplex method and other minimum cost flow algorithms, Mathematical Programming, 5 (1973), pp. 255–266.

[285] , Near Equivalence of Network Flow Algorithms, Technical Report, Department of Operations Research, Stanford University, Stanford, CA, 1979.

[286] W. I. Zangwill, Algorithm for the Chebyshev problem, Management Science, 14 (1967), pp. 58–78.

[287] Y. Zhang and R. A. Tapia, A superlinearly convergent polynomial primal-dual interior-point algorithm for linear programming, SIAM Journal on Optimization, 3 (1993), pp. 118–133.
