CEJOR DOI 10.1007/s10100-009-0128-9

ORIGINAL PAPER

ABS methods for continuous and integer linear equations and optimization

Spedicato Emilio · Bodon Elena · Xia Zunquan · Mahdavi-Amiri Nezam

© Springer-Verlag 2009

Abstract ABS methods are a large class of algorithms for solving continuous and integer linear algebraic equations, and nonlinear continuous algebraic equations, with applications to optimization. Recent work by Chinese researchers led by Zunquan Xia has extended these methods also to stochastic, fuzzy and infinite systems, extensions not considered here. The work on ABS methods began almost thirty years ago. It involved an international collaboration of mathematicians, especially from Hungary, England, China and Iran, coordinated by the University of Bergamo. The ABS methods are based on the rank-reducing matrix update due to Egerváry and can be considered the most fruitful extension of that technique. They have led to the unification of classes of methods for several problems. Moreover, they have produced some special algorithms with better complexity than the standard methods. For the linear integer case they have provided the most general polynomial-time class of algorithms so far known; such algorithms have been extended to other integer problems, such as linear inequalities and LP problems, in over a dozen papers written by Iranian mathematicians led by Nezam Mahdavi-Amiri. ABS methods can generally be implemented in a stable way, and techniques exist to enhance their accuracy. Extensive numerical experiments have shown that they can outperform standard methods in several problems. Here we provide a review of their main properties, for linear systems and optimization. We also give the results of numerical experiments on some linear systems. This paper is dedicated to Professor Egerváry, developer of the rank-reducing matrix update that led to ABS methods.

There are now probably over 400 papers on ABS methods. A partial list up to 1999 of about 350 papers was published by Spedicato et al. (2000) in the journal Ricerca Operativa. Here we give only the handful of references most related to the content of this review paper.

S. Emilio (B) · B. Elena
University of Bergamo, Bergamo, Italy
e-mail: [email protected]

X. Zunquan
University of Dalian, Dalian, China

M.-A. Nezam
Sharif University, Tehran, Iran

Keywords Egerváry rank reduction matrix update · ABS methods · Linear systems · Diophantine linear systems · Quasi-Newton equation · Feasible direction methods for linearly constrained optimization · Simplex method · Primal-dual interior point methods · ABSPACK

1 ABS methods 1981–2008, main results

The work on ABS methods started in 1981 after a seminar given by Jozsef Abaffy at the University of Bergamo, and developed via an international collaboration involving mathematicians mainly from Hungary, China and Iran, led by prof. Emilio Spedicato of Bergamo University. It is available in:

– two monographs, see the references: one published in 1989 by Abaffy and Spedicato (1989), one in 1999 by Xia and Zhang (1999), in Chinese, with an updated English version planned
– the Proceedings of three international conferences held in China (one in Luoyang and two in Beijing)
– over 400 papers, of which more than 300 are listed in Spedicato et al. (2000)

The main results, obtained in almost thirty years, can be summarized as follows:

– general unification of methods for linear algebraic equations and for linearly constrained optimization, including the LP problem; the unification is provided by the construction of a multiparameter class which, for special choices of the parameters, provides all possible methods of a general type for solving certain problems; in practice essentially all methods found so far in the literature can be shown to correspond to some choices in this class; moreover some open problems have been solved within the provided general frame
– the class is based upon the rank-reducing matrix update of Egerváry, which he exploited in just a few papers; it has shown the immense potential of Egerváry's discovery
– the class contains the so-called implicit LX method, due essentially to Zunquan Xia, hence the acronym; this is the most efficient general solver of a linear system of the classic type, improving over Gaussian elimination because of lower storage, no need of pivoting and different roundoff properties that seem to provide better accuracy; the overhead is the same as in Gaussian elimination
– via the implicit LX algorithm we get an implementation of the simplex algorithm for the LP problem that saves a factor 8 in memory, and up to one order of magnitude in operations, over the Forrest-Goldfarb method, the classical method of choice
– nonlinear ABS methods, see Galántai and Spedicato (2007) for a partial review, are in many cases more efficient than Newton's method, requiring less storage and

keeping a quadratic rate of convergence even when the Jacobian is rank deficient at the solution
– a new class of algorithms for solving a linear Diophantine system, i.e., Hilbert's 10th problem in the linear case; this class can be extended to linear integer inequalities and integer LP problems
– reduction of the condition number in linear systems arising in important problems, as in the primal-dual interior point method
– the package ABSPACK, developed in FORTRAN 77, is usually better than LAPACK on problems that are difficult due to ill conditioning or low numerical rank.

2 The Egerváry-ABS class of algorithms for systems of linear equations

This is a class of methods using the rank-reducing matrix recursion of Egerváry (1955–1956), first published by Abaffy et al. (1984) in the unscaled form, with three parameters, while Abaffy and Spedicato (1989, 1990) gave the scaled form, with four parameters (this class also appeared with a different derivation in Abaffy (1984)). Preliminary restricted versions of the unscaled class were given by Abaffy (1979), with three scalar parameters, and by Abaffy and Spedicato (1984), with two parameters. The relation with Egerváry's technique was noticed only later, thanks to Aurel Galantai. It deals with the following problem: given the linear system

Ax = b  (a_i^T x = b_i, i = 1,...,m)

with

A = (a_1,...,a_m)^T,  a_i, x ∈ R^n,  b ∈ R^m,  m ≤ n,  rank(A) = q ≤ m

find x_1,...,x_{m+1} such that, for x_1 arbitrary, x_{m+1} solves Ax = b via the iteration

x_{i+1} = x_i − α_i p_i

so that the Petrov-Galerkin criterion is satisfied, namely

v_j^T r(x_{i+1}) = 0  (j ≤ i,  r(x) = Ax − b),

with v_1,...,v_i linearly independent vectors in R^m. The scaled ABS class can be shown to provide a complete realization of the above process. Equivalent processes are found in the literature via alternative algebra:

– Stewart (1973): conjugate direction method, where the search directions are computed via standard linear solvers
– Broyden (1985): general finite iteration, where some properties are investigated, but no procedure is given to compute the search directions

– Dennis-Turner (1987): generalized conjugate directions, under conditions that are more restrictive than in the ABS class.

3 The scaled ABS class (Abaffy 1984; Abaffy and Spedicato 1989, 1990)

The class extends the unscaled or basic class introduced by Abaffy et al. (1984), which was derived via the idea of generating a sequence of approximations x_i to the solution with the property that the approximation x_{i+1} is a particular solution of the first i equations. From the approximation x_i one moves to the next one by taking a step along a search direction p_i built by multiplying the matrix H_i^T by a certain parameter vector. Then the matrix is updated by the Egerváry process. In the 1984 paper by Abaffy et al. (1984) the matrix H_i was called Abaffian, a name that we apply to the case V = I in the class below. For the general case we prefer to call it Egervarian, to recognize the original derivation by Egerváry. The scaled class can be obtained nicely by applying the unscaled algorithm to the scaled system V^T Ax = V^T b, whose set of solutions is the same as that of the original system Ax = b. The class is characterized by the parameters H_1, z_i, w_i, v_i. They define four matrices of parameters, but one can show that only two of such matrices can generate all possible sets of approximations x_i to the solution. We define the class by the following process (scaled ABS class):

(A) Let x_1 ∈ R^n, v_1 ∈ R^m be arbitrary nonzero and H_1 ∈ R^{n,n} be arbitrary nonsingular; set i = 1.
(B) Here we check for redundant or incompatible equations. Compute the residual vector r_i = Ax_i − b. If r_i = 0 stop, x_i is a solution; otherwise compute s_i = H_i A^T v_i, τ_i = v_i^T r_i. If s_i ≠ 0 go to (C). If s_i = 0 and τ_i = 0, set x_{i+1} = x_i, H_{i+1} = H_i and go to (D): the i-th equation is redundant. Otherwise stop, the system is unsolvable.
(C) Basic recursions. Compute a search vector by

p_i = H_i^T z_i,  z_i^T H_i A^T v_i ≠ 0.

Update the estimate of the solution by x_{i+1} = x_i − α_i p_i, where α_i = v_i^T r_i / (p_i^T A^T v_i). Update the Egervarian by

H_{i+1} = H_i − H_i A^T v_i w_i^T H_i / (w_i^T H_i A^T v_i),  w_i^T H_i A^T v_i ≠ 0.

(D) Stop if i = m. Otherwise take v_{i+1} linearly independent of v_1,...,v_i, increment i by one, and go to (B).
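As an illustration, the following is a minimal NumPy sketch of the class for the unscaled case (V = I, i.e., v_i = e_i) with the Huang-type parameter choices H_1 = I, z_i = w_i = H_i a_i; other parameter choices give other members of the class. The function name and the tolerance-based zero tests are our own conventions, not part of the paper.

```python
import numpy as np

def abs_solve(A, b, tol=1e-12):
    """Sketch of the Egervary-ABS iteration, unscaled case (v_i = e_i),
    with H_1 = I and z_i = w_i = H_i a_i (Huang-type choices)."""
    m, n = A.shape
    x = np.zeros(n)
    H = np.eye(n)                           # H_1: arbitrary nonsingular
    for i in range(m):
        a, beta = A[i], b[i]
        s = H @ a                           # s_i = H_i A^T v_i for v_i = e_i
        tau = a @ x - beta                  # tau_i = v_i^T r_i
        if np.linalg.norm(s) <= tol:        # step (B): zero test on s_i
            if abs(tau) <= tol:
                continue                    # redundant equation
            raise ValueError("system is unsolvable")
        p = H.T @ s                         # p_i = H_i^T z_i with z_i = H_i a_i
        x = x - (tau / (p @ a)) * p         # Petrov-Galerkin step
        H = H - np.outer(s, s @ H) / (s @ (H @ a))   # Egervary update, w_i = H_i a_i
    return x, H                             # rows of H_{m+1} span Null(A)
```

For a consistent system, e.g. x, H = abs_solve(A, A @ x_true), the returned x solves Ax = b and H @ A.T is numerically zero, in line with the null space property discussed below.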

3.1 Basic properties of the above iterates

For proofs of the statements here and below see the monograph of Abaffy and Spedicato (1989), or its updated versions in the Chinese (1991) or Russian (1996) translations; see also Galántai (2003) for a very general approach to projection matrices and projection methods.

– x_{i+1} solves the first i equations of the scaled (left preconditioned) system V^T Ax = V^T b, (V = (v_1,...,v_m)); hence as a trivial corollary x_{m+1} solves the initially given system.
– The set of all solutions is given by the following representation, of fundamental use in the ABS applications to optimization:

x = x_{m+1} + H_{m+1}^T q  (q ∈ R^n arbitrary)

– Implicit factorization of the matrix A:

V^T A P = L

with L lower triangular and P = (p_1,...,p_m) (the above relation is a necessary and sufficient condition for satisfying the Petrov-Galerkin criterion).
– Explicit factorization of the inverse for A square nonsingular:

A^{−1} = P L^{−1} V^T

with L^{−1} diagonal if V = GP, G^T A = (G^T A)^T, G being an SPD matrix.
– All possible factorizations of the matrix A can be generated by suitable parameter choices.
– The Egerváry matrices H_i are well defined in terms of the parameter choices iff the matrix T = V^T A H_1^T W is strongly nonsingular, where W = (w_1,...,w_m). A milder condition for feasibility holds for the important case of the block ABS formulation developed by Aurel Galantai, see the monograph of Abaffy and Spedicato (1989), leading to the look-ahead methods. The matrix H_1^T can be shown to play the role of a right preconditioner. The topology of break-down points can be described from the zero-determinant condition on the principal submatrices of the matrix T.
– ABS methods provide a null space characterization for the matrix A. Indeed from the basic property H_{i+1} A_i^T = 0, where A_i = (a_1,...,a_i)^T, we obtain A_i H_{i+1}^T = 0, implying that the rows of H_{i+1} span Null(A_i). Notice that H_{i+1} can be constructed to have only n − i nonzero rows, which form a basis of Null(A_i); thus ABS methods are null space methods. Such methods are quite important, especially in applications in optimization.

4 A variational characterization of the approximations xi to the solution

Let m = n and let A be nonsingular. Then if x_1 = 0, V = A^{−T} Y P and Y = Y^T > 0, the following properties, due to Elena Bodon, can be proven, the last three giving the variational characterization of the ABS iteration:

– L is diagonal
– p_i^T Y p_j = 0, i ≠ j
– x_{i+1} = argmin (x − x^+)^T Y (x − x^+), x ∈ Span(p_1,...,p_i)
– (x_{i+1} − x^+)^T Y (x_{i+1} − x^+) ≤ (x_i − x^+)^T Y (x_i − x^+)
– x_{i+1}^T Y x_{i+1} ≥ x_i^T Y x_i.

Notice that the function above minimized by x_{i+1} is a convex quadratic function, namely a Y-weighted Euclidean error norm. Notice that unless Y = I the minimization moves down a sequence of elongated multidimensional ellipsoid surfaces, where the ratio of the largest to the smallest axis is the condition number of Y. Hence the process may at each step take us farther and farther away from the solution, especially if the matrix A is ill conditioned, the solution being obtained just at the last step (in theory, since in practice catastrophic error will have arisen). Special parameter choices in the above, leading to well known methods, are the following. Let A = A^T > 0 and select Y = A; then we obtain a subclass which contains the ABS formulation of the Hestenes-Stiefel, Lanczos and Choleski methods. The minimized function is the energy norm. Let Y = A^T A > 0; then we obtain a subclass which contains the ABS formulation of the QR and GMRES methods and of the Hestenes-Stiefel method applied to the normal equations. The minimized function is the residual norm. Let Y = I; then we obtain a subclass which contains the ABS formulation of the Gram-Schmidt and Craig methods. The minimized function is the standard unweighted Euclidean norm. From the above considerations, methods in this subclass are expected to be better in terms of approaching the solution. Thus here we can see another reason in favour of the ill-treated Craig's method, in addition to the five reasons given by Mike Saunders at the first Householder conference held at the Stanford conference centre on the San Bernardino mountains, attended by a nonagenarian Alston Householder.

5 Refining the solution

The computed solution is generally affected by roundoff errors in the numerical implementation of the procedure. It can be improved by reprojection or stepwise "reorthogonalization" in the following way. If H_1 = I then, noticing that H_i^2 = H_i, replace Hq by H(Hq) and H^T q by H^T(H^T q). Then analysis by Broyden (1985) shows that the error in the solution is reduced by up to 2 orders in the growth parameter η_i defined by:

η_i = ‖A^T v_i‖ / ‖H_i A^T v_i‖

The geometric reason behind the above approach is that in a multiplication Hg of a certain vector g by a singular matrix H, the component of g lying in the null space of H should give zero, a condition that is fundamental for the termination of the process. Due to numerical errors the result will generally be nonzero, but doing one extra multiplication is expected to reduce such a component to a smaller value. In practice this works, at least if the condition number of H is not too big. We also tested whether more than one reprojection would be useful, but one seems to be enough. The classic iterative refinement procedure can also be applied to ABS methods in the following way, the extra cost being that we need to keep all the computed search directions.

x̃_1 = x_{m+1}
x̃_{i+1} = x̃_i − z_i,

where the correction z_i is computed by the ABS procedure using the old vectors v_k, p_k:

t_1 = 0
for k = 1 to m set

t_{k+1} = t_k − α_k p_k,  α_k = v_k^T(−Ax̃_i + b + At_k) / (p_k^T A^T v_k)

z_i = t_{m+1}

Excellent results were obtained in experiments by Mirmia (1996); just one iterative refinement step was enough, as is the case for refinement in classical methods.
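A minimal NumPy sketch of this refinement sweep, assuming the search vectors P = (p_1,...,p_m) and scaling vectors V = (v_1,...,v_m) were stored column-wise during the original ABS run (the function name and calling convention are ours):

```python
import numpy as np

def abs_refine(A, b, x_tilde, P, V, sweeps=1):
    """ABS iterative refinement reusing the stored p_k, v_k vectors."""
    x = x_tilde.astype(float)
    m = A.shape[0]
    for _ in range(sweeps):
        t = np.zeros_like(x)
        for k in range(m):
            p_k, v_k = P[:, k], V[:, k]
            # alpha_k = v_k^T(-A x~ + b + A t_k) / (p_k^T A^T v_k)
            alpha = v_k @ (b - A @ x + A @ t) / (p_k @ (A.T @ v_k))
            t = t - alpha * p_k
        x = x - t                     # z_i = t_{m+1}; x~_{i+1} = x~_i - z_i
    return x
```

One sweep is typically enough, consistently with the experiments quoted above.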

6 Restart and truncated versions

If we restart the iteration after k < m steps, then the finite termination property is generally lost and a large class of iterative algorithms is obtained. Convergence conditions have been given by Li (1996). They apply to the whole class, thereby avoiding the need for analysis of special cases. There are several alternative formulations of the linear algebra involved in the ABS class, of which half a dozen have been explored till now, among them factorized forms due to Bodon (1989) and block versions due to Aurel Galantai. These are partly described in Abaffy and Spedicato (1989), and are very important for structured problems and for implementations on parallel or vector computers, see Bertocchi and Spedicato (1990). Here we only consider two versions that use vectors instead of a matrix. In the first formulation, which produces generalized Gram-Schmidt type formulas, we use two sets of vectors for the unscaled class, three for the scaled class, where the number of vectors in each set is increased by one at each iteration; this results in a reduction of memory and overhead if m is sufficiently small compared with n. In the case V = I we have the following formulas, which can be seen as a generalization of the classic Gram-Schmidt method, and define the limited memory formulation:

s_{i+1} = H_1 a_{i+1} − Σ_{j=1}^{i} s_j u_j^T a_{i+1},  s_1 = H_1 a_1
u_{i+1} = H_1^T w_{i+1} − Σ_{j=1}^{i} u_j s_j^T w_{i+1},  u_1 = H_1^T w_1
H_{i+1} = H_1 − S_i U_i^T,  S_i = (s_1,...,s_i),  U_i = (u_1,...,u_i)
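As an illustration of these recursions, here is a sketch for the unscaled class with H_1 = I and the choice z_i = w_i = a_i; the scaling of the stored s_j, u_j pairs (which must make the dyadic sum reproduce the Egerváry recursion) is one possible convention, chosen by us for this illustration:

```python
import numpy as np

def abs_limited_memory(A, b):
    """Gram-Schmidt type ABS formulation (V = I, H_1 = I, z_i = w_i = a_i):
    stores the vectors s_j, u_j instead of the n-by-n Egervarian matrix."""
    m, n = A.shape
    x, S, U = np.zeros(n), [], []
    for i in range(m):
        a = A[i]
        s = a - sum(sj * (uj @ a) for sj, uj in zip(S, U))   # s = H_i a_i
        u = a - sum(uj * (sj @ a) for sj, uj in zip(S, U))   # u = H_i^T a_i
        x = x - ((a @ x - b[i]) / (u @ a)) * u               # p_i = H_i^T z_i = u
        S.append(s / (u @ a))        # scale so that H_{i+1} = I - S_i U_i^T
        U.append(u)
    return x
```

Memory is O(mn) rather than O(n^2), which is what makes this formulation attractive when m is small compared with n.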

This formulation is very convenient for underdetermined systems with few equations, a situation often arising in linearly constrained optimization. Again for this process the convergence has been studied by Li (1996), providing a result for the whole class. Li's analysis is based on the variational properties given above. In another approach we can use, instead of the Egervarian matrix, a set of vectors whose number decreases by one at each iteration. For the case V = I this process is given by the following formulas, for w_j = z_j:

u_j^1 = H_1^T z_j,  j = 1,...,n
u_j^{i+1} = u_j^i − u_i^i (a_i^T u_j^i) / (a_i^T u_i^i),  j = i+1,...,n
p_i = u_i^i

The general solution has the form x̃ = x_{i+1} + U q, with U = (u_{i+1}^{i+1},...,u_n^{i+1}). This approach is convenient when m ≈ n. It is a generalization of a method proposed by Sloboda (1978). If no more than k terms are used in the vector-wise formulation, iterative methods are obtained, including those of Gauss-Seidel, Kaczmarz and De la Garza. Convergence conditions are given by Li (1996), again for whole classes, thereby avoiding the need for analysis of special cases.

7 Perturbation results

Fast recomputation of the Egervarian matrix H_{m+1} = H is an important matter after low rank changes in W or in A, which characterize many problems. Let U, V be n by k < n and R, S be m by k < m full rank matrices and consider perturbations to W and A of the form

W' = W + U V^T,  A H_1^T W' strongly nonsingular
A' = A + R S^T,  A' H_1^T W strongly nonsingular

Notice that the above two cases are related by duality, implying similar dual formulas for the correction. The simplest but important case arises if the correction has rank one, e.g. in the simplex method via its ABS generalizations and in active set methods. In both cases the update of the Egervarian matrices can be performed in order two (i.e., O(n^2)) operations, see Zhang (1995). Formulas are available also for changes in H_1, which characterize e.g. the ABS formulation of the method of Karmarkar (actually a special case of a class of methods proposed by mathematicians of the school of Yuri Evtushenko), and for perturbations in the sequence x_i.

8 The Huang (or implicit Gram-Schmidt) method

We will now see three special methods in the unscaled ABS class; for other methods, also in the scaled class, see the ABS monograph of Abaffy and Spedicato (1989) and several papers since. The first method considered here is the Huang (1975) method, which was the starting point for the development of ABS methods. It is given by the parameter selections H_1 = V = I, z_i = w_i = a_i. The modified form (with reprojection), due to Spedicato, is given by

p_i = H_i (H_i a_i),
H_{i+1} = H_i − p_i p_i^T / (p_i^T p_i).

If x_1 is an arbitrary multiple of a_1, e.g. the zero vector, then x_{m+1} is the solution of least Euclidean norm; moreover, if x_1 is the zero vector, the solution is approached monotonically from below, which is an important regularization property: with x^+ the (unique) least squares solution of the above problem,

‖x_i‖ ≤ ‖x_{i+1}‖,
‖x_{i+1} − x^+‖ ≤ ‖x_i − x^+‖.

The truncated form with k = 1 is the Kaczmarz method, globally converging to the least Euclidean norm solution even for underdetermined problems and problems with rank deficient matrix. Zhang (1995) has shown that, using a Goldfarb-Idnani strategy, the Huang algorithm on a system of linear inequalities either finds a least Euclidean norm solution in finitely many steps or decides there are no solutions. If H_1 = W^{−1}, similar results hold when reformulated in a W-weighted Euclidean norm.
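The modified Huang recursion is short enough to state directly; the following NumPy sketch (with random test data of our own choosing) also checks the least-norm and monotonicity properties quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 7))              # underdetermined, full row rank
b = A @ rng.standard_normal(7)

x, H, norms = np.zeros(7), np.eye(7), []
for a, beta in zip(A, b):
    p = H @ (H @ a)                          # reprojection: p_i = H_i (H_i a_i)
    x = x - ((a @ x - beta) / (p @ a)) * p
    H = H - np.outer(p, p) / (p @ p)         # H_{i+1} = H_i - p_i p_i^T / p_i^T p_i
    norms.append(np.linalg.norm(x))

assert np.allclose(A @ x, b)                                       # solves the system
assert all(n2 >= n1 - 1e-12 for n1, n2 in zip(norms, norms[1:]))   # ||x_i|| nondecreasing
assert np.allclose(x, np.linalg.pinv(A) @ b)                       # least-norm solution
```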

9 The implicit LU algorithm

This method is defined by the choices H_1 = V = I and z_i = w_i = e_i, where e_i is the i-th unit vector. It is well defined iff A is regular, else column pivoting is necessary (or row pivoting if m = n). A basic property of this method is that in the general ABS factorization AP = L the matrix P is upper triangular, so that we have the factorization associated with Gaussian elimination or the LU method. The matrix H_{i+1} has a special block structure given by

H_{i+1} = ( 0    0
            S_i  I_{n−i} )

where the zero block comprises the first i rows.

The overhead is n^3/3 (contrary to the classic case, it cannot be reduced if A = A^T). The intrinsic storage is n^2/4, hence less than in classic LU or Gaussian elimination. If x_1 = 0 then x_{i+1} is a special basic type vector, where the last n − i components are zero. This method is related, in terms of the generated sequence of approximations x_i to the solution, to the Purcell-Motzkin, Fox-Huskey-Wilkinson, Brown and Sloboda methods. However, it uses the Egerváry matrix formulation, which has different roundoff properties, and, especially, it provides an improvement over Gaussian elimination in terms of the intrinsic required storage. The truncated form of this method with k = 1 is just the Gauss-Seidel method.

10 The implicit LX algorithm

Let A ∈ R^{m,n}, m ≤ n, rank(A) = m. Then for any method of the unscaled ABS class we have

H_i a_i ≠ 0

implying that an index k_i exists such that

e_{k_i}^T H_i a_i ≠ 0

The implicit LX algorithm is given by the parameter choices, with e_{k_i} the unit vector of index k_i:

H_1 = I,  z_i = w_i = e_{k_i}.

Some properties of this method are the following:

– it is well defined without conditions on A, a fact not true for Gaussian elimination or the implicit LU algorithm, which is the ABS version of Gaussian elimination
– the k_j-th rows of H_{i+1} are identically zero, for j = 1,...,i
– H_{i+1} e_p = e_p for p ≠ k_1,...,k_i
– the total multiplication number is m^2(n − 2m/3), giving n^3/3 for m = n
– the maximum storage is n^2/4
– if x_1 = 0 then x_{i+1} is a basic type vector with n − i zero components, in positions corresponding to the indices not yet selected (for the implicit LU method they are in the last n − i positions, such fixed positions being the reason why pivoting is necessary unless A is regular)
– it is equivalent to the implicit LU algorithm with column pivoting performed implicitly (the i-th column being exchanged with the k_i-th one).
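A NumPy sketch of the implicit LX choices; the rule used here for picking k_i (largest entry of H_i a_i in modulus among the not-yet-selected indices) is one plausible selection, not necessarily the one used in the experiments quoted later:

```python
import numpy as np

def implicit_lx(A, b):
    """Implicit LX sketch: H_1 = I, z_i = w_i = e_{k_i}."""
    m, n = A.shape
    x, H = np.zeros(n), np.eye(n)
    free = set(range(n))                     # indices not yet selected
    for i in range(m):
        a = A[i]
        h = H @ a                            # H_i a_i, nonzero if rank(A) = m
        k = max(free, key=lambda j: abs(h[j]))
        free.discard(k)
        p = H[k].copy()                      # p_i = H_i^T e_k = k-th row of H_i
        x = x - ((a @ x - b[i]) / h[k]) * p  # p_i^T a_i = e_k^T H_i a_i = h[k]
        H = H - np.outer(h, p) / h[k]        # Egervary update; row k becomes zero
    return x
```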

The implicit LX algorithm, having the same overhead as Gaussian elimination, but needing no pivoting and with a smaller intrinsic storage requirement, can be considered the computationally best known algorithm of the standard type for solving a general system. Indeed, the only competitors are the methods based upon the Strassen technique for multiplying two by two matrices in seven instead of eight multiplications, a result that has been exploited by scholars such as Milvio Capovani, Dario Bini and Viktor Pan. However, such methods may be numerically unstable, while the implicit LX algorithm has been shown to be more stable than the classic LU method. Moreover, the advantage of the reduction in multiplications is counteracted by the increase in additions, a fact that is important on modern computers, where additions and multiplications take about the same time. The implicit LX method has a natural application to the LP problem in standard form, say

min c^T x,  Ax = b,  x ≥ 0

Indeed one can show, see Spedicato et al. (1997), the following, where e with a certain subindex is the unit vector of that subindex:

– x_{i+1} provides a vertex of the polytope defined by the constraint set if the nonzero components are nonnegative.
– The relative cost vector, classically given by r = c − A^T A_B^{−T} c_B, with B defining the basis column set, can be expressed as r = H_{m+1} c.
– The search direction vector, classically given by d_B = −A_B^{−1} A_{N*}, with N* the index of the nonbasis column taken into the basis, can be expressed as d_B = H_{m+1}^T e_{N*}.
– The exchange of the two basis-nonbasis columns corresponds to the substitution of the parameter w_{B*} = e_{B*} with the parameter w_{N*} = e_{N*}, both unit vectors. N* can be chosen by minimizing, with respect to i, y_i = c^T H_{m+1}^T e_i, i ∈ B̃ (the set of indices of the nonbasis columns), but other strategies might be considered, this being an open research topic in the analysis of the simplex method via the ABS approach. B* is usually chosen by maximizing the step along an edge while keeping the basic variables nonnegative, i.e., by minimizing, with respect to the basis column indices i and with x the current solution,

w_i = x^T e_i / (e_i^T H_{m+1}^T e_{N*}),  e_i^T H_{m+1}^T e_{N*} > 0

If all elements of H_{m+1} c are nonnegative, then x is a solution.
– Different formulas can be used for the update of H after parameter exchanges. Direct use of the previously quoted perturbation formulas of Zhang leads to the formula

H' = H − (H e_{B*} − e_{B*}) e_{N*}^T H / (e_{N*}^T H e_{B*}).

A count of the multiplications gives m(n − m), a number less than in the Forrest-Goldfarb method, recommended as the best one, if m > n/2. Using the alternative

representation for H,

H = I − S U^T,  H' = I − S' U'^T,

formulas are obtained with multiplication number 2m^2, hence faster than the previous formula if m < n/3. Hence the choice of the method in terms of complexity depends on the relative values of m and n, a fact also true when we compare ABS methods for special problems, e.g. those with KT matrices, to classical methods.
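A dense NumPy sketch of the Zhang-type rank-one exchange update quoted above; the index convention (row/column numbering of H) is our own, and an implementation exploiting the zero rows of H_{m+1} would achieve the m(n − m) multiplication count:

```python
import numpy as np

def exchange_update(H, b_star, n_star):
    """Update H = H_{m+1} when basis column b_star is exchanged with
    nonbasis column n_star:
    H' = H - (H e_B* - e_B*) e_N*^T H / (e_N*^T H e_B*)."""
    u = H[:, b_star].copy()
    u[b_star] -= 1.0                      # H e_B* - e_B*
    return H - np.outer(u, H[n_star]) / H[n_star, b_star]
```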

11 Numerical experiments

ABS algorithms have been tested extensively on sequential and vector-parallel machines (Alliant, IBM 3090) versus codes from the NAG, LINPACK, LAPACK and MATLAB packages. The ABS codes were written in FORTRAN 77, except those produced for running on the CERFACS Alliant FX80 with 8 processors, where a CERFACS version of FORTRAN 90 was used. No special use was made of cache memories, which at the time of the experiments were just beginning to become important. Thus it would certainly be interesting to retest the algorithms on present computers to see whether the results change essentially. In the testing we considered:

– different formulations of the same algorithm, i.e., matrix or vector formulations, including also the block versions
– versions for special sparse problems without structure or with a structure (banded, generalized Hessenberg, KT matrices)
– the following algorithms: Huang, implicit LU, implicit QR, ABS-Craig, ABS-minimum residual, ABS-Hestenes-Stiefel.

Our extensive results correspond to more than six years of full time work, mainly by Elena Bodon with final checking by prof. Ladislav Luksan of the Czech Academy of Sciences; see for instance, among several published reports, Bodon et al. (2000). The experiments lead to the following conclusions:

– ABS versions are stable, against opinions initially expressed by some leading authorities in numerical analysis, and they are often better than standard methods.
– Total timing is lower on vector-parallel machines, where the megaflops count is better.
– Implicit LU may be much better on some sparse ill conditioned problems, see Tuma (1993), who experimented on the Boeing collection, where the condition number of some matrices reaches 10^18; but implicit LX is usually even better than implicit LU.

In the following, SOL-ER indicates the error in the solution, i.e., the Euclidean norm of the difference between the calculated solution and the true solution (all tested problems were defined so that the true solution is known for the actual data represented in the computer, a fact that we noticed not to be true in many published experiments); SOL-RES gives the Euclidean norm of the final computed residual. In the second table k is the bandsize. The tested ABS algorithm was a version of the implicit LU algorithm. Here we should note that the overhead for banded matrices is better for ABS methods only for sufficiently small bandsize. The total time, however, depends also on the not so easily analyzable time to access data, which is usually less in ABS methods. The first time we noticed that ABS methods could outperform classic codes was when a Bergamo student worked for his dissertation at ISMES, a leading center for designing dams; he noticed that his unsophisticated code for a tridiagonal system ran about a hundred times faster than the IBM code used for years at that center. Other IBM related problems were noticed by Spedicato and Bertocchi working on the IBM 3090 vector-parallel machine at the Rome IBM center. Here we noticed that the machine had hardware problems; it could give wrong results even in dividing two integers, one a multiple of the other and both below the maximum representable size. The problem was signalled, being exactly of the type that led INTEL to withdraw over a million Pentium 1 processors, but as far as we know it did not result in a modification of the IBM arithmetic unit, probably since IBM fell out of the market.

Results on tridiagonal systems

n         Method  Time   Mflops  SOL-ER  SOL-RES
10^4      LAPACK  0.040  2.03    3 E−8   3 E−8
          ABS     0.022  3.90    5 E−8   4 E−8
5 × 10^4  LAPACK  0.197  2.03    4 E−8   3 E−8
          ABS     0.092  4.60    6 E−8   4 E−8
10^5      LAPACK  0.394  2.03    4 E−8   3 E−8
          ABS     0.181  4.70    6 E−8   4 E−8

Results on banded systems

n     k   Method  Time   Mflops  SOL-ER  RES-ER
1000  5   LAPACK  0.213  0.09    2 E−5   4 E−7
          ABS     0.058  5.70    2 E−5   3 E−6
1000  21  LAPACK  0.231  1.08    2 E−3   2 E−6
          ABS     0.223  10.06   2 E−3   1 E−5
8000  5   LAPACK  1.707  0.09    5 E−6   4 E−7
          GAUSS   0.424  6.30    7 E−6   2 E−6
8000  21  LAPACK  1.858  1.08    2 E−4   6 E−6
          GAUSS   1.840  9.84    3 E−3   8 E−6

The following results are part of extensive testing done by Tuma (1993) on the problems of the Boeing collection.

Results on sparse problems from the Boeing set

Problem  n     Condit.  Method  Fill-in  SOL-RES  Time
NNC1     1374  1 E15    MA28    80585    3 E+1    208
                        ABS     157637   5 E−12   1466
NNC2     261   9 E14    MA28    7053     1 E+1    3.7
                        ABS     4851     3 E−13   6.5
WEST     156   1 E31    MA28    410      2 E−08   0.08
                        ABS     156      9 E−10   0.34

The following results were obtained by Bodon (1993) on the 8 processor Alliant computer at CERFACS, Toulouse. Matrices and solutions were generated having random integer entries (the right hand side computed from matrix and solution), and the codes were run in single precision. We see that the accuracies are similar, but the ABS codes are faster, especially the modified Huang algorithm. This result is impressive since such a method has an overhead about 5 times greater than the LU method. It is a consequence of the fact that the megaflops count is better for ABS methods. The reason is that they use rectangular matrices, instead of the triangular matrices of the classic LU or QR methods, which are less favorable for both vector and parallel computing, at least on the considered architectures.

Experiments on the Alliant FX/80 computer

n    Method     SOL-ER  RES-ER  Seconds  Mflops
100  LAPACK     3 E−5   1 E−6   0.20     3.4
     IMPL.LU    9 E−6   1 E−6   0.10     12.3
     MOD.HUANG  4 E−6   2 E−7   0.15     19.7
300  LAPACK     3 E−4   4 E−6   1.9      9.6
     IMPL.LU    2 E−4   2 E−6   0.9      28.0
     MOD.HUANG  2 E−6   3 E−7   1.3      49.7
500  LAPACK     1 E−4   7 E−6   6.2      13.4
     IMPL.LU    9 E−5   5 E−6   3.0      35.1
     MOD.HUANG  1 E−5   3 E−7   4.2      66.4

In the above, reference is made to the following algorithms. LAPACK: SGETF2/SGETRF (with row pivoting); IMPL.LU: block form, block size 50, row pivoting; MOD.HUANG: block form, block size 50. The implicit LX method was tested by Mirmia (1996) on a Digital workstation on 21 families of problems having the following features:

– n up to 800
– entries exactly represented
– treated as full problems

We do not give here the actual results. They show that the implicit LX algorithm is more accurate than the implicit LU, the modified Huang and the Lapack LU solver, especially for the relative solution error defined by η = ‖x̃ − x^+‖/‖b‖. The superiority is more evident on the ill conditioned problems. Over all problems, the implicit LX algorithm in double precision was better in 41, Matlab in 14, and the accuracy difference was less than one percent in 5. In single precision the implicit LX algorithm was better in 25, Lapack in 17, and the accuracy difference was less than one percent in 18. Similar results in testing the implicit LX algorithm were obtained by Bodon and checked by prof. Luksan within the ABSPACK project. Also similar results showed up in testing ABS versus LAPACK for linear least squares. Problems with low numerical rank occur often. Much effort has been devoted to deciding whether the problem is compatible and which equations should be dropped once considered to be dependent. Usually one uses the SVD (Singular Value Decomposition) or the Rank Revealing QR factorization. The decision in the ABS methods is taken in step (B) of the general class defined above via a suitable choice of a tolerance ε in the zero testing for s_i and τ_i. Analysis of the best choice for the tolerance is a research question yet to be addressed, but in practice setting it to a value of the order of the machine zero works well. In fact an industrial problem was solved by Spedicato and Bodon for the Italian Electric Power Industry, where the commercial code used, MA28, failed in cases of numerical rank deficiency. The modified Huang algorithm dealt with it very well, identifying the dependent equations. Thus a problem that had affected the code for twenty years was removed and the corrected code was sold to the French Electrical Industry at over five times the cost of our consultancy over one year. Here we show results by Bodon where the modified Huang algorithm compares well in accuracy with the special LAPACK solvers for this problem, but is much faster for the problems with very low numerical rank. Both ABS and LAPACK codes compute the same numerical rank, or very close ranks.

Problems with low numerical rank

m     n     Cond.  Method      Rank  SOL-ER  RES-ER  Time
950   1050  2 E18  M.HUANG     2     0.      0.      1
                   QR LAPACK   950   2.E+2   6.E−15  17
                   GQR LAPACK  2     2.E−15  5.E−16  26
                   SVD LAPACK  2     2.E−15  9.E−17  178
1000  1000  1 E60  M.HUANG     772   5.E−01  3.E−15  61
                   QR LAPACK   1000  6.E+1   2.E−13  17
                   GQR LAPACK  772   5.E−01  4.E−15  29
                   LU LAPACK   972   1.E+3   1.E+3   7
2000  2000  2 E19  M.HUANG     4     1.E+0   9.E−13  7
                   QR LAPACK   2000  3.E+3   9.E−13  137
                   GQR LAPACK  3     1.E+0   2.E−15  226
                   LU LAPACK   2000  7.E+3   2.E−12  53
1050  950   2 E20  M.HUANG     2     1.E+03  2.E−10  0
                   QR LAPACK   950   4.E+12  8.E−03  17
                   GQR LAPACK  2     1.E+01  2.E−15  27
                   SVD LAPACK  2     1.E+01  2.E−15  145


The results presented in this section, based on a few years of testing work, certainly need more work, especially on structured problems, but show anyway the potential offered by ABS methods in terms of better accuracy and less time. Certainly this is not true in every case, and we suspect that for some classes of problems ABS methods are better, while for others standard methods are better. The better accuracy is due partly to the easily implementable reprojection technique that we described for the modified Huang algorithm, which can be applied to most other methods (but not to the implicit LU or LX methods, where the property H^2 = H is true independently of any roundoff error made). It is partly due to the use of the Egerváry recursion, whose analysis in terms of roundoff has been done only preliminarily by Aurel Galantai and Zunquan Xia, and by Laurence Dixon for the implicit LU method. The analysis of this problem is as usual a very hard task; just recall what Wilkinson did for some classic methods.

12 ABS solution of linear Diophantine systems

One of the most important of the 23 problems that Hilbert proposed at the Paris International Mathematics Conference in 1900 is the 10th problem, which generalizes a problem considered for some 2000 years: find a method that determines if a given polynomial Diophantine system is solvable. Hilbert's 10th problem is solvable in the linear case, a result proved over a hundred years ago, while for the general case Matiyasevich (1970) proved that it is unsolvable, thereby providing a case where the power of mathematics is bounded. So here we consider solving a linear Diophantine or integer, determined or underdetermined system. We look for a special solution and for the representation of all solutions if there is more than one. The system is stated, with Z^n the space of n-dimensional integer vectors, as:

Ax = b,  b ∈ Z^m,  A ∈ Z^{m,n},  x ∈ Z^n.

Here we recall some results from the literature, noting that the total number of papers in this area appeared in our literature search to be extremely small. We could not find more than twenty at the time we solved this historical problem by the ABS method, some ten years before writing this review. Now a similar number has been produced within the ABS framework, mainly by the Iranian researchers led by Nezam Mahdavi-Amiri. The above problem had at that time been solved for the following cases, to our knowledge:

– m = 1, n = 2: by Diophantus and Euclid, possibly also by Chinese and Indian mathematicians
– m = 1, general n: by Bertrand and Betti, circa 1850
– general m ≤ n:
  – via reduction to Hermite form
  – via congruence techniques
  – by Egerváry via his rank reduction process, only for the homogeneous case, whose presentation by Emilio Spedicato at his first visit to Iran led to the present ABS results.

Our solution has been obtained via the ABS algorithm, being a generalization of the result of Egerváry. It should be noted that Rosser, whose result is used in the ABS procedure, was a student of Goedel, hence it is an interesting historical question whether Goedel himself paid attention to this problem. Also we note that Rosser's algorithm, developed for a single equation, has been recently extended by Mahdavi-Amiri and Khorramizadeh (2008) to a whole system, where it becomes a special case of the ABS class for Diophantine problems.

The theorem of Euler A fundamental result for a single linear equation is due to Euler. Consider the single linear equation in n variables:

a^T x = η,  a, x ∈ Z^n

and let δ = gcd(a) ≥ 1 be the greatest common divisor of the components of a. Then the equation is solvable in integers iff δ divides η, and there are infinitely many solutions if n > 1. A special solution and a representation of all solutions of such a single equation can be obtained via the following methods:

– the Bertrand–Betti (1850) method
– the Barnett–Mendel (1942) method
– the Rosser (1941) method.

ABS methods for solving a Diophantine system were first given by Esmaeili et al. (2001), and essentially independently by Fodor (2001). They have the following properties:

– They define a class of methods for computing a solution via a sequence of integer iterates H_i, p_i, α_i, x_i.
– The fundamental Euler theorem is generalized via the condition that gcd(H_i a_i) must be a divisor of a_i^T x_i − b_i.
– A representation of all solutions is provided via the formula

x = x_{m+1} + H_{m+1}^T q,  q ∈ Z^n
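For the single-equation case, a particular solution satisfying Euler's condition can be produced with the extended Euclidean algorithm; the following Python sketch uses a plain recursive gcd accumulation (a Rosser-style reduction would in addition keep the intermediate integers small; function names are ours):

```python
def ext_gcd(a, b):
    """Return (g, u, v) with u*a + v*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

def solve_single(a, eta):
    """Particular integer solution of a_1 x_1 + ... + a_n x_n = eta,
    or None if gcd(a) does not divide eta (Euler's theorem)."""
    delta, x = a[0], [1]
    for a_i in a[1:]:
        delta, u, v = ext_gcd(delta, a_i)  # u*delta_old + v*a_i = delta_new
        x = [u * xj for xj in x] + [v]
    if delta < 0:                          # normalize the sign of the gcd
        delta, x = -delta, [-xj for xj in x]
    if eta % delta:
        return None
    return [xj * (eta // delta) for xj in x]

# e.g. solve_single([6, 10, 15], 1) returns [16, -8, -1]: 96 - 80 - 15 = 1
```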

The basic ABS algorithm for a Diophantine linear system A sequence of integer Egervarian matrices and search vectors is obtained by the parameter choices

H_1 ∈ Z^{n,n} unimodular (|det H_1| = 1)

and z_i, w_i ∈ Z^n such that

z_i^T H_i a_i = w_i^T H_i a_i = δ_i = gcd(H_i a_i)

implying that

H_{i+1} = H_i − H_i a_i w_i^T H_i / (w_i^T H_i a_i)

is integer, as well as p_i. If x_1 is integer, from the above choices x_{i+1} is integer if

α_i = (a_i^T x_i − b_i) / (p_i^T a_i)

is integer. The generalization of the Euler theorem is: the i-th equation is solvable in integers iff δ_i = p_i^T a_i divides a_i^T x_i − b_i. Notice that z_i, w_i such that z_i^T H_i a_i = w_i^T H_i a_i = δ_i always exist because of the Euler theorem and can be computed in polynomial time by Rosser's algorithm. Since without loss of generality we can set z_i = w_i, at most m single linear Diophantine equations have to be solved using Rosser's algorithm.

Further topics

– The extension to the scaled ABS class has been provided by Mahdavi-Amiri and Khorramizadeh (2007).
– Modification of the scaled ABS class to reduce the number of single Diophantine equations to be solved with Rosser's algorithm from one per equation to just one overall, by transforming the system into one where only the last equation has a nonzero right hand side.
– Analysis of the growth of the generated integers (work in progress by Fodor).
– Parameter selection for special solutions, including the least norm solution, whose computation in polynomial time is an open problem, and the determination of all positive solutions (the Frobenius problem, unsolved for n > 3).
– Application to special problems, e.g. the zero–one problem.
– Testing versus "classical" methods and software production (work also here in progress by Fodor, using JAVA).

13 Unifying feasible direction methods for linearly constrained optimization

Now we briefly consider some applications of ABS methods to optimization. Let us consider the problem:

min f(x) subject to Ax = b,  x ∈ R^n,  A ∈ R^{m,n},  rank(A) = m,  b ∈ R^m,  m < n.

Let x_1 be a feasible estimate of the solution; then a feasible direction d for an iteration procedure of the form

x' = x − αd

satisfies the condition

Ad = 0

giving the general solution in the ABS form

d = H_{m+1}^T q

Thus we get a general feasible iteration of the form

x' = x − α H_{m+1}^T q

The descent condition

0 < d^T ∇f(x) = q^T H_{m+1} ∇f(x)

can always be satisfied by some choice of q, unless we have

H_{m+1} ∇f(x) = 0

implying

∇f(x) = A^T λ

meaning that x is a Kuhn-Tucker point. Descent directions are obtained if we take q in the following form, with W an arbitrary SPD matrix:

q = W H_{m+1} ∇f(x),  W = W^T > 0

We conjecture that all feasible descent direction methods belong to the following iteration

x' = x − α H_{m+1}^T W H_{m+1} ∇f(x)
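A NumPy sketch of one step of this iteration; here, instead of running the ABS recursion, we build the matrix playing the role of H_{m+1} as the orthogonal projector onto Null(A) (the Huang-type choice), which is an assumption of this illustration rather than the paper's prescription:

```python
import numpy as np

def feasible_descent_step(A, x, grad, W=None, alpha=1.0):
    """One step x' = x - alpha * H^T W H grad with rows of H spanning Null(A)."""
    n = A.shape[1]
    _, sv, Vt = np.linalg.svd(A)
    N = Vt[np.count_nonzero(sv > 1e-12 * sv[0]):]  # orthonormal basis of Null(A)
    H = N.T @ N                                    # projector: H = H^T, A H = 0
    W = np.eye(n) if W is None else W
    d = H.T @ (W @ (H @ grad))     # feasible: A d = 0; descent unless H grad = 0
    return x - alpha * d
```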

Some special cases for W = I are the following, see Xia (1994) or Xia, Zhang and Feng (1999):

– H_{m+1} computed by the implicit LU method gives the Wolfe (1967) reduced gradient method
– H_{m+1} computed by the Huang method gives the Rosen (1960) gradient projection method
– H_{m+1} computed by the choices H_1 = G_k^{−1}, z_i = w_i = a_i (characterizing a method of Xia for quadratic programming) gives the Goldfarb-Idnani (1983) method, where G_k is the SPD matrix appearing at step k in the Goldfarb-Idnani method

In case inequalities are present, of the following form, with D an m by n matrix and d an m-vector,

Dx ≤ d

two approaches are possible:

(I) via active set strategies; then we can update H_{m+1} in O(n^2), O(m^2) or O(mn) operations when one constraint is dropped or added, using the Zhang perturbation formula. Preliminary trials of such an approach have shown that ABS methods may be faster.
(II) via the standard form approach, where the problem is formulated as

min f(x),  Ax = b,  x ≥ 0

Assuming that x_1 is feasible, we add a relaxation parameter η to the general feasible iteration:

x' = x − ηα H_{m+1} ∇f(x)

where α is the ideal stepsize, minimizing the function along the search direction, and η with 0 < η ≤ 1 is such that x' > 0 if x > 0. If f(x) is nonlinear, then the ABS parameters in H_{m+1} may be kept constant. They must be changed if f is linear, say f(x) = c^T x, since the gradient becomes constant. By such a procedure we obtain an infinity of ABS methods for the LP problem in standard form, once positivity conditions are added to x. The iteration then has the form

x' = x − ηα H_{m+1} c

The simplex method is obtained by using the implicit LX algorithm and changing the parameters w_i (or a block of parameters w_i). The so-called Karmarkar method follows by using the method of Xia, a variation of the Huang method where the initial matrix is not the unit matrix but G = Diag(|x_i|). See Spedicato et al. (1997) and Spedicato and Xia (1999) for the relation of the ABS formulation of the simplex method with one proposed by Fletcher at the Cambridge conference for Powell's sixtieth birthday. The above quoted results are quite meaningful but have been developed essentially at only a theoretical level. However, the ABS formulation of the simplex method had preliminary testing versus the above quoted Fletcher's new method; the results were quite good in accuracy terms, see Spedicato and Bodon (2002). Generally speaking, testing and software development of ABS methods for constrained optimization should be considered a main research area for ABS methods in the next years.

14 Application to Quasi-Newton methods

Since the quasi-Newton equation can be seen as a linear system of n equations in n^2 variables (i.e. the elements of the matrix approximating the inverse Jacobian or Hessian), ABS methods can provide its general solution. Notice that only the special solution where the new matrix is given as a rank one or rank two correction was considered before in the literature, especially in the frame developed by Charles Broyden for quasi-Newton methods for nonlinear systems and unconstrained optimization. Define the system (quasi-Newton equation) by

d^T B' = y^T

where d = x' − x, y = g' − g, B' = (b'_1,...,b'_n), so that d^T b'_j = y_j. We may also consider extra conditions of the form

b'_i^T e_k = η_ik

with

η_ik = 0 ⇒ sparsity
η_ik = const ⇒ fixed elements
η_ik = η_ki, i > k ⇒ symmetry.

If we consider only the quasi-Newton equation, then the general solution can be written in a form already considered by Adachi (1971), who used the theory of generalized inverses:

B' = B − W d (B^T d − y)^T / (d^T W d) + (I − W d d^T / (d^T W d)) Q

where Q is arbitrary and W is an arbitrary symmetric positive definite matrix. Solutions of the multiple and weak quasi-Newton equations have been given by Zhang and Zhu (1995). The general solution of the quasi-Newton equation with symmetry and sparsity conditions has been given by Spedicato and Zhao (1993) in closed form, a result never before obtained. They showed that quasi SPD solutions can be obtained, in the sense that at most one eigenvalue is nonpositive. Also quasi-diagonally dominant updates exist, see Deng, Li and Spedicato (1996). Again we note that the above papers are only theoretical, hence numerical experiments should be performed. Here we also mention very briefly the existence of ABS methods for solving nonlinear algebraic equations, documented in some thirty papers due mainly to Aurel Galantai, Zhijian Huang, Deng Naiyang and Zunquan Xia. Also for these methods, which provide in several cases an improvement over Newton's method, the published work is mainly theoretical; hence here is another important field of work for ABS methods.
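A quick NumPy check (with arbitrary data of our own) that the Adachi form indeed satisfies the quasi-Newton equation d^T B' = y^T for any Q and SPD W:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
d, y = rng.standard_normal(n), rng.standard_normal(n)
Q = rng.standard_normal((n, n))       # arbitrary matrix
W = np.eye(n)                         # any SPD matrix works here

Wd = W @ d
B_new = (B
         - np.outer(Wd, B.T @ d - y) / (d @ Wd)
         + (np.eye(n) - np.outer(Wd, d) / (d @ Wd)) @ Q)

assert np.allclose(d @ B_new, y)      # d^T B' = y^T holds for any Q
```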

15 Application to interior point methods

Interior point methods have been developed over some thirty years as an efficient alternative to the simplex method for LP problems, and even for more general problems. The number of published papers is over 3000, almost 5000 according to some estimates. They have a non trivial generalization to the so-called semidefinite programming methods, which allow the solution of classes of nonlinear problems and have even led to a characterization of the open problem P = NP? The best interior point method seems to be the one based on the primal-dual approach. Here the path to the solution is approached by solving via Newton's method the nonlinear system provided by the optimality conditions. Each step of Newton's method is characterized by the following linear system, where A is usually a sparse m by n matrix, with m < n:

A D^{−2} A^T x = b,

where the i-th element D_ii = μ_i/x_i of the diagonal matrix D tends, close to the solution, to either zero or infinity (here μ_i is a Lagrange multiplier). Thus a very ill conditioned coefficient matrix arises and convergence, theoretically global and with quadratic rate, slows down. Writing the system as BB^T x = b with B = AD^{−1}, we get a system of normal equations of the second kind. Following the idea used in the ABS frame to solve the normal equations of the first kind arising in linear least squares, see Spedicato (1985), we write it as the following equivalent extended system, where only B appears, not the product BB^T:

B^T x = y,  By = b

The first subsystem in the above is overdetermined and must be solvable, hence y must lie in the column space of B^T, i.e., in the row space of B. Now there is a unique solution of an underdetermined system that lies in the row space, and that is the solution of least Euclidean norm, computable by the Huang algorithm started with the zero vector. For such a solution the first subsystem is solvable, with n − m redundant equations (if A is full rank) that are identified and removed in step (B) of the ABS algorithm. Thereby our approach provides:

– substantial reduction of the condition number, essentially to its square root. Preliminary experiments, see Bonomi et al. (2007), have shown that using the above extended system on certain ill conditioned B matrices gives a residual smaller by up to 16 orders of magnitude in Euclidean norm than by using, as usual, the Choleski method.
– reduction of the total work, since the search vectors used to solve the first subsystem are independent of the scaling matrix, hence they may be computed only once at the first iteration. Such a fact is typical of nonlinear systems solved by ABS methods where part of the equations are linear.

Our approach preserves sparsity in A. How much such sparsity may reduce the work for computing the search directions is presently a research question.
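A minimal dense sketch of the extended-system solve; the paper computes the least-norm y with the Huang algorithm, while here we use np.linalg.lstsq, which also returns the least-Euclidean-norm solution of the underdetermined subsystem, so y lies in the row space of B as required:

```python
import numpy as np

def normal_equations_second_kind(B, b):
    """Solve B B^T x = b via the extended system B^T x = y, B y = b,
    never forming the product B B^T."""
    y = np.linalg.lstsq(B, b, rcond=None)[0]    # least-norm y with B y = b
    x = np.linalg.lstsq(B.T, y, rcond=None)[0]  # overdetermined but consistent
    return x

# sanity check on random data:
# B = np.random.rand(4, 9); b = np.random.rand(4)
# x = normal_equations_second_kind(B, b); assert np.allclose(B @ (B.T @ x), b)
```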

References

Abaffy J (1979) A lineáris egyenletrendszerek általános megoldásának egy módszerosztálya. Alkalmazott Matematikai Lapok 5:223–240
Abaffy J (1984) A generalization of the ABS class. In: Proceedings of the 2nd conference on automata, languages and mathematical systems. University of Economics, Budapest, pp 7–11
Abaffy J, Broyden C, Spedicato E (1984) A class of direct methods for linear equations. Numerische Mathematik 45:361–376
Abaffy J, Spedicato E (1984) A generalization of Huang's method for solving systems of linear algebraic equations. Bollettino Unione Matematica Italiana 3-B:517–529
Abaffy J, Spedicato E (1989) ABS projection algorithms, mathematical techniques for linear and nonlinear equations. Ellis Horwood, Chichester
Abaffy J, Spedicato E (1990) A class of scaled direct methods for linear systems. Ann Inst Stat Math 42:187–201
Mahdavi-Amiri N, Khorramizadeh M (2007) Integer extended ABS algorithm and possible control of intermediate results for linear Diophantine systems. Report DOI 10, Sharif University of Technology

Mahdavi-Amiri N, Khorramizadeh M (2008) On solving linear Diophantine systems using generalized Rosser's algorithm. Bull Iran Math Soc 34:1–25
Bertocchi M, Spedicato E (1990) Block ABS algorithms for dense linear systems in a vector processor environment. In: Proceedings of the conference on supercomputing, Pisa, December 1989. Franco Angeli, pp 39–46
Bodon E (1989) Biconjugate algorithms in the ABS class I: alternative formulation and theoretical properties. Report DMSIA 3/89, University of Bergamo
Bodon E (1993) Numerical experiments with ABS algorithms on banded systems of linear equations. Report TR/PA/93/13, CERFACS, Toulouse
Bodon E, Luksan L, Spedicato E (2000) Computational experiments with linear ABS algorithms. Report 817, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague
Bonomi M, Del Popolo A, Spedicato E (2007) ABS solution of normal equations of second kind and application to the primal-dual interior point method for linear programming. Iran J Oper Res 1:28–34
Broyden C (1985) On the numerical stability of Huang and related methods. J Optim Theory Appl 47:401–412
Egerváry E (1955–1956) Auflösung eines homogenen linearen diophantischen Gleichungssystems mit Hilfe von Projektormatrizen. Publ Math Debrecen 4:481–483
Esmaeili H, Mahdavi-Amiri N, Spedicato E (2001) A class of ABS algorithms for Diophantine linear systems. Numerische Mathematik 90:101–115
Fodor S (2001) Symmetric and nonsymmetric ABS methods for solving Diophantine systems of equations. Ann Oper Res 103:291–314
Galántai A (2003) Projectors and projection methods. Kluwer, Dordrecht
Galántai A, Spedicato E (2007) ABS methods for nonlinear systems of algebraic equations. Iran J Oper Res 1:50–73
Huang HY (1975) A direct method for the general solution of a system of linear equations. J Optim Theory Appl 16:429–445
Li ZF (1996) Restarted and truncated versions of ABS methods for large linear systems: a basic framework. Report DMSIA 8/96, Department of Mathematics, University of Bergamo
Matiyasevich Y (1970) Enumerable sets are Diophantine. Dokl Akad Nauk SSSR 191:279–282
Mirmia K (1996) Iterative refinement in ABS methods. Report DMSIA 32/96, University of Bergamo
Sloboda F (1978) A parallel projection method for linear algebraic systems. Apl Mat Ceskosl Akad Ved 23:185–198
Spedicato E (1985) On the solution of linear least squares through the ABS class for linear systems. Proceedings Giornate di Lavoro AIRO
Spedicato E, Bodon E (2002) Numerical experiments with the simplex method for the LP problem based upon Fletcher's factors update and the ABS implicit LU and LX algorithms. Report DMSIA 02/6, University of Bergamo
Spedicato E, Xia Z (1999) ABS formulation of Fletcher implicit LU method for factorizing a matrix and solving linear equations and LP problems. Report DMSIA 1/99, University of Bergamo
Spedicato E, Zhao JX (1993) Explicit general solution of the quasi-Newton equation with sparsity and symmetry. Optim Methods Softw 2:311–319
Spedicato E, Xia ZQ, Zhang LW (1997) The implicit LX method of the ABS class. Optim Methods Softw 8:99–110
Spedicato E, Bodon E, Del Popolo A, Xia ZQ (2000) ABS algorithms for linear systems and optimization: a review and bibliography. Ricerca Operativa 29(89):39–88
Tuma M (1993) Solving sparse unsymmetric sets of linear equations based on the implicit Gauss projection. Report 556, Institute of Computer Science, Academy of Sciences, Prague
Xia ZQ (1994) An efficient ABS algorithm for linearly constrained optimization. Report DMSIA 15/94, University of Bergamo
Zhang LW (1995) An algorithm for the least Euclidean norm solution of a linear system of inequalities via the Huang ABS algorithm and the Goldfarb-Idnani strategy. Report DMSIA 2/95, University of Bergamo
Zhang ZW, Zhu M (1995) Solving a least change problem under the weak secant condition by an ABS approach. Report MA 95-20, City University, Hong Kong
