Institute of Computer Science
Academy of Sciences of the Czech Republic

Efficient tridiagonal preconditioner for the matrix-free truncated Newton method

Ladislav Lukšan, Jan Vlček¹

Technical report No. 1177
January 2013

Pod Vodárenskou věží 2, 182 07 Prague 8
phone: +420 2 688 42 44, fax: +420 2 858 57 89, e-mail: [email protected]

Abstract:

In this paper, we study an efficient tridiagonal preconditioner, based on numerical differentiation, applied to the matrix-free truncated Newton method for unconstrained optimization. It is proved that this preconditioner is positive definite for many practical problems. The efficiency of the resulting matrix-free truncated Newton method is demonstrated by the results of extensive numerical experiments.

Keywords:

¹ This work was supported by the Institute of Computer Science of the AS CR (RVO:67985807).

1 Introduction

We consider the unconstrained minimization problem
\[
x^* = \arg\min_{x \in R^n} F(x),
\]
where the function $F : D(F) \subset R^n \to R$ is twice continuously differentiable and $n$ is large. We use the notation $g(x) = \nabla F(x)$, $G(x) = \nabla^2 F(x)$ and the assumption that $\|G(x)\| \le \overline{G}$ for all $x \in D(F)$.

Numerical methods for unconstrained minimization are usually iterative and their iteration step has the form
\[
x_{k+1} = x_k + \alpha_k d_k, \quad k \in N,
\]
where $d_k$ is a direction vector and $\alpha_k$ is a step-length. In this paper, we deal with the Newton method, which uses the quadratic model
\[
F(x_k + d) \approx Q(x_k + d) = F(x_k) + g^T(x_k)\, d + \frac{1}{2}\, d^T G(x_k)\, d \tag{1}
\]
for direction determination in such a way that
\[
d_k = \arg\min_{d \in M_k} Q(x_k + d). \tag{2}
\]
There are two basic possibilities for direction determination: the line-search method, where $M_k = R^n$, and the trust-region method, where $M_k = \{d \in R^n : \|d\| \le \Delta_k\}$ (here $\Delta_k > 0$ is the trust-region radius). Properties of line-search and trust-region methods, together with comments concerning their implementation, are exhaustively covered in [3], [19], so no more details are given here.

In this paper, we assume that neither the matrix $G_k = G(x_k)$ nor its sparsity pattern is explicitly known. In this case, direct methods based on Gaussian elimination cannot be used, so the direction vector (2) has to be computed iteratively. There are many iterative methods that make use of the symmetry of the Hessian matrix, see [23]. Some of them, e.g. [7], [8], [21], allow us to consider indefinite Hessian matrices. Even though these methods are of theoretical interest and lead to nontraditional preconditioners, see [9] and [10], we confine our attention to modifications of the conjugate gradient method [24], [25], [26], which are simple and very efficient (also in the indefinite case). We studied and tested both the line-search and the trust-region approaches, but the second approach did not give significantly better results than the first one. Therefore, we restrict our attention to the line-search implementation of the truncated Newton method.

Since the matrix $G(x)$ is not given explicitly, we use numerical differentiation instead of matrix multiplication. Thus the product $G(x)p$ is replaced by the difference
\[
G(x)p \approx \frac{g(x + \delta p) - g(x)}{\delta}, \tag{3}
\]
where $\delta = \varepsilon / \|p\|$ (usually $\varepsilon \approx \sqrt{\varepsilon_M}$, where $\varepsilon_M$ is the machine precision). The resulting method is called the truncated Newton method. This method has been theoretically studied in many papers, see [4], [5], [17], [20].
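For illustration, the following is a minimal NumPy sketch of the matrix-free Hessian-vector product (3). The gradient callback `grad` and the function name are assumptions made for this example; they are not part of the report.

```python
import numpy as np

def hess_vec(grad, x, p, eps=None):
    """Approximate the product G(x) p by the gradient difference (3).

    `grad` is an assumed user-supplied callback returning g(x).
    """
    if eps is None:
        eps = np.sqrt(np.finfo(float).eps)   # usual choice: eps ~ sqrt(machine precision)
    delta = eps / np.linalg.norm(p)          # delta = eps / ||p||
    return (grad(x + delta * p) - grad(x)) / delta
```

Each such product thus costs one extra gradient evaluation (the gradient at $x$ is already available).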
The following theorem, which easily follows from the mean value theorem, confirms the choice (3).

Theorem 1 Let the function $F : R^n \to R$ have Lipschitz continuous second-order derivatives (with Lipschitz constant $L$). Let $q = G(x)p$ and
\[
\tilde{q} = \frac{g(x + \delta p) - g(x)}{\delta}, \qquad \delta = \frac{\varepsilon}{\|p\|}.
\]
Then it holds that
\[
\|\tilde{q} - q\| \le \frac{1}{2}\, \varepsilon L \|p\|.
\]

To make the subsequent investigations clear, we briefly describe the preconditioned conjugate gradient (PCG) subalgorithm proposed in [24], where matrix multiplications are replaced by gradient differences (the outer index $k$ is omitted for the sake of simplicity).

Truncated Newton PCG subalgorithm:
\[
d_1 = 0, \quad g_1 = g, \quad h_1 = C^{-1} g_1, \quad \rho_1 = g_1^T h_1, \quad p_1 = -h_1.
\]
Do $i = 1$ to $n + 3$
\[
\delta_i = \varepsilon / \|p_i\|, \quad \tilde{q}_i = (g(x + \delta_i p_i) - g(x)) / \delta_i, \quad \sigma_i = p_i^T \tilde{q}_i.
\]
If $\sigma_i < \varepsilon \|p_i\|^2$, then $d = d_i$; stop.
\[
\alpha_i = \rho_i / \sigma_i, \quad d_{i+1} = d_i + \alpha_i p_i, \quad g_{i+1} = g_i + \alpha_i \tilde{q}_i, \quad h_{i+1} = C^{-1} g_{i+1}, \quad \rho_{i+1} = g_{i+1}^T h_{i+1}.
\]
If $\|g_{i+1}\| \le \omega \|g_1\|$ or $i = m$, then $d = d_{i+1}$; stop.
\[
\beta_i = \rho_{i+1} / \rho_i, \quad p_{i+1} = -h_{i+1} + \beta_i p_i.
\]
End do
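A compact NumPy sketch of this subalgorithm follows. It mirrors the steps above under stated assumptions: `grad` is the gradient callback, `apply_Cinv` applies $C^{-1}$, and all names and default parameter values are illustrative rather than the authors' implementation.

```python
import numpy as np

def truncated_newton_pcg(grad, x, apply_Cinv, omega=0.5, m=None, eps=None):
    """Matrix-free PCG for the Newton direction (sketch of the subalgorithm above).

    Hessian-vector products are replaced by the gradient difference (3);
    `grad`, `apply_Cinv`, `omega`, and `m` are illustrative assumptions.
    """
    n = x.size
    m = n if m is None else m
    eps = np.sqrt(np.finfo(float).eps) if eps is None else eps
    g1 = grad(x)                             # g_1 = g(x), reused in the differences
    d = np.zeros(n)
    g = g1.copy()
    h = apply_Cinv(g)
    rho = g @ h
    p = -h
    for i in range(1, n + 4):                # Do i = 1 to n + 3
        delta = eps / np.linalg.norm(p)
        q = (grad(x + delta * p) - g1) / delta   # q~_i by (3)
        sigma = p @ q
        if sigma < eps * (p @ p):            # small/negative curvature: d = d_i
            return d
        alpha = rho / sigma
        d = d + alpha * p                    # d_{i+1}
        g = g + alpha * q                    # recurred residual g_{i+1}
        h = apply_Cinv(g)
        rho_new = g @ h
        if np.linalg.norm(g) <= omega * np.linalg.norm(g1) or i == m:
            return d                         # d = d_{i+1}
        beta = rho_new / rho
        p = -h + beta * p
        rho = rho_new
    return d
```

Note that each inner iteration costs one gradient evaluation, which is why the number of inner iterations dominates the cost of the method.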
A disadvantage of the truncated Newton PCG subalgorithm with $C = I$ (unpreconditioned) is that it requires a large number of inner iterations (i.e., a large number of gradient evaluations) if the matrix $G = G(x)$ is ill-conditioned. Thus a suitable preconditioner should be used. Unfortunately, the sparsity pattern of $G$ is not known, so the standard preconditioning methods requiring knowledge of the sparsity pattern (e.g., methods based on the incomplete Choleski decomposition) cannot be chosen. There are various ways of building positive definite preconditioners that can be utilized in the truncated Newton PCG subalgorithm:

• Preconditioners based on limited-memory BFGS updates. This very straightforward approach is studied in [12] and [16].

• Band preconditioners obtained by the standard BFGS method equivalent to the preconditioned conjugate gradient method. This approach is described in [18], where it is used for building diagonal preconditioners. More general band preconditioners of this type are studied in [14].

• Band preconditioners obtained by numerical differentiation. This approach is used in [22] for building diagonal preconditioners. More general band preconditioners of this type are studied in [14].

• Preconditioners determined by the Lanczos method equivalent to the conjugate gradient method. This approach is studied in [9], [10] and [17].

In this paper, we present new results concerning tridiagonal preconditioners obtained by numerical differentiation and show that they are very efficient in connection with the truncated Newton method. This efficiency can be observed from the tables and figures introduced in Section 3, where our two implementations of the tridiagonally preconditioned truncated Newton method are compared with the unpreconditioned method and with the method preconditioned by limited-memory BFGS updates.

2 Tridiagonal preconditioners based on numerical differentiation

If the Hessian matrix is tridiagonal, its elements can be simply approximated by numerical differentiation. If the Hessian matrix is not tridiagonal, we can use this process to determine a suitable tridiagonal preconditioner. Numerical differentiation is performed only once, at the beginning of each outer step of the Newton method.

In order to determine all elements of a tridiagonal Hessian matrix, it suffices to use two gradient differences $g(x + \varepsilon v_1) - g(x)$ and $g(x + \varepsilon v_2) - g(x)$, where $v_1 = [\delta_1, 0, \delta_3, 0, \delta_5, 0, \ldots]^T$, $v_2 = [0, \delta_2, 0, \delta_4, 0, \delta_6, \ldots]^T$ and $\varepsilon > 0$ (usually $\varepsilon \approx \sqrt{\varepsilon_M}$), which means computing two extra gradients during each outer step of the Newton method. The differences $\delta_i$, $1 \le i \le n$, can be chosen in two different ways:

(1) We set $\delta_i = \delta$, $1 \le i \le n$, where $\delta > 0$ (usually $\delta \approx \sqrt{2/n}$).

(2) We set $\delta_i = \max(|x_i|, 1)$, $1 \le i \le n$.

Theorem 2 Let the Hessian matrix of the function $F : R^n \to R$ be tridiagonal of the form
\[
T = \begin{bmatrix}
\alpha_1 & \beta_1 & \ldots & 0 & 0 \\
\beta_1 & \alpha_2 & \ldots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \ldots & \alpha_{n-1} & \beta_{n-1} \\
0 & 0 & \ldots & \beta_{n-1} & \alpha_n
\end{bmatrix}. \tag{4}
\]
Set $v_1 = [\delta_1, 0, \delta_3, 0, \delta_5, 0, \ldots]^T$, $v_2 = [0, \delta_2, 0, \delta_4, 0, \delta_6, \ldots]^T$, where $\delta_i > 0$, $1 \le i \le n$. Then
\[
\alpha_1 = \lim_{\varepsilon \to 0} \frac{g_1(x + \varepsilon v_1) - g_1(x)}{\varepsilon \delta_1}, \qquad
\beta_1 = \lim_{\varepsilon \to 0} \frac{g_1(x + \varepsilon v_2) - g_1(x)}{\varepsilon \delta_2},
\]
for $2 \le i \le n - 1$
\[
\alpha_i = \lim_{\varepsilon \to 0} \frac{g_i(x + \varepsilon v_1) - g_i(x)}{\varepsilon \delta_i}, \qquad
\beta_i = \lim_{\varepsilon \to 0} \frac{g_i(x + \varepsilon v_2) - g_i(x)}{\varepsilon \delta_{i+1}} - \beta_{i-1} \frac{\delta_{i-1}}{\delta_{i+1}}, \qquad \mathrm{mod}(i,2) = 1,
\]
\[
\alpha_i = \lim_{\varepsilon \to 0} \frac{g_i(x + \varepsilon v_2) - g_i(x)}{\varepsilon \delta_i}, \qquad
\beta_i = \lim_{\varepsilon \to 0} \frac{g_i(x + \varepsilon v_1) - g_i(x)}{\varepsilon \delta_{i+1}} - \beta_{i-1} \frac{\delta_{i-1}}{\delta_{i+1}}, \qquad \mathrm{mod}(i,2) = 0,
\]
and
\[
\alpha_n = \lim_{\varepsilon \to 0} \frac{g_n(x + \varepsilon v_1) - g_n(x)}{\varepsilon \delta_n}, \quad \mathrm{mod}(n,2) = 1, \qquad
\alpha_n = \lim_{\varepsilon \to 0} \frac{g_n(x + \varepsilon v_2) - g_n(x)}{\varepsilon \delta_n}, \quad \mathrm{mod}(n,2) = 0.
\]

Proof Theorem 1 implies that $g(x + \varepsilon v_1) - g(x) = \varepsilon G(x) v_1 + o(\varepsilon)$ and $g(x + \varepsilon v_2) - g(x) = \varepsilon G(x) v_2 + o(\varepsilon)$, so after substituting $G(x) = T$, where $T$ is the tridiagonal matrix of the form (4), and rearranging the individual elements we obtain
\[
\frac{g_1(x + \varepsilon v_1) - g_1(x)}{\varepsilon \delta_1} = \alpha_1 + o(1), \qquad
\frac{g_1(x + \varepsilon v_2) - g_1(x)}{\varepsilon \delta_2} = \beta_1 + o(1),
\]
\[
\frac{g_i(x + \varepsilon v_1) - g_i(x)}{\varepsilon \delta_i} = \alpha_i + o(1), \qquad
\frac{g_i(x + \varepsilon v_2) - g_i(x)}{\varepsilon \delta_{i+1}} = \beta_i + \beta_{i-1} \frac{\delta_{i-1}}{\delta_{i+1}} + o(1), \qquad \mathrm{mod}(i,2) = 1,
\]
\[
\frac{g_i(x + \varepsilon v_2) - g_i(x)}{\varepsilon \delta_i} = \alpha_i + o(1), \qquad
\frac{g_i(x + \varepsilon v_1) - g_i(x)}{\varepsilon \delta_{i+1}} = \beta_i + \beta_{i-1} \frac{\delta_{i-1}}{\delta_{i+1}} + o(1), \qquad \mathrm{mod}(i,2) = 0,
\]
\[
\frac{g_n(x + \varepsilon v_1) - g_n(x)}{\varepsilon \delta_n} = \alpha_n + o(1), \quad \mathrm{mod}(n,2) = 1, \qquad
\frac{g_n(x + \varepsilon v_2) - g_n(x)}{\varepsilon \delta_n} = \alpha_n + o(1), \quad \mathrm{mod}(n,2) = 0,
\]
where $2 \le i \le n - 1$. Since the ratios $\delta_{i-1}/\delta_{i+1}$, $2 \le i \le n - 1$, are independent of $\varepsilon$, the theorem is proved. $\Box$

Remark 1 Theorem 2 specifies an efficient way of building a tridiagonal preconditioner. We choose fixed numbers $\varepsilon$ and $\delta_i$, $1 \le i \le n$, and compute the elements of the tridiagonal matrix $C = T(\varepsilon)$ according to the formulas given in Theorem 2, with the limits omitted.
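A minimal NumPy sketch of this construction (Theorem 2 with the limits omitted, as in Remark 1) is given below. The gradient callback `grad`, the function name, and the defaults are illustrative assumptions; indices are 0-based in the code, so component $i$ of the code corresponds to index $i+1$ in the text.

```python
import numpy as np

def tridiagonal_preconditioner(grad, x, eps=None, deltas=None):
    """Build the tridiagonal C = T(eps) from two gradient differences.

    grad   : callable returning g(x); an assumption of this sketch.
    eps    : difference parameter, usually ~ sqrt(machine precision).
    deltas : the numbers delta_i > 0; choice (2) of the text by default.
    Returns the diagonal `alpha` and the off-diagonal `beta` of C.
    """
    n = x.size
    eps = np.sqrt(np.finfo(float).eps) if eps is None else eps
    if deltas is None:
        deltas = np.maximum(np.abs(x), 1.0)              # choice (2)
    idx = np.arange(n)
    v1 = np.where(idx % 2 == 0, deltas, 0.0)             # [d1, 0, d3, 0, ...]
    v2 = np.where(idx % 2 == 1, deltas, 0.0)             # [0, d2, 0, d4, ...]
    g0 = grad(x)
    u1 = (grad(x + eps * v1) - g0) / eps                 # ~ T v1
    u2 = (grad(x + eps * v2) - g0) / eps                 # ~ T v2
    alpha = np.empty(n)
    beta = np.empty(n - 1)
    for i in range(n):
        odd = (i % 2 == 0)                               # 1-based index i+1 is odd
        u_a, u_b = (u1, u2) if odd else (u2, u1)
        alpha[i] = u_a[i] / deltas[i]
        if i == 0:
            if n > 1:
                beta[0] = u_b[0] / deltas[1]
        elif i < n - 1:                                  # peel off beta_{i-1}
            beta[i] = (u_b[i] - beta[i - 1] * deltas[i - 1]) / deltas[i + 1]
    return alpha, beta
```

With `alpha` and `beta` at hand, applying $C^{-1}$ inside the PCG subalgorithm amounts to solving a symmetric tridiagonal system (for instance with `scipy.linalg.solveh_banded`). Since the PCG subalgorithm requires $C$ to be positive definite, this direct application presumes the property that the report establishes for many practical problems.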
