Solving Hankel Matrix Approximation Problem Using Semidefinite Programming

Journal of Computational and Applied Mathematics 202 (2007) 304–314
www.elsevier.com/locate/cam

Solving Hankel matrix approximation problem using semidefinite programming

Suliman Al-Homidan
Department of Mathematical Sciences, King Fahd University of Petroleum and Minerals, PO Box 119, Dhahran 31261, Saudi Arabia

Received 9 July 2005; received in revised form 22 February 2006

Abstract

Positive semidefinite Hankel matrices arise in many important applications. Some of their properties may be lost owing to rounding or truncation errors incurred during evaluation. The problem is to find the nearest matrix to a given matrix that restores these properties. The problem is converted into a semidefinite programming problem, as well as into a problem comprising a semidefinite program and a second-order cone problem. The duality and optimality conditions are obtained and the primal–dual algorithm is outlined. Explicit expressions for a diagonal preconditioner and crossover criteria are presented. Computational results are given, and a possibility for further improvement is indicated.
© 2006 Elsevier B.V. All rights reserved.

MSC: 65F99; 99C25; 65K30

Keywords: Primal–dual interior-point method; Hankel matrix; Semidefinite programming

0. Introduction

In view of their beautiful theory and significant applications, finite and infinite Hankel matrices have attracted the attention of researchers working in different disciplines; see, for example, [21,8]. The related structured matrix approximation problem, with applications to image processing, has been studied in [13,18]. Park et al. [20] have presented a method for structure-preserving low-rank approximation of a matrix, based on structured total least norm (STLN). They observed that attacking this classical problem with a new STLN-based algorithm can have a significant impact on the model reduction problem with Hankel matrices.
It may be noted that such problems arise in many branches of engineering, such as speech encoding, filter design and higher-order state-space models of dynamic systems. Motivated by [2–4,17], we have studied, as an extension of the work in [1], the approximation of an arbitrary matrix by positive semidefinite Hankel matrices. Macinnes [17] used a particular full-rank Hankel matrix, while Al-Homidan [2,3] used a low-rank Hankel matrix based on the projection algorithm and the Newton method. The projection algorithm provides global convergence but with a slow convergence rate. It has therefore been combined with the Newton method to yield the best features of both approaches.

Anjos et al. [5] have studied a semidefinite programming approach for the nearest correlation matrix problem. Recall that the nearest correlation matrix problem is to find a positive definite matrix with unit diagonal that is nearest in the Frobenius norm to a given symmetric matrix. Anjos et al. formulated this problem as an optimization problem with a quadratic objective function and semidefinite programming constraints, and developed several algorithms to solve it. In the present paper we study a problem related to the nearest correlation matrix problem, in which the positive semidefinite matrix with unit diagonal is replaced by a positive semidefinite Hankel matrix.

∗ Tel.: +966 386 04490; fax: +966 386 02340. E-mail address: [email protected].
0377-0427/$ - see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.cam.2006.02.033
More precisely, we discuss the solution of the following problem: given an arbitrary data matrix A ∈ R^{n×n}, find the nearest positive semidefinite Hankel matrix H to A, i.e.,

    minimize   (1/2) ‖H − A‖²_F
    subject to H ∈ P ∩ O,                                                (0.1)

where P = {H : H ⪰ 0, H ∈ S^n} is the set of all n × n symmetric positive semidefinite matrices, O is the set of all Hankel matrices, and S^n is the set of all symmetric matrices. Here ‖A‖²_F = Σ_{i,j=1}^{n} A²_{ij} is the squared Frobenius norm.

Section 1 is devoted to the basic results needed in subsequent sections, while the projection method for solving (0.1) is given in Section 2. In Section 3, we formulate the problem first as a semidefinite programming problem (SDP), then as a mixed SDP and second-order cone (SOC) problem. The duality and optimality conditions for quadratic programs are presented in Section 4. The primal–dual algorithm is outlined in Section 5, including explicit expressions for a diagonal preconditioner and crossover criteria. Computational results are presented in Section 6 and concluding remarks are given in Section 7.

1. Notation

Define M^n to be the space of n × n matrices. For a general rectangular matrix M = [m_1 m_2 … m_n] ∈ M^n, v = vec(M) forms a vector from the columns of M. The inverse mapping, vec⁻¹, and the adjoint mapping, vec*, are given by Mat = vec⁻¹ = vec*, the adjoint formula following from ⟨vec(M), u⟩ = ⟨M, vec*(u)⟩. We use the trace inner product ⟨M, N⟩ = trace MᵀN, which induces the Frobenius norm. With this inner product, Mat (and vec) is an isometry. M† denotes the Moore–Penrose generalized inverse, e.g., [7]. Define e to be the vector of ones and e_k the zero vector with a one in the kth position. Define P_O(W) to be the orthogonal projection of W onto the subspace of Hankel matrices O. We also need the operator H : R^{2n−1} → O,

    H(x) = \begin{bmatrix}
      x_1 & x_2 & \cdots & x_n \\
      x_2 & x_3 & \cdots & x_{n+1} \\
      \vdots & & \ddots & \vdots \\
      x_n & x_{n+1} & \cdots & x_{2n-1}
    \end{bmatrix}.                                                        (1.1)
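As a concrete illustration of the operator H in (1.1), the following Python sketch (NumPy assumed; the function name `hankel_op` is our own, not the paper's) builds H(x) from a vector x ∈ R^{2n−1}:

```python
import numpy as np

def hankel_op(x):
    """Map a vector x in R^(2n-1) to the n x n Hankel matrix H(x)
    of (1.1), i.e. H[i, j] = x[i + j] in 0-based indexing."""
    m = len(x)
    assert m % 2 == 1, "x must have odd length 2n - 1"
    n = (m + 1) // 2
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = x[i + j]
    return H

x = np.arange(1.0, 6.0)   # x = (1, 2, 3, 4, 5), so n = 3
H = hankel_op(x)
# H is symmetric with constant anti-diagonals:
# [[1. 2. 3.]
#  [2. 3. 4.]
#  [3. 4. 5.]]
```

Since H[i, j] depends only on i + j, every H(x) is automatically symmetric, which is why O is a subspace of S^n.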
Also, define the isometry operator hvec : O → R^{2n−1} by

    hvec(H(x)) = hvec(x) = [x_1 \;\; \sqrt{2}\,x_2 \;\; \cdots \;\; \sqrt{n}\,x_n \;\; \cdots \;\; \sqrt{2}\,x_{2n-2} \;\; x_{2n-1}]^T    (1.2)

for any x ∈ R^{2n−1}. hvec is a linear operator satisfying, for any x, y ∈ R^{2n−1},

    H(x) · H(y) = hvec(x)ᵀ hvec(y),                                      (1.3)

    ‖H(x) − H(y)‖²_F = (hvec(x) − hvec(y))ᵀ (hvec(x) − hvec(y)).         (1.4)

Here U · U = trace(UU) = Σ_{i,j} U²_{i,j}. Let hMat = hvec⁻¹ denote the inverse mapping into S^n, i.e., the one-to-one mapping between R^{2n−1} and O. The adjoint operator hMat* = hvec, since

    ⟨hMat(v), H(s)⟩ = trace hMat(v)H(s)
    = trace\left( \begin{bmatrix}
        v_1 & \frac{v_2}{\sqrt{2}} & \cdots & \frac{v_n}{\sqrt{n}} \\
        \frac{v_2}{\sqrt{2}} & \frac{v_3}{\sqrt{3}} & \cdots & \frac{v_{n+1}}{\sqrt{n-1}} \\
        \vdots & & \ddots & \vdots \\
        \frac{v_n}{\sqrt{n}} & \frac{v_{n+1}}{\sqrt{n-1}} & \cdots & v_{2n-1}
      \end{bmatrix}
      \begin{bmatrix}
        s_1 & s_2 & \cdots & s_n \\
        s_2 & s_3 & \cdots & s_{n+1} \\
        \vdots & & \ddots & \vdots \\
        s_n & s_{n+1} & \cdots & s_{2n-1}
      \end{bmatrix} \right)
    = v_1 s_1 + 2 \frac{v_2}{\sqrt{2}} s_2 + \cdots + n \frac{v_n}{\sqrt{n}} s_n + \cdots + v_{2n-1} s_{2n-1}
    = ⟨hvec(H(s)), v⟩ = ⟨hMat*(H(s)), v⟩.

2. The projection method

The method of successive cyclic projections onto closed subspaces C_i was first proposed by von Neumann [24] and independently by Wiener [25]. They showed that if, for example, C₁ and C₂ are subspaces and D is a given point, then the nearest point to D in C₁ ∩ C₂ can be obtained by:

Algorithm 2.1. Alternating projection algorithm
    Let X₁ = D
    For k = 1, 2, 3, …
        X_{k+1} = P₁(P₂(X_k)).

Then X_k converges to the nearest point to D in C₁ ∩ C₂, where P₁ and P₂ are the orthogonal projections onto C₁ and C₂, respectively. Dykstra [11] modified von Neumann's algorithm to handle the situation when C₁ and C₂ are replaced by convex sets. Other proofs and connections to duality, along with applications, were given in [15]. The modified von Neumann algorithm, when applied to (0.1) for a given data matrix A, reads:

Algorithm 2.2. Modified alternating projection algorithm
    Let A₁ = A
    For j = 1, 2, 3, …
        A_{j+1} = A_j + [P_P(P_O(A_j)) − P_O(A_j)].

Then {P_O(A_j)} and {P_P(P_O(A_j))} converge in Frobenius norm to the solution.
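A minimal Python sketch of Algorithm 2.2 (NumPy assumed; the function names, the anti-diagonal-averaging implementation of P_O, and the fixed iteration count are our own choices, not the paper's):

```python
import numpy as np

def proj_hankel(A):
    """P_O(A): orthogonal projection onto the subspace of Hankel
    matrices, obtained by averaging each anti-diagonal of A."""
    n = A.shape[0]
    H = np.empty_like(A)
    for k in range(2 * n - 1):
        pairs = [(i, k - i) for i in range(max(0, k - n + 1), min(k, n - 1) + 1)]
        avg = np.mean([A[i, j] for i, j in pairs])
        for i, j in pairs:
            H[i, j] = avg
    return H

def proj_psd(A):
    """P_P(A): project a symmetric A onto the PSD cone by zeroing
    the negative eigenvalues in its spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, 0.0)) @ V.T

def modified_alternating_projection(A, iters=500):
    """Algorithm 2.2: A_{j+1} = A_j + [P_P(P_O(A_j)) - P_O(A_j)].
    The sequence P_P(P_O(A_j)) tends to the nearest PSD Hankel matrix."""
    Aj = A.astype(float).copy()
    for _ in range(iters):
        H = proj_hankel(Aj)
        Aj = Aj + (proj_psd(H) - H)
    return proj_psd(proj_hankel(Aj))
```

When A is already a positive semidefinite Hankel matrix, both projections leave it fixed and the iteration is stationary at A; for a general A the method converges globally, though, as noted above, with a slow convergence rate.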
Here, P_P(A) is the projection of A onto the convex cone P; it is obtained simply by setting the negative eigenvalues in the spectral decomposition of A to zero.

3. Mixed-cone formulation

In this section we briefly introduce a direct approach for solving (0.1), obtained by formulating it first as an SDP problem and then as a mixed SDP and second-order cone (SOC) problem; more details are given in [1]. To take advantage of the isometry operator hvec, we need A to be Hankel. If we project A onto O, we get P_O(A). The following lemma shows that the nearest symmetric Hankel positive semidefinite matrix to A is exactly the nearest symmetric Hankel positive semidefinite matrix to P_O(A).

Lemma 3.1. Let H(x) be the nearest symmetric Hankel positive semidefinite matrix to P_O(A); then H(x) is also the nearest symmetric Hankel positive semidefinite matrix to A.

Proof. If P_O(A) is positive semidefinite, then we are done. If not, then for any H(x) ∈ O we have

    (H(x) − P_O(A)) · (P_O(A) − A) = 0,

since P_O(A) is the orthogonal projection of A. Thus

    ‖H(x) − A‖²_F = ‖H(x) − P_O(A)‖²_F + ‖P_O(A) − A‖²_F.

This completes the proof, since the second term on the right-hand side is constant. □

In view of Lemma 3.1, (0.1) is equivalent to the problem

    minimize   (1/2) ‖H(x) − P_O(A)‖²_F
    subject to H(x) ⪰ 0.                                                 (3.1)

Now, we have the following equivalences (for a scalar α ≥ 0):

    ‖H(x) − P_O(A)‖²_F ≤ α
    ⇔ (hvec(x) − hvec(a))ᵀ (hvec(x) − hvec(a)) ≤ α                 by (1.4)
    ⇔ α − (hvec(x) − hvec(a))ᵀ I (hvec(x) − hvec(a)) ≥ 0
    ⇔ \begin{bmatrix}
        I & hvec(x) − hvec(a) \\
        (hvec(x) − hvec(a))ᵀ & α
      \end{bmatrix} ⪰ 0                            by the Schur complement,   (3.2)

where H(a) = P_O(A).
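The Schur-complement equivalence used in (3.2), namely that a block matrix [[I, d], [dᵀ, α]] is positive semidefinite exactly when α ≥ dᵀd, is easy to check numerically. A small Python sketch (NumPy assumed; the names are ours):

```python
import numpy as np

def schur_block(d, alpha):
    """Build the (m+1) x (m+1) block matrix [[I, d], [d^T, alpha]].
    Since the leading block I is positive definite, the Schur
    complement alpha - d^T d decides positive semidefiniteness."""
    m = len(d)
    M = np.eye(m + 1)
    M[:m, m] = d
    M[m, :m] = d
    M[m, m] = alpha
    return M

d = np.array([0.6, 0.8])                              # d^T d = 1.0
lo = np.linalg.eigvalsh(schur_block(d, 0.5)).min()    # alpha <  d^T d
hi = np.linalg.eigvalsh(schur_block(d, 1.5)).min()    # alpha >= d^T d
# lo is negative (block matrix not PSD), hi is nonnegative (PSD)
```

This is exactly the device that turns the quadratic Frobenius-norm bound into a linear matrix inequality, allowing (3.1) to be posed as an SDP.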
