LANCZOS AND GOLUB–KAHAN REDUCTION METHODS APPLIED TO ILL-POSED PROBLEMS

A dissertation submitted to Kent State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

by Enyinda N. Onunwor

May, 2018

Dissertation written by Enyinda N. Onunwor
A.A., Cuyahoga Community College, 1998
B.S., Youngstown State University, 2001
M.S., Youngstown State University, 2003
M.A., Kent State University, 2011
Ph.D., Kent State University, 2018

Approved by
Lothar Reichel, Chair, Doctoral Dissertation Committee
Jing Li, Member, Doctoral Dissertation Committee
Jun Li, Member, Doctoral Dissertation Committee
Arden Ruttan, Member, Outside Discipline
Arvind Bansal, Member, Graduate Faculty Representative

Accepted by
Andrew Tonge, Chair, Department of Mathematical Sciences
James L. Blank, Dean, College of Arts and Sciences

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGEMENTS
NOTATION
1 Introduction
1.1 Overview
1.2 Regularization methods
1.2.1 Truncated singular value decomposition (TSVD)
1.2.2 Truncated eigenvalue decomposition (TEVD)
1.2.3 Tikhonov regularization
1.2.4 Regularization parameter: the discrepancy principle
1.3 Krylov subspace methods
1.3.1 The Arnoldi method
1.3.2 The symmetric Lanczos process
1.3.3 Golub–Kahan bidiagonalization
1.3.4 Block Krylov methods
1.4 The test problems
1.4.1 Descriptions of the test problems
2 Reduction methods applied to discrete ill-posed problems
2.1 Introduction
2.2 Application of the symmetric Lanczos method
2.3 Application of the Golub–Kahan reduction method
2.4 Computed examples
2.5 Conclusion
3 Computation of a truncated SVD of a large linear discrete ill-posed problem
3.1 Introduction
3.2 Symmetric linear discrete ill-posed problems
3.3 Nonsymmetric linear discrete ill-posed problems
3.4 Computed examples
3.5 Conclusion
4 Solution methods for linear discrete ill-posed problems for color image restoration
4.1 Solution by partial block Golub–Kahan bidiagonalization
4.2 The GGKB method and Gauss-type quadrature
4.3 Golub–Kahan bidiagonalization for problems with multiple right-hand sides
4.4 Computed examples
4.5 Conclusion
BIBLIOGRAPHY

LIST OF FIGURES

1 Behavior of the bounds (2.2.1) (left), (2.2.7) (center), and (2.3.1) (right), with respect to the iteration index $\ell$. The first test matrix is symmetric positive definite, the second is symmetric indefinite, and the third is unsymmetric. The left-hand side of each inequality is represented by crosses, the right-hand side by circles.
2 The graphs in the left column display the relative error $R_{\ell,k}$ between the eigenvalues of the symmetric test problems and the corresponding Ritz values generated by the Lanczos process. The right column shows the behavior of $R_{\hat{s},k}$ for the unsymmetric problems; see (2.4.1) and (2.4.3).
3 Distance between the subspace spanned by the first $\lceil k/3 \rceil$ eigenvectors (resp. singular vectors) of the symmetric (resp. nonsymmetric) test problems, and the subspace spanned by the corresponding Lanczos (resp. Golub–Kahan) vectors; see (2.4.2) and (2.4.4).
4 Distance $\|V_{k,i}^T V_{n-i}^{(2)}\|$, $i = 1, 2, \ldots, k$, between the subspace spanned by the first $i$ eigenvectors of the Foxgood (left) and Shaw (right) matrices, and the subspace spanned by the corresponding $i$ Ritz vectors at iteration $k = 10$.
5 Distance between the subspace spanned by the first $\lceil k/2 \rceil$ eigenvectors (resp. singular vectors) of selected symmetric (resp. nonsymmetric) test problems and the subspace spanned by the corresponding Lanczos (resp. Golub–Kahan) vectors. The index $\ell$ ranges from 1 to either the dimension of the matrix ($n = 200$) or to the iteration where a breakdown occurs in the factorization process.
6 Distance $\max\{\|V_{k,i}^T V_{n-i}^{(2)}\|, \|U_{k,i}^T U_{n-i}^{(2)}\|\}$, $i = 1, 2, \ldots, k$, between the subspace spanned by the first $i$ singular vectors of the Heat (left) and Tomo (right) matrices and the subspace spanned by the corresponding $i$ Golub–Kahan vectors at iteration $k = 100$.
7 The first four LSQR solutions to the Baart test problem (thin lines) are compared to the corresponding TSVD solutions (dashed lines) and to the exact solution (thick line). The size of the problem is $n = 200$, the noise level is $\delta = 10^{-4}$. The thin and dashed lines are very close.
8 Convergence history for the LSQR and TSVD solutions to the Tomo example of size $n = 225$, with noise level $\delta = 10^{-2}$. The error $E_{\mathrm{LSQR}}$ has a minimum at $k = 66$, while $E_{\mathrm{TSVD}}$ is minimal for $k = 215$.
9 Solution by LSQR and TSVD to the Tomo example of size $n = 225$, with noise level $\delta = 10^{-2}$: exact solution (top left), optimal LSQR solution (top right), TSVD solution corresponding to the same truncation parameter (bottom left), optimal TSVD solution (bottom right).
1 Example 2: Original image (left), blurred and noisy image (right).
2 Example 2: Restored image by Algorithm 5 (left), and restored image by Algorithm 6 (right).
3 Example 3: Cross-channel blurred and noisy image (left), restored image by Algorithm 6 (right).

LIST OF TABLES

1 Solution of symmetric linear systems: the errors $E_{\mathrm{Lanczos}}$ and $E_{\mathrm{TEIG}}$ are optimal for truncated Lanczos iteration and truncated eigenvalue decomposition. The corresponding truncation parameters are denoted by $k_{\mathrm{Lanczos}}$ and $k_{\mathrm{TEIG}}$. Three noise levels $\delta$ are considered; $\ell$ denotes the number of Lanczos iterations performed.
2 Solution of nonsymmetric linear systems: the errors $E_{\mathrm{LSQR}}$ and $E_{\mathrm{TSVD}}$ are optimal for LSQR and TSVD. The corresponding truncation parameters are denoted by $k_{\mathrm{LSQR}}$ and $k_{\mathrm{TSVD}}$. Three noise levels are considered; $\ell$ denotes the number of Golub–Kahan iterations performed.
1 foxgood test problem.
2 shaw test problem.
3 shaw test problem.
4 phillips test problem.
5 baart test problem.
6 baart test problem.
7 Inverse Laplace transform test problem.
8 Example 3.6: Relative errors and number of matrix-vector products, $\tilde{\delta} = 10^{-2}$. The initial vector for the first Golub–Kahan bidiagonalization computed by irbla is a unit random vector.
9 Example 3.6: Relative errors and number of matrix-vector products, $\tilde{\delta} = 10^{-2}$. The initial vector for the first Golub–Kahan bidiagonalization computed by irbla is $b/\|b\|$.
10 Example 3.6: Relative errors and number of matrix-vector products, $\tilde{\delta} = 10^{-4}$. The initial vector for the first Golub–Kahan bidiagonalization computed by irbla is $b/\|b\|$.
11 Example 3.6: Relative errors and number of matrix-vector products, $\tilde{\delta} = 10^{-6}$. The initial vector for the first Golub–Kahan bidiagonalization computed by irbla is $b/\|b\|$.
12 Relative errors and number of matrix-vector product evaluations, $\tilde{\delta} = 10^{-2}$.
13 Relative errors and number of matrix-vector product evaluations, $\tilde{\delta} = 10^{-4}$.
14 Relative errors and number of matrix-vector product evaluations, $\tilde{\delta} = 10^{-6}$.
1 Results for the phillips test problem
2 Results for the baart test problem
3 Results for the shaw test problem
4 Results for Example 2

To Olivia and Kristof

ACKNOWLEDGEMENTS

This work would not have been possible without the wisdom, support, and tireless assistance of my advisor, Lothar Reichel. I genuinely appreciate both his patience with me and the guidance he has given me over the years. His invaluable encouragement and counsel have been critical in facilitating the progress I have made to this point. He has truly been a blessing, and he has made a positive impact on my life.

In addition, I extend my undying gratitude to my committee: Jing Li, Jun Li, Arden Ruttan, and Arvind Bansal. I am tremendously indebted to them for their collective time, effort, and direction. I would be remiss if I failed to recognize the important contributions made by the following collaborators: Silvia Gazzola, Giuseppe Rodriguez, Mohamed El Guide, Abdeslem Bentbib, and Khalide Jbilou. A special thanks to Xuebo Yu for helping me debug my codes and for his valuable input.

I honor the memory of my parents, HRH Sir Wobo Weli Onunwor and Dame Nchelem Onunwor. Their legacy of love, strength, determination, support, and faith imbued me with the courage I needed to achieve this objective, and they will forever endure in my spirit and in my work. My sister, Chisa, is one of the most brilliant people I know; her fortitude and determination are unmatched, and I am inspired by her integrity and work ethic. My oldest brother, HRH Nyema Onunwor, sets the example for the rest of us; he helps us maintain a calm demeanor in the face of the challenges we encounter and remains a constant voice of reason. I offer my deep respect and admiration to my other siblings, Rommy, Acho, and Emenike, for helping me maintain my sanity through this process. Their stimulating conversations and the familial communion we share sustained and comforted me when I was in need of a respite during challenging moments.

My thanks to Dike Echendu for his wisdom and advice. Special thanks to two of my closest friends, Dennis Frank-Ito and Ian Miller, for their mathematical insights and constant encouragement. My cousins Anderson, Blessing, Charles, Mary-Ann, and Gloria are like siblings to me, and their parents, Dr. Albert and Ezinne Charity Nnewihe, have acted as my parental figures. I will be eternally grateful to them for their emotional support and loving guidance.