
OPTIMAL CONDITIONING OF VANDERMONDE-LIKE MATRICES AND A MEASUREMENT PROBLEM

A dissertation submitted to Kent State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

by Mykhailo Kuian
May, 2019

Dissertation written by Mykhailo Kuian
B.S., Taras Shevchenko National University of Kyiv, 2010
M.S., Taras Shevchenko National University of Kyiv, 2012
M.S., Kent State University, 2014
Ph.D., Kent State University, May, 2019

Approved by
Lothar Reichel, Sergij Shiyanovskii, Chairs, Doctoral Dissertation Committee
Jing Li, Xiaoyu Zheng, Arden Ruttan, Robin Selinger, Members, Doctoral Dissertation Committee

Accepted by
Dr. Andrew Tonge, Chair, Department of Mathematical Sciences
Dr. James Blank, Dean, College of Arts and Sciences

TABLE OF CONTENTS

TABLE OF CONTENTS
ACKNOWLEDGEMENTS
1 Introduction
  1.1 Conditioning and condition numbers
  1.2 Interpolation, least squares problems and Vandermonde matrices
  1.3 Description of the measurement problem
2 Optimally conditioned Vandermonde-like matrices
  2.1 Introduction
  2.2 Square Vandermonde-like matrices
  2.3 Rectangular Vandermonde-like matrices
  2.4 Szegő-Vandermonde matrices
  2.5 General Vandermonde-type matrices
  2.6 Chapter summary
3 Fast factorization of rectangular Vandermonde matrices with Chebyshev nodes
  3.1 Introduction
  3.2 Fast factorization methods for Vandermonde matrices defined by zeros of Chebyshev polynomials
  3.3 Fast factorization methods for Vandermonde matrices defined by extrema of Chebyshev polynomials and extensions
  3.4 Numerical experiments
  3.5 Chapter summary
4 Conditioning optimization of a measurement problem
  4.1 Introduction
  4.2 Determination of control parameters for CMPM systems
  4.3 Application of the proposed method to the PolScope
  4.4 Noise contamination of the measurement process
  4.5 Chapter summary
5 Conclusion and future work
  5.1 Conclusion
  5.2 Future work
Bibliography

ACKNOWLEDGEMENTS

First of all, I would like to thank my advisors, Dr. Lothar Reichel and Dr. Sergij Shiyanovskii, for their wisdom, guidance, and numerous pieces of advice, which made this work possible. They both have had a great impact on my knowledge and professional development, and they shared many exciting ideas. I am very happy to have met them.

I am grateful to all the professors from whom I had the pleasure of taking classes at Kent State and Kyiv State Universities, and to my high school teachers. I would especially like to thank my family, who always helped and motivated me.

CHAPTER 1

Introduction

1.1 Conditioning and condition numbers

The main purpose of this dissertation is to describe several conditioning optimization problems. Conditioning is one of the fundamental issues of numerical analysis and an essential tool for error analysis. Computations are performed in floating point arithmetic with unavoidable rounding errors. Furthermore, the data itself may be the result of an experiment or measurement that is subject to errors. Conditioning indicates how rounding errors in computations and errors in the data affect the computed solution. Let us start with the definition of conditioning.
Conditioning of a problem concerns the sensitivity to perturbations of a mapping $f$ from a normed vector space of data $X$ to a normed vector space of solutions $Y$. Conditioning measures how much the output can change due to a small perturbation in the input, which may stem from errors in the data or round-off errors introduced during the solution process. In other words, conditioning is an upper bound on the ratio of the size of the solution error to the size of the data error. Conditioning is measured by a condition number.

Definition 1.1.1 Let $\delta x$ denote a small perturbation of $x$ and let $\delta f = f(x + \delta x) - f(x)$. The absolute condition number $\hat{\kappa} = \hat{\kappa}(x)$ is defined as
$$\hat{\kappa} = \lim_{\delta \searrow 0} \, \sup_{\|\delta x\| \le \delta} \frac{\|\delta f\|}{\|\delta x\|}.$$

Definition 1.1.2 The relative condition number $\kappa = \kappa(x)$ is defined as
$$\kappa = \lim_{\delta \searrow 0} \, \sup_{\|\delta x\| \le \delta} \frac{\|\delta f(x)\| / \|f(x)\|}{\|\delta x\| / \|x\|}.$$

A small relative condition number indicates that a small relative error in the data only causes a small relative error in the solution; in this case the problem is called well-conditioned. Conversely, a large condition number signals that the computed solution may be very sensitive to an error in the data; such a problem is called ill-conditioned.

The conditioning of the system of linear equations
$$(1.1)\qquad Ax = b, \qquad A \in \mathbb{C}^{n \times n}, \; x \in \mathbb{C}^n, \; b \in \mathbb{C}^n,$$
where the matrix $A$ is nonsingular, is of fundamental importance in numerical linear algebra.

Theorem 1.1.1 Let the vector $b$ be fixed and consider the problem of computing the solution $x$ of the system (1.1). The relative condition number of this problem with respect to perturbations in the matrix $A$ is
$$\kappa(A) = \|A\| \, \|A^{-1}\|.$$

Theorem 1.1.2 Let the matrix $A$ be fixed and consider the problem of computing the solution $x$ of the system (1.1). The relative condition number of this problem with respect to perturbations in the vector $b$ is also
$$\kappa(A) = \|A\| \, \|A^{-1}\|.$$

The least squares problem can be formulated as the residual minimization
$$(1.2)\qquad \min_{x \in \mathbb{C}^n} \|Ax - b\|_2,$$
where $A \in \mathbb{C}^{N \times n}$ is of full rank, $N \ge n$, $b \in \mathbb{C}^N$, and $\|\cdot\|_2$ denotes the Euclidean vector norm. The solution $x$ of (1.2) is given by
$$x = A^\dagger b,$$
where $A^\dagger$ denotes the Moore–Penrose pseudoinverse of $A$,
$$A^\dagger = (A^* A)^{-1} A^* \quad \text{when } N \ge n, \qquad A^\dagger = A^* (A A^*)^{-1} \quad \text{when } n > N.$$
Obviously $A^\dagger$ reduces to the inverse $A^{-1}$ when $n = N$. The product
$$(1.3)\qquad \kappa(A) = \|A\| \, \|A^\dagger\|$$
is defined as the condition number of the matrix $A$. The next theorem describes the conditioning properties of the least squares problem.

Theorem 1.1.3 Let $b \in \mathbb{C}^N$, let the matrix $A \in \mathbb{C}^{N \times n}$ be of full rank, and define $y = A A^\dagger b$, $\eta = \dfrac{\|A\| \, \|x\|}{\|y\|}$, $\theta = \arccos \dfrac{\|y\|}{\|b\|}$. The least squares problem (1.2) has the following spectral-norm relative condition numbers:

a) The relative condition number with respect to perturbations in the vector $b$ equals
$$(1.4)\qquad \kappa_{b \to x} = \frac{\kappa(A)}{\eta \cos\theta}.$$

b) The relative condition number with respect to perturbations in the matrix $A$ equals
$$(1.5)\qquad \kappa_{A \to x} = \frac{\kappa(A)^2 \tan\theta}{\eta};$$
see Trefethen and Bau [50] for more details.

As we can see from Theorems 1.1.1 and 1.1.2, the condition number (1.3) of a matrix furnishes a bound for the relative error in the solution of the linear system of equations (1.1). At the same time, by relations (1.4) and (1.5) of Theorem 1.1.3, the condition number also furnishes a bound for the relative error in the solution of least squares problems. Minimization of the matrix condition number (1.3) thus minimizes an upper bound for the relative error in the computed solutions. The following section briefly describes Vandermonde matrices.
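To make these quantities concrete, the following minimal Python/NumPy sketch (not part of the dissertation; the function name and the random test problem are illustrative) computes the spectral-norm condition number (1.3) and evaluates the least squares condition numbers (1.4) and (1.5) of Theorem 1.1.3 for a small full-rank problem.

```python
import numpy as np

def ls_condition_numbers(A, b):
    """Condition numbers of the least squares problem min ||Ax - b||_2,
    following Theorem 1.1.3 (see Trefethen and Bau [50])."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # x = A^+ b
    y = A @ x                                      # y = A A^+ b, projection of b
    kappa = np.linalg.cond(A, 2)                   # (1.3): ||A||_2 ||A^+||_2
    eta = np.linalg.norm(A, 2) * np.linalg.norm(x) / np.linalg.norm(y)
    # min(..., 1.0) guards against roundoff pushing the argument above 1
    theta = np.arccos(min(np.linalg.norm(y) / np.linalg.norm(b), 1.0))
    kappa_b = kappa / (eta * np.cos(theta))        # (1.4), perturbations in b
    kappa_A = kappa**2 * np.tan(theta) / eta       # (1.5), perturbations in A
    return kappa, kappa_b, kappa_A

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))                   # full rank, N > n
b = rng.standard_normal(20)
print(ls_condition_numbers(A, b))
```

For a vector $b$ nearly orthogonal to the range of $A$ ($\theta$ near $\pi/2$), both (1.4) and (1.5) blow up even when $\kappa(A)$ is modest, which the sketch makes easy to verify experimentally.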
1.2 Interpolation, least squares problems and Vandermonde matrices

Vandermonde matrices, see (1.6), arise frequently in computational mathematics in problems that require polynomial approximation, differentiation, or integration. These matrices are defined by a set of $N$ distinct nodes $x_1, x_2, \ldots, x_N$ and the monomial basis,
$$(1.6)\qquad V_{N,n} := \begin{bmatrix} 1 & x_1 & \cdots & x_1^{n-1} \\ 1 & x_2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_N & \cdots & x_N^{n-1} \end{bmatrix} \in \mathbb{C}^{N \times n}.$$
When $n = N$, the Vandermonde matrix (1.6) has an explicit formula for its determinant, which equals $\prod_{1 \le j < i \le n} (x_i - x_j)$. Square, and consequently also rectangular, Vandermonde matrices are of full rank precisely when the nodes $x_i$ are distinct.

The polynomial interpolation problem with distinct interpolation points and the polynomial represented in the power basis gives rise to a linear system of equations with a Vandermonde matrix. Suppose we are given $n$ distinct points $x_1, \ldots, x_n \in \mathbb{C}$ and data $y_1, \ldots, y_n \in \mathbb{C}$ at these points. Then there exists a unique polynomial
$$(1.7)\qquad p(x) = c_0 + c_1 x + \ldots + c_{n-1} x^{n-1},$$
such that $p(x_i) = y_i$, $i = 1, \ldots, n$. The relationship between the sets $\{x_i\}$ and $\{y_i\}$ is expressed via a Vandermonde matrix:
$$(1.8)\qquad \begin{bmatrix} 1 & x_1 & \cdots & x_1^{n-1} \\ 1 & x_2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_n & \cdots & x_n^{n-1} \end{bmatrix} \begin{bmatrix} c_0 \\ \vdots \\ c_{n-1} \end{bmatrix} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}.$$

The least squares fitting problem, in which rectangular Vandermonde matrices appear, is an indispensable tool:
$$(1.9)\qquad \min_{c \in \mathbb{R}^n} \sum_{i=1}^{N} (p(c; x_i) - y_i)^2, \qquad p(c; x) = \sum_{j=0}^{n-1} c_j x^j, \qquad c = [c_0, c_1, \ldots, c_{n-1}]^T.$$
The sum of squares $\sum_{i=1}^{N} (p(c; x_i) - y_i)^2$ equals the square of the residual $\|V_{N,n} c - y\|_2^2$ for the rectangular Vandermonde system
$$\begin{bmatrix} 1 & x_1 & \cdots & x_1^{n-1} \\ 1 & x_2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_N & \cdots & x_N^{n-1} \end{bmatrix} \begin{bmatrix} c_0 \\ \vdots \\ c_{n-1} \end{bmatrix} \approx \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix}.$$
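As a hedged illustration (not from the dissertation; the choice of equispaced nodes on $[0, 1]$ and of $e^x$ as data is arbitrary), the sketch below builds the Vandermonde matrix (1.6) with NumPy, solves the square interpolation system (1.8) and the rectangular fitting problem (1.9), and prints the condition number, which grows rapidly with $n$ for such nodes.

```python
import numpy as np

def vandermonde(nodes, n):
    """V_{N,n} of (1.6): row i is [1, x_i, x_i^2, ..., x_i^{n-1}]."""
    return np.vander(nodes, n, increasing=True)

n = 8
x = np.linspace(0.0, 1.0, n)                 # distinct equispaced nodes
y = np.exp(x)                                # data to interpolate

# Square system (1.8): coefficients c of the interpolant (1.7).
V = vandermonde(x, n)
c = np.linalg.solve(V, y)
print("interpolation residual:", np.max(np.abs(V @ c - y)))

# Rectangular system (1.9): least squares fit with N = 40 > n samples.
xN = np.linspace(0.0, 1.0, 40)
c_ls = np.linalg.lstsq(vandermonde(xN, n), np.exp(xN), rcond=None)[0]

# kappa_2(V) grows rapidly with n for equispaced nodes, the
# ill-conditioning whose optimization is studied in Chapter 2.
print("kappa_2(V) =", np.linalg.cond(V, 2))
```

Increasing n in this sketch quickly drives the condition number past 10^6, which is why the node placement and basis choices studied in the following chapters matter in practice.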