B553 Lecture 5: Matrix Algebra Review


Kris Hauser
January 19, 2012

We have seen in prior lectures how vectors represent points in R^n and gradients of functions. Matrices represent linear transformations of vector quantities. This lecture presents standard matrix notation, conventions, and basic identities that will be used throughout this course. During the course of this discussion we will also drop the boldface notation for vectors, and it will remain this way for the rest of the class.

1 Matrices

A matrix A represents a linear transformation of an n-dimensional vector space to an m-dimensional one. It is given by an m × n array of real numbers. Usually matrices are denoted as uppercase letters (e.g., A, B, C), with the entry in the i'th row and j'th column denoted in the subscript ·_{i,j}, or when it is unambiguous, ·_{ij} (e.g., A_{1,2} or A_{12}).

\[
A = \begin{bmatrix} A_{1,1} & \cdots & A_{1,n} \\ \vdots & \ddots & \vdots \\ A_{m,1} & \cdots & A_{m,n} \end{bmatrix}
\tag{1}\]

1.1 Matrix-Vector Product

An m × n matrix A transforms vectors x = (x_1, ..., x_n) into m-dimensional vectors y = (y_1, ..., y_m) = Ax as follows:

\[
y_1 = \sum_{j=1}^{n} A_{1j} x_j, \quad \ldots, \quad y_m = \sum_{j=1}^{n} A_{mj} x_j
\tag{2}\]

Or, more concisely, $y_i = \sum_{j=1}^{n} A_{ij} x_j$ for i = 1, ..., m. (Note that matrix-vector multiplication is not symmetric, so xA is an invalid operation.)

Linearity of matrix-vector multiplication. We can see that matrix-vector multiplication is linear; that is, A(ax + by) = aAx + bAy for all scalars a, b and all vectors x, y. It is also linear in terms of component-wise addition and scaling of matrices, as long as the matrices are of the same size. More precisely, if A and B are both m × n matrices, then (aA + bB)x = aAx + bBx for all a, b, and x.

Identity matrix. One special matrix that occurs frequently is the n × n identity matrix I_n, which has 0's in all off-diagonal positions I_{ij} with i ≠ j, and 1's in all diagonal positions I_{ii}. It is significant because I_n x = x for all x ∈ R^n.

1.2 Matrix Product

When two linear transformations are performed one after the other, the result is also a linear transformation. Suppose A is m × n, B is n × p, and x is a p-dimensional vector, and consider the result of A(Bx) (that is, first multiplying by B and then multiplying the result by A). We see that

\[
Bx = \Bigl( \sum_{j=1}^{p} B_{1j} x_j, \; \ldots, \; \sum_{j=1}^{p} B_{nj} x_j \Bigr)
\tag{3}\]

and, for an n-dimensional vector y,

\[
Ay = \Bigl( \sum_{k=1}^{n} A_{1k} y_k, \; \ldots, \; \sum_{k=1}^{n} A_{mk} y_k \Bigr)
\tag{4}\]

So

\[
A(Bx) = \Bigl( \sum_{k=1}^{n} A_{1k} \sum_{j=1}^{p} B_{kj} x_j, \; \ldots, \; \sum_{k=1}^{n} A_{mk} \sum_{j=1}^{p} B_{kj} x_j \Bigr).
\tag{5}\]

Rearranging the summations, we see that

\[
A(Bx) = \Bigl( \sum_{j=1}^{p} \bigl( \sum_{k=1}^{n} A_{1k} B_{kj} \bigr) x_j, \; \ldots, \; \sum_{j=1}^{p} \bigl( \sum_{k=1}^{n} A_{mk} B_{kj} \bigr) x_j \Bigr).
\tag{6}\]

In other words, we could have A(Bx) = Cx if we were to form a matrix C such that

\[
C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}.
\tag{7}\]

This is exactly the definition of the matrix product, and we say C = AB. The entry C_{ij} of C can also be obtained by taking the dot product of the i'th row of A and the j'th column of B.

Matrix product is associative but not symmetric. By the above derivation we can drop the parentheses: A(Bx) = (AB)x. So, matrix-vector and matrix-matrix multiplication are associative. Note again, however, that matrix-matrix multiplication is not symmetric; that is, AB ≠ BA in general.

Column and row vectors. Note that if we write an n-dimensional vector x stacked in an n × 1 matrix x (denoted in lowercase), we can turn the matrix-vector product y = Ax into the matrix product y = Ax. Here, if A is an m × n matrix, then y is an m × 1 matrix:

\[
\begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix} = \begin{bmatrix} A_{1,1} & \cdots & A_{1,n} \\ \vdots & \ddots & \vdots \\ A_{m,1} & \cdots & A_{m,n} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}
\tag{8}\]

Hence, there is a one-to-one correspondence between vectors and matrices with one column. These matrices are called column vectors and will be our default notation for vectors throughout the rest of the course. We will occasionally also deal with row vectors, which are matrices with a single row.
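To make Eq. (7) concrete, here is a short NumPy sketch (an illustrative addition, not part of the original lecture; the helper name matmul and the test matrices are hypothetical) that forms the product entry by entry and numerically checks associativity and the failure of commutativity:

    import numpy as np

    def matmul(A, B):
        """Matrix product via Eq. (7): C_ij = sum_k A_ik * B_kj."""
        m, n = A.shape
        n2, p = B.shape
        assert n == n2, "inner dimensions must agree"
        C = np.zeros((m, p))
        for i in range(m):
            for j in range(p):
                C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
        return C

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 4))  # m x n
    B = rng.standard_normal((4, 2))  # n x p
    x = rng.standard_normal(2)       # p-dimensional vector

    print(np.allclose(matmul(A, B), A @ B))       # True: matches the built-in product
    print(np.allclose(A @ (B @ x), (A @ B) @ x))  # True: A(Bx) = (AB)x
    S = rng.standard_normal((3, 3))
    T = rng.standard_normal((3, 3))
    print(np.allclose(S @ T, T @ S))              # False (almost surely): AB != BA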
1.3 Transpose

The transpose A^T of a matrix A simply switches A's rows and columns:

\[
(A^T)_{ij} = A_{ji}.
\tag{9}\]

If A is m × n, then A^T is n × m.

Symmetric matrix. If A = A^T, then A is symmetric.

1.4 Matrix Inverse

An inverse A^{-1} of an n × n square matrix A is a matrix that satisfies the following equation:

\[
A A^{-1} = A^{-1} A = I_n
\tag{10}\]

where I_n is the identity matrix. Not all square matrices have an inverse, in which case we say A is not invertible (or singular). Invertible matrices are significant because the unique solution x to the system of linear equations Ax = b is simply A^{-1} b. This holds for any b. If the matrix is not invertible, then such an equation may or may not have a solution.

Orthogonal matrix. An orthogonal matrix is a square matrix that satisfies A A^T = I_n. In other words, its transpose is its inverse.

1.5 Matrix identities

Identities involving the transpose:

• (cA)^T = c A^T for any real value c.
• (A + B)^T = A^T + B^T.
• (AB)^T = B^T A^T.
• All 1 × 1 matrices are symmetric, the identity matrix is symmetric, and all uniform scalings of a symmetric matrix are symmetric.
• A + A^T is symmetric.
• The dot product x · y is equal to x^T y, with x and y denoting the column-vector representations of x and y, respectively.
• x^T A y = y^T A^T x, with x and y column vectors.

Identities involving the inverse:

• I_n^{-1} = I_n.
• (cA)^{-1} = (1/c) A^{-1} for any real value c ≠ 0.
• (AB)^{-1} = B^{-1} A^{-1} if both A and B are invertible.
• If A and B are invertible, then (A B A^{-1})^{-1} = A B^{-1} A^{-1}.

1.6 Common mistakes

Matrix expressions behave much like expressions over the real numbers: addition and subtraction carry over directly, multiplication carries over with some caveats, and the inverse plays a role analogous to division. This similarity leads to common pitfalls when manipulating matrix equations. Here are some common mistakes that you should look out for:

1. Swapping the arguments of a matrix product.
2. Propagating transposes or inverses into a matrix product without swapping the order of the arguments.
3. Assuming that a matrix is invertible (or worse, assuming a non-square matrix is invertible).
4. Performing operations on matrices of incompatible size.
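Both the Section 1.5 identities and pitfall 2 above are easy to spot-check numerically. The following NumPy sketch (again an illustrative addition, relying on the fact that random square matrices are invertible with probability 1) verifies a few of them:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))  # random square matrices: almost surely invertible
    B = rng.standard_normal((3, 3))
    c = 2.5

    print(np.allclose((A @ B).T, B.T @ A.T))                        # (AB)^T = B^T A^T
    print(np.allclose(np.linalg.inv(A @ B),
                      np.linalg.inv(B) @ np.linalg.inv(A)))         # (AB)^{-1} = B^{-1} A^{-1}
    print(np.allclose(np.linalg.inv(c * A), np.linalg.inv(A) / c))  # (cA)^{-1} = (1/c) A^{-1}
    print(np.allclose(A + A.T, (A + A.T).T))                        # A + A^T is symmetric

    x = rng.standard_normal(3)
    y = rng.standard_normal(3)
    print(np.allclose(x @ A @ y, y @ A.T @ x))                      # x^T A y = y^T A^T x

Note that transposes and inverses distribute into a product only with the order of the arguments reversed; dropping the reversal is exactly pitfall 2.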
2 Rank, Null space, and Definiteness

If A is not invertible (for instance, it may not be square), then the system of linear equations Ax = b may not have a solution x. Or, it may have an infinite number of solutions. Or, it may have solutions for some b's and not others. We would like to characterize, based on properties of A, when such equations can be solved.

2.1 Matrix rank

Consider the columns of A as a list of vectors a_1, ..., a_n. Recall that if b ∈ Span(a_1, ..., a_n), then b is a linear combination of a_1, ..., a_n. If this holds, then it is sufficient to set each component x_i to the respective coefficient on a_i in order to solve Ax = b. On the other hand, if b ∉ Span(a_1, ..., a_n), then there is no solution. So, the set of vectors b such that Ax = b has a solution is precisely Span(a_1, ..., a_n).

Rank. The rank of an m × n matrix A is the size of the largest subset of {a_1, ..., a_n} that is linearly independent. In other words, if A has rank k, then Span(a_1, ..., a_n) is a k-dimensional subspace of R^m. If k = n, then A is said to have full column rank, and such problems have at most one solution. If k = m, then A is said to have full row rank, and such problems have at least one solution. If k = m = n, then A is invertible.

Overdetermined system. Now suppose that the rank of A is k < m. Then there are some possible values of b that are not attainable by linear combinations of a_1, ..., a_n. Such systems are known as overdetermined because there are more constraints than can be fulfilled by adjusting the values of x. Overdetermined systems are usually not solved exactly, but are more often solved in a least-squares sense: $\min_x \|Ax - b\|^2$.

Underdetermined system. If the rank of A is k < n, then there are an infinite number of solutions x to the equation Ax = Ax_0. To see this, let some column of A be linearly dependent on the remaining columns; without loss of generality, suppose this column is a_1. Then $a_1 - \sum_{i=2}^{n} c_i a_i = 0$ for some coefficients c_i. So, any multiple cv of the vector v = (1, −c_2, ..., −c_n) can be added to x_0 without affecting the value A(x_0 + cv) = Ax_0. Such systems are known as underdetermined because they may be solved by multiple values of x.

A system can be both underdetermined and overdetermined if k < m and k < n. This means there are some values of b for which there is no solution, but for those b that do have a solution, there are an infinite number of solutions.
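Both situations can be explored numerically. The sketch below (an illustrative addition, with hypothetical example matrices) solves an overdetermined system in the least-squares sense with np.linalg.lstsq, then builds an underdetermined system with a known dependent column so that the null-space direction v = (1, −c_2, ..., −c_n) from the argument above can be checked directly:

    import numpy as np

    rng = np.random.default_rng(2)

    # Overdetermined: m = 5 equations, n = 3 unknowns.
    A = rng.standard_normal((5, 3))
    b = rng.standard_normal(5)
    print(np.linalg.matrix_rank(A))               # 3 (almost surely): full column rank
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # x_ls minimizes ||Ax - b||^2

    # Underdetermined: n = 4 unknowns; force the dependence a_1 = 2 a_2 - a_3.
    U = rng.standard_normal((3, 4))
    U[:, 0] = 2.0 * U[:, 1] - U[:, 2]
    v = np.array([1.0, -2.0, 1.0, 0.0])           # v = (1, -c_2, -c_3, -c_4), so U v = 0
    x0 = rng.standard_normal(4)
    print(np.allclose(U @ x0, U @ (x0 + 7.0 * v)))  # True: x0 + cv solves U x = U x0 for any c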