Basis and Dimension


Basis and Dimension (II.B)

How would you explain that a plane has two dimensions? Well, you can go in two independent directions, and no more. To make this idea precise, we formulate the

DEFINITION 1. A (finite) subset $\{\vec{v}_1, \ldots, \vec{v}_m\}$ of a vector space $V$ (over a field $F$) is a basis of $V$ if (a) they are linearly independent (over $F$), and (b) they span $V$ (over $F$).¹ The "over $F$" means that the linear combinations are understood to be of the form $\sum a_i \vec{v}_i$, with $a_i \in F$. As usual, our "default" will be $F = \mathbb{R}$.

¹I am not saying that you can't have infinite bases; indeed, non-finitely-generated vector spaces (like $P_\infty(x) :=$ all polynomials) may have such a basis. I'm just not defining or discussing them at this stage.

EXAMPLE 2. A basis of $\mathbb{R}^m$ is given by the standard basis vectors $\{\hat{e}_i\}_{i=1}^m$. Of course, one could say the same for any field: the standard basis vectors are also a basis for $\mathbb{C}^m$, viewed as a vector space over $\mathbb{C}$. But what if we view $\mathbb{C}^m$ as a vector space over $\mathbb{R}$? Then you need the $2m$ vectors $\hat{e}_1, \ldots, \hat{e}_m$ and $i\hat{e}_1, \ldots, i\hat{e}_m$! (Why?)

EXAMPLE 3. Let $P_2(x,y)$ denote the vector space of quadratic polynomials in two variables $x$ and $y$, with real coefficients. This has the "monomial basis" $\{r_i^2(x,y)\}_{i=1}^{6} = \{1,\ x,\ y,\ xy,\ x^2,\ y^2\}$. Similarly, $\{r_i^3(x,y)\}_{i=1}^{10} = \{1,\ x,\ y,\ xy,\ x^2,\ y^2,\ x^2y,\ y^2x,\ x^3,\ y^3\}$ gives a basis for the space $P_3(x,y)$ of cubic polynomials. We'll give a geometric application of this example below.

EXAMPLE 4. Let $A$ be an $m \times n$ matrix, and $V$ be the vector subspace of $\mathbb{R}^n$ consisting of solutions to $A\vec{x} = \vec{0}$. How do you compute a basis of $V$? By making use of the row-reduction algorithm, of course! For instance, we might have
$$A = \begin{pmatrix} 0 & 0 & 1 & -1 & -1 \\ 2 & 4 & 2 & 4 & 2 \\ 2 & 4 & 3 & 3 & 3 \\ 3 & 6 & 6 & 3 & 6 \end{pmatrix} \longmapsto \mathrm{rref}(A) = \begin{pmatrix} 1 & 2 & 0 & 3 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
The free variables are $x_2$ and $x_4$, and an arbitrary solution is given by plugging arbitrary values in for these and solving for the pivot variables $x_1, x_3, x_5$. To get a basis, you plug in $(x_2 = 1,\ x_4 = 0)$ and $(x_2 = 0,\ x_4 = 1)$ to obtain
$$\begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} -3 \\ 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}.$$
More generally, the $i$th basis element is given by setting the $i$th free variable to 1 and the rest to zero.

EXAMPLE 5. Suppose $A$ is an invertible $m \times m$ matrix. I claim that its columns $\{\vec{c}_i\}_{i=1}^m$ are a basis for $\mathbb{R}^m$. To establish this, check that:
• $\{\vec{c}_i\}$ are linearly independent: $\gamma_1 \vec{c}_1 + \ldots + \gamma_m \vec{c}_m = \vec{0} \implies A\vec{\gamma} = \vec{0} \implies \vec{\gamma} = \vec{0}$ (since $A$ is invertible).
• $\{\vec{c}_i\}$ span $\mathbb{R}^m$: show every $\vec{y} \in \mathbb{R}^m$ can be written $\vec{y} = \gamma_1 \vec{c}_1 + \ldots + \gamma_m \vec{c}_m$ $(= A\vec{\gamma})$. This is easy: just put $\vec{\gamma} = A^{-1}\vec{y}$.

We say that $V$ is finitely generated if some finite subset $\{\vec{v}_1, \ldots, \vec{v}_m\}$ spans $V$. Here is the precise reason why dimension makes sense:

PROPOSITION 6. If $V$ is finitely generated then (i) it has a (finite) basis, and (ii) any 2 bases of $V$ have the same number of elements.

DEFINITION 7. We define the dimension $\dim_F(V)$ of $V$ over $F$ to be the number of elements in (ii).²

²The $F$ is typically omitted when there is no ambiguity. That said, in the first example we had $\dim_{\mathbb{C}}(\mathbb{C}^n) = n$ while $\dim_{\mathbb{R}}(\mathbb{C}^n) = 2n$.

We get (i) by going through the generating subset $\{\vec{v}_1, \ldots, \vec{v}_m\}$ and throwing out any $\vec{v}_i$ that is a linear combination of the earlier vectors. What remains is a basis. In order to show (ii) we first prove the following

LEMMA 8. If $\vec{v}_1, \ldots, \vec{v}_m$ span $V$ then any independent set $\subseteq V$ has no more than $m$ vectors.

PROOF. Consider any set $\vec{u}_1, \ldots, \vec{u}_n$, $n > m$ (of vectors in $V$). Since $\vec{v}_1, \ldots, \vec{v}_m$ span $V$, each $\vec{u}_j$ is a linear combination of the $\vec{v}_i$:
$$\vec{u}_j = \sum_{i=1}^m A_{ij} \vec{v}_i, \qquad \text{for } A\ m \times n.$$
Since $m < n$, there is a nontrivial solution to $A\vec{x} = \vec{0}$, some nonzero $\vec{x} \in \mathbb{R}^n$. That is,
$$\sum_{j=1}^n A_{ij} x_j = 0 \quad \text{for each } i,$$
and so
$$\sum_{j=1}^n x_j \vec{u}_j = \sum_{j=1}^n x_j \sum_{i=1}^m A_{ij} \vec{v}_i = \sum_{i=1}^m \Big( \sum_{j=1}^n x_j A_{ij} \Big) \vec{v}_i = \sum_{i=1}^m 0 \cdot \vec{v}_i = \vec{0}.$$
We conclude that $\{\vec{u}_j\}_{j=1}^n$ is not an independent set!

Now let $\{\vec{v}_1, \ldots, \vec{v}_m\}$ and $\{\vec{w}_1, \ldots, \vec{w}_n\}$ be two bases for $V$. The $\{\vec{v}_i\}$ span $V$ and the $\{\vec{w}_j\}$ are independent, so the Lemma $\implies n \le m$. The same argument with $\{\vec{v}_i\}$ and $\{\vec{w}_j\}$ swapped $\implies n \ge m$, and so we've proved the Proposition.

We can rephrase the Lemma as follows: if $\dim(V) = m$, then (a) fewer than $m$ vectors cannot span $V$, while (b) more than $m$ vectors are linearly dependent.

A Geometric Application. Given 5 points $A, B, C, D, E$ "in general position" in the plane, there is a unique conic passing through them. Here's why: the equation of a conic $Q$ has the form
$$0 = f_Q(x,y) = \sum_{i=1}^6 a_i r_i^2(x,y) = a_1 + a_2 x + a_3 y + a_4 xy + a_5 x^2 + a_6 y^2,$$
and the statement that it passes through $A$ is
$$0 = f_Q(x_A, y_A) = \sum_i a_i r_i^2(x_A, y_A).$$
Continuing this for $B$ through $E$ we get 5 equations in the 6 unknowns $a_i$, a system which "generically"³ has a "line" of solutions: if $(a_1, \ldots, a_6)$ solves the system then so does $(c \cdot a_1, \ldots, c \cdot a_6)$. But this just gives multiples of the same $f_Q$, which won't change the shape of the curve, and so there is really only one nontrivial solution: through five general points in the plane there exists a unique conic $Q$.

³More precisely, "generically" means here that the matrix $[r]$ (with rows indexed by $A$ through $E$, columns by $i = 1$ through $6$) is of maximal rank ($= 5$).

Similarly, given 8 points $A, B, C, D, E, P_1, P_2, P_3$, what can we say about the "vector space" of cubics passing through them (i.e. of cubic polynomials in $(x,y)$ vanishing at all 8 points)? First of all, the vector space of all cubics
$$\sum_{i=1}^{10} a_i r_i^3(x,y) = a_1 + a_2 x + a_3 y + \ldots + a_9 x^3 + a_{10} y^3$$
should be thought of in terms of the coefficient vector $\vec{a}$. The 8 constraints give a linear system
$$\left.\begin{aligned} 0 &= \textstyle\sum_{i=1}^{10} a_i r_i^3(x_A, y_A) \\ &\ \ \vdots \\ 0 &= \textstyle\sum_{i=1}^{10} a_i r_i^3(x_{P_3}, y_{P_3}) \end{aligned}\right\}\ \text{8 equations}$$
whose space of solutions "in general" (i.e., under the assumption of maximal rank $= 8$) is two-dimensional: there are 8 leading 1s in the rref of the $8 \times 10$ matrix $[r]$, and therefore 2 parameters to choose freely.

So start with $A, B, C, D, E$ in "general position"; we'd like to construct points on the (unique) conic through them, using a straightedge. Begin by drawing the lines $AB$, $DE$, $CD$, and $BC$; label $AB \cap DE$ by $P_1$. ($Q$ is drawn in a background color because we don't know what it looks like yet.)

[Figure: the points $A, B, C, D, E$ on the conic $Q$, the lines $AB$, $BC$, $CD$, $DE$, and the intersection point $P_1 = AB \cap DE$.]

Now draw (almost) any line $\ell$ through $A$ – the choice is free except that $\ell$ shouldn't go through $B$, $C$, $D$, or $E$. All the rest of the construction depends on this choice; in particular the final point $q$ depends on $\ell$, and so by varying $\ell$ we can vary $q$ and in this way (we hope) construct enough points on $Q$ to get an idea of what it looks like. Label $\ell \cap CD =: P_2$; then draw $P_1 P_2$ and label $P_1 P_2 \cap BC =: P_3$; finally set $q := EP_3 \cap \ell$. We must prove $q \in Q$.

[Figure: the same diagram with the lines $\ell$, $P_1 P_2$, and $EP_3$ added (dotted, since they depend on $\ell$), together with the new points $P_2$, $P_3$, and $q$.]

Now consider the following three cubic equations (where e.g. $f_{AB}(x,y) = 0$ is the linear equation of the line $AB$, and so on):
$$f_1(x,y) := f_Q(x,y) \cdot f_{P_1 P_2}(x,y) = 0,$$
$$f_2(x,y) := f_{AB}(x,y) \cdot f_{CD}(x,y) \cdot f_{EP_3}(x,y) = 0,$$
$$f_3(x,y) := f_\ell(x,y) \cdot f_{BC}(x,y) \cdot f_{DE}(x,y) = 0.$$
The graphical solutions of these equations are (respectively)
$$Q \cup P_1 P_2, \qquad AB \cup CD \cup EP_3, \qquad \ell \cup BC \cup DE.$$
All 3 of these unions contain the 8 points $A, B, C, D, E, P_1, P_2, P_3$; that is, the cubic polynomials $f_1$, $f_2$, and $f_3$ each vanish at all 8 points.
Above we explained that the (vector) space of such cubic polynomials is $10 - 8 = 2$-dimensional; it follows that the 3 "vectors" $f_1, f_2, f_3$ cannot be linearly independent, and so there is a nontrivial relation
$$\alpha f_1(x,y) + \beta f_2(x,y) + \gamma f_3(x,y) = 0.$$
Now evaluate this relation at the point $q$. Since $q$ lies on both $\ell$ and $EP_3$, the products $f_2$ and $f_3$ vanish at $q$, leaving $\alpha f_1(q) = 0$. Moreover $\alpha \neq 0$: otherwise $f_2$ and $f_3$ would be proportional, which is impossible since their zero sets $AB \cup CD \cup EP_3$ and $\ell \cup BC \cup DE$ are different. Hence $f_1(q) = f_Q(q) \cdot f_{P_1 P_2}(q) = 0$; and since (for a general choice of $\ell$) $q$ does not lie on the line $P_1 P_2$, we conclude $f_Q(q) = 0$, i.e. $q \in Q$, as promised.
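The recipe of Example 4 (row-reduce, then set each free variable to 1 in turn) is mechanical enough to code directly. Here is a minimal sketch in plain Python using exact `Fraction` arithmetic; the helper names `rref` and `nullspace_basis` are my own, and the matrix is the one from Example 4.

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix (list of lists) to reduced row echelon form.
    Returns (R, pivot_columns), with columns 0-indexed."""
    R = [[Fraction(x) for x in row] for row in rows]
    m, n = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(n):
        # find a row at or below r with a nonzero entry in column c
        pr = next((i for i in range(r, m) if R[i][c] != 0), None)
        if pr is None:
            continue
        R[r], R[pr] = R[pr], R[r]
        R[r] = [x / R[r][c] for x in R[r]]          # make the pivot 1
        for i in range(m):
            if i != r and R[i][c] != 0:             # clear the column
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def nullspace_basis(rows):
    """One basis vector per free variable: set that free variable to 1,
    the others to 0, and read off the pivot variables (Example 4)."""
    R, pivots = rref(rows)
    n = len(R[0])
    basis = []
    for f in [c for c in range(n) if c not in pivots]:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -R[i][f]
        basis.append(v)
    return basis

A = [[0, 0, 1, -1, -1],
     [2, 4, 2,  4,  2],
     [2, 4, 3,  3,  3],
     [3, 6, 6,  3,  6]]

R, pivots = rref(A)
print(pivots)  # [0, 2, 4] -- pivot variables x1, x3, x5
print([[int(x) for x in v] for v in nullspace_basis(A)])
# [[-2, 1, 0, 0, 0], [-3, 0, 1, 1, 0]]
```

Working over `Fraction` rather than floats keeps the elimination exact, so the output matches the hand computation digit for digit.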
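The five-point dimension count can likewise be tested on concrete numbers. The sketch below is plain Python under assumptions of my own choosing: the five sample points sit on the unit circle $x^2 + y^2 = 1$ (so the answer is known in advance), and `nullspace` is a small Gauss-Jordan helper written here for the purpose. It builds the $5 \times 6$ system in the monomial basis $\{1, x, y, xy, x^2, y^2\}$ and checks that the solution space is a single "line", spanned by the coefficient vector of $x^2 + y^2 - 1$.

```python
from fractions import Fraction

def nullspace(rows):
    """Basis of solutions of rows . a = 0, by Gauss-Jordan elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in [c for c in range(n) if c not in pivots]:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -M[i][f]
        basis.append(v)
    return basis

# Five points in "general position", chosen on the circle x^2 + y^2 = 1.
pts = [(1, 0), (0, 1), (-1, 0), (0, -1), (Fraction(3, 5), Fraction(4, 5))]

# One row of monomial values (1, x, y, xy, x^2, y^2) per point.
rows = [[1, x, y, x * y, x * x, y * y] for (x, y) in pts]

sols = nullspace(rows)
print(len(sols))  # 1 -- a single line of solutions, as claimed
print([int(x) for x in sols[0]])
# [-1, 0, 0, 0, 1, 1], i.e. the conic -1 + x^2 + y^2 = 0
```

Any nonzero multiple of the printed coefficient vector cuts out the same curve, matching the remark that the "line" of solutions gives only one conic.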