7 General vector spaces

Proposition & Definition 7.14 (Monomial basis of $\mathcal{P}_n(\mathbb{R})$).
Let $n \in \mathbb{N}_0$. The particular polynomials $m_0, m_1, \ldots, m_n \in \mathcal{P}_n(\mathbb{R})$ defined by
\[
m_0(x) = 1,\quad m_1(x) = x,\quad \ldots,\quad m_{n-1}(x) = x^{n-1},\quad m_n(x) = x^n \quad\text{for all } x \in \mathbb{R}
\]
are called monomials. The family $\mathcal{B} = (m_0, m_1, \ldots, m_n)$ forms a basis of $\mathcal{P}_n(\mathbb{R})$ and is called the monomial basis. Hence $\dim(\mathcal{P}_n(\mathbb{R})) = n+1$.

Corollary 7.15 (The method of equating the coefficients).
Let $p$ and $q$ be two real polynomials with degree $n \in \mathbb{N}$, which means
\[
p(x) = a_n x^n + \ldots + a_1 x + a_0 \quad\text{and}\quad q(x) = b_n x^n + \ldots + b_1 x + b_0
\]
for some coefficients $a_n, \ldots, a_1, a_0, b_n, \ldots, b_1, b_0 \in \mathbb{R}$. If we have the equality $p = q$, which means
\[
a_n x^n + \ldots + a_1 x + a_0 = b_n x^n + \ldots + b_1 x + b_0 \tag{7.3}
\]
for all $x \in \mathbb{R}$, then we can conclude $a_n = b_n$, \ldots, $a_1 = b_1$ and $a_0 = b_0$.

VL18 ↓

Remark: Since $\dim(\mathcal{P}_n(\mathbb{R})) = n+1$ and we have the inclusions
\[
\mathcal{P}_0(\mathbb{R}) \subset \mathcal{P}_1(\mathbb{R}) \subset \mathcal{P}_2(\mathbb{R}) \subset \cdots \subset \mathcal{P}(\mathbb{R}) \subset \mathcal{F}(\mathbb{R}),
\]
we conclude that $\dim(\mathcal{P}(\mathbb{R}))$ and $\dim(\mathcal{F}(\mathbb{R}))$ cannot be finite natural numbers. Symbolically, we write $\dim(\mathcal{P}(\mathbb{R})) = \infty$ in such a case.

7.4 Coordinates with respect to a basis

7.4.1 Basis implies coordinates

Again, we deal with the cases $F = \mathbb{R}$ and $F = \mathbb{C}$ simultaneously. Therefore, let $V$ be an $F$-vector space with the two operations $+$ and $\cdot$. Let also $n := \dim(V) < \infty$ and choose a basis $\mathcal{B} = (b_1, \ldots, b_n)$ of $V$.

Since $\mathcal{B}$ is a generating system and linearly independent, each $v$ from $V$ has a linear combination
\[
v = \alpha_1 b_1 + \ldots + \alpha_n b_n \tag{7.4}
\]
where the coefficients $\alpha_1, \ldots, \alpha_n \in F$ are uniquely determined. We call these numbers the coordinates of $v$ with respect to the basis $\mathcal{B}$ and sometimes write $v_\mathcal{B}$ for the vector consisting of these numbers.

A vector $v$ in $V$ and its coordinate vector $v_\mathcal{B}$ in $F^n$:
\[
v = \alpha_1 b_1 + \ldots + \alpha_n b_n \in V \quad\longleftrightarrow\quad v_\mathcal{B} = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix} \in F^n. \tag{7.5}
\]
When fixing a basis $\mathcal{B}$ in $V$, each vector $v \in V$ uniquely determines a coordinate vector $v_\mathcal{B} \in F^n$ – and vice versa.
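The correspondence (7.5) and the method of equating the coefficients can be illustrated numerically. The following sketch (assuming numpy; the sample points and the example polynomial $p(x) = 2x^3 - x^2 + 7$ are illustrative choices) recovers a polynomial's coordinate vector with respect to the monomial basis from $n+1$ samples, confirming that the coefficients are uniquely determined:

```python
import numpy as np

# Coordinates of p(x) = 2x^3 - x^2 + 7 with respect to the monomial
# basis B = (m_0, m_1, m_2, m_3) of P_3(R): simply its coefficient list.
p_B = np.array([7.0, 0.0, -1.0, 2.0])       # (a_0, a_1, a_2, a_3)

def eval_from_coordinates(coords, x):
    """Rebuild p(x) = sum_k coords[k] * m_k(x), where m_k(x) = x**k."""
    return sum(a * x**k for k, a in enumerate(coords))

# Corollary 7.15 in action: a degree-n polynomial is pinned down by its
# values at n+1 = 4 distinct points, so sampling and solving the
# Vandermonde system returns exactly the original coordinates.
xs = np.array([0.0, 1.0, 2.0, 3.0])          # illustrative sample points
V = np.vander(xs, 4, increasing=True)        # columns: x^0, x^1, x^2, x^3
ys = np.array([eval_from_coordinates(p_B, x) for x in xs])
recovered = np.linalg.solve(V, ys)
assert np.allclose(recovered, p_B)
```

If two polynomials of degree $n$ agreed for all $x$ but had different coefficient lists, this solve step would have two different solutions, contradicting the invertibility of the Vandermonde matrix.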
Forming the coordinate vector is a linear map.
The translation of a vector $v \in V$ into the coordinate vector $v_\mathcal{B} \in F^n$ defines a linear map:
\[
\Phi_\mathcal{B} : V \to F^n, \quad \Phi_\mathcal{B}(v) = v_\mathcal{B}.
\]
More concretely:
\[
\Phi_\mathcal{B}(\alpha_1 b_1 + \ldots + \alpha_n b_n) = \alpha_1 e_1 + \ldots + \alpha_n e_n.
\]
For all $x, y \in V$ and $\lambda \in F$, the map $\Phi_\mathcal{B}$ satisfies two properties:
\[
\Phi_\mathcal{B}(x + y) = \Phi_\mathcal{B}(x) + \Phi_\mathcal{B}(y), \tag{$+$}
\]
\[
\Phi_\mathcal{B}(\lambda x) = \lambda\, \Phi_\mathcal{B}(x). \tag{$\cdot$}
\]
\[
v = \alpha_1 b_1 + \ldots + \alpha_n b_n \in V \quad\longleftrightarrow\quad \Phi_\mathcal{B}(v) = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix} \in F^n. \tag{7.6}
\]
The linear map $\Phi_\mathcal{B}$ is called the basis isomorphism with respect to the basis $\mathcal{B}$ and is completely defined by $\Phi_\mathcal{B}(b_j) = e_j$ for $j = 1, \ldots, n$.

Example 7.16 (An abstract vector is represented by numbers).
The three functions $\sin$, $\cos$ and $\arctan$ from the vector space $\mathcal{F}(\mathbb{R})$, cf. Example 7.4, span a subspace:
\[
V := \operatorname{Span}(\sin, \cos, \arctan) \subset \mathcal{F}(\mathbb{R}).
\]
In the same manner as before, we can show that the three functions are linearly independent. Hence, they form a basis of $V$:
\[
\mathcal{B} := (\sin, \cos, \arctan).
\]
Now, look at another function $v \in V$ given by
\[
v(x) = 8\cos(x) + 15\arctan(x) \quad\text{for all } x \in \mathbb{R}, \quad\text{hence } v = 8\cos + 15\arctan.
\]
(Figure: the graphs of $\sin$, $\cos$, $\arctan$ and of $v = 8\cos + 15\arctan$.) We find $v = 8\cos + 15\arctan$ directly by using its coordinates:
\[
\Phi_\mathcal{B}(v) = \begin{pmatrix} 0 \\ 8 \\ 15 \end{pmatrix}.
\]

Rule of thumb: $V$ is completely represented by $F^n$.
Each $F$-vector space $V$ with $n := \dim(V) < \infty$ is represented by $F^n$ if you fix a basis $\mathcal{B} = (b_1, \ldots, b_n)$. For each vector $v \in V$, there is exactly one coordinate vector $\Phi_\mathcal{B}(v) \in F^n$. Instead of using $v \in V$, one can also do calculations with $\Phi_\mathcal{B}(v) = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix} \in F^n$.

Example 7.17. The polynomials $p, q \in \mathcal{P}_3(\mathbb{R})$ given by $p(x) = 2x^3 - x^2 + 7$ and $q(x) = x^2 + 3$ can be represented with the monomial basis $\mathcal{B} = (m_0, m_1, m_2, m_3)$ by the coordinate vectors:
\[
\Phi_\mathcal{B}(p) = \begin{pmatrix} 7 \\ 0 \\ -1 \\ 2 \end{pmatrix} \quad\text{and}\quad \Phi_\mathcal{B}(q) = \begin{pmatrix} 3 \\ 0 \\ 1 \\ 0 \end{pmatrix}.
\]

Example 7.18. The matrix $A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix} \in \mathbb{R}^{2\times 2}$ has the following coordinate vector with respect to the basis $\mathcal{B}$ from equation (7.1):
\[
\Phi_\mathcal{B}(A) = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}.
\]
The matrix $3A$ has the coordinate vector $\begin{pmatrix} 3 \\ 6 \\ 9 \end{pmatrix}$.
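The two properties $(+)$ and $(\cdot)$ say that arithmetic with abstract vectors and arithmetic with their coordinate vectors are interchangeable. A small sketch for Example 7.16 (assuming numpy; the second vector $w = \sin - 2\cos$ is a hypothetical extra vector added for the test, not from the notes):

```python
import numpy as np

# Basis isomorphism for V = Span(sin, cos, arctan) with B = (sin, cos, arctan),
# as in Example 7.16. Phi_B^{-1} rebuilds the function from its coordinates.
def phi_inv(coords):
    a, b, c = coords
    return lambda x: a * np.sin(x) + b * np.cos(x) + c * np.arctan(x)

v_B = np.array([0.0, 8.0, 15.0])     # v = 8 cos + 15 arctan
w_B = np.array([1.0, -2.0, 0.0])     # w = sin - 2 cos (hypothetical second vector)

# Linearity (+) and (.): computing with coordinate vectors agrees with
# computing with the functions themselves, at any sample point x.
x = 0.7
v, w = phi_inv(v_B), phi_inv(w_B)
assert np.isclose(phi_inv(v_B + w_B)(x), v(x) + w(x))
assert np.isclose(phi_inv(3.0 * v_B)(x), 3.0 * v(x))
```

This is exactly the rule of thumb above: once the basis is fixed, all calculations in $V$ can be carried out on triples in $\mathbb{R}^3$.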
The matrix
\[
C = \begin{pmatrix} 5 & 0 & 5 \\ 0 & 2 & 0 \\ 5 & 0 & 5 \end{pmatrix} \ldots
\]

7.4.2 Change of basis

For a given basis $\mathcal{B} = (b_1, \ldots, b_n)$ of the vector space $V$, we have the linear map
\[
\Phi_\mathcal{B} : V \to F^n, \quad \Phi_\mathcal{B}(b_j) = e_j \text{ for all } j,
\]
which is also invertible. We have called it the basis isomorphism. For a given element $v = \alpha_1 b_1 + \ldots + \alpha_n b_n$, we can write:
\[
v = \alpha_1 b_1 + \ldots + \alpha_n b_n = \alpha_1 \Phi_\mathcal{B}^{-1}(e_1) + \ldots + \alpha_n \Phi_\mathcal{B}^{-1}(e_n) = \Phi_\mathcal{B}^{-1}(\Phi_\mathcal{B}(v)),
\]
where
\[
\Phi_\mathcal{B}^{-1} : F^n \to V, \quad \Phi_\mathcal{B}^{-1}(e_j) = b_j \text{ for all } j.
\]

Example 7.19. Consider the already introduced monomial basis $\mathcal{B} = (m_2, m_1, m_0) = (x \mapsto x^2,\ x \mapsto x,\ x \mapsto 1)$ of the space $\mathcal{P}_2(\mathbb{R})$ and the polynomial $p \in \mathcal{P}_2(\mathbb{R})$ defined by $p(x) = 4x^2 + 3x - 2$. Then:
\[
p = \Phi_\mathcal{B}^{-1}\begin{pmatrix} 4 \\ 3 \\ -2 \end{pmatrix} = \Phi_\mathcal{B}^{-1}(\Phi_\mathcal{B}(p)), \quad\text{since } p = 4m_2 + 3m_1 - 2m_0.
\]

Example 7.20. Let $V = \mathbb{R}^2$ and choose the basis $\mathcal{B} = \left(\begin{pmatrix}1\\1\end{pmatrix}, \begin{pmatrix}0\\1\end{pmatrix}\right)$. The vector $x = \begin{pmatrix}3\\7\end{pmatrix} \in V$ can then be represented as:
\[
x = \begin{pmatrix}3\\7\end{pmatrix} = 3\begin{pmatrix}1\\1\end{pmatrix} + 4\begin{pmatrix}0\\1\end{pmatrix} = \begin{pmatrix}1 & 0\\1 & 1\end{pmatrix}\begin{pmatrix}3\\4\end{pmatrix} = \Phi_\mathcal{B}^{-1}(\Phi_\mathcal{B}(x)).
\]

Now, let $\mathcal{C} = (c_1, \ldots, c_n)$ be another basis of $V$.

Question: Old versus new coordinates. How to switch between both coordinate vectors,
\[
\mathcal{B}\text{-coordinates } \Phi_\mathcal{B}(v) = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix} \quad\overset{?}{\longleftrightarrow}\quad \mathcal{C}\text{-coordinates } \Phi_\mathcal{C}(v) = \begin{pmatrix} \alpha_1' \\ \vdots \\ \alpha_n' \end{pmatrix}?
\]
(Diagram: $v \in V$ is sent to one copy of $F^n$ by $\Phi_\mathcal{B}$ and to another by $\Phi_\mathcal{C}$; we look for the map between the two copies of $F^n$.)

To get the map from "left to right", set $f : F^n \to F^n$, $f := \Phi_\mathcal{C} \circ \Phi_\mathcal{B}^{-1}$. More concretely, for all canonical unit vectors $e_j \in F^n$, we get:
\[
f(e_j) = \Phi_\mathcal{C}(\Phi_\mathcal{B}^{-1}(e_j)) = \Phi_\mathcal{C}(b_j). \tag{7.7}
\]
Since $f$ is a linear map, we find a uniquely determined matrix $A$ such that $f(x) = Ax$ for all $x \in F^n$. This matrix is determined by equation (7.7) and given a suitable name:

Transformation matrix
\[
T_{\mathcal{C}\leftarrow\mathcal{B}} := \begin{pmatrix} \Phi_\mathcal{C}(b_1) & \cdots & \Phi_\mathcal{C}(b_n) \end{pmatrix} \in F^{n\times n}. \tag{7.8}
\]
The corresponding linear map gives us a sense of switching from the basis $\mathcal{B}$ to the basis $\mathcal{C}$. Also a good mnemonic is:
\[
\Phi_\mathcal{C}^{-1}\, T_{\mathcal{C}\leftarrow\mathcal{B}}\, x = \Phi_\mathcal{B}^{-1}\, x \quad\text{for all } x \in F^n. \tag{7.9}
\]
Now, if we have a vector $v \in V$ and its coordinate vectors $\Phi_\mathcal{B}(v)$ and $\Phi_\mathcal{C}(v)$, respectively, then we can calculate:
\[
T_{\mathcal{C}\leftarrow\mathcal{B}}\, \Phi_\mathcal{B}(v) = \Phi_\mathcal{C}(\Phi_\mathcal{B}^{-1}(\Phi_\mathcal{B}(v))) = \Phi_\mathcal{C}(v).
\]
We fix our result:

Transformation formula
\[
\Phi_\mathcal{C}(v) = T_{\mathcal{C}\leftarrow\mathcal{B}}\, \Phi_\mathcal{B}(v). \tag{7.10}
\]
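For $V = F^n$, computing $\Phi_\mathcal{B}(v)$ amounts to solving a linear system with the basis vectors as columns, and (7.8) and (7.10) can be checked directly. A sketch (assuming numpy; the first basis is the one from Example 7.20, the second basis $\mathcal{C}$ is an invented invertible example):

```python
import numpy as np

# Change of basis in V = R^2. For a basis stored as matrix columns,
# Phi(v) is obtained by solving  [b_1 ... b_n] y = v.
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])          # columns b_1 = (1,1), b_2 = (0,1), Example 7.20
C = np.array([[1.0, 1.0],
              [0.0, 2.0]])          # columns c_1, c_2 of a second basis (assumed)

def phi(basis, v):
    """Coordinates of v with respect to the columns of `basis`."""
    return np.linalg.solve(basis, v)

# Equation (7.8): the columns of T_{C<-B} are the B-basis vectors
# written in C-coordinates.
T_CB = np.column_stack([phi(C, B[:, j]) for j in range(2)])

# Transformation formula (7.10): Phi_C(v) = T_{C<-B} Phi_B(v).
v = np.array([3.0, 7.0])
assert np.allclose(T_CB @ phi(B, v), phi(C, v))
```

The `column_stack` line is the rule of thumb stated below in code form: each $b_j$ is converted into $\mathcal{C}$-coordinates and placed as the $j$-th column.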
(Diagram: for $v \in V$, the two coordinate vectors $\Phi_\mathcal{B}(v) \in F^n$ and $\Phi_\mathcal{C}(v) \in F^n$ are linked by $T_{\mathcal{C}\leftarrow\mathcal{B}}$ in one direction and by $T_{\mathcal{B}\leftarrow\mathcal{C}}$ in the other.) In particular:
\[
T_{\mathcal{B}\leftarrow\mathcal{C}} = (T_{\mathcal{C}\leftarrow\mathcal{B}})^{-1}. \tag{7.11}
\]

Rule of thumb: How to get the transformation matrix $T_{\mathcal{C}\leftarrow\mathcal{B}}$.
The notation $T_{\mathcal{C}\leftarrow\mathcal{B}}$ means: we put the vector in $\mathcal{B}$-coordinates in (from the right) and get out the vector in $\mathcal{C}$-coordinates. To get the transformation matrix $T_{\mathcal{C}\leftarrow\mathcal{B}}$, write the basis vectors of $\mathcal{B}$ in $\mathcal{C}$-coordinates and put them as columns in a matrix.

Example 7.21. We already know the monomial basis
\[
\mathcal{B} = (m_2, m_1, m_0) = (\underbrace{x \mapsto x^2}_{=:b_1},\ \underbrace{x \mapsto x}_{=:b_2},\ \underbrace{x \mapsto 1}_{=:b_3})
\]
in $\mathcal{P}_2(\mathbb{R})$. Now, we can easily show that
\[
\mathcal{C} = (\underbrace{m_2 - \tfrac{1}{2} m_1}_{=:c_1},\ \underbrace{m_2 + \tfrac{1}{2} m_1}_{=:c_2},\ \underbrace{m_0}_{=:c_3})
\]
also defines a basis of $\mathcal{P}_2(\mathbb{R})$. Now we know how to change between these two bases. Therefore, we calculate the transformation matrices. The first thing you should note is that the basis $\mathcal{C}$ is already given in linear combinations of the basis vectors from $\mathcal{B}$. Hence we get:
\[
T_{\mathcal{B}\leftarrow\mathcal{C}} = \begin{pmatrix} 1 & 1 & 0 \\ -\tfrac{1}{2} & \tfrac{1}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]
By calculating the inverse, we get the other transformation matrix $T_{\mathcal{C}\leftarrow\mathcal{B}} = (T_{\mathcal{B}\leftarrow\mathcal{C}})^{-1}$:
\[
T_{\mathcal{C}\leftarrow\mathcal{B}} = \begin{pmatrix} \Phi_\mathcal{C}(b_1) & \Phi_\mathcal{C}(b_2) & \Phi_\mathcal{C}(b_3) \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} & -1 & 0 \\ \tfrac{1}{2} & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]
For an arbitrary polynomial $p(x) = ax^2 + bx + c$ with $a, b, c \in \mathbb{R}$, we get:
\[
\Phi_\mathcal{C}(p) = T_{\mathcal{C}\leftarrow\mathcal{B}}\, \Phi_\mathcal{B}(p) = \begin{pmatrix} \tfrac{1}{2} & -1 & 0 \\ \tfrac{1}{2} & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} \tfrac{a}{2} - b \\ \tfrac{a}{2} + b \\ c \end{pmatrix}.
\]
Hence, for our example $p(x) = 4x^2 + 3x - 2$, i.e. $a = 4$, $b = 3$ and $c = -2$, we get:
\[
\Phi_\mathcal{C}(p) = \begin{pmatrix} \tfrac{4}{2} - 3 \\ \tfrac{4}{2} + 3 \\ -2 \end{pmatrix} = \begin{pmatrix} -1 \\ 5 \\ -2 \end{pmatrix}.
\]
Let us check again if this was all correct:
\[
(-1)\left(x^2 - \tfrac{1}{2}x\right) + 5\left(x^2 + \tfrac{1}{2}x\right) + (-2)\cdot 1 = (-1+5)x^2 + \left(\tfrac{1}{2} + \tfrac{5}{2}\right)x + (-2)\cdot 1 = 4x^2 + 3x - 2.
\]
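Example 7.21 can be replayed numerically (a minimal sketch assuming numpy; polynomials in $\mathcal{P}_2(\mathbb{R})$ are stored as their $\mathcal{B}$-coordinate triples $(\alpha_1, \alpha_2, \alpha_3)$ in the ordering $(m_2, m_1, m_0)$ used above):

```python
import numpy as np

# Bases B = (m2, m1, m0) and C = (m2 - m1/2, m2 + m1/2, m0) of P_2(R).
# Columns of T_BC are the c_j written in B-coordinates:
T_BC = np.array([[ 1.0, 1.0, 0.0],
                 [-0.5, 0.5, 0.0],
                 [ 0.0, 0.0, 1.0]])
T_CB = np.linalg.inv(T_BC)                   # equation (7.11)

p_B = np.array([4.0, 3.0, -2.0])             # p(x) = 4x^2 + 3x - 2
p_C = T_CB @ p_B                             # transformation formula (7.10)
assert np.allclose(p_C, [-1.0, 5.0, -2.0])

# Cross-check: converting the C-coordinates back yields the original
# B-coordinates, i.e. p = (-1) c_1 + 5 c_2 + (-2) c_3.
assert np.allclose(T_BC @ p_C, p_B)
```

This mirrors the hand calculation exactly: inverting $T_{\mathcal{B}\leftarrow\mathcal{C}}$ gives $T_{\mathcal{C}\leftarrow\mathcal{B}}$, and applying it to $(4, 3, -2)^\top$ produces $(-1, 5, -2)^\top$.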