Non-Orthogonal Bases


Georgia Tech ECE 6250, Fall 2016; notes by J. Romberg. Last updated 11:51, October 5, 2016.

Although orthogonal bases have many useful properties, it is possible (and sometimes desirable) to use a set of non-orthogonal basis functions for discretization. The main property we want from a basis is that the mapping from signal space to coefficient space is a stable bijection: each signal should have a different set of expansion coefficients, and each set of expansion coefficients should correspond to a different signal. Small changes in the expansion coefficients should not lead to large changes in the re-synthesized signal. We also want a concrete method for calculating the coefficients in a basis expansion. We start our discussion in the familiar setting of $\mathbb{R}^N$.

Bases in $\mathbb{R}^N$

Let $\psi_1, \ldots, \psi_N \in \mathbb{R}^N$ be a basis for $\mathbb{R}^N$. We know from basic linear algebra that these vectors form a basis if and only if they are linearly independent. For any $x \in \mathbb{R}^N$, we have
\[
x = \sum_{n=1}^{N} \alpha_n \psi_n, \tag{1}
\]
for some coefficient sequence $\alpha = [\alpha_1\ \alpha_2\ \cdots\ \alpha_N]^{\mathrm{T}}$.

How do we compute the $\alpha_n$? The answer turns out to be straightforward as soon as we have everything written down the right way. We can write the decomposition in (1) as
\[
x = \alpha_1 \psi_1 + \cdots + \alpha_N \psi_N
  = \begin{bmatrix} \psi_1 & \psi_2 & \cdots & \psi_N \end{bmatrix}
    \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_N \end{bmatrix}
  = \Psi\alpha.
\]
That is, the basis vectors are concatenated as columns in the $N \times N$ matrix $\Psi$. Since its columns are linearly independent, $\Psi$ is invertible, and so we have the reproducing formula
\[
x = \Psi\Psi^{-1}x = \sum_{n=1}^{N} \langle x, \tilde\psi_n \rangle \psi_n,
\]
where $\tilde\psi_n^{\mathrm{T}}$ is the $n$th row of the inverse of $\Psi$:
\[
\Psi^{-1} = \begin{bmatrix} \tilde\psi_1^{\mathrm{T}} \\ \tilde\psi_2^{\mathrm{T}} \\ \vdots \\ \tilde\psi_N^{\mathrm{T}} \end{bmatrix}.
\]
We have

Transform: $\alpha_n = \langle x, \tilde\psi_n \rangle$, $n = 1, \ldots, N$;

Inverse transform (synthesis): $x = \sum_{n=1}^{N} \alpha_n \psi_n$.
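As a quick numerical sanity check, the transform/inverse transform pair above amounts to multiplying by $\Psi^{-1}$ and $\Psi$. The sketch below uses a random basis (not one from the notes) to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# Columns of Psi form a (generically non-orthogonal) basis for R^N.
Psi = rng.standard_normal((N, N))

# Rows of Psi^{-1} are the dual basis vectors (as row vectors).
Psi_inv = np.linalg.inv(Psi)

x = rng.standard_normal(N)

# Transform: alpha_n = <x, dual_n>; computing Psi_inv @ x does all N
# inner products at once.
alpha = Psi_inv @ x

# Inverse transform (synthesis): x = sum_n alpha_n psi_n
x_hat = Psi @ alpha
assert np.allclose(x_hat, x)

# Biorthogonality: <psi_n, dual_l> = 1 if n == l, else 0
assert np.allclose(Psi_inv @ Psi, np.eye(N))
```

A random Gaussian matrix is invertible with probability one, which is why no explicit independence check is needed here.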
So we compute the expansion coefficients by taking inner products against basis signals, just not the same basis signals as we are using to re-synthesize $x$. The $\tilde\psi_1, \ldots, \tilde\psi_N$ are themselves linearly independent, and are called the dual basis for $\psi_1, \ldots, \psi_N$. Also note that while neither the $\{\psi_n\}$ nor the $\{\tilde\psi_n\}$ are orthonormal, jointly they obey the relation
\[
\langle \psi_n, \tilde\psi_\ell \rangle = \begin{cases} 1, & n = \ell, \\ 0, & n \neq \ell. \end{cases}
\]
(This follows simply from the fact that $\Psi^{-1}\Psi = \mathrm{I}$.) For this reason, $\{\psi_n\}$ and $\{\tilde\psi_n\}$ are called biorthogonal bases.

Bases for subspaces of $\mathbb{R}^N$

Suppose that $\mathcal{T}$ is an $M$-dimensional ($M < N$) subspace of $\mathbb{R}^N$, and $\psi_1, \ldots, \psi_M \in \mathcal{T}$ is a basis for this space. Now
\[
\Psi = \begin{bmatrix} \psi_1 & \cdots & \psi_M \end{bmatrix}
\]
is $N \times M$; it is not square, and so it is not invertible. But since the $\psi_m$ are linearly independent, it does have a left inverse, and hence we can derive a reproducing formula.

Let $x \in \mathcal{T}$, so there exists an $\alpha \in \mathbb{R}^M$ such that $x = \Psi\alpha$. Then the reproducing formula is
\[
x = \Psi(\Psi^{\mathrm{T}}\Psi)^{-1}\Psi^{\mathrm{T}}x.
\]
(The $M \times M$ matrix $\Psi^{\mathrm{T}}\Psi$ is invertible by the linear independence of $\psi_1, \ldots, \psi_M$.) To see that the formula above holds, simply plug $x = \Psi\alpha$ into the expression on the right-hand side. We can write
\[
x = \sum_{m=1}^{M} \langle x, \tilde\psi_m \rangle \psi_m,
\]
where $\tilde\psi_m^{\mathrm{T}}$ is the $m$th row of the $M \times N$ matrix $(\Psi^{\mathrm{T}}\Psi)^{-1}\Psi^{\mathrm{T}}$. Notice that when $M = N$ (and so $\Psi$ is square and invertible), this agrees with the result in the previous section, as in this case
\[
(\Psi^{\mathrm{T}}\Psi)^{-1}\Psi^{\mathrm{T}} = \Psi^{-1}(\Psi^{\mathrm{T}})^{-1}\Psi^{\mathrm{T}} = \Psi^{-1}.
\]

Bases in finite-dimensional spaces

The construction of the dual basis in $\mathbb{R}^N$ (which tells us how to compute the expansion coefficients for a basis) relied on manipulating matrices that contained the basis vectors. If our signals of interest are not length-$N$ vectors, but still live in a finite-dimensional Hilbert space $S$, then we can proceed in a similar manner. Let $\psi_1(t), \ldots, \psi_N(t)$ be a basis for an $N$-dimensional inner product space $S$.
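Before moving on, the subspace reproducing formula above can also be checked numerically. The sketch below uses a random $N \times M$ matrix to stand in for the basis (the dimensions and seed are illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 3

# Columns of Psi: a basis for an M-dimensional subspace T of R^N.
Psi = rng.standard_normal((N, M))

# Rows of the left inverse (Psi^T Psi)^{-1} Psi^T are the dual vectors.
Psi_dual = np.linalg.inv(Psi.T @ Psi) @ Psi.T    # M x N

# Left-inverse property: Psi_dual @ Psi is the M x M identity.
assert np.allclose(Psi_dual @ Psi, np.eye(M))

# Reproducing formula for x in T: x = Psi (Psi^T Psi)^{-1} Psi^T x.
alpha = rng.standard_normal(M)
x = Psi @ alpha                     # an arbitrary element of T
assert np.allclose(Psi @ (Psi_dual @ x), x)
```

For a general $y \in \mathbb{R}^N$ not in $\mathcal{T}$, the same operator $\Psi(\Psi^{\mathrm{T}}\Psi)^{-1}\Psi^{\mathrm{T}}$ returns the orthogonal projection of $y$ onto $\mathcal{T}$, which is consistent with it acting as the identity on $\mathcal{T}$ itself.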
Let $x(t) \in S$ be an arbitrary signal in this space. We know that the closest point to $x$ in $S$ is $x$ itself, and from our work on the closest point problem, we know that we can write
\[
x(t) = \sum_{n=1}^{N} \alpha_n \psi_n(t),
\]
where
\[
\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_N \end{bmatrix}
=
\begin{bmatrix}
\langle \psi_1, \psi_1 \rangle & \langle \psi_2, \psi_1 \rangle & \cdots & \langle \psi_N, \psi_1 \rangle \\
\langle \psi_1, \psi_2 \rangle & \langle \psi_2, \psi_2 \rangle & \cdots & \langle \psi_N, \psi_2 \rangle \\
\vdots & & \ddots & \vdots \\
\langle \psi_1, \psi_N \rangle & \langle \psi_2, \psi_N \rangle & \cdots & \langle \psi_N, \psi_N \rangle
\end{bmatrix}^{-1}
\begin{bmatrix} \langle x, \psi_1 \rangle \\ \langle x, \psi_2 \rangle \\ \vdots \\ \langle x, \psi_N \rangle \end{bmatrix}.
\]
Use $H$ to denote the inverse Gram matrix above. Then
\[
\alpha_n = \sum_{\ell=1}^{N} H_{n,\ell} \langle x, \psi_\ell \rangle
= \left\langle x, \sum_{\ell=1}^{N} H_{n,\ell} \psi_\ell \right\rangle,
\]
where moving the sum inside the inner product uses the fact that the entries of $H$ are real. Thus
\[
x(t) = \sum_{n=1}^{N} \langle x, \tilde\psi_n \rangle \psi_n(t),
\quad \text{where} \quad
\tilde\psi_n(t) = \sum_{\ell=1}^{N} H_{n,\ell} \psi_\ell(t).
\]

Example: Let $S$ be the space of all second-order polynomials on $[0, 1]$. Set
\[
\psi_1(t) = 1, \quad \psi_2(t) = t, \quad \psi_3(t) = t^2.
\]
Then
\[
H = \begin{bmatrix} 1 & 1/2 & 1/3 \\ 1/2 & 1/3 & 1/4 \\ 1/3 & 1/4 & 1/5 \end{bmatrix}^{-1}
= \begin{bmatrix} 9 & -36 & 30 \\ -36 & 192 & -180 \\ 30 & -180 & 180 \end{bmatrix},
\]
and
\[
\tilde\psi_1(t) = 30t^2 - 36t + 9, \quad
\tilde\psi_2(t) = -180t^2 + 192t - 36, \quad
\tilde\psi_3(t) = 180t^2 - 180t + 30.
\]

Non-orthogonal bases in infinite dimensions: Riesz bases

When $S$ is infinite-dimensional, we have to proceed with a little more caution. It is possible to have an infinite set of vectors which are linearly independent and span $S$ (after closure), yet for which the representation is completely unstable.

Recall that a (possibly infinite) set of vectors is called linearly independent if no finite subset is linearly dependent. Trouble can come when larger and larger subsets come closer and closer to being linearly dependent. That is, if $\{\psi_n,\ n \geq 1\}$ is a set of vectors, there might be no $\alpha_2, \ldots, \alpha_L$ such that
\[
\psi_1 = \sum_{\ell=2}^{L} \alpha_\ell \psi_\ell
\]
exactly, no matter how large $L$ is, but there could be an infinite sequence $\{\alpha_\ell\}$ such that
\[
\lim_{L \to \infty} \left\| \psi_1 - \sum_{\ell=2}^{L} \alpha_\ell \psi_\ell \right\| = 0.
\]
We will see an example of this below.
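As an aside, the second-order polynomial example above is easy to verify numerically. In this sketch, inner products of polynomials are computed exactly by coefficient arithmetic (multiply, antidifferentiate, evaluate at 1):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Gram matrix of 1, t, t^2 under <f, g> = integral_0^1 f(t) g(t) dt.
G = np.array([[1,   1/2, 1/3],
              [1/2, 1/3, 1/4],
              [1/3, 1/4, 1/5]])
H = np.linalg.inv(G)
assert np.allclose(H, [[9, -36, 30], [-36, 192, -180], [30, -180, 180]])

# Coefficient arrays in the order (constant, t, t^2).
basis = [np.array([1.0]), np.array([0.0, 1.0]), np.array([0.0, 0.0, 1.0])]
duals = [H[n] for n in range(3)]   # dual_n(t) = sum_l H[n, l] t^l

def inner(p, q):
    # <p, q> on [0, 1]: multiply coefficients, integrate from 0, evaluate at 1.
    return P.polyval(1.0, P.polyint(P.polymul(p, q)))

# Biorthogonality: <psi_n, dual_l> = 1 if n == l, else 0.
B = [[inner(basis[n], duals[l]) for l in range(3)] for n in range(3)]
assert np.allclose(B, np.eye(3))
```

The rows of $H$ read off the dual polynomials directly: the first row $(9, -36, 30)$ gives $\tilde\psi_1(t) = 9 - 36t + 30t^2$, matching the example.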
Our definition of basis prevents sequences like the above from occurring.

Definition. We say $\{\psi_n\}_{n=1}^{\infty}$ is a Riesz basis if there exist constants $A, B > 0$ such that
\[
A \sum_{n=1}^{\infty} |\alpha_n|^2 \;\leq\; \left\| \sum_{n=1}^{\infty} \alpha_n \psi_n \right\|^2 \;\leq\; B \sum_{n=1}^{\infty} |\alpha_n|^2
\]
uniformly for all sequences $\{\alpha_n\}$ with $\sum_{n=1}^{\infty} |\alpha_n|^2 < \infty$. (This definition uses the natural numbers to index the set of basis functions, but of course it applies equally to any countably infinite sequence.)

Note that if $\{\psi_n\}$ is an orthonormal basis, then it is a Riesz basis with $A = B = 1$ (Parseval's theorem).

Our first example is something which is not a Riesz basis.

Example: Multiscale tent functions. Consider this set of continuous-time signals on $[0, 1]$:
\[
\phi_0(t) = \sqrt{2}\,(1 - t), \qquad \phi_1(t) = \sqrt{2}\, t,
\]
\[
\psi_0(t) = \begin{cases} \sqrt{3}\, t, & 0 \leq t \leq 1/2, \\ \sqrt{3}\,(1 - t), & 1/2 \leq t \leq 1, \end{cases}
\]
\[
\psi_{j,n}(t) = 2^{j/2} \psi_0(2^j t - n), \quad j \geq 1, \quad n = 0, \ldots, 2^j - 1.
\]

(Sketch omitted: $\phi_0$ and $\phi_1$ are ramps, $\psi_0$ is a tent on $[0, 1]$, and each $\psi_{j,n}$ is a tent supported on the dyadic interval $[n 2^{-j}, (n+1) 2^{-j}]$.)

From the sketch, it should be clear that
\[
\operatorname{Span}\bigl( \{\phi_0, \phi_1, \psi_0\} \cup \{\psi_{j,n},\ 1 \leq j \leq J,\ n = 0, \ldots, 2^j - 1\} \bigr)
\]
is the set of all continuous piecewise-linear functions on dyadic intervals of length $2^{-J}$. Since this set is dense in $L_2([0, 1])$, we can write
\[
x(t) = b_0 \phi_0(t) + b_1 \phi_1(t) + \sum_{j,n} c_{j,n} \psi_{j,n}(t)
\]
for some sequence of numbers $\{b_0, b_1, c_{j,n}\}$. The problem is that this sequence of numbers might not be well-behaved.

To see that this collection of functions cannot be a Riesz basis, notice that using the functions with $1 \leq j \leq J$, we can match the samples of any function on the grid with spacing $2^{-J}$, with linear interpolation in between these samples. From this, we see that $\phi_0(t)$ can be matched exactly on the interval $[2^{-J-1}, 1]$ with a linear combination of $\psi_{j,n},\ 0 \leq j \leq J$.
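The matching claim can be illustrated numerically: fit $\phi_0$ by least squares using only the tent functions up to scale $J$, and watch the residual shrink as $J$ grows. This is a sketch on a discrete grid; the grid size and the choice of least squares are implementation choices, not from the notes:

```python
import numpy as np

def phi0(t):
    return np.sqrt(2) * (1 - t)

def tent(u):
    # psi_0 evaluated at u: rises on [0, 1/2], falls on [1/2, 1], zero elsewhere
    return np.sqrt(3) * np.where(u < 0.5, u, 1 - u) * ((u >= 0) & (u <= 1))

K = 2**12
t = (np.arange(K) + 0.5) / K    # midpoint grid on [0, 1]

residuals = []
for J in range(1, 7):
    # Columns: psi_0, then psi_{j,n} = 2^{j/2} psi_0(2^j t - n), 1 <= j <= J.
    cols = [tent(t)]
    for j in range(1, J + 1):
        for n in range(2**j):
            cols.append(2 ** (j / 2) * tent(2**j * t - n))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, phi0(t), rcond=None)
    residuals.append(np.linalg.norm(A @ coef - phi0(t)) / np.sqrt(K))

# The L2 approximation error decays as scales are added: the only part of
# phi0 the tents cannot reach is a shrinking neighborhood of t = 0, where
# every tent vanishes but phi0 does not.
assert all(r2 < r1 for r1, r2 in zip(residuals, residuals[1:]))
```

The residual never reaches zero for finite $J$ (the tents all vanish at $t = 0$ while $\phi_0(0) = \sqrt{2}$), but it goes to zero in the limit, which is exactly the instability the text is setting up.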
This means that there is a sequence of numbers $\{\beta_{j,n}\}$ such that
\[
\phi_0(t) = \sum_{j \geq 0} \sum_{n=0}^{2^j - 1} \beta_{j,n} \psi_{j,n}(t).
\]
But then the nonzero coefficient sequence $\{1, 0, -\beta_{j,n},\ j \geq 0,\ n = 0, \ldots, 2^j - 1\}$, applied to $\phi_0$, $\phi_1$, and the $\psi_{j,n}$, synthesizes the zero signal, thus violating the condition that $A > 0$ uniformly.
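The failure of the lower Riesz bound can also be seen directly: discretize the full system $\{\phi_0, \phi_1, \psi_0, \psi_{j,n}\}$ on a grid and compute the smallest singular value of the synthesis matrix, whose square is a discrete proxy for the best possible lower constant $A$ for the truncated system. It sinks toward zero as the maximum scale $J$ grows (a sketch; the discretization details are implementation choices):

```python
import numpy as np

def tent(u):
    # psi_0 evaluated at u; zero outside [0, 1], peak at u = 1/2
    return np.sqrt(3) * np.where(u < 0.5, u, 1 - u) * ((u >= 0) & (u <= 1))

def synthesis_matrix(J, K=2**12):
    t = (np.arange(K) + 0.5) / K
    cols = [np.sqrt(2) * (1 - t),        # phi_0
            np.sqrt(2) * t,              # phi_1
            tent(t)]                     # psi_0
    for j in range(1, J + 1):
        for n in range(2**j):
            cols.append(2 ** (j / 2) * tent(2**j * t - n))
    # The 1/sqrt(K) weight makes column inner products approximate L2([0,1]).
    return np.column_stack(cols) / np.sqrt(K)

# Smallest singular value of the truncated synthesis map, for growing J.
smins = [np.linalg.svd(synthesis_matrix(J), compute_uv=False)[-1]
         for J in range(1, 6)]

# Adding columns can only shrink the smallest singular value, and here it
# keeps shrinking: the fine-scale tents come ever closer to reproducing
# phi_0, so no uniform A > 0 exists.
assert all(s2 <= s1 + 1e-12 for s1, s2 in zip(smins, smins[1:]))
assert smins[-1] < smins[0]
```

The coefficient vector $(1, 0, -\beta_{j,n})$ from the argument above is precisely a direction along which $\|\Psi\alpha\|^2 / \|\alpha\|^2$ collapses as more scales are included.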