Linear Combination · Linearly Independent/Dependent · Span


Definition (Linear Combination — (Sets of) Vectors). A linear combination of the vectors v1, v2, ..., vn is a sum of scalar multiples of the vectors:

    x1v1 + x2v2 + ··· + xnvn

(the xi's are scalars).

Definition (Linearly Independent/Dependent — (Sets of) Vectors). A set of vectors {v1, ..., vn} is linearly independent if the only solution to x1v1 + ··· + xnvn = 0 is the trivial solution: x1 = ··· = xn = 0. If there is more than one solution to this equation, then the set of vectors is linearly dependent.

Definition (Span — (Sets of) Vectors). The span of a set of vectors {v1, ..., vn} (denoted span{v1, ..., vn}) is the set of all linear combinations of v1, ..., vn.

Definition (Inverse/Invertible Matrix — Matrices). An n × n matrix A is invertible if there exists another n × n matrix A^(-1) such that AA^(-1) = A^(-1)A = I.

Definition (Matrix Transpose — Matrices). If A is an m × n matrix, then the transpose of A (denoted A^T) is the n × m matrix whose i-th row is the i-th column of A: if A has columns v1, ..., vn, then A^T has rows v1^T, ..., vn^T.

Definition (Elementary Row Operation — Matrices). There are three types of elementary row operations for matrices:
1. Swapping two rows of the matrix.
2. Adding a multiple of one row to another row.
3. Scaling a row by a non-zero constant.

Definition (Row Equivalent — Matrices). Two m × n matrices are row equivalent if one can be transformed into the other by a sequence of elementary row operations.

Definition (Reduced Row Echelon Form — Matrices). The reduced row echelon form of an m × n matrix A (denoted rref A) is the m × n matrix which is row equivalent to A and satisfies the following:
1. The leading non-zero entry of each row is 1.
2. All other entries in the columns with leading 1's are 0's.
3. The leading 1 in any row is to the right of all leading 1's in the rows above.
Rows of 0's are at the bottom of the matrix.

Definition (Pivot (Position) — Matrices). A pivot position (or simply, pivot) in a matrix A is a position corresponding to a leading 1 in rref A.

Definition (Pivot/Non-pivot Column — Matrices). A column of a matrix is a pivot column if the column contains a pivot. Otherwise the column is a non-pivot column.

Definition (Column Space — Matrices). The column space of a matrix A (denoted Col A) is the span of the columns of A.

Definition (Rank — Matrices). The rank of a matrix A (denoted rank A) is the dimension of the column space of the matrix, which is the number of pivots/pivot columns in A.

Definition (Null Space — Matrices). The null space of a matrix A (denoted Nul A) is the set of all vectors x satisfying Ax = 0.

Definition (Nullity — Matrices). The nullity of a matrix A is the dimension of the null space of A, which is the number of non-pivot columns in A.

Definition (Row Space — Matrices). The row space of a matrix A is the column space of A^T.

Definition (Diagonal Matrix — Matrices). A diagonal matrix is an n × n matrix where all the entries off the diagonal are 0's.

Definition (Identity Matrix — Matrices). The n × n identity matrix is a diagonal matrix with 1's on its diagonal.

Definition (Vector Space — Vector Spaces). A vector space is a set of objects (e.g. vectors) along with two operations, vector addition and scalar multiplication, which satisfy ten axioms (not listed here).

Definition (Subspace — Vector Spaces). Let V be a vector space. A subspace H of V is a subset of V that satisfies:
1. 0 ∈ H;
2. H is closed under vector addition (if u, v ∈ H, then u + v ∈ H);
3. H is closed under scalar multiplication (if u ∈ H and k is a scalar, then ku ∈ H).
Every subspace is a vector space.

Definition (Basis — Vector Spaces). A basis for a vector space V is a set of linearly independent vectors that spans V (i.e. the span of the vectors is V).
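The matrix definitions above can be checked mechanically. Below is a minimal sketch in Python using SymPy (the choice of library and the example matrix A are assumptions for illustration, not part of the original cards) that computes rref A, the pivot columns, the rank, and a basis for the null space, and confirms that the pivot and non-pivot columns together account for every column.

    from sympy import Matrix

    # Hypothetical 3 x 4 example matrix, chosen to have two pivot columns.
    A = Matrix([
        [1, 2, 0, 3],
        [2, 4, 1, 7],
        [0, 0, 2, 2],
    ])

    R, pivot_cols = A.rref()  # reduced row echelon form and pivot column indices
    print(R)                  # rref A: leading 1's, zeros elsewhere in pivot columns
    print(pivot_cols)         # (0, 2): columns 0 and 2 are the pivot columns
    print(A.rank())           # 2 = number of pivot columns = dim Col A

    nullity = len(A.nullspace())  # basis vectors for Nul A
    print(nullity)                # 2 = number of non-pivot columns = dim Nul A
    assert A.rank() + nullity == A.cols  # pivots + non-pivots = total columns

Row-reducing A by hand with the three elementary row operations gives the same result, since every matrix is row equivalent to exactly one matrix in reduced row echelon form.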
Definition (Standard Basis for R^n — Vector Spaces). The standard basis for R^n is the set of columns of the n × n identity matrix. (The i-th standard basis vector is denoted e_i.)

Definition (Dimension — Vector Spaces). If {v1, ..., vn} is a basis for a vector space V, then the dimension of V is n, the number of vectors in the basis. (A vector space may also be infinite dimensional; it may not be possible to write down a basis for the vector space in this case.)

Definition (Domain/Codomain — Functions and Linear Transformations). If T : V → W is a function, then V is the domain of T, and W is the codomain of T.

Definition (Image (Range) — Functions and Linear Transformations). If T : V → W is a function, then the image (or range) of T (denoted im T) is the set of all outputs of T. That is, the image of T is the set of all vectors b for which T(x) = b has a solution. If A is the standard matrix for T, then the image of T is the column space of A, which is a subspace of W.

Definition (Kernel — Functions and Linear Transformations). If T : V → W is a function, then the kernel of T is the set of all solutions to T(x) = 0. If A is the standard matrix for T, then the kernel of T is the null space of A, which is a subspace of V.

Definition (Onto — Functions and Linear Transformations). If T : V → W is a function, then T is onto if T(x) = b has at least one solution for each b ∈ W. In this case, the image of T is W.

Definition (One-to-one — Functions and Linear Transformations). If T : V → W is a function, then T is one-to-one if T(x) = b has at most one solution for each b ∈ W.

Definition (Bijection and Inverse — Functions and Linear Transformations). If T : V → W is a function that is both one-to-one and onto, then T is a bijection. Moreover, there exists a function T^(-1) : W → V satisfying T^(-1)(T(x)) = x and T(T^(-1)(b)) = b for all x ∈ V and b ∈ W. The function T^(-1) is the inverse of T.

Definition (Linear Transformation — Functions and Linear Transformations). Let V and W be vector spaces. A function T : V → W is a linear transformation if
1. T(u + v) = T(u) + T(v) and
2. T(ku) = kT(u)
for any u, v ∈ V and any scalar k.

Definition (Standard Matrix for a Linear Transformation — Functions and Linear Transformations). If T : V → W is a linear transformation, dim V = m and dim W = n, then there is an n × m matrix A that satisfies T(x) = Ax. This matrix A is the standard matrix for T. Moreover, the columns of A are the images of the standard basis vectors under T:

    A = [T(e_1) T(e_2) ··· T(e_m)].
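To make the standard-matrix card concrete, here is a short sketch in Python with NumPy (the specific map T below is a made-up example, not from the cards): the matrix is assembled column by column from the images of the standard basis vectors, then checked against T on an arbitrary vector.

    import numpy as np

    # Hypothetical linear map T : R^3 -> R^2, given directly by a formula.
    def T(x):
        return np.array([x[0] + 2 * x[1], 3 * x[2] - x[1]])

    m = 3  # dimension of the domain
    # Columns of the standard matrix are T(e_1), ..., T(e_m).
    A = np.column_stack([T(e) for e in np.eye(m)])
    print(A)  # the 2 x 3 standard matrix for T

    x = np.array([1.0, -2.0, 4.0])
    assert np.allclose(T(x), A @ x)  # T(x) = Ax for this (and every) x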
Definition (Coordinates — Coordinate Systems). If B = {b1, ..., bn} is a basis for a vector space V, then each b ∈ V is expressible as a unique linear combination of the basis elements:

    b = x1b1 + ··· + xnbn.

The coordinates for b are the coefficients in this expansion, and we write [b]_B = (x1, x2, ..., xn)^T.

Definition (Change of Basis Matrix — Coordinate Systems). Let B = {b1, ..., bn} and D = {d1, ..., dn} be bases for a vector space V. Then the change of basis matrix from B to D is

    P_{D←B} = [ [b1]_D [b2]_D ··· [bn]_D ].

For any vector v ∈ V, this matrix satisfies P_{D←B}[v]_B = [v]_D.

Definition (B-Matrix for a Linear Transformation — Coordinate Systems). Let T : V → W be a linear transformation between vector spaces. Let B = {b1, ..., bn} be a basis for V and D = {d1, ..., dn} be a basis for W. Then the B-matrix for the linear transformation is

    B = [ [T(b1)]_D [T(b2)]_D ··· [T(bn)]_D ].

Definition (Trace — Matrix Invariant). The trace of a square matrix is the sum of the entries on the diagonal of the matrix.

Definition (Determinant — Matrix Invariant). The determinant of a square matrix A (denoted det A) is the scalar obtained from the entries of A by cofactor expansion along any row or column; for a 2 × 2 matrix, det [a b; c d] = ad − bc.

Theorem (Eigenvalue Formula for Trace — Matrix Invariant). For any square matrix, the trace of the matrix is equal to the sum of its eigenvalues.

Theorem (Eigenvalue Formula for Determinant — Matrix Invariant). For any square matrix, the determinant of the matrix is equal to the product of its eigenvalues.

Definition (Characteristic Polynomial — Eigenvalues and Eigenvectors). Let A be a square matrix. The characteristic polynomial of A is det(A − λI).

Definition (Eigenvalue — Eigenvalues and Eigenvectors). Let A be a square matrix. The eigenvalues of A are the roots of the characteristic polynomial of A.

Definition (Multiplicity of an Eigenvalue — Eigenvalues and Eigenvectors). Let A be an n × n matrix, and let λ1, λ2, ..., λk be the distinct eigenvalues of A. Then the characteristic polynomial of A factors as

    det(A − λI) = (λ1 − λ)^(α1) (λ2 − λ)^(α2) ··· (λk − λ)^(αk).

The multiplicity of the eigenvalue λi is αi.
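The two eigenvalue theorems lend themselves to a quick numerical sanity check. Below is a sketch with NumPy (the matrix A is an arbitrary made-up example): the trace is compared with the sum of the eigenvalues, the determinant with their product, and np.poly recovers the coefficients of the characteristic polynomial.

    import numpy as np

    # Arbitrary example matrix (made up for illustration).
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    eigvals = np.linalg.eigvals(A)
    print(np.trace(A), eigvals.sum())        # trace = sum of eigenvalues
    print(np.linalg.det(A), eigvals.prod())  # det   = product of eigenvalues
    print(np.poly(A))                        # coefficients of det(lambda*I - A)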