Linear Transformations: The Matrix of a Linear Transformation

Math 240 — Calculus III
Summer 2013, Session II
Tuesday, July 23, 2013

Agenda

1. Linear transformations
   - Linear transformations of Euclidean space
2. Kernel and range
3. The matrix of a linear transformation
   - Composition of linear transformations
   - Kernel and range

Motivation

In the m × n linear system Ax = 0, we can regard A as transforming elements of R^n (as column vectors) into elements of R^m via the rule T(x) = Ax. Then solving the system amounts to finding all of the vectors x ∈ R^n such that T(x) = 0.

Solving the differential equation y'' + y = 0 is equivalent to finding functions y such that T(y) = 0, where T is defined as T(y) = y'' + y.

Definition

Let V and W be vector spaces with the same scalars. A mapping T : V → W is called a linear transformation from V to W if it satisfies

1. T(u + v) = T(u) + T(v), and
2. T(cv) = c T(v)

for all vectors u, v ∈ V and all scalars c. V is called the domain and W the codomain of T.

Examples
- T : R^n → R^m defined by T(x) = Ax, where A is an m × n matrix
- T : C^k(I) → C^(k−2)(I) defined by T(y) = y'' + y
- T : M_{m×n}(R) → M_{n×m}(R) defined by T(A) = A^T
- T : P_1 → P_2 defined by T(a + bx) = (a + 2b) + 3ax + 4bx^2

Examples

1. Verify that T : M_{m×n}(R) → M_{n×m}(R), where T(A) = A^T, is a linear transformation.
   - The transpose of an m × n matrix is an n × m matrix.
   - If A, B ∈ M_{m×n}(R), then T(A + B) = (A + B)^T = A^T + B^T = T(A) + T(B).
   - If A ∈ M_{m×n}(R) and c ∈ R, then T(cA) = (cA)^T = c A^T = c T(A).

2. Verify that T : C^k(I) → C^(k−2)(I), where T(y) = y'' + y, is a linear transformation.
   - If y ∈ C^k(I), then T(y) = y'' + y ∈ C^(k−2)(I).
   - If y_1, y_2 ∈ C^k(I), then
     T(y_1 + y_2) = (y_1 + y_2)'' + (y_1 + y_2) = y_1'' + y_2'' + y_1 + y_2 = (y_1'' + y_1) + (y_2'' + y_2) = T(y_1) + T(y_2).
   - If y ∈ C^k(I) and c ∈ R, then
     T(cy) = (cy)'' + (cy) = c y'' + c y = c(y'' + y) = c T(y).

Specifying linear transformations

A consequence of the properties of a linear transformation is that they preserve linear combinations, in the sense that

T(c_1 v_1 + ··· + c_n v_n) = c_1 T(v_1) + ··· + c_n T(v_n).

In particular, if {v_1, ..., v_n} is a basis for the domain of T, then knowing T(v_1), ..., T(v_n) is enough to determine T everywhere.
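The two defining properties, and the preservation of linear combinations that follows from them, can be spot-checked numerically for the transpose map T(A) = A^T from the examples above. This is a minimal sketch, assuming NumPy is available; the matrices A and B and the scalars are arbitrary test data of my own, not anything from the slides.

    import numpy as np

    rng = np.random.default_rng(0)

    def T(A):
        # The transpose map T : M_{m x n}(R) -> M_{n x m}(R), T(A) = A^T
        return A.T

    A = rng.standard_normal((3, 4))    # arbitrary 3 x 4 test matrices
    B = rng.standard_normal((3, 4))
    c = 2.5

    print(np.allclose(T(A + B), T(A) + T(B)))   # property 1: True
    print(np.allclose(T(c * A), c * T(A)))      # property 2: True

    # Consequence: T preserves arbitrary linear combinations
    c1, c2 = -1.0, 3.0
    print(np.allclose(T(c1 * A + c2 * B), c1 * T(A) + c2 * T(B)))  # True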
Linear transformations from R^n to R^m

Let A be an m × n matrix with real entries and define T : R^n → R^m by T(x) = Ax. Verify that T is a linear transformation.

- If x is an n × 1 column vector, then Ax is an m × 1 column vector.
- T(x + y) = A(x + y) = Ax + Ay = T(x) + T(y)
- T(cx) = A(cx) = c Ax = c T(x)

Such a transformation is called a matrix transformation. In fact, every linear transformation from R^n to R^m is a matrix transformation.

Matrix transformations

Theorem. Let T : R^n → R^m be a linear transformation. Then T is described by the matrix transformation T(x) = Ax, where

A = [ T(e_1)  T(e_2)  ···  T(e_n) ]

and e_1, e_2, ..., e_n denote the standard basis vectors for R^n. This A is called the matrix of T.

Example. Determine the matrix of the linear transformation T : R^4 → R^3 defined by

T(x_1, x_2, x_3, x_4) = (2x_1 + 3x_2 + x_4, 5x_1 + 9x_3 − x_4, 4x_1 + 2x_2 − x_3 + 7x_4).

Kernel

Definition. Suppose T : V → W is a linear transformation. The set consisting of all the vectors v ∈ V such that T(v) = 0 is called the kernel of T. It is denoted

Ker(T) = {v ∈ V : T(v) = 0}.

Example. Let T : C^k(I) → C^(k−2)(I) be the linear transformation T(y) = y'' + y. Its kernel is spanned by {cos x, sin x}.

Remarks
- The kernel of a linear transformation is a subspace of its domain.
- The kernel of a matrix transformation is simply the null space of the matrix.

Range

Definition. The range of the linear transformation T : V → W is the subset of W consisting of everything "hit by" T. In symbols,

Rng(T) = {T(v) ∈ W : v ∈ V}.

Example. Consider the linear transformation T : M_n(R) → M_n(R) defined by T(A) = A + A^T. The range of T is the subspace of symmetric n × n matrices.

Remarks
- The range of a linear transformation is a subspace of its codomain.
- The range of a matrix transformation is the column space of the matrix.

Rank-nullity revisited

Suppose T is the matrix transformation with m × n matrix A. We know that
- Ker(T) = nullspace(A), hence dim(Ker(T)) = nullity(A),
- Rng(T) = colspace(A), hence dim(Rng(T)) = rank(A),
- the domain of T is R^n, hence dim(domain of T) = n.

We know from the rank-nullity theorem that

rank(A) + nullity(A) = n.

This fact is also true when T is not a matrix transformation:

Theorem. If T : V → W is a linear transformation and V is finite-dimensional, then

dim(Ker(T)) + dim(Rng(T)) = dim(V).
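The matrix of the example transformation T : R^4 → R^3 can be assembled column by column from T(e_1), ..., T(e_4), as in the theorem above, and the rank-nullity count can then be checked. A sketch, assuming NumPy; the function and variable names are mine, and the SVD step at the end is one standard way to extract a null-space vector rather than something taken from the slides.

    import numpy as np

    def T(x):
        # The example transformation T : R^4 -> R^3 from above
        x1, x2, x3, x4 = x
        return np.array([2*x1 + 3*x2 + x4,
                         5*x1 + 9*x3 - x4,
                         4*x1 + 2*x2 - x3 + 7*x4])

    # Matrix of T: the j-th column is T(e_j)
    A = np.column_stack([T(e) for e in np.eye(4)])
    print(A)
    # [[ 2.  3.  0.  1.]
    #  [ 5.  0.  9. -1.]
    #  [ 4.  2. -1.  7.]]

    # Rank-nullity: rank(A) + nullity(A) = n = 4
    rank = np.linalg.matrix_rank(A)
    nullity = A.shape[1] - rank
    print(rank, nullity, rank + nullity == A.shape[1])   # 3 1 True

    # A vector spanning Ker(T) = nullspace(A), taken from the SVD
    v = np.linalg.svd(A)[2][-1]
    print(np.allclose(A @ v, 0))   # True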
The function of bases

Theorem. Let V be a vector space with basis {v_1, v_2, ..., v_n}. Then every vector v ∈ V can be written in a unique way as a linear combination

v = c_1 v_1 + c_2 v_2 + ··· + c_n v_n.

In other words, picking a basis for a vector space allows us to give coordinates for points. This will allow us to give matrices for linear transformations of vector spaces besides R^n.

The matrix of a linear transformation

Definition. Let V and W be vector spaces with ordered bases B = {v_1, v_2, ..., v_n} and C = {w_1, w_2, ..., w_m}, respectively, and let T : V → W be a linear transformation. The matrix representation of T relative to the bases B and C is A = [a_{ij}], where

T(v_j) = a_{1j} w_1 + a_{2j} w_2 + ··· + a_{mj} w_m.

In other words, A is the matrix whose j-th column is T(v_j), expressed in coordinates using {w_1, ..., w_m}.

Example

Let T : P_1 → P_2 be the linear transformation defined by

T(a + bx) = (2a − 3b) + (b − 5a)x + (a + b)x^2.

Use the bases {1, x} for P_1 and {1, x, x^2} for P_2 to give a matrix representation of T. We have

T(1) = 2 − 5x + x^2   and   T(x) = −3 + x + x^2,

so

A_1 = [  2  −3 ]
      [ −5   1 ]
      [  1   1 ].

Now use the bases {1, x + 5} for P_1 and {1, 1 + x, 1 + x^2} for P_2. We have

T(1) = 2 − 5x + x^2 = 6(1) − 5(1 + x) + (1 + x^2)

and

T(x + 5) = 7 − 24x + 6x^2 = 25(1) − 24(1 + x) + 6(1 + x^2),

so

A_2 = [  6   25 ]
      [ −5  −24 ]
      [  1    6 ].

Composition of linear transformations

Definition. Let T_1 : U → V and T_2 : V → W be linear transformations.
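Returning to the P_1 → P_2 example above, both matrix representations can be checked numerically. A minimal sketch, assuming NumPy; elements of P_2 are stored as coefficient vectors relative to {1, x, x^2}, coordinates relative to a different basis are obtained by solving a small linear system, and the helper names are my own rather than from the slides.

    import numpy as np

    def T_coeffs(a, b):
        # T(a + bx) = (2a - 3b) + (b - 5a)x + (a + b)x^2, as coefficients in {1, x, x^2}
        return np.array([2*a - 3*b, b - 5*a, a + b], dtype=float)

    def matrix_of_T(domain_basis, codomain_basis):
        # Column j is T(v_j) written in coordinates relative to the codomain basis
        M = np.column_stack(codomain_basis)
        return np.column_stack([np.linalg.solve(M, T_coeffs(a, b))
                                for (a, b) in domain_basis])

    # Bases {1, x} for P_1 (stored as (a, b) pairs) and {1, x, x^2} for P_2
    B1 = [(1, 0), (0, 1)]
    C1 = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])]
    print(matrix_of_T(B1, C1))     # reproduces A_1

    # Bases {1, x + 5} for P_1 and {1, 1 + x, 1 + x^2} for P_2
    B2 = [(1, 0), (5, 1)]
    C2 = [np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 0., 1.])]
    print(matrix_of_T(B2, C2))     # reproduces A_2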