
Linear Transformations

6.1 Introduction to Linear Transformations 6.2 The Kernel and Range of a Linear Transformation 6.3 Matrices for Linear Transformations 6.4 Transition Matrices and Similarity 6.5 Applications of Linear Transformations

6.1 Introduction to Linear Transformations

 A function T that maps a vector space V into a vector space W (T: V → W, with V, W vector spaces):
V: the domain of T
W: the codomain of T

 Image of v under T: If v is a vector in V and w is a vector in W such that T(v) = w, then w is called the image of v under T (for each v, there is only one w)

 The range of T: the set of all images of vectors in V (see the figure on the next slide)
 The preimage of w: the set of all v in V such that T(v) = w (for each w, v may not be unique)

 The graphical representations of the domain, codomain, and range

※ For example, V is R3, W is R3, and T is the orthogonal projection of any vector (x, y, z) onto the xy-plane, i.e. T(x, y, z) = (x, y, 0) (we will use this example many times to explain abstract notions)
※ Then the domain is R3, the codomain is R3, and the range is the xy-plane (a subspace of the codomain R3)
※ (2, 1, 0) is the image of (2, 1, 3)
※ The preimage of (2, 1, 0) is (2, 1, s), where s is any real number

 Ex 1: A function from R2 into R2
T: R2 → R2, v = (v1, v2) ∈ R2

T(v1, v2) = (v1 - v2, v1 + 2v2)
(a) Find the image of v = (-1, 2)
(b) Find the preimage of w = (-1, 11)
Sol:
(a) v = (-1, 2)
⟹ T(v) = T(-1, 2) = (-1 - 2, -1 + 2(2)) = (-3, 3)
(b) T(v) = w = (-1, 11)
T(v1, v2) = (v1 - v2, v1 + 2v2) = (-1, 11)
⟹ v1 - v2 = -1
   v1 + 2v2 = 11
⟹ v1 = 3, v2 = 4, so {(3, 4)} is the preimage of w = (-1, 11)

 Linear transformation:
V, W: vector spaces
T: V → W: a linear transformation of V into W if the following two properties are true
(1) T(u + v) = T(u) + T(v), ∀ u, v ∈ V
(2) T(cu) = cT(u), ∀ c ∈ R

 Notes:
(1) A linear transformation is said to be operation preserving (because the same result occurs whether the operations of addition and scalar multiplication are performed before or after the linear transformation is applied)
T(u + v) = T(u) + T(v)   (addition in V → addition in W)
T(cu) = cT(u)   (scalar multiplication in V → scalar multiplication in W)

(2) A linear transformation T: V → V from a vector space into itself is called a linear operator

 Ex 2: Verifying a linear transformation T from R2 into R2

T(v1, v2) = (v1 - v2, v1 + 2v2)
Pf:
u = (u1, u2), v = (v1, v2): vectors in R2; c: any real number
(1) Vector addition:
u + v = (u1, u2) + (v1, v2) = (u1 + v1, u2 + v2)
T(u + v) = T(u1 + v1, u2 + v2)
= ((u1 + v1) - (u2 + v2), (u1 + v1) + 2(u2 + v2))
= ((u1 - u2) + (v1 - v2), (u1 + 2u2) + (v1 + 2v2))
= (u1 - u2, u1 + 2u2) + (v1 - v2, v1 + 2v2)
= T(u) + T(v)
(2) Scalar multiplication:
cu = c(u1, u2) = (cu1, cu2)
T(cu) = T(cu1, cu2) = (cu1 - cu2, cu1 + 2cu2)
= c(u1 - u2, u1 + 2u2) = cT(u)
Therefore, T is a linear transformation
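The two defining properties can also be sanity-checked numerically on random integer vectors (exact arithmetic, so no rounding concerns); a spot check, not a proof:

```python
import random

def T(v):
    # the map from Ex 2: T(v1, v2) = (v1 - v2, v1 + 2*v2)
    v1, v2 = v
    return (v1 - v2, v1 + 2 * v2)

random.seed(0)
for _ in range(100):
    u = (random.randint(-9, 9), random.randint(-9, 9))
    v = (random.randint(-9, 9), random.randint(-9, 9))
    c = random.randint(-9, 9)
    # additivity: T(u + v) = T(u) + T(v)
    uv = (u[0] + v[0], u[1] + v[1])
    assert T(uv) == tuple(a + b for a, b in zip(T(u), T(v)))
    # homogeneity: T(c*u) = c*T(u)
    cu = (c * u[0], c * u[1])
    assert T(cu) == tuple(c * a for a in T(u))
```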

 Ex 3: Functions that are not linear transformations
(a) f(x) = sin x
sin(x1 + x2) ≠ sin(x1) + sin(x2)
e.g. sin(π/2 + π/3) ≠ sin(π/2) + sin(π/3)
(f(x) = sin x is not a linear transformation)
(b) f(x) = x²
(x1 + x2)² ≠ x1² + x2²
e.g. (1 + 2)² ≠ 1² + 2²
(f(x) = x² is not a linear transformation)
(c) f(x) = x + 1
f(x1 + x2) = x1 + x2 + 1
f(x1) + f(x2) = (x1 + 1) + (x2 + 1) = x1 + x2 + 2
f(x1 + x2) ≠ f(x1) + f(x2)
(f(x) = x + 1 is not a linear transformation, although it is a linear function)
In fact, f(cx) ≠ cf(x) as well

 Notes: Two uses of the term "linear"

(1) f(x) = x + 1 is called a linear function because its graph is a line

(2) f(x) = x + 1 is not a linear transformation from the vector space R into R because it preserves neither vector addition nor scalar multiplication

 Zero transformation: T: V → W with T(v) = 0, ∀ v ∈ V

 Identity transformation: T: V → V with T(v) = v, ∀ v ∈ V

 Theorem 6.1: Properties of linear transformations
T: V → W, u, v ∈ V
(1) T(0) = 0 (T(cv) = cT(v) for c = 0)
(2) T(-v) = -T(v) (T(cv) = cT(v) for c = -1)
(3) T(u - v) = T(u) - T(v) (T(u + (-v)) = T(u) + T(-v) and property (2))
(4) If v = c1v1 + c2v2 + … + cnvn, then
T(v) = T(c1v1 + c2v2 + … + cnvn) = c1T(v1) + c2T(v2) + … + cnT(vn)
(iteratively using T(u + v) = T(u) + T(v) and T(cv) = cT(v))

 Ex 4: Linear transformations and bases
Let T: R3 → R3 be a linear transformation such that
T(1, 0, 0) = (2, -1, 4)
T(0, 1, 0) = (1, 5, -2)
T(0, 0, 1) = (0, 3, 1)
Find T(2, 3, -2)
Sol:
(2, 3, -2) = 2(1, 0, 0) + 3(0, 1, 0) - 2(0, 0, 1)

According to the fourth property on the previous slide, T(c1v1 + c2v2 + … + cnvn) = c1T(v1) + c2T(v2) + … + cnT(vn), so
T(2, 3, -2) = 2T(1, 0, 0) + 3T(0, 1, 0) - 2T(0, 0, 1)
= 2(2, -1, 4) + 3(1, 5, -2) - 2(0, 3, 1) = (7, 7, 0)

 Ex 5: A linear transformation defined by a matrix
The function T: R2 → R3 is defined as
T(v) = Av = [ 3  0] [v1]
            [ 2  1] [v2]
            [-1 -2]
(a) Find T(v), where v = (2, -1)
(b) Show that T is a linear transformation from R2 into R3
Sol:
(a) v = (2, -1) (an R2 vector)
T(v) = Av = [ 3  0] [ 2]   [6]
            [ 2  1] [-1] = [3]   (an R3 vector)
            [-1 -2]        [0]
⟹ T(2, -1) = (6, 3, 0)

(b) T(u + v) = A(u + v) = Au + Av = T(u) + T(v)   (vector addition)
T(cu) = A(cu) = c(Au) = cT(u)   (scalar multiplication)

 Theorem 6.2: The linear transformation defined by a matrix
Let A be an m×n matrix. The function T defined by T(v) = Av is a linear transformation from Rn into Rm

n m  Note: R vector R vector

Av = [a11 a12 … a1n] [v1]   [a11v1 + a12v2 + … + a1nvn]
     [a21 a22 … a2n] [v2] = [a21v1 + a22v2 + … + a2nvn]
     [ ⋮   ⋮     ⋮ ] [ ⋮]   [            ⋮            ]
     [am1 am2 … amn] [vn]   [am1v1 + am2v2 + … + amnvn]

T(v) = Av, T: Rn → Rm
※ If T(v) can be represented by Av, then T is a linear transformation
※ If the size of A is m×n, then the domain of T is Rn and the codomain of T is Rm

 Ex 7: Rotation in the plane
Show that the L.T. T: R2 → R2 given by the matrix
A = [cos θ  -sin θ]
    [sin θ   cos θ]
has the property that it rotates every vector in R2 counterclockwise about the origin through the angle θ
Sol:
v = (x, y) = (r cos α, r sin α)
(polar coordinates: every point on the xy-plane can be represented by a pair (r, α))
r: the length of v (r = √(x² + y²))
α: the angle from the positive x-axis counterclockwise to the vector v
T(v) = Av = [cos θ  -sin θ] [x]   [cos θ  -sin θ] [r cos α]
            [sin θ   cos θ] [y] = [sin θ   cos θ] [r sin α]
= [r cos θ cos α - r sin θ sin α]   [r cos(θ + α)]
  [r sin θ cos α + r cos θ sin α] = [r sin(θ + α)]
(according to the addition formulas of trigonometric identities)
r remains the same, which means the length of T(v) equals the length of v
θ + α: the angle from the positive x-axis counterclockwise to the vector T(v)
Thus, T(v) is the vector that results from rotating the vector v counterclockwise through the angle θ

 Ex 8: A projection in R3
The linear transformation T: R3 → R3 given by
A = [1 0 0]
    [0 1 0]
    [0 0 0]
is called a projection in R3
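Ex 7 and Ex 8 can be illustrated with NumPy; the angle θ = π/2 below is our own arbitrary choice:

```python
import numpy as np

# Ex 7: rotation by theta (here 90 degrees counterclockwise)
theta = np.pi / 2
A_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
v = np.array([1.0, 0.0])
w = A_rot @ v
assert np.allclose(w, [0.0, 1.0])                        # (1, 0) rotates to (0, 1)
assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))  # the length is preserved

# Ex 8: orthogonal projection onto the xy-plane
A_proj = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 0]])
assert np.allclose(A_proj @ np.array([2, 1, 3]), [2, 1, 0])
```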

1 0 0  xx          If vv is (x , y , z ), A 0 1 0   y   y  0 0 0  z   0  ※ In other words, T maps every vector in R3 to its orthogonal projection in the xy-plane,

as shown in the figure on the right

Keywords in Section 6.1:

 function

 domain

 codomain

 image of v under T

 range of T

 preimage of w

 linear transformation

 linear operator

 zero transformation

 identity transformation

6.2 The Kernel and Range of a Linear Transformation

 Kernel of a linear transformation T:
Let T: V → W be a linear transformation. Then the set of all vectors v in V that satisfy T(v) = 0 is called the kernel of T and is denoted by ker(T)
ker(T) = {v ∈ V | T(v) = 0}

※ For example, V is R3, W is R3, and T is the orthogonal projection of any vector (x, y, z) onto the xy-plane, i.e. T(x, y, z) = (x, y, 0)
※ Then the kernel of T is the set consisting of (0, 0, s), where s is a real number, i.e.
ker(T) = {(0, 0, s) | s is a real number}

 Ex 2: The kernel of the zero and identity transformations

(a) If T(v) = 0 (the zero transformation T: V → W), then

ker(T) = V

(b) If T(v) = v (the identity transformation T: V → V), then

ker(T) = {0}

 Ex 5: Finding the kernel of a linear transformation
T(x) = Ax = [ 1 -1 -2] [x1]
            [-1  2  3] [x2]   (T: R3 → R2)
                       [x3]
ker(T) = ?
Sol:
ker(T) = {(x1, x2, x3) | T(x1, x2, x3) = (0, 0) and (x1, x2, x3) ∈ R3}
T(x1, x2, x3) = (0, 0)
⟹ [ 1 -1 -2] [x1]   [0]
   [-1  2  3] [x2] = [0]
              [x3]
[ 1 -1 -2 | 0]  G.-J. E.  [1 0 -1 | 0]
[-1  2  3 | 0]  ------->  [0 1  1 | 0]
⟹ [x1]   [ t]     [ 1]
   [x2] = [-t] = t [-1]
   [x3]   [ t]     [ 1]
⟹ ker(T) = {t(1, -1, 1) | t is a real number} = span{(1, -1, 1)}
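The kernel in Ex 5 can be recovered numerically from the SVD: right-singular vectors whose singular value is (numerically) zero span the null space. A sketch:

```python
import numpy as np

A = np.array([[ 1, -1, -2],
              [-1,  2,  3]])

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
kernel_basis = Vt[rank:]        # remaining right-singular vectors span ker(A)

assert kernel_basis.shape == (1, 3)        # the kernel is one-dimensional
assert np.allclose(A @ kernel_basis.T, 0)  # each basis vector maps to 0
# the computed basis vector is parallel to (1, -1, 1)
assert np.allclose(np.cross(kernel_basis[0], [1.0, -1.0, 1.0]), 0)
```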

 Theorem 6.3: The kernel is a subspace of V
The kernel of a linear transformation T: V → W is a subspace of the domain V
Pf:
T(0) = 0 (by Theorem 6.1)
⟹ ker(T) is a nonempty subset of V
Let u and v be vectors in the kernel of T. Then
T(u + v) = T(u) + T(v) = 0 + 0 = 0 (T is a linear transformation)
⟹ u ∈ ker(T), v ∈ ker(T) ⟹ u + v ∈ ker(T)
T(cu) = cT(u) = c0 = 0
⟹ u ∈ ker(T) ⟹ cu ∈ ker(T)
Thus, ker(T) is a subspace of V (according to Theorem 4.5, a nonempty subset of V is a subspace of V if it is closed under vector addition and scalar multiplication)

 Ex 6: Finding a basis for the kernel
Let T: R5 → R4 be defined by T(x) = Ax, where x is in R5 and
A = [ 1  2  0  1 -1]
    [ 2  1  3  1  0]
    [-1  0 -2  0  1]
    [ 0  0  0  2  8]
Find a basis for ker(T) as a subspace of R5
Sol: To find ker(T) means to find all x satisfying T(x) = Ax = 0. Thus we need to form the augmented matrix [A | 0] first

[ 1  2  0  1 -1 | 0]            [1 0  2 0 -1 | 0]
[ 2  1  3  1  0 | 0]  G.-J. E.  [0 1 -1 0 -2 | 0]
[-1  0 -2  0  1 | 0]  ------->  [0 0  0 1  4 | 0]
[ 0  0  0  2  8 | 0]            [0 0  0 0  0 | 0]

Let x3 = s and x5 = t:
x = [x1]   [-2s + t]     [-2]     [ 1]
    [x2]   [ s + 2t]     [ 1]     [ 2]
    [x3] = [   s   ] = s [ 1] + t [ 0]
    [x4]   [  -4t  ]     [ 0]     [-4]
    [x5]   [   t   ]     [ 0]     [ 1]
⟹ B = {(-2, 1, 1, 0, 0), (1, 2, 0, -4, 1)}: one basis for the kernel of T

 Corollary to Theorem 6.3:
Let T: Rn → Rm be the linear transformation given by T(x) = Ax. Then the kernel of T is equal to the solution space of Ax = 0
T(x) = Ax (a linear transformation T: Rn → Rm)
⟹ ker(T) = NS(A) = {x | Ax = 0, x ∈ Rn} (a subspace of Rn)
※ The kernel of T equals the nullspace of A (which is defined in Theorem 4.16 on p.239), and these two are both subspaces of Rn
※ So, the kernel of T is sometimes called the nullspace of T

 Range of a linear transformation T:
Let T: V → W be a linear transformation. Then the set of all vectors w in W that are images of vectors in V is called the range of T and is denoted by range(T)
range(T) = {T(v) | v ∈ V}

※ For the orthogonal projection of any vector (x, y, z) onto the xy-plane, i.e. T(x, y, z) = (x, y, 0):
※ The domain is V = R3, the codomain is W = R3, and the range is the xy-plane (a subspace of the codomain R3)
※ Since T(0, 0, s) = (0, 0, 0) = 0, the kernel of T is the set consisting of (0, 0, s), where s is a real number

 Theorem 6.4: The range of T is a subspace of W
The range of a linear transformation T: V → W is a subspace of W
Pf:
T(0) = 0 (Theorem 6.1)
⟹ range(T) is a nonempty subset of W
Let T(u) and T(v) be vectors in range(T). Then
T(u) + T(v) = T(u + v) ∈ range(T) (because u, v ∈ V ⟹ u + v ∈ V), so the range of T is closed under vector addition
cT(u) = T(cu) ∈ range(T) (because u ∈ V ⟹ cu ∈ V), so the range of T is closed under scalar multiplication
Thus, range(T) is a subspace of W (according to Theorem 4.5, a nonempty subset of W is a subspace of W if it is closed under vector addition and scalar multiplication)

 Notes: For a linear transformation T: V → W,
(1) ker(T) is a subspace of V (Theorem 6.3)
(2) range(T) is a subspace of W (Theorem 6.4)

 Corollary to Theorem 6.4:
Let T: Rn → Rm be the linear transformation given by T(x) = Ax. The range of T is equal to the column space of A, i.e. range(T) = CS(A)
(1) According to the definition of the range of T(x) = Ax, the range of T consists of all vectors b satisfying Ax = b, which is equivalent to finding all vectors b such that the system Ax = b is consistent
(2) Ax = b can be rewritten as

Ax = x1 [a11] + x2 [a12] + … + xn [a1n] = b
        [a21]      [a22]          [a2n]
        [ ⋮ ]      [ ⋮ ]          [ ⋮ ]
        [am1]      [am2]          [amn]

Therefore, the system Ax = b is consistent iff we can find (x1, x2, …, xn) such that b is a linear combination of the column vectors of A, i.e. b ∈ CS(A)
Thus, the range consists of all vectors b that are linear combinations of the column vectors of A, i.e. all b ∈ CS(A). So the column space of the matrix A is the same as the range of T: range(T) = CS(A)

 Use our example to illustrate the corollary to Theorem 6.4:
※ For the orthogonal projection of any vector (x, y, z) onto the xy-plane, i.e. T(x, y, z) = (x, y, 0)
※ According to the above analysis, we already knew that the range of T is the xy-plane, i.e. range(T) = {(x, y, 0) | x and y are real numbers}
※ T can be defined by a matrix A as follows

1 0 0   1 0 0  xx            A0 1 0  , such that  0 1 0   y   y 

0 0 0   0 0 0  z   0  ※ The column space of A is as follows, which is just the xy-plane

1   0   0  x1  CS( A ) x 0   x  1   x  0    x  , where x , x  R 1  2   3    2  1 2         0   0   0   0  6.31  Ex 7: Finding a basis for the range of a linear transformation Let TRRTAR :5 4 be defined by (x ) x , where x is 5 and 1 2 0 1 1 2 1 3 1 0 A   1 0 2 0 1  0 0 0 2 8

Find a basis for the range of T
Sol: Since range(T) = CS(A), finding a basis for the range of T is equivalent to finding a basis for the column space of A

6.32 12011   1 0201  21310   0 1102  AB  G.-J. E.    10201   0 0014      00028   0 0000 

c1 c2 c3 c4 c5 w1 w2 w3 w4 w5

 w1, w 2 , and w 4 are indepdnent, so  w 1 , w 2 , w 4 can form a basis for CS ( B ) Row operations will not affect the dependency among columns

 c1 , c 2 , and c 4 are indepdnent, and thus  c 1 , c 2 , c 4 is a basis for CS ( A ) That is,  (1, 2,  1, 0), (2, 1, 0, 0), (1, 1, 0, 2) is a basis for the range of T 6.33  of a linear transformation T:V→W : rank(TTT ) the dimension of the range of dim(range( )) According to the corollary to Thm. 6.4, range(T) = CS(A), so dim(range(T)) = dim(CS(A))

 Nullity of a linear transformation T:V→W : nullity(TTT ) the dimension of the kernel of dim(ker( )) According to the corollary to Thm. 6.3, ker(T) = NS(A), so dim(ker(T)) = dim(NS(A))

 Note: If TRRTA :nm is a linear transformation given by (xx ) , then rank(TA ) dim(range(T )) dim( CS ( A )) rank( ) nullity(T ) dim(ker(T )) dim( NS (A )) nullity(A) ※ The dimension of the row (or column) space of a matrix A is called the rank of A ※ The dimension of the nullspace of A ( NS ( A )  { xx | A 0} ) is called the nullity of A 6.34  Theorem 6.5: Sum of rank and nullity Let T: V →W be a linear transformation from an n-dimensional vector space V (i.e. the dim(domain of T) is n) into a vector space W. Then rank(Tn ) nullity(T) (i.e. dim(range of T ) dim(kernel of TT) dim(domain of ))

※ You can imagine that the dim(domain of T) would equal the dim(range of T) originally
※ But some dimensions of the domain of T are "absorbed" by the zero vector in W
※ So the dim(range of T) is smaller than the dim(domain of T) by exactly the number of dimensions of the domain of T absorbed by the zero vector, which is the dim(kernel of T)

Pf: Let T be represented by an m×n matrix A, and assume rank(A) = r
(1) rank(T) = dim(range of T) = dim(column space of A) = rank(A) = r
(2) nullity(T) = dim(kernel of T) = dim(null space of A) = n - r
(according to Thm. 4.17, where rank(A) + nullity(A) = n)
⟹ rank(T) + nullity(T) = r + (n - r) = n

※ Here we only consider the case where T is represented by an m×n matrix A. In the next section, we will prove that any linear transformation from an n-dimensional space to an m-dimensional space can be represented by an m×n matrix
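Theorem 6.5 can be checked numerically on the 4×5 matrix A of Ex 6 (a sketch; nullity is computed as n − rank):

```python
import numpy as np

A = np.array([[ 1, 2,  0, 1, -1],
              [ 2, 1,  3, 1,  0],
              [-1, 0, -2, 0,  1],
              [ 0, 0,  0, 2,  8]])

n = A.shape[1]                      # dimension of the domain R^5
rank = np.linalg.matrix_rank(A)     # rank(T) = dim(CS(A))
nullity = n - rank                  # nullity(T) = dim(NS(A))
assert (rank, nullity) == (3, 2)
assert rank + nullity == n          # rank(T) + nullity(T) = n
```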

 Ex 8: Finding the rank and nullity of a linear transformation
Find the rank and nullity of the linear transformation T: R3 → R3 defined by
A = [1 0 -2]
    [0 1  1]
    [0 0  0]
Sol:
rank(T) = rank(A) = 2
nullity(T) = dim(domain of T) - rank(T) = 3 - 2 = 1
※ The rank is determined by the number of leading 1's, and the nullity by the number of free variables (columns without leading 1's)

 Ex 9: Finding the rank and nullity of a linear transformation
Let T: R5 → R7 be a linear transformation
(a) Find the dimension of the kernel of T if the dimension of the range of T is 2
(b) Find the rank of T if the nullity of T is 4
(c) Find the rank of T if ker(T) = {0}
Sol:
(a) dim(domain of T) = n = 5
dim(kernel of T) = n - dim(range of T) = 5 - 2 = 3
(b) rank(T) = n - nullity(T) = 5 - 4 = 1
(c) rank(T) = n - nullity(T) = 5 - 0 = 5

 One-to-one:

A function T: V → W is called one-to-one if the preimage of every w in the range consists of a single vector. This is equivalent to saying that T is one-to-one iff for all u and v in V, T(u) = T(v) implies that u = v

(figure: a one-to-one mapping vs. a mapping that is not one-to-one)

 Theorem 6.6: One-to-one linear transformations
Let T: V → W be a linear transformation. Then T is one-to-one iff ker(T) = {0}

Pf: ( ) Suppose T is one-to-one Due to the fact Then T (v ) 0 can have only one solution : v 0 that T(0) = 0 in Thm. 6.1 i.e. ker(T) {0} ( ) Suppose ker(TTT )={0 } and ( u )= ( v ) T(u  v) T(u) T(v)  0 T is a linear transformation, see Property 3 in Thm. 6.1 u  v ker(T )  u  v  0  u  v

TTT is one-to-one (because (u )  ( v ) implies that u  v ) 6.40  Ex 10: One-to-one and not one-to-one linear transformation

(a) The linear transformation T: Mm×n → Mn×m given by T(A) = Aᵀ is one-to-one because its kernel consists of only the m×n zero matrix

(b) The zero transformation T: R3 → R3 is not one-to-one because its kernel is all of R3

 Onto: A function T: V → W is said to be onto if every element in W has a preimage in V (T is onto W when W is equal to the range of T)

 Theorem 6.7: Onto linear transformations
Let T: V → W be a linear transformation, where W is finite dimensional. Then T is onto if and only if the rank of T is equal to the dimension of W:
rank(T) = dim(range of T) = dim(W)

(the first equality holds by the definition of the rank of a linear transformation; the second by the definition of onto linear transformations)

 Theorem 6.8: One-to-one and onto linear transformations
Let T: V → W be a linear transformation with vector spaces V and W both of dimension n. Then T is one-to-one if and only if it is onto
Pf:
(⟹) If T is one-to-one, then ker(T) = {0} and dim(ker(T)) = 0
(according to the definition of dimension (on p.227) that if a vector space V consists of the zero vector alone, the dimension of V is defined as zero)
By Thm. 6.5, dim(range(T)) = n - dim(ker(T)) = n = dim(W)
Consequently, T is onto
(⟸) If T is onto, then dim(range of T) = dim(W) = n
By Thm. 6.5, dim(ker(T)) = n - dim(range of T) = n - n = 0
⟹ ker(T) = {0}

Therefore, T is one-to-one 6.43  Ex 11: The linear transformation TRRTA :nm is given by (xx ) . Find the nullity and rank of TT and determine whether is one-to-one, onto, or neither 1 2 0 12 1 2 0   1 2 0  (a) A   0 1 1 (b) A   0 1 (c) A  (d) A  0 1 1 0 1 1  0 0 1 00 000 Sol: = dim(range of T) If nullity(T) If rank(T) = = dim(Rn) = # of = (1) – (2) = = dim(ker(T)) dim(Rm) = = n leading 1’s dim(ker(T)) = 0 m

dim(domain rank(T) T:Rn→Rm nullity(T) 1-1 onto of T) (1) (2) (a) T:R3→R3 3 3 0 Yes Yes (b) T:R2→R3 2 2 0 Yes No (c) T:R3→R2 3 2 1 No Yes 3 3 (d) T:R →R 3 2 1 No No 6.44  Isomorphism : A linear transformation T :V W that is one to one and onto is called an isomorphism. Moreover, if V and W are vector spaces such that there exists an isomorphism from V to W, then V and W are said to be isomorphic to each other
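The table of Ex 11 can be reproduced with the two criteria used above: T is one-to-one iff nullity(T) = 0 (i.e. rank = n), and onto iff rank(T) = m. A sketch:

```python
import numpy as np

cases = {
    "a": np.array([[1, 2, 0], [0, 1, 1], [0, 0, 1]]),
    "b": np.array([[1, 2], [0, 1], [0, 0]]),
    "c": np.array([[1, 2, 0], [0, 1, 1]]),
    "d": np.array([[1, 2, 0], [0, 1, 1], [0, 0, 0]]),
}

results = {}
for name, A in cases.items():
    m, n = A.shape
    rank = np.linalg.matrix_rank(A)
    one_to_one = (rank == n)   # nullity = n - rank = 0
    onto = (rank == m)         # the range fills all of R^m
    results[name] = (one_to_one, onto)

assert results == {"a": (True, True), "b": (True, False),
                   "c": (False, True), "d": (False, False)}
```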

 Theorem 6.9: Isomorphic spaces and dimension
Two finite-dimensional vector spaces V and W are isomorphic if and only if they are of the same dimension
Pf:
(⟹) Assume that V is isomorphic to W, where V has dimension n
⟹ There exists a L.T. T: V → W that is one-to-one and onto
T is one-to-one ⟹ dim(ker(T)) = 0
⟹ dim(range of T) = dim(domain of T) - dim(ker(T)) = n - 0 = n (since dim(V) = n)
T is onto ⟹ dim(range of T) = dim(W) = n
Thus dim(V) = dim(W) = n

(⟸) Assume that V and W both have dimension n

Let B = {v1, v2, …, vn} be a basis of V and

B' = {w1, w2, …, wn} be a basis of W
Then an arbitrary vector in V can be represented as

v = c1v1 + c2v2 + … + cnvn
and you can define a L.T. T: V → W as follows

w = T(v) = c1w1 + c2w2 + … + cnwn (by defining T(vi) = wi)

Since B' is a basis for W, {w1, w2, …, wn} is linearly independent, and the only solution for w = 0 is c1 = c2 = … = cn = 0
So for w = 0, the corresponding v is 0, i.e. ker(T) = {0} ⟹ T is one-to-one
By Theorem 6.5, dim(range of T) = dim(domain of T) - dim(ker(T)) = n - 0 = n = dim(W) ⟹ T is onto

Since this linear transformation is both one-to-one and onto, V and W are isomorphic

 Note: Theorem 6.9 tells us that every vector space with dimension n is isomorphic to Rn

 Ex 12: (Isomorphic vector spaces)
The following vector spaces are isomorphic to each other
(a) R4 = 4-space
(b) M4×1 = space of all 4×1 matrices
(c) M2×2 = space of all 2×2 matrices
(d) P3(x) = space of all polynomials of degree 3 or less
(e) V = {(x1, x2, x3, x4, 0) | xi are real numbers} (a subspace of R5)

Keywords in Section 6.2:

 kernel of a linear transformation T

 range of a linear transformation T

 rank of a linear transformation T

 nullity of a linear transformation T

 one-to-one

 onto

 isomorphism (one-to-one and onto)

 isomorphic space

6.3 Matrices for Linear Transformations

 Two representations of the linear transformation T: R3 → R3:

(1) T(x1, x2, x3) = (2x1 + x2 - x3, -x1 + 3x2 - 2x3, 3x2 + 4x3)

(2) T(x) = Ax = [ 2 1 -1] [x1]
                [-1 3 -2] [x2]
                [ 0 3  4] [x3]

 Three reasons for the matrix representation of a linear transformation:

 It is simpler to write

 It is simpler to read

 It is more easily adapted for computer use

 Theorem 6.10: Standard matrix for a linear transformation
Let T: Rn → Rm be a linear transformation such that

T(e1) = [a11] ,  T(e2) = [a12] ,  … ,  T(en) = [a1n] ,
        [a21]            [a22]                 [a2n]
        [ ⋮ ]            [ ⋮ ]                 [ ⋮ ]
        [am1]            [am2]                 [amn]

where {e1, e2, …, en} is the standard basis for Rn. Then the m×n matrix whose n columns correspond to T(ei),

A = [T(e1) T(e2) … T(en)] = [a11 a12 … a1n]
                            [a21 a22 … a2n]
                            [ ⋮   ⋮     ⋮ ]
                            [am1 am2 … amn]

is such that T(v) = Av for every v in Rn. A is called the standard matrix for T

v1 1   0   0  v 0   1   0  v2 v   v     v    v e  v e   v e  1  2  n   1 1 2 2 n n        vn 0   0   1 

T is a linear transformation T (v )  T ( v1 e 1  v 2 e 2   vnn e )

T ( v1e 1 )  T ( v 2 e 2 )   T ( vnn e )

vT11 (e ) v22 T()()ee vnn T

If ATTT  (e12 ) ( e ) ( en ) , then

a11 a12  a1n v1   a11v1  a12v2  a1nvn  a a  a v   a v  a v  a v  Av   21 22 2n  2    21 1 22 2 2n n             am1 am2  amn vn  am1v1  am2v2  amnvn  6.52 a11   a 12   a 1n  a   a   a  v21   v  22    v  2n  12   n         am12   a m   a mn 

v1 T()()()e 1  v 2 T e 2   vnn T e Therefore, T(v)  Av for each v in Rn

 Note: Theorem 6.10 tells us that once we know the image of every vector in the standard basis (that is, T(ei)), we can use the properties of linear transformations to determine T(v) for any v in Rn

 Ex 1: Finding the standard matrix of a linear transformation
Find the standard matrix for the L.T. T: R3 → R2 defined by
T(x, y, z) = (x - 2y, 2x + y)
Sol:
Vector notation:                 Matrix notation:
T(e1) = T(1, 0, 0) = (1, 2)      T(e1) = T([1; 0; 0]) = [1; 2]
T(e2) = T(0, 1, 0) = (-2, 1)     T(e2) = T([0; 1; 0]) = [-2; 1]
T(e3) = T(0, 0, 1) = (0, 0)      T(e3) = T([0; 0; 1]) = [0; 0]

A = [T(e1) T(e2) T(e3)] = [1 -2 0]
                          [2  1 0]

 Check:
A [x]   [1 -2 0] [x]   [x - 2y]
  [y] = [2  1 0] [y] = [2x + y]
  [z]            [z]
i.e. T(x, y, z) = (x - 2y, 2x + y)

 Note: a more direct way to construct the standard matrix
A = [1 -2 0]  ⟸ 1x - 2y + 0z
    [2  1 0]  ⟸ 2x + 1y + 0z
※ The first (second) row represents the linear function generating the first (second) component of the target vector

 Ex 2: Finding the standard matrix of a linear transformation
The linear transformation T: R2 → R2 is given by projecting each point in R2 onto the x-axis. Find the standard matrix for T
Sol: T(x, y) = (x, 0)
A = [T(e1) T(e2)] = [T(1, 0) T(0, 1)] = [1 0]
                                        [0 0]
 Notes:
(1) The standard matrix for the zero transformation from Rn into Rm is the m×n zero matrix
(2) The standard matrix for the identity transformation from Rn into Rn is the n×n identity matrix In

 Composition of T1: Rn → Rm with T2: Rm → Rp:
T(v) = T2(T1(v)), v ∈ Rn

This composition is denoted by T = T2 ∘ T1

 Theorem 6.11: Composition of linear transformations

Let T1: Rn → Rm and T2: Rm → Rp be linear transformations

with standard matrices A1 and A2. Then
(1) The composition T: Rn → Rp, defined by T(v) = T2(T1(v)), is still a linear transformation
(2) The standard matrix A for T is given by the matrix product

A = A2A1
Pf:
(1) (T is a linear transformation)
Let u and v be vectors in Rn and let c be any scalar. Then

T(u + v) = T2(T1(u + v)) = T2(T1(u) + T1(v))

= T2(T1(u)) + T2(T1(v)) = T(u) + T(v)

T(cv) = T2(T1(cv)) = T2(cT1(v)) = cT2(T1(v)) = cT(v)

(2) (A2A1 is the standard matrix for T)

T(v) = T2(T1(v)) = T2(A1v) = A2A1v = (A2A1)v

 Note:

T1 ∘ T2 ≠ T2 ∘ T1 (in general)

 Ex 3: The standard matrix of a composition
Let T1 and T2 be linear transformations from R3 into R3 such that

T1(x, y, z) = (2x + y, 0, x + z)

T2(x, y, z) = (x - y, z, y)

Find the standard matrices for the compositions T = T2 ∘ T1 and T' = T1 ∘ T2
Sol:
A1 = [2 1 0]   (standard matrix for T1)
     [0 0 0]
     [1 0 1]
A2 = [1 -1 0]  (standard matrix for T2)
     [0  0 1]
     [0  1 0]

The standard matrix for T = T2 ∘ T1:
A = A2A1 = [1 -1 0] [2 1 0]   [2 1 0]
           [0  0 1] [0 0 0] = [1 0 1]
           [0  1 0] [1 0 1]   [0 0 0]

The standard matrix for T' = T1 ∘ T2:

A' = A1A2 = [2 1 0] [1 -1 0]   [2 -2 1]
            [0 0 0] [0  0 1] = [0  0 0]
            [1 0 1] [0  1 0]   [1  0 0]

 Inverse linear transformation:

If T1: Rn → Rn and T2: Rn → Rn are L.T. such that for every v in Rn

T2(T1(v)) = v and T1(T2(v)) = v

then T2 is called the inverse of T1, and T1 is said to be invertible

 Note: If the transformation T is invertible, then the inverse is unique and denoted by T⁻¹

 Theorem 6.12: Existence of an inverse transformation
Let T: Rn → Rn be a linear transformation with standard matrix A. Then the following conditions are equivalent
(1) T is invertible
(2) T is an isomorphism
(3) A is invertible
※ For (2) ⟹ (1): since T is one-to-one and onto, every w in the codomain of T has exactly one preimage v, which implies that T⁻¹(w) = v is well-defined and is a L.T., so we can infer that T is invertible
※ On the contrary, if some w had many preimages, it would be impossible to find a L.T. to represent T⁻¹ (because for a L.T., there is always one input and one output), so T could not be invertible
 Note: If T is invertible with standard matrix A, then the standard matrix for T⁻¹ is A⁻¹

 Ex 4: Finding the inverse of a linear transformation
The linear transformation T: R3 → R3 is defined by

T(x1, x2, x3) = (2x1 + 3x2 + x3, 3x1 + 3x2 + x3, 2x1 + 4x2 + x3)
Show that T is invertible, and find its inverse
Sol:
The standard matrix for T:
A = [2 3 1]  ⟸ 2x1 + 3x2 + x3
    [3 3 1]  ⟸ 3x1 + 3x2 + x3
    [2 4 1]  ⟸ 2x1 + 4x2 + x3

[A | I3] = [2 3 1 | 1 0 0]  G.-J. E.  [1 0 0 | -1  1  0]
           [3 3 1 | 0 1 0]  ------->  [0 1 0 | -1  0  1] = [I3 | A⁻¹]
           [2 4 1 | 0 0 1]            [0 0 1 |  6 -2 -3]

Therefore T is invertible, and the standard matrix for T⁻¹ is
A⁻¹ = [-1  1  0]
      [-1  0  1]
      [ 6 -2 -3]

T⁻¹(v) = A⁻¹v = [-1  1  0] [x1]   [-x1 + x2       ]
                [-1  0  1] [x2] = [-x1 + x3       ]
                [ 6 -2 -3] [x3]   [6x1 - 2x2 - 3x3]

In other words,
T⁻¹(x1, x2, x3) = (-x1 + x2, -x1 + x3, 6x1 - 2x2 - 3x3)
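Ex 4 in NumPy: T is invertible exactly when its standard matrix is, and the standard matrix of T⁻¹ is A⁻¹ (a sketch):

```python
import numpy as np

A = np.array([[2, 3, 1],
              [3, 3, 1],
              [2, 4, 1]])

A_inv = np.linalg.inv(A)    # standard matrix for the inverse transformation
assert np.allclose(A_inv, [[-1, 1, 0], [-1, 0, 1], [6, -2, -3]])

v = np.array([2, 3, 4])
assert np.allclose(A @ v, [17, 19, 20])   # T(2, 3, 4) = (17, 19, 20)
assert np.allclose(A_inv @ (A @ v), v)    # T^{-1}(T(v)) = v
```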

※ Check: T⁻¹(T(2, 3, 4)) = T⁻¹(17, 19, 20) = (2, 3, 4)

 The matrix of T relative to the bases B and B':
T: V → W (a linear transformation)

BV{v12 , v , , vn } (a nonstandard basis for )

The coordinate matrix of any vv relative to B is denoted by [ ]B

c1  c if v can be represeted as c v c v   c v , then [ v ]  2 1 1 2 2 n n B   cn A matrix A can represent T if the result of A multiplied by a coordinate matrix of v relative to B is a coordinate matrix of v relative to B’, where B’ is a basis for W. That is, TA()[],vv  B' B where A is called the matrix of T relative to the bases B and B’

 Transformation matrix for nonstandard bases (the generalization of Theorem 6.10, in which standard bases are considered):
Let V and W be finite-dimensional vector spaces with bases B and B', respectively, where B = {v1, v2, …, vn}

If T: V → W is a linear transformation such that

[T(v1)]B' = [a11] ,  [T(v2)]B' = [a12] ,  … ,  [T(vn)]B' = [a1n]
            [a21]                [a22]                     [a2n]
            [ ⋮ ]                [ ⋮ ]                     [ ⋮ ]
            [am1]                [am2]                     [amn]

then the m×n matrix whose n columns correspond to [T(vi)]B',

A = [ [T(v1)]B'  [T(v2)]B'  …  [T(vn)]B' ] = [a11 a12 … a1n]
                                             [a21 a22 … a2n]
                                             [ ⋮   ⋮     ⋮ ]
                                             [am1 am2 … amn]

is such that [T(v)]B' = A[v]B for every v in V
※ The above result states that the coordinate matrix of T(v) relative to the basis B' equals the multiplication of the matrix A defined above and the coordinate matrix of v relative to the basis B
※ Comparing to the result in Thm. 6.10 (T(v) = Av), it can be inferred that the linear transformation and the basis change can be achieved in one step through multiplying by the matrix A defined above

 Ex 5: Finding a matrix relative to nonstandard bases
Let T: R2 → R2 be a linear transformation defined by

T(x1, x2) = (x1 + x2, 2x1 - x2)
Find the matrix of T relative to the bases B = {(1, 2), (-1, 1)} and B' = {(1, 0), (0, 1)}
Sol:
T(1, 2) = (3, 0) = 3(1, 0) + 0(0, 1)
T(-1, 1) = (0, -3) = 0(1, 0) - 3(0, 1)
⟹ [T(1, 2)]B' = [3; 0], [T(-1, 1)]B' = [0; -3]
⟹ the matrix for T relative to B and B':
A = [ [T(1, 2)]B'  [T(-1, 1)]B' ] = [3  0]
                                    [0 -3]

 Ex 6: For the L.T. T: R2 → R2 given in Example 5, use the matrix A to find T(v), where v = (2, 1)
Sol:
v = (2, 1) = 1(1, 2) - 1(-1, 1)   (B = {(1, 2), (-1, 1)})
⟹ [v]B = [1; -1]
[T(v)]B' = A[v]B = [3  0] [ 1]   [3]
                   [0 -3] [-1] = [3]
⟹ T(v) = 3(1, 0) + 3(0, 1) = (3, 3)   (B' = {(1, 0), (0, 1)})

 Check: T(2, 1) = (2 + 1, 2(2) - 1) = (3, 3)
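Ex 5 and Ex 6 can be reproduced in code: express each T(vi) in B'-coordinates and stack the results as columns; here B' happens to be the standard basis, so that coordinate step is trivial (a sketch):

```python
import numpy as np

def T(v):
    # the map from Ex 5: T(x1, x2) = (x1 + x2, 2*x1 - x2)
    x1, x2 = v
    return np.array([x1 + x2, 2 * x1 - x2])

B = [np.array([1, 2]), np.array([-1, 1])]   # nonstandard basis of the domain
# B' is the standard basis, so [T(vi)]_{B'} = T(vi)
A = np.column_stack([T(v) for v in B])
assert np.array_equal(A, [[3, 0], [0, -3]])

# Ex 6: v = (2, 1) = 1*(1, 2) - 1*(-1, 1), so [v]_B = (1, -1)
v_B = np.array([1, -1])
assert np.array_equal(A @ v_B, [3, 3])      # [T(v)]_{B'}, i.e. T(v) = (3, 3)
```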

 Notes:

(1) In the special case where V = W (i.e. T: V → V) and B = B', the matrix A is called the matrix of T relative to the basis B

(2) If T: V → V is the identity transformation and

B = {v1, v2, …, vn} is a basis for V, then the matrix of T relative to the basis B is
A = [ [T(v1)]B  [T(v2)]B  …  [T(vn)]B ] = [1 0 … 0]
                                          [0 1 … 0]
                                          [⋮ ⋮    ⋮]
                                          [0 0 … 1] = In

6.70 Keywords in Section 6.3:

 standard matrix for T

 composition of linear transformations

 inverse linear transformation

 matrix of T relative to the bases B and B'

 matrix of T relative to the basis B

6.4 Transition Matrices and Similarity
T: V → V (a linear transformation)

B = {v1, v2, …, vn} (a basis of V)

B' = {w1, w2, …, wn} (a basis of V)
A = [ [T(v1)]B  [T(v2)]B  …  [T(vn)]B ]   (matrix of T relative to B)
A' = [ [T(w1)]B'  [T(w2)]B'  …  [T(wn)]B' ]   (matrix of T relative to B')
[T(v)]B = A[v]B, and [T(v)]B' = A'[v]B'

P = [ [w1]B  [w2]B  …  [wn]B ]   (transition matrix from B' to B)
P⁻¹ = [ [v1]B'  [v2]B'  …  [vn]B' ]   (transition matrix from B to B')
(from the definition of the transition matrix on p.254 in the textbook or on Slides 4.108 and 4.109 in the lecture notes)
[v]B = P[v]B', and [v]B' = P⁻¹[v]B

 Two ways to get from [v]B' to [T(v)]B':

(1) (direct): A'[v]_B' = [T(v)]_B'
(2) (indirect): P⁻¹AP[v]_B' = [T(v)]_B'  ==>  A' = P⁻¹AP
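Both routes can be compared numerically; a numpy sketch (not in the original slides), using the data of Ex 1 below:

```python
import numpy as np

# T(x1, x2) = (2x1 - 2x2, -x1 + 3x2), B the standard basis, B' = {(1, 0), (1, 1)}
A = np.array([[2, -2], [-1, 3]])    # matrix of T relative to the standard basis B
P = np.array([[1, 1], [0, 1]])      # transition matrix from B' to B

A_prime = np.linalg.inv(P) @ A @ P  # indirect: A' = P^{-1} A P

# Both routes from [v]_B' to [T(v)]_B' give the same answer:
v_Bp = np.array([2.0, -1.0])        # an arbitrary coordinate vector relative to B'
assert np.allclose(A_prime @ v_Bp,                       # direct
                   np.linalg.inv(P) @ (A @ (P @ v_Bp)))  # indirect, step by step
```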

6.73  Ex 1: (Finding a matrix for a linear transformation)
Find the matrix A' for T: R2 -> R2
T(x1, x2) = (2x1 - 2x2, -x1 + 3x2)
relative to the basis B' = {(1, 0), (1, 1)}
Sol:
(1) A' = [[T(1, 0)]_B'  [T(1, 1)]_B']
T(1, 0) = (2, -1) = 3(1, 0) - 1(1, 1)  ==>  [T(1, 0)]_B' = [3; -1]
T(1, 1) = (0, 2) = -2(1, 0) + 2(1, 1)  ==>  [T(1, 1)]_B' = [-2; 2]
A' = [[T(1, 0)]_B'  [T(1, 1)]_B'] = [3 -2; -1 2]

6.74  (2) standard matrix for T (matrix of T relative to B = {(1, 0), (0, 1)}):
A = [T(1, 0)  T(0, 1)] = [2 -2; -1 3]
transition matrix from B' to B: P = [[(1, 0)]_B  [(1, 1)]_B] = [1 1; 0 1]
※ Solve a(1, 0) + b(0, 1) = (1, 0) ==> (a, b) = (1, 0)
※ Solve c(1, 0) + d(0, 1) = (1, 1) ==> (c, d) = (1, 1)
transition matrix from B to B': P⁻¹ = [[(1, 0)]_B'  [(0, 1)]_B'] = [1 -1; 0 1]
※ Solve a(1, 0) + b(1, 1) = (1, 0) ==> (a, b) = (1, 0)
※ Solve c(1, 0) + d(1, 1) = (0, 1) ==> (c, d) = (-1, 1)
matrix of T relative to B':
A' = P⁻¹AP = [1 -1; 0 1][2 -2; -1 3][1 1; 0 1] = [3 -2; -1 2]

6.75  Ex 2: (Finding a matrix for a linear transformation)
Let B = {(-3, 2), (4, -2)} and B' = {(-1, 2), (2, -2)} be bases for R2,

and let A = [-2 7; -3 7] be the matrix for T: R2 -> R2 relative to B.
Find the matrix of T relative to B'
Sol:
Because the specific function is unknown, it is difficult to apply the direct method to derive A', so we resort to the indirect method, where A' = P⁻¹AP
transition matrix from B' to B: P = [[(-1, 2)]_B  [(2, -2)]_B] = [3 -2; 2 -1]
transition matrix from B to B': P⁻¹ = [[(-3, 2)]_B'  [(4, -2)]_B'] = [-1 2; -2 3]
matrix of T relative to B':
A' = P⁻¹AP = [-1 2; -2 3][-2 7; -3 7][3 -2; 2 -1] = [2 1; -1 3]

6.76  Ex 3: (Finding a matrix for a linear transformation)
For the linear transformation T: R2 -> R2 given in Ex 2, find [v]_B, [T(v)]_B, and [T(v)]_B', for the vector v whose coordinate matrix is [v]_B' = [-3; -1]
Sol:
[v]_B = P[v]_B' = [3 -2; 2 -1][-3; -1] = [-7; -5]
[T(v)]_B = A[v]_B = [-2 7; -3 7][-7; -5] = [-21; -14]

[T(v)]_B' = P⁻¹[T(v)]_B = [-1 2; -2 3][-21; -14] = [-7; 0]
or [T(v)]_B' = A'[v]_B' = [2 1; -1 3][-3; -1] = [-7; 0]

6.77  Similar matrix:
 For square matrices A and A' of order n, A' is said to be similar to A if there exists an invertible matrix P s.t. A' = P⁻¹AP

 Theorem 6.13: (Properties of similar matrices)
Let A, B, and C be square matrices of order n. Then the following properties are true.
(1) A is similar to A
(2) If A is similar to B, then B is similar to A
(3) If A is similar to B and B is similar to C, then A is similar to C
Pf for (1) and (2) (the proof of (3) is left to Exercise 23):

(1) A In AI n (the transition matrix from A to A is the I n ) (2) A P1 BP  PAP  1  P ( P  1 BP ) P  1  PAP  1  B  Q11 AQ  B (by defining Q  P ), thus B is similar to A 6.78  Ex 4: (Similar matrices)

2 2   3 2  (a)AA  and '   are similar 1 3   1 2 

1 1 1 because A' P AP, where P    0 1

2 7   2 1  (b)AA  and '   are similar 3 7   1 3 

1 3  2 because A' P AP, where P    2 1
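Ex 4(b) and the symmetry property of Theorem 6.13 can be checked together; a numpy sketch (not in the original slides):

```python
import numpy as np

# If A' = P^{-1} A P, then A = Q^{-1} A' Q with Q = P^{-1},
# so similarity works in both directions (Theorem 6.13, property (2)).
A  = np.array([[-2, 7], [-3, 7]])
P  = np.array([[3, -2], [2, -1]])
Ap = np.linalg.inv(P) @ A @ P                       # A' is similar to A
assert np.allclose(Ap, [[2, 1], [-1, 3]])           # matches Ex 4(b)

Q = np.linalg.inv(P)
assert np.allclose(np.linalg.inv(Q) @ Ap @ Q, A)    # A is similar to A'
```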

6.79  Ex 5: (A comparison of two matrices for a linear transformation)
Suppose A = [1 3 0; 3 1 0; 0 0 -2] is the matrix for T: R3 -> R3 relative to the standard basis. Find the matrix for T relative to the basis B' = {(1, 1, 0), (1, -1, 0), (0, 0, 1)}
Sol:
The transition matrix from B' to the standard basis:
P = [[(1, 1, 0)]_B  [(1, -1, 0)]_B  [(0, 0, 1)]_B] = [1 1 0; 1 -1 0; 0 0 1]
P⁻¹ = [1/2 1/2 0; 1/2 -1/2 0; 0 0 1]

6.80  matrix of T relative to B':
A' = P⁻¹AP = [1/2 1/2 0; 1/2 -1/2 0; 0 0 1][1 3 0; 3 1 0; 0 0 -2][1 1 0; 1 -1 0; 0 0 1] = [4 0 0; 0 -2 0; 0 0 -2]
(A' is a diagonal matrix, which is simple and has some computational advantages)

※ You have seen that the matrix for a linear transformation T: V -> V depends on the basis used for V. What choice of basis will make the matrix for T as simple as possible? This case shows that it is not always the standard basis.

6.81  Notes: Diagonal matrices have many computational advantages over nondiagonal ones (this will be used in the next chapter)

d1 00 00d for D  2   00 dn k d1 00 k 00d T (1) Dk  2 (2) DD  k 00 dn 1 d 00 1 001 1 d2 (3) Dd , i 0  1 00 6.82 dn Keywords in Section 6.4:

 matrix of T relative to B

 matrix of T relative to B'

 transition matrix from B' to B

 transition matrix from B to B'

 similar matrix

6.83  6.5 Applications of Linear Transformations

 The geometry of linear transformations in the plane (p.407-p.410)

 Reflection in x-axis, y-axis, and y=x

 Horizontal and vertical expansion and contraction

 Horizontal and vertical shear

 Rotation (to produce any desired angle of view of a 3-D figure)
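The plane transformations listed above are each given by a 2x2 matrix; a numpy sketch (not in the original slides; k is a hypothetical scale/shear factor):

```python
import numpy as np

k = 2.0
reflect_x  = np.array([[1.0, 0.0], [0.0, -1.0]])   # reflection in the x-axis
reflect_yx = np.array([[0.0, 1.0], [1.0, 0.0]])    # reflection in the line y = x
h_expand   = np.array([[k, 0.0], [0.0, 1.0]])      # horizontal expansion (k > 1)
h_shear    = np.array([[1.0, k], [0.0, 1.0]])      # horizontal shear: (x, y) -> (x + ky, y)

v = np.array([1.0, 1.0])
print(reflect_x @ v)    # [ 1. -1.]
print(h_shear @ v)      # [3. 1.]
```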

6.84