
Unit 4: Matrices, Linear maps and change of basis

Juan Luis Melero and Eduardo Eyras September 2018

Contents

1 Linear maps
  1.1 Operations of linear maps
    1.1.1 Scaling
    1.1.2 Reflection
    1.1.3 Pure rotation
  1.2 Definition of a linear map
  1.3 Image of a map
  1.4 Kernel (nullspace) of a map
  1.5 Types of maps
    1.5.1 Monomorphism (injective or one-to-one map)
    1.5.2 Epimorphism (surjective or onto map)
    1.5.3 Isomorphism (bijective or "one-to-one and onto" maps)
  1.6 Matrix representation of a linear map
  1.7 Properties of the matrix associated to a linear map, kernel and image
    1.7.1 Definitions
    1.7.2 Application of the properties

2 Change of basis

3 Composition of linear maps

4 Inverse of a linear map

5 Path of linear maps

6 Exercises

7 R practical
  7.1 Kernel of a linear map
  7.2 Image of a linear map

1 Linear maps

1.1 Operations of linear maps

A linear map is an operation on a vector space that transforms one vector into another, and which can be represented as a matrix:

\[
f_A : \mathbb{R}^n \to \mathbb{R}^m, \qquad \vec{u} \mapsto \vec{v} = f_A(\vec{u}) = A\vec{u} \in \mathbb{R}^m
\]

For instance, in three dimensions:

\[
\vec{v} = f_A(\vec{u}) = A\vec{u} =
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} =
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}
\]

Many operations can be represented with linear maps. We describe below some interesting ones.

1.1.1 Scaling

A scaling operation returns a vector in the same direction. The matrices associated to this operation are diagonal matrices.

\[
f(\vec{u}) = a\vec{u} \;\to\; f(\vec{u}) = a \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} a u_1 \\ a u_2 \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = A\vec{u}
\]

Figure 1: Example of a scaling operation. The thick arrow represents the original vector, whereas the thin arrow represents the scaled vector.

For example:

\[
\vec{u} = \begin{pmatrix} 2 \\ 1 \end{pmatrix} \;\to\; f(\vec{u}) = A\vec{u} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 2 \end{pmatrix}
\]
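The scaling example above can be checked numerically. A minimal Python/NumPy sketch (the notes' own R examples appear in section 7):

```python
import numpy as np

# Scaling by a = 2: the associated matrix is diagonal, with a on the diagonal.
a = 2.0
A = np.array([[a, 0.0],
              [0.0, a]])
u = np.array([2.0, 1.0])

v = A @ u                      # matrix form of the map
print(v)                       # [4. 2.]
print(np.allclose(v, a * u))   # True: same as scaling the vector directly
```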

1.1.2 Reflection

The reflection operation returns a vector mirrored by a given axis. The matrix has the property of being diagonal and such that the square of the matrix is the unit matrix. If the matrix A represents a reflection, then

\[
A^2 = I_n
\]

For instance:

\[
\vec{v} = f_A(\vec{u}) = A\vec{u} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ -u_2 \end{pmatrix}
\]

\[
\begin{pmatrix} 2 \\ 1 \end{pmatrix} \;\to\; \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}
\]

Figure 2: Example of a reflection operation. The thick arrow represents the original vector, whereas the thin arrow represents the reflected vector.
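The defining property A² = Iₙ of a reflection can also be verified numerically; a small Python/NumPy sketch:

```python
import numpy as np

# Reflection across the x-axis; applying it twice recovers the identity.
A = np.array([[1.0,  0.0],
              [0.0, -1.0]])
u = np.array([2.0, 1.0])

v = A @ u
print(v)                              # [ 2. -1.]
print(np.allclose(A @ A, np.eye(2)))  # True: A^2 = I_2
```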

1.1.3 Pure rotation

A pure rotation returns the vector rotated by a certain angle. That is, it does not change its norm. Additionally, a rotation does not change the relative angle between vectors, so in particular, it preserves the orthogonality between vectors.

It can then be proven (left as an exercise) that pure rotation matrices fulfill the property \(A A^T = A^T A = I_n\). This is the general definition of an orthonormal matrix (it preserves norms and relative angles). In particular, an orthonormal matrix is formed by column or row vectors that are mutually orthogonal and have norm (length) 1. For instance, in two dimensions one can show that:

\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad A^{-1} = A^T \;\to\;
\begin{cases} a^2 + c^2 = 1 \\ b^2 + d^2 = 1 \\ ab + cd = 0 \end{cases}
\]

We can parametrize the matrix with the angle using trigonometric functions. If we recall that \(\sin^2\theta + \cos^2\theta = 1\), and use the fact that the row or column vectors must be orthogonal, we can reparametrize the matrix as:

\[
A(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\]

\[
\vec{v} = A(\theta)\vec{u} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} u_1\cos\theta - u_2\sin\theta \\ u_1\sin\theta + u_2\cos\theta \end{pmatrix}
\]

For example:

\[
\theta = -\frac{\pi}{2} \;\to\; \vec{u}' = A(-\pi/2)\,\vec{u} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}
\]

(With the parametrization above, this matrix corresponds to θ = −π/2, a clockwise quarter turn.)

Figure 3: Example of a pure rotation operation. Here \(\vec{u}\) represents the original vector, whereas \(\vec{u}'\) represents the rotated vector.
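The properties of a pure rotation (orthonormality and norm preservation) can be checked numerically. A Python/NumPy sketch using the parametrization above, with θ = −π/2 (a clockwise quarter turn) to reproduce the worked example:

```python
import numpy as np

def rotation(theta):
    # A(theta) with the parametrization used above
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

A = rotation(-np.pi / 2)                  # clockwise quarter turn
u = np.array([2.0, 1.0])
v = A @ u

print(np.allclose(v, [1.0, -2.0]))        # True: matches the example
print(np.allclose(A @ A.T, np.eye(2)))    # True: A A^T = I (orthonormal)
print(np.isclose(np.linalg.norm(v), np.linalg.norm(u)))  # True: norm preserved
```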

1.2 Definition of a linear map

A map f, also called an application or transformation, is a relation between two vector spaces M, N such that every vector in M has a corresponding vector in N:

\[
f : M \to N, \qquad u \mapsto f(u)
\]
\[
f \text{ is a map} \iff \forall u,\; \exists!\, f(u)
\]

Such a transformation is a linear map if it fulfills these two properties:

1. u, v ∈ M =⇒ f(u + v) = f(u) + f(v) ∈ N

2. λ ∈ R, u ∈ M =⇒ f(λu) = λf(u) ∈ N

For instance, consider the following map between R2 and R:

\[
f : \mathbb{R}^2 \to \mathbb{R}, \qquad (x, y) \mapsto f(x, y) = 2x - y
\]

We show property 1. Consider the map on the sum of any two vectors in \(\mathbb{R}^2\):

\[
f((x, y) + (w, z)) = f(x + w,\, y + z) = 2(x + w) - (y + z) = 2x - y + 2w - z = f(x, y) + f(w, z)
\]

Similarly, we show property 2:

f(λ(x, y)) = f(λx, λy) = 2λx − λy = λ(2x − y) = λf(x, y)
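Both properties can be spot-checked numerically for this map; an illustrative Python/NumPy sketch (the random test vectors are arbitrary):

```python
import numpy as np

def f(v):
    # the map f(x, y) = 2x - y
    x, y = v
    return 2 * x - y

rng = np.random.default_rng(0)
u, w = rng.normal(size=2), rng.normal(size=2)
lam = 3.0

print(np.isclose(f(u + w), f(u) + f(w)))   # True: additivity
print(np.isclose(f(lam * u), lam * f(u)))  # True: homogeneity
```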

1.3 Image of a map

The image of a map is the set of all elements of the target space that are reached by the map:

\[
\mathrm{Im}(f) = \{ w \in N \mid \exists u \in M : f(u) = w \} \subseteq N
\]

As we will see, linear maps can be represented with matrices. In this representation, the image of a map is the vector subspace generated by the column vectors of the matrix A representing the map: \(\mathrm{Span}(\{(a_{11}, a_{21}, \ldots, a_{n1}), \ldots, (a_{1m}, a_{2m}, \ldots, a_{nm})\})\).

Proposition: the image of a linear map is a vector subspace.

The proof follows from the definition of linear map. We show that the elements of the image fulfill closure (under vector addition and multiplication by scalars) and include the neutral element:

1. ∀f(u), f(v) ∈ Im(f) → f(u) + f(v) = f(u + v) ∈ Im(f)

2. ∀a ∈ R, ∀f(u) ∈ Im(f) → af(u) = f(au) ∈ Im(f)

3. f(u) ∈ Im(f) → f(0) = f(u − u) = f(u) − f(u) ∈ Im(f) → f(0) ∈ Im(f)

Figure 4: Illustration of the image of a map.

Example. In this example we use the fact that a linear map can be represented as a matrix and that the columns of the matrix are the vectors that span the image. The number of rows is the dimension of the target space in which the image lives (more details on this later).

Consider the following linear map:

\[
f : \mathbb{R}^2 \to \mathbb{R}^3, \qquad (x, y) \mapsto f(x, y) = (x + y,\; 2x + y,\; y - x)
\]

We can represent this linear map with the following matrix:

 1 1  1 1  x + y  x A = 2 1 since 2 1 = 2x + y f     y   −1 1 −1 1 −x + y

The image of the linear map is the span of the column vectors:

\[
\mathrm{Im}(f) = f(\mathbb{R}^2) = \mathrm{Span}\{(1, 2, -1), (1, 1, 1)\} \subseteq \mathbb{R}^3
\]

The image of this linear map is the set of all linear combinations of these two vectors.
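The dimension of this image can be obtained as the rank of the matrix (a short Python/NumPy check; rank computations also appear in the R practical, section 7):

```python
import numpy as np

A = np.array([[ 1, 1],
              [ 2, 1],
              [-1, 1]])

# The image is spanned by the columns; its dimension is the rank.
print(np.linalg.matrix_rank(A))   # 2: the two columns are independent
```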

1.4 Kernel (nullspace) of a map

The kernel of a map is the set of the elements which map to the zero (neutral) vector.

\[
f : M \to N, \qquad \mathrm{null}(f) = f^{-1}(0) = \mathrm{Ker}(f) = \left\{ \vec{v} \in M \mid f(\vec{v}) = \vec{0} \right\}
\]

Proposition: the kernel of a map is a vector space.

The proof follows from the definition of linear map:

1. ∀u, v ∈ Ker(f) → f(u + v) = f(u) + f(v) = 0 + 0 = 0 → u + v ∈ Ker(f)

2. ∀a ∈ R, ∀u ∈ Ker(f) → f(au) = af(u) = a · 0 = 0 → au ∈ Ker(f)

3. f(0) = f(u − u) = f(u) − f(u) = 0 − 0 = 0 → 0 ∈ Ker(f)

Figure 5: Illustration of the kernel of a map.

Example: given a linear map, using its associated matrix, we want to find which vectors in the domain map to the zero vector in the target space. Consider the same linear map as in the example of the image (section 1.3):

\[
A_f = \begin{pmatrix} 1 & 1 \\ 2 & 1 \\ -1 & 1 \end{pmatrix}
\]

We find those vectors that map to the zero vector:

\[
A_f \vec{u} = \vec{0} \;\to\;
\begin{pmatrix} 1 & 1 \\ 2 & 1 \\ -1 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\]

\[
\begin{cases} x + y = 0 \\ 2x + y = 0 \\ -x + y = 0 \end{cases}
\;\to\;
\begin{cases} x = 0 \\ y = 0 \end{cases}
\]

The kernel of this linear map is:

\[
\mathrm{Ker}(f) = \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\}
\]

Let us now consider an example where Ker(f) contains more than the zero vector. Consider the following matrix associated to a linear map:

\[
f : \mathbb{R}^3 \to \mathbb{R}^2, \qquad A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -2 \end{pmatrix}
\]

\[
\begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -2 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\;\to\;
\begin{cases} x + y + z = 0 \\ x - y - 2z = 0 \end{cases}
\;\to\;
\begin{cases} x = z/2 \\ y = -3z/2 \\ z = z \end{cases}
\]

So the kernel is:

\[
\mathrm{Ker}(f) = \left\{ \begin{pmatrix} z/2 \\ -3z/2 \\ z \end{pmatrix},\; z \in \mathbb{R} \right\}
\]

This is the parametric representation of the kernel as a vector space. We can also represent a vector space as the span of basis vectors. In the case of this kernel:

\[
\mathrm{Ker}(f) = \mathrm{Span}\{(1/2, -3/2, 1)\}
\]

1.5 Types of maps

1.5.1 Monomorphism (injective or one-to-one map)

A monomorphism is a linear map which, for different vectors, returns different images. That is, no two elements map to the same element in the image space:

\[
u \neq v \implies f(u) \neq f(v)
\]

or equivalently

\[
f(u) = f(v) \implies u = v
\]

Proposition: Injective maps have a trivial kernel. That is, a map f is injective \(\iff \mathrm{Ker}(f) = \{\vec{0}\}\).

Proof: We have to prove both directions:

1. If f is injective \(\implies \mathrm{Ker}(f) = \{\vec{0}\}\)

2. If \(\mathrm{Ker}(f) = \{\vec{0}\} \implies\) f is injective

Proof of 1: assume that f is injective. In that case:

f(~u) = f(~v) =⇒ ~u = ~v

\[
\vec{u} \in \mathrm{Ker}(f) \implies f(\vec{u}) = \vec{0} = f(\vec{0})
\]

(\(\vec{0}\) belongs to the kernel, section 1.4.) We use that f is injective to show that every element of the kernel must be the null vector:

\[
f(\vec{u}) = f(\vec{0}) \implies \vec{u} = \vec{0} \;\to\; \mathrm{Ker}(f) = \{\vec{0}\}
\]

Proof of 2: We assume that the kernel is trivial and need to show that:

\[
f(\vec{u}) = f(\vec{v}) \implies \vec{u} = \vec{v}
\]

If \(f(\vec{u}) = f(\vec{v})\), then \(f(\vec{u}) - f(\vec{v}) = \vec{0} \to f(\vec{u} - \vec{v}) = \vec{0} \to \vec{u} - \vec{v} \in \mathrm{Ker}(f)\). Since the kernel is trivial, \(\vec{u} - \vec{v} = \vec{0} \to \vec{u} = \vec{v}\). Q.E.D.

Figure 6: Illustration of an injective map.

1.5.2 Epimorphism (surjective or onto map)

An epimorphism is a map for which every element of the target space has a preimage. That is, all elements of the target space are mapped to, so the image covers the entire target vector space.

\[
f : M \to N, \qquad \forall w \in N,\; \exists u \in M \mid f(u) = w
\]

Figure 7: Illustration of a surjective map.

1.5.3 Isomorphism (bijective or "one-to-one and onto" maps)

Isomorphisms are maps that are both injective and surjective. That is, different vectors map to different images (injective) and all the target space is covered (surjective).

\[
f : M \to N, \qquad f(u) = f(v) \implies u = v
\]

and

\[
\forall w \in N,\; \exists u \in M \mid f(u) = w
\]

Figure 8: Illustration of a bijective map.

If a map is bijective, then there is an inverse map such that:

\[
f : M \to N, \quad u \mapsto v = f(u)
\]
\[
f^{-1} : N \to M, \quad v \mapsto u = f^{-1}(v)
\]

If we find an isomorphism between two vector spaces, we say that both vector spaces are isomorphic. This implies that both vector spaces have the same dimension.

\[
f : M \to N, \qquad M \cong N \iff \dim(M) = \dim(N)
\]

Example: There is an isomorphism (a linear map that is injective and surjective) between the vector space of polynomials of degree ≤ n and the vector space \(\mathbb{R}^{n+1}\).

\[
f : P_n(\mathbb{R}) \to \mathbb{R}^{n+1}, \qquad a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \mapsto (a_0, a_1, \ldots, a_n) \in \mathbb{R}^{n+1}
\]
\[
P_n(\mathbb{R}) \cong \mathbb{R}^{n+1}
\]
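Concretely, the isomorphism identifies a polynomial with its coefficient vector. A small Python sketch (for n = 2) showing that adding coefficient vectors adds the corresponding polynomials:

```python
import numpy as np

def eval_poly(coeffs, x):
    # a0 + a1*x + ... + an*x^n from its coefficient vector
    return sum(a * x**i for i, a in enumerate(coeffs))

p = np.array([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2   <->  (1, 2, 3)
q = np.array([4.0, 0.0, -1.0])  # 4 - x^2         <->  (4, 0, -1)

# Vector addition in R^3 corresponds to polynomial addition in P_2:
x = 2.0
print(np.isclose(eval_poly(p + q, x), eval_poly(p, x) + eval_poly(q, x)))  # True
```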

1.6 Matrix representation of a linear map

To derive the matrix associated to a linear map we must represent the map on the basis of the domain vector space. Let us consider this first for a 2 × 2 matrix and then we will generalize it gradually. Consider the following endomorphism¹:

\[
f : \mathbb{R}^2 \to \mathbb{R}^2, \quad u \mapsto f(u)
\]

and the canonical basis of \(\mathbb{R}^2\):

\[
B = \left\{ e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\; e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}
\]

Let's define a map by describing how the map works on the basis vectors:

f(e1) = e1, f(e2) = e1 + e2 Given a general vector:

\[
u = \begin{pmatrix} a \\ b \end{pmatrix} = a e_1 + b e_2
\]

we can write down how the map works on the vector:

f(u) = f(ae1 + be2) = af(e1) + bf(e2) =

\[
= a e_1 + b(e_1 + e_2) = (a + b) e_1 + b e_2 = \begin{pmatrix} a + b \\ b \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = A u
\]

The map is represented by the matrix A in the canonical basis. Note that the column vectors of A are the representation of the mapped basis vectors (in this case in the same basis):

\[
\mathrm{Rep}_B(f(e_1)) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathrm{Rep}_B(f(e_2)) = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \;\to\; A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\]

For a general endomorphism of \(\mathbb{R}^2\), \(f : \mathbb{R}^2 \to \mathbb{R}^2\), we define the linear map on the basis vectors. Then, the column vectors of the matrix associated to the linear map are the representation of the image of the basis vectors:

1An endomorphism is a map of a vector space to itself.

\[
f(e_1) = a e_1 + c e_2, \qquad f(e_2) = b e_1 + d e_2
\]
\[
\mathrm{Rep}(f(e_1)) = \begin{pmatrix} a \\ c \end{pmatrix}, \qquad \mathrm{Rep}(f(e_2)) = \begin{pmatrix} b \\ d \end{pmatrix}
\]

We could then build the matrix associated to f using these representations. Alternatively, we could obtain the matrix by applying the linear map to a generic vector. Consider a vector

\[
u = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}
\]

and then apply the map:

\[
f(u) = f(u_1 e_1 + u_2 e_2) = u_1 f(e_1) + u_2 f(e_2) =
\]
\[
= u_1(a e_1 + c e_2) + u_2(b e_1 + d e_2) = (a u_1 + b u_2) e_1 + (c u_1 + d u_2) e_2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}
\]

We can generalize this to any two vector spaces M and N. We will follow exactly the same steps as in the first example above, namely: we first define the map, establish a basis for each of the vector spaces, then define the map on the vectors of the basis of the domain, which leads to the representation of this basis in the basis of the target space, apply the map to a generic vector, and finally reorder the terms to obtain a matrix expression.

First, we define the map:

\[
f : M \to N, \quad u \mapsto f(u), \qquad \dim(M) = m, \;\dim(N) = n
\]

Consider the bases of both vector spaces:

BM = {u1, . . . , um} BN = {v1, . . . , vn}

Every vector u ∈ M is a linear combination of the vectors from \(B_M\) and every vector v ∈ N is a linear combination of the vectors from \(B_N\); hence, we can write u and v as:

\[
u = \sum_{i=1}^{m} \alpha_i u_i, \qquad v = \sum_{j=1}^{n} \beta_j v_j
\]

Now we define the linear map for the vectors of the basis \(B_M\):

\[
f(u_i) = \sum_{j=1}^{n} a_{ji} v_j \quad \text{or} \quad \mathrm{Rep}_{B_N}(f(u_i)) = (a_{1i}, \ldots, a_{ni}) = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}
\]

Now we have the representation of the map on vectors of the basis BM in the basis BN . We could build the associated matrix, since the representation forms the column vectors of the matrix. We have to build a column ∀i ∈ [1, m]. We can obtain this by applying the map on a vector from M:

\[
f(u) = f\!\left( \sum_{i=1}^{m} \alpha_i u_i \right) = \sum_{i=1}^{m} \alpha_i f(u_i) = \sum_{i=1}^{m} \alpha_i \sum_{j=1}^{n} a_{ji} v_j =
\]

\[
= \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i a_{ji} v_j = \sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ji} \alpha_i \right) v_j
\]

That is,

\[
u = \sum_{i=1}^{m} \alpha_i u_i \;\to\; f(u) = \sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ji} \alpha_i \right) v_j =
\]

= (a11α1 + a12α2 + ··· + a1mαm)v1 + ··· + (an1α1 + an2α2 + ··· + anmαm)vn

Since \(v_1, \ldots, v_n\) are the vectors of the basis \(B_N\), the terms in parentheses are the coefficients of the representation of a vector in this basis. We can build the matrix using this result:

\[
u = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_m \end{pmatrix} \xrightarrow{\;f\;} f(u) = \begin{pmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_m \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{m} a_{1i} \alpha_i \\ \vdots \\ \sum_{i=1}^{m} a_{ni} \alpha_i \end{pmatrix} \in N
\]

Let's consider a specific example:

\[
f : \mathbb{R}^2 \to \mathbb{R}^3, \quad u \mapsto f(u)
\]

Consider the two bases:

BR2 = {u1, u2} ,BR3 = {v1, v2, v3} Define f for the basis:

\[
f(u_1) = v_1 + 2 v_2, \qquad f(u_2) = v_1 - v_3
\]

Apply the map to a vector:

\[
f(u) = f(a u_1 + b u_2) = a f(u_1) + b f(u_2) = a v_1 + 2a v_2 + b v_1 - b v_3 =
\]
\[
= (a + b) v_1 + 2a v_2 - b v_3 \;\to\; f(u) = \begin{pmatrix} 1 & 1 \\ 2 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}
\]

We obtain the representations of the images of the basis vectors, and the form of the matrix:

\[
\mathrm{Rep}_{B_{\mathbb{R}^3}}(f(u_1)) = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \quad \mathrm{Rep}_{B_{\mathbb{R}^3}}(f(u_2)) = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} \;\to\; A_f = \begin{pmatrix} 1 & 1 \\ 2 & 0 \\ 0 & -1 \end{pmatrix}
\]
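In coordinates, applying A_f to Rep_B(w) = (a, b) yields Rep_D(f(w)). A Python/NumPy sketch with hypothetical coefficients a = 3, b = 5 (chosen only for illustration):

```python
import numpy as np

# Columns of Af: representations of f(u1), f(u2) in the basis {v1, v2, v3}.
Af = np.array([[1.0,  1.0],
               [2.0,  0.0],
               [0.0, -1.0]])

ab = np.array([3.0, 5.0])   # hypothetical w = 3*u1 + 5*u2
print(Af @ ab)              # [ 8.  6. -5.]: f(w) = 8*v1 + 6*v2 - 5*v3
```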

1.7 Properties of the matrix associated to a linear map, kernel and image

1.7.1 Definitions

The study of the matrix associated to a linear map gives information about the kernel and the image of the linear map.

\[
f : M \to N, \quad u \in M \mapsto f(u) \in N, \qquad \dim(M) = m, \;\dim(N) = n
\]

The matrix representation of this linear map is:

\[
f(u) = A u, \qquad A \in M_{n \times m}(\mathbb{R})
\]

The number of columns is the dimension of the domain. The number of rows is the dimension of the target space. Moreover, there are relationships between these dimensions.

The rank of the matrix is the dimension of the image:

rank(A) = dim(Im(f))

The dimension of the domain is the sum of the dimensions of the kernel and the image:

\[
\dim(M) = \dim(\mathrm{Ker}(f)) + \dim(\mathrm{Im}(f))
\]

Using these properties, we can reformulate the conditions to be injective and surjective in terms of dimensions. Recall that an injective map has a trivial kernel, hence:

\[
\text{Injective} \iff \dim(\mathrm{Ker}(f)) = 0
\]

So for injective maps,

dim(M) = dim(Im(f)) = rank(A)
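The rank–nullity relation dim(M) = dim(Ker f) + rank(A) can be checked numerically on the matrices used earlier; an illustrative Python/NumPy sketch:

```python
import numpy as np

def dim_kernel(A, tol=1e-10):
    # dim Ker(A) = number of columns minus the numerical rank (from the SVD)
    s = np.linalg.svd(A, compute_uv=False)
    return A.shape[1] - int(np.sum(s > tol))

A1 = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 1.0]])   # the map of section 1.3
A2 = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, -2.0]])    # the map of section 1.4

for A in (A1, A2):
    m = A.shape[1]                                         # dimension of the domain
    print(m == dim_kernel(A) + np.linalg.matrix_rank(A))   # True, twice
```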

On the other hand, surjective maps cover the entire target space, hence:

Surjective ⇐⇒ dim(Im(f)) = dim(N) = n

So for surjective maps, rank(A) = dim(N). If f is an isomorphism (injective and surjective) we then have

dim(M) = dim(N) = rank(A)

So A is always a square matrix for isomorphisms.

However, given a linear map represented by a square matrix A, if det(A) = 0, from the formula above it follows that

rank(A) = dim(Im(f)) < dim(N) → f is not surjective (onto) Additionally, it has a non-trivial kernel:

dim(Ker(f)) > 0 → f is not injective (one-to-one)

1.7.2 Application of the properties

We will analyze three examples of maps: one neither injective nor surjective, one injective but not surjective, and one surjective but not injective.

Example case 1: a linear map that is neither injective nor surjective:

\[
f : \mathbb{R}^2 \to \mathbb{R}^2, \qquad e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\; e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\]

\[
f(e_1) = e_2, \qquad f(e_2) = 0
\]
\[
\mathrm{Rep}(f(e_1)) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \mathrm{Rep}(f(e_2)) = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\to\; A = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
\]

\[
\det(A) = 0 \;\to\; \mathrm{rank}(A) < 2
\]
\[
\mathrm{Ker}(f) = \left\{ v \in \mathbb{R}^2 \mid f(v) = 0 \right\} = \left\{ \begin{pmatrix} a \\ b \end{pmatrix} \,\middle|\, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\} = \left\{ \begin{pmatrix} 0 \\ b \end{pmatrix},\; b \in \mathbb{R} \right\}
\]
\[
\to \dim(\mathrm{Ker}(f)) = 1 \neq 0, \text{ not injective}
\]

\[
\mathrm{Im}(f) = \left\{ f(v) \mid v \in \mathbb{R}^2 \right\} = \left\{ \begin{pmatrix} 0 \\ a \end{pmatrix},\; a \in \mathbb{R} \right\} \;\to\; \dim(\mathrm{Im}(f)) = 1 < \dim(\mathbb{R}^2) = 2
\]

Thus, not surjective

Example of case 2: a map that is injective but not surjective:

\[
f : \mathbb{R}^2 \to \mathbb{R}^3
\]

Consider the canonical bases:

BR2 = {e1, e2},BR3 = {e1, e2, e3} And the mapped basis:

f(e1) = (1, 0, 0) f(e2) = (0, 1, 0) Matrix representation:

\[
\mathrm{Rep}(f(e_1)) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathrm{Rep}(f(e_2)) = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \;\to\; A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}
\]

Analyze the dimensions:

\[
\mathrm{rank}(A) = 2 \;\to\; \dim(\mathrm{Im}(f)) = 2 < \dim(\mathbb{R}^3) = 3 \;\to\; \text{not onto}
\]

\[
\mathrm{Ker}(f) = \left\{ \begin{pmatrix} a \\ b \end{pmatrix} \,\middle|\, \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \right\} = \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\} \;\to\; \dim(\mathrm{Ker}(f)) = 0 \;\to\; \text{injective}
\]

Example of case 3: a linear map that is surjective but not injective:

\[
f : \mathbb{R}^3 \to \mathbb{R}^2
\]

Consider the canonical bases:

BR2 = {e1, e2},BR3 = {e1, e2, e3} And the mapped basis:

f(e1) = (1, 0) f(e2) = (0, 1) f(e3) = (1, 1) Matrix representation:

\[
\mathrm{Rep}(f(e_1)) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathrm{Rep}(f(e_2)) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \mathrm{Rep}(f(e_3)) = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \;\to\; A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}
\]

Analyze the dimensions:

\[
\mathrm{rank}(A) = 2 \;\to\; \dim(\mathrm{Im}(f)) = \dim(\mathbb{R}^2) = 2 \;\to\; \text{onto}
\]

\[
\mathrm{Ker}(f) = \left\{ \begin{pmatrix} a \\ b \\ c \end{pmatrix} \,\middle|\, \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\} = \left\{ \begin{pmatrix} a \\ a \\ -a \end{pmatrix},\; a \in \mathbb{R} \right\} \;\to\; \dim(\mathrm{Ker}(f)) = 1 \neq 0
\]

So f is not injective.

2 Change of basis

So far, the linear maps we have studied were applied between two different vector spaces. A change of basis transformation is a linear map that maps one vector space to itself (an endomorphism) but between different basis representations. Let us first see a particular example. Consider the vector space \(\mathbb{R}^2\), and two possible bases:

\[
E = \left\{ e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\; e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}, \qquad B = \left\{ w_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix},\; w_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\}
\]

Given a generic vector in \(\mathbb{R}^2\), we can represent this vector in each of the two bases:

\[
u = a e_1 + b e_2 \;\to\; \mathrm{Rep}_E(u) = \begin{pmatrix} a \\ b \end{pmatrix}
\]
\[
u = c w_1 + d w_2 \;\to\; \mathrm{Rep}_B(u) = \begin{pmatrix} c \\ d \end{pmatrix}
\]

Recall that although the representations are different, it is the same vector:

u = ae1 + be2 = cw1 + dw2

Now we write the vectors of the basis B in terms of the vectors of the basis E:

\[
w_1 = e_1 + e_2, \qquad w_2 = e_1 - e_2
\]

With this, we rewrite the vector u:

\[
u = a e_1 + b e_2 = c w_1 + d w_2 = c(e_1 + e_2) + d(e_1 - e_2) = (c + d) e_1 + (c - d) e_2
\]
\[
\begin{cases} a = c + d \\ b = c - d \end{cases} \;\to\; \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix}
\]

As you can see, the column vectors of the matrix are the representations of the vectors of the basis B in the basis E. We can also write:

\[
\mathrm{Rep}_E(u) = c\,\mathrm{Rep}_E(w_1) + d\,\mathrm{Rep}_E(w_2) = c \begin{pmatrix} 1 \\ 1 \end{pmatrix} + d \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} c + d \\ c - d \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix}
\]

This is the matrix transformation to change from the representation in B to the representation in E.

We can do the same to find the inverse transformation:

\[
\mathrm{Rep}_B(e_1) = \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix}, \qquad \mathrm{Rep}_B(e_2) = \begin{pmatrix} 1/2 \\ -1/2 \end{pmatrix}
\]
\[
\mathrm{Rep}_B(u) = \begin{pmatrix} c \\ d \end{pmatrix} = a\,\mathrm{Rep}_B(e_1) + b\,\mathrm{Rep}_B(e_2) = a \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix} + b \begin{pmatrix} 1/2 \\ -1/2 \end{pmatrix} = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}
\]

The two matrices provide the transformation in opposite directions. In fact, one is the inverse of the other:

\[
\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]

The matrix of change of basis is an automorphism². Therefore, it will always have an inverse, which represents the change in the other direction. The matrix of change of basis can be conceived as the matrix associated to the identity linear map between two different bases, since the vector space is the same:
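This pair of matrices can be verified numerically; a short Python/NumPy sketch of the B ↔ E change of basis in this example:

```python
import numpy as np

# Columns: the B basis vectors w1, w2 written in the canonical basis E.
B_to_E = np.array([[1.0,  1.0],
                   [1.0, -1.0]])
E_to_B = np.linalg.inv(B_to_E)

print(np.allclose(E_to_B, [[0.5, 0.5], [0.5, -0.5]]))  # True
print(np.allclose(B_to_E @ E_to_B, np.eye(2)))         # True: mutual inverses

# Round trip: B-coordinates -> E-coordinates -> back to B.
cd = np.array([2.0, -3.0])
print(np.allclose(E_to_B @ (B_to_E @ cd), cd))         # True
```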

\[
\mathrm{id} : (\mathbb{R}^2, E) \to (\mathbb{R}^2, B), \qquad \mathrm{Rep}_E(u) = \begin{pmatrix} a \\ b \end{pmatrix} \mapsto \mathrm{Rep}_B(u) = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}
\]

And the inverse map:

\[
\mathrm{id} : (\mathbb{R}^2, B) \to (\mathbb{R}^2, E), \qquad \mathrm{Rep}_B(u) = \begin{pmatrix} c \\ d \end{pmatrix} \mapsto \mathrm{Rep}_E(u) = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix}
\]

We can write this in general:

Consider the two bases:

B = {β1, . . . , βn} ,D = {δ1, . . . , δn}

²An automorphism is a map which is an endomorphism (a vector space mapped to itself) and bijective (one-to-one and onto at the same time).

\[
\mathrm{id} : (V, B) \to (V, D)
\]
\[
\mathrm{Rep}_B(u) = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix} \mapsto \mathrm{Rep}_D(u) = \left( \mathrm{Rep}_D(\beta_1) \,\middle|\, \cdots \,\middle|\, \mathrm{Rep}_D(\beta_n) \right)_{n \times n} \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}
\]

The change of basis is represented as the identity linear map of V onto itself

id :(V,B) → (V,D)

\[
\mathrm{Rep}_{BD}(\mathrm{id}) = \left( \mathrm{Rep}_D(\beta_1) \,\middle|\, \cdots \,\middle|\, \mathrm{Rep}_D(\beta_n) \right)
\]

The matrix of the change of basis from B to D is the matrix formed by the representations in D of the basis vectors of B.

3 Composition of linear maps

The composition of linear maps allows one to go from one vector space to another, passing through a third one. Consider two linear maps:

\[
f : V \to W, \qquad \forall v \in V, \; A_f v = w \in W
\]
\[
g : W \to U, \qquad \forall w \in W, \; A_g w = u \in U
\]

We can go from V to U:

\[
V \xrightarrow{f} W \xrightarrow{g} U
\]

We define the composition of the two linear maps from V to U:

g ◦ f : V → U

This translates to a multiplication of matrices:

∀v ∈ V,AgAf v = u ∈ U

As with matrix multiplication, composition of linear maps is associative but, in general, not commutative:

\[
V \xrightarrow{f} W \xrightarrow{g} U \xrightarrow{h} T
\]

\[
h \circ (g \circ f) : V \to T, \qquad (h \circ g) \circ f : V \to T
\]

∀v ∈ V,Ah(AgAf )v = (AhAg)Af v ∈ T
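These identities can be spot-checked with random matrices of compatible shapes (an illustrative Python/NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
Af = rng.normal(size=(3, 2))   # f: R^2 -> R^3
Ag = rng.normal(size=(4, 3))   # g: R^3 -> R^4
Ah = rng.normal(size=(2, 4))   # h: R^4 -> R^2
v = rng.normal(size=2)

# g o f acts on v as the matrix product Ag Af ...
print(np.allclose(Ag @ (Af @ v), (Ag @ Af) @ v))    # True
# ... and composition / matrix multiplication is associative:
print(np.allclose(Ah @ (Ag @ Af), (Ah @ Ag) @ Af))  # True
```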

4 Inverse of a linear map

An isomorphism f : V → W is invertible if there is another linear map g : W → V such that:

\[
g \circ f = \mathrm{id}_V : V \to V
\]
\[
f \circ g = \mathrm{id}_W : W \to W
\]

g is the inverse of f (g = f⁻¹).

The associated matrix of the inverse map is the inverse of the matrix:

\[
A_g = A_f^{-1} \;\to\; A_f A_g = A_g A_f = I_n
\]

5 Path of linear maps

Having a collection of linear maps, we can find new maps by composing and inverting those we already have. For example, consider the linear maps in Figure 9. We have a linear map f, two changes of basis, and the resulting linear map f′ in the new bases. To find the matrix A′ associated to f′ we need to compute

\[
A' = \mathrm{Rep}_{DD'}(\mathrm{id}) \cdot A \cdot \mathrm{Rep}_{B'B}(\mathrm{id}) = \mathrm{Rep}_{DD'}(\mathrm{id}) \cdot A \cdot \left( \mathrm{Rep}_{BB'}(\mathrm{id}) \right)^{-1}
\]

The order is important. You can see in section 3 that the order from left to right is inverted with respect to the order of application, i.e., the first map to be applied is the last one to be written. Accordingly, the multiplication of matrices is also inverted along the path.

Figure 9: Example of a path of linear maps. f is a linear map between V and W. A is the matrix associated with f. \(\mathrm{Rep}_{BB'}(\mathrm{id})\) is the change of basis from B to B′. \(\mathrm{Rep}_{DD'}(\mathrm{id})\) is the change of basis from D to D′. f′ is the map between V and W with the other bases. A′ is the matrix associated to f′.
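The path formula can be illustrated numerically with hypothetical (invertible) change-of-basis matrices P = Rep_BB'(id) and Q = Rep_DD'(id); the check below confirms that the diagram commutes:

```python
import numpy as np

A = np.array([[1.0, 2.0],      # hypothetical matrix of f in the bases (B, D)
              [0.0, 1.0]])
P = np.array([[1.0, 1.0],      # hypothetical Rep_BB'(id), invertible
              [0.0, 2.0]])
Q = np.array([[2.0, 0.0],      # hypothetical Rep_DD'(id)
              [1.0, 1.0]])

# A' = Rep_DD'(id) . A . (Rep_BB'(id))^{-1}
A_prime = Q @ A @ np.linalg.inv(P)

# Mapping then changing basis equals changing basis then applying A'.
v = np.array([3.0, -1.0])
print(np.allclose(Q @ (A @ v), A_prime @ (P @ v)))   # True
```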

6 Exercises

Ex. 1 — A pure rotation returns the vector rotated by a certain angle. That is, it does not change its norm. Additionally, a rotation does not change the relative angle between vectors, so in particular, it preserves the orthogonality between vectors. Show that a pure rotation matrix \(A \in M_{2 \times 2}(\mathbb{R})\) fulfills the property \(A^{-1} = A^T\).

Ex. 2 — Consider the linear map \(f : \mathbb{R}^2 \to \mathbb{R}^2\). In the canonical basis of \(\mathbb{R}^2\), \(e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\), \(e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\), the map is defined by \(f(e_1) = e_1\), \(f(e_2) = e_1 + e_2\). Find the matrix representation for this map.

Ex. 3 — Consider the vector space of polynomials of degree ≤ 1, \(P_1[x] = \{v = \alpha x + \beta, \text{ with } \alpha, \beta \in \mathbb{R}\}\). We define the linear map \(f : \mathbb{R}^3 \to P_1\) such that

\[
f : \begin{pmatrix} a \\ b \\ c \end{pmatrix} \mapsto (2a + b) - cx
\]

Find the matrix representation for f.

Ex. 4 — Consider the following two bases for \(\mathbb{R}^2\):

\[
\varepsilon_2 = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}, \qquad B = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\}
\]

Calculate the matrix of change of basis and its inverse.

Ex. 5 — Consider the linear map \(f : \mathbb{R}^2 \to \mathbb{R}^3\) with the bases:

\[
B = \{u_1, u_2\} = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \end{pmatrix} \right\}, \qquad D = \{v_1, v_2, v_3\} = \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ -2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\}
\]

The linear map is defined as:

\[
f(u_1) = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad f(u_2) = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}
\]

1. Write the image of the basis B, f(B), in terms of the basis D, that is, \(\mathrm{Rep}_D(f(u_1))\) and \(\mathrm{Rep}_D(f(u_2))\).

2. Given the vector \(w = a u_1 + b u_2\), write f(w) in the basis D.

3. Re-write the linear map as a matrix.

Ex. 6 — Consider a linear map \(f : \mathbb{R}^2 \to \mathbb{R}^3\) defined as \(f(u_1) = v_1 + 2 v_2\), \(f(u_2) = v_1 - v_3\), with \(B = \{u_1, u_2\}\) and \(\{v_1, v_2, v_3\}\) the bases of \(\mathbb{R}^2\) and \(\mathbb{R}^3\), respectively. Calculate the matrix representation of f.

Ex. 7 — Consider a linear map between a 3-dimensional space and the polynomials of degree ≤ 1, \(f : \mathbb{R}^3 \to P_1[x]\), defined as

\[
f \begin{pmatrix} a \\ b \\ c \end{pmatrix} = (2a + b) - cx
\]

Find a matrix representation for f. Note: there can be more than one matrix representation, depending on the bases chosen for the image space.

Ex. 8 — Consider the linear map \(f : \mathbb{R}^2 \to \mathbb{R}^3\) defined as:

\[
f(e_1) = (1, 0, 0), \qquad f(e_2) = (0, 1, 0)
\]

Consider the two bases in \(\mathbb{R}^2\):

\[
\varepsilon_2 = \{e_1, e_2\} = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}, \qquad B = \{u_1, u_2\} = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \end{pmatrix} \right\}
\]

1. Find the matrix representation for f in \(\varepsilon_2\) and in B.

2. Find Ker(f).

3. Is f an isomorphism?

Ex. 9 — Consider the linear map \(f : \mathbb{R}^3 \to \mathbb{R}^2\) with the action on the basis vectors:

\[
f \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad f \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \qquad f \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]

1. What is the dimension of the kernel Ker(f)?

2. Is f an isomorphism?

Ex. 10 — Consider the linear map \(f : \mathbb{R}^3 \to \mathbb{R}^2\) defined by \(f(a, b, c) = (a + b,\; b + c)\).

1. Find an associated matrix.

2. Find Ker(f).

3. Is it one-to-one (injective)? Is it onto (surjective)? Is f an isomorphism?

Ex. 11 — Consider the bases

\[
\varepsilon_2 = \{e_1, e_2\} = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}, \qquad D = \{u_1, u_2\} = \left\{ \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \begin{pmatrix} -2 \\ 4 \end{pmatrix} \right\}
\]

Find the two matrices of change of basis \(\varepsilon_2 \to D\) and \(D \to \varepsilon_2\).

Ex. 12 — Consider the bases:

\[
B = \left\{ \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\}, \qquad D = \left\{ \begin{pmatrix} -1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\}
\]

Find the two matrices of change of basis \(B \to D\) and \(D \to B\).

Ex. 13 — Find the change of basis from ε2 = {e1, e2} to D = {e2, e1}, where e1, e2 are the canonical vectors. Compare to the matrix for the opposite change.

Ex. 14 — Consider the bases \(\varepsilon_2 = \{e_1, e_2\} = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}\) and \(B = \{u_1, u_2\} = \left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} -2 \\ 3 \end{pmatrix} \right\}\). Write \(v = \begin{pmatrix} 3 \\ 5 \end{pmatrix}\) in the basis B using the change of basis matrix.

Ex. 15 — Consider a linear map \(f : \mathbb{R}^2 \to \mathbb{R}^2\) defined in the canonical basis in both spaces by the matrix

\[
A = \begin{pmatrix} \sqrt{3}/2 & -1/2 \\ 1/2 & \sqrt{3}/2 \end{pmatrix}
\]

Transform this representation to another with respect to two other bases:

\[
B = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 2 \end{pmatrix} \right\}, \qquad D = \left\{ \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 2 \\ 3 \end{pmatrix} \right\}
\]

That is, we are looking for the matrix that represents the application \(f' : (V, B) \to (V, D)\), with \(V = \mathbb{R}^2\), such that:

\[
\begin{array}{ccc}
(V, \varepsilon_2) & \xrightarrow{f} & (V, \varepsilon_2) \\
\downarrow \mathrm{id} & & \downarrow \mathrm{id} \\
(V, B) & \xrightarrow{f'} & (V, D)
\end{array}
\]

7 R practical

Besides the matrix operations seen in Unit 3 about Matrices, we will see here some operations related to linear maps.

7.1 Kernel of a linear map

The pracma package in R has the command null() (or nullspace()), which computes the null space (the kernel) of a linear map.

#Load the package
> library("pracma")

#Define the matrix
> m <- matrix(c(0,1,0,0), 2, 2)

#Compute the kernel of the corresponding linear map
> null(m)
     [,1]
[1,]    0
[2,]    1

#To know the dimension of the kernel
#we can calculate the rank of the matrix

> qr(null(m))$rank
[1] 1

#In this case the kernel is a one-dimensional vector space.

7.2 Image of a linear map

As we mentioned before, the columns of the matrix associated to the linear map generate the image inside the target space. Moreover, the rank of the matrix is the dimension of the image.

#Define the matrix
> m <- matrix(c(1, 0, 0, 0, 1, 0), 3, 2)
> m
     [,1] [,2]
[1,]    1    0
[2,]    0    1
[3,]    0    0

#The vectors are in R^3, so the image is a subspace of R^3.

#Compute the rank
> qr(m)$rank
[1] 2

#The dimension of the image is 2.
#The linear map is not surjective (onto).
