Subspaces and Linear Span

Definition
A nonempty subset W of a vector space V is called a subspace of V if it is a vector space under the operations in V.

Theorem
A nonempty subset W of a vector space V is a subspace of V if W satisfies the two closure axioms.

Proof: Suppose now that W satisfies the closure axioms. We just need to prove the existence of inverses and of the zero element. Let x ∈ W. By distributivity,

0x = (0 + 0)x = 0x + 0x.

Hence 0x = 0 (add the inverse of 0x, which exists in V, to both sides). Since 0x ∈ W by the closure axioms, we get 0 ∈ W. If x ∈ W, then −x = (−1)x is in W, again by the closure axioms. □
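This criterion is easy to try out numerically for a concrete candidate. The sketch below is our own illustration (not from the lecture): it spot-checks the two closure axioms for the plane W = {x ∈ R^3 : x_1 + x_2 + x_3 = 0} by random sampling; the helper is_in_W is a hypothetical membership test for this particular W.

```python
import numpy as np

# Illustrative candidate subspace (our choice, not from the lecture):
# W = {x in R^3 : x1 + x2 + x3 = 0}, a plane through the origin.
def is_in_W(x, tol=1e-9):
    return abs(x.sum()) < tol

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = rng.standard_normal((2, 3))
    u -= u.mean()                 # project onto W: coordinates now sum to 0
    v -= v.mean()
    c = rng.standard_normal()
    assert is_in_W(u + v)         # closure under addition
    assert is_in_W(c * u)         # closure under scalar multiplication
# As in the proof, 0 = 0*u and -u = (-1)*u then lie in W automatically.
print("both closure axioms hold on all samples")
```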
Examples
1. R is a subspace of the real vector space C. But it is not a subspace of the complex vector space C.
2. C^r[a, b] is a subspace of the vector space C^s[a, b] for s < r. All of them are subspaces of F([a, b]; R).
3. M_{m,n}(R) is a subspace of the real vector space M_{m,n}(C).
4. The set of points on the x-axis forms a subspace of the plane. More generally, the set of points on a line passing through the origin is a subspace of R^2. Likewise, the set of real solutions of a_1 x_1 + ... + a_n x_n = 0 forms a subspace of R^n; it is called a hyperplane. More generally, the set of solutions of a homogeneous system of linear equations in n variables forms a subspace of K^n. In other words, if A ∈ M_{m,n}(K), then the set {x ∈ K^n : Ax = 0} is a subspace of K^n. It is called the null space of A.
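For a concrete matrix the null space of example 4 can be computed numerically. A minimal sketch, with a made-up 2 × 4 system, using scipy.linalg.null_space (which returns an orthonormal basis of {x : Ax = 0}):

```python
import numpy as np
from scipy.linalg import null_space

# Made-up homogeneous system: its solution set {x : Ax = 0} is a subspace of R^4.
A = np.array([[1., 2., 0., -1.],
              [0., 1., 1.,  3.]])
N = null_space(A)              # columns = an orthonormal basis of the null space
print(N.shape)                 # (4, 2): a 2-dimensional subspace of R^4
assert np.allclose(A @ N, 0)   # every basis vector is indeed a solution
```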
Linear Span of a set in a Vector Space

Definition
Let S be a subset of a vector space V. The linear span of S is the subset
L(S) = { \sum_{i=1}^{n} c_i x_i : x_1, ..., x_n ∈ S and c_1, ..., c_n are scalars }.
We set L(∅) = {0} by convention. A typical element \sum_{i=1}^{n} c_i x_i of L(S) is called a linear combination of the x_i's. Thus L(S) is the set of all finite linear combinations of elements of S. In case V = L(S), we say that S spans V or generates V.

Proposition
The smallest subspace of V containing S is L(S).

Proof: If S ⊂ W ⊂ V and W is a subspace of V, then by the closure axioms, L(S) ⊂ W. If we show that L(S) itself is a subspace of V, then the proof will be complete. It is easy to verify that 0 ∈ L(S) and that L(S) is closed under addition as well as scalar multiplication (exercise!). □
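For a finite set S ⊂ R^n, membership in L(S) is a least-squares question: v ∈ L(S) exactly when the best approximation to v from the span has zero residual. A small sketch with made-up vectors (in_span is our illustrative helper):

```python
import numpy as np

def in_span(S, v, tol=1e-9):
    """Is v a linear combination of the columns of S?"""
    coeffs, *_ = np.linalg.lstsq(S, v, rcond=None)
    return bool(np.linalg.norm(S @ coeffs - v) < tol)

S = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])                    # S = {s1, s2} in R^3, as columns
print(in_span(S, np.array([2., 3., 5.])))   # True: 2*s1 + 3*s2
print(in_span(S, np.array([0., 0., 1.])))   # False: outside the plane L(S)
```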
Remark
(i) Different sets may span the same subspace. For example,
L({î, ĵ}) = L({î, ĵ, î + ĵ}) = R^2.
More generally, the standard basis vectors e_1, ..., e_n span R^n, and so does any set S ⊂ R^n containing e_1, ..., e_n.
(ii) The vector space R[t] is spanned by {1, t, t^2, ..., t^n, ...} and also by {1, (1 + t), ..., (1 + t)^n, ...}.
(iii) The linear span of any nonzero element in a field K is the field itself.
(iv) While talking about the linear span or any other vector space notion, the underlying field of scalars is understood. If we change it, we get different objects and relations. For instance, the real linear span of 1 ∈ C is R, whereas the complex linear span is the whole of C.
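Equality of spans, as in (i), can be tested with ranks: two finite sets span the same subspace iff stacking them together raises neither rank. A sketch of this check for the example above:

```python
import numpy as np

def same_span(S1, S2):
    """L(rows of S1) == L(rows of S2), compared via matrix ranks."""
    r1 = np.linalg.matrix_rank(S1)
    r2 = np.linalg.matrix_rank(S2)
    r_both = np.linalg.matrix_rank(np.vstack([S1, S2]))
    return r1 == r2 == r_both

i_hat, j_hat = np.array([1., 0.]), np.array([0., 1.])
S1 = np.array([i_hat, j_hat])
S2 = np.array([i_hat, j_hat, i_hat + j_hat])
print(same_span(S1, S2))        # True: both sets span R^2
```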
Linear Dependence

Definition
Let V be a vector space. A subset S of V is called linearly dependent (L.D.) if there exist distinct elements v_1, ..., v_n ∈ S and α_i ∈ K, not all zero, such that

\sum_{i=1}^{n} α_i v_i = 0.    (∗)

S is called linearly independent (L.I.) if it is not linearly dependent.

Remark
(i) Thus a relation such as (∗) holds in a linearly independent set S with distinct v_i's iff all the scalars α_i = 0.
(ii) Any subset which contains a L.D. set is again L.D.
(iii) The singleton set {0} is L.D. in every vector space.
(iv) Any subset of a L.I. set is L.I.
Examples
(i) The set {e_i = (0, ..., 1, ..., 0) : 1 ≤ i ≤ n} is L.I. in K^n. This can be shown easily by taking the dot product with e_i in a relation of the type (∗).
(ii) The set S = {1, t, t^2, ..., t^n, ...} is L.I. in K[t]. This follows from the definition of a polynomial (!). Alternatively, if we think of the polynomial functions (from K to K) defined by the polynomials, then linear independence can be proved by evaluating a dependence relation, as well as its derivatives of sufficiently high orders, at t = 0.
(iii) In the space C[a, b] for a < b ∈ R, consider the set S = {1, cos^2 t, sin^2 t}. The familiar formula cos^2 t + sin^2 t = 1 tells us that S is linearly dependent. What about the set {1, cos t, sin t}?
(iv) If E_ij denotes the m × n matrix with 1 in the (i, j)-th position and 0 elsewhere, then the set {E_ij : i = 1, ..., m, j = 1, ..., n} is linearly independent in the vector space M_{m,n}(K).
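Example (iii) can be probed numerically: sampling the functions at several points turns a dependence in C[a, b] into rank deficiency of the sample matrix. A true dependence forces low rank at any sample; full rank is only numerical evidence of independence. A sketch, taking [a, b] = [0, 1]:

```python
import numpy as np

t = np.linspace(0., 1., 50)     # sample points in [a, b] = [0, 1]
dep   = np.array([np.ones_like(t), np.cos(t)**2, np.sin(t)**2])
indep = np.array([np.ones_like(t), np.cos(t),    np.sin(t)])
print(np.linalg.matrix_rank(dep))    # 2: reflects 1 - cos^2 t - sin^2 t = 0
print(np.linalg.matrix_rank(indep))  # 3: consistent with {1, cos t, sin t} being L.I.
```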
A useful Lemma

Lemma
Let T be a linearly independent subset of a vector space V. If v ∈ V \ L(T), then T ∪ {v} is linearly independent.

Proof: Suppose there is a linear dependence relation among the elements of T ∪ {v} of the form
\sum_{i=1}^{k} α_i v_i + βv = 0,
with v_i ∈ T. If β ≠ 0, then v = −β^{−1} \sum_{i=1}^{k} α_i v_i ∈ L(T). Therefore β = 0. But then \sum_{i=1}^{k} α_i v_i = 0, and this implies that α_1 = α_2 = ... = α_k = 0, since T is linearly independent. □

Definition
A subset S of a vector space V is called a basis of V if (i) V = L(S), and (ii) S is linearly independent.
Theorem
Let S be a finite subset of a vector space V such that V = L(S). Suppose S_1 is a subset of S which is linearly independent. Then there exists a basis S_2 of V such that S_1 ⊂ S_2 ⊂ S.

Proof: If S ⊆ L(S_1), then L(S_1) = L(S) = V and we can take S_2 = S_1. Otherwise there exists v_1 ∈ S \ S_1 and, by the Lemma above, S_1′ := S_1 ∪ {v_1} is linearly independent. Replace S_1 by S_1′ and repeat the above argument. Since S is finite, this process will terminate in a finite number of steps and yield the desired result. □

Definition
A vector space V is called finite dimensional if there exists a finite set S ⊂ V such that L(S) = V.

Remark
By the above theorem, it follows that every finite dimensional vector space has a finite basis. To see this, choose a finite set S such that L(S) = V and apply the theorem with S_1 = ∅.
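The proof is effectively an algorithm: repeatedly adjoin a vector of S that lies outside the current span, and the Lemma keeps the growing set independent. A sketch for subspaces of R^n, with made-up S, using rank to test membership in the span:

```python
import numpy as np

def extend_to_basis(S1, S):
    """Grow the independent list S1 inside the spanning list S, as in the proof."""
    basis = list(S1)
    for v in S:
        candidate = np.array(basis + [v])
        # v lies outside L(basis) iff adjoining it raises the rank; the Lemma
        # then guarantees that basis + [v] is still linearly independent.
        if np.linalg.matrix_rank(candidate) > len(basis):
            basis.append(v)
    return np.array(basis)

S = [np.array([1., 0., 0.]), np.array([2., 0., 0.]),
     np.array([0., 1., 0.]), np.array([1., 1., 1.])]
B = extend_to_basis([], S)      # the Remark's case: S1 = empty set
print(B)                        # picks 3 of the 4 vectors: a basis of R^3
```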

Theorem
If a vector space V contains a finite subset S = {v_1, ..., v_n} such that V = L(S), then every subset of V with n + 1 (or more) elements is linearly dependent.

Proof: Let u_1, ..., u_{n+1} be any n + 1 elements of V. Since V = L(S), we can write

u_i = \sum_{j=1}^{n} a_{ij} v_j   for some a_{ij} ∈ K and for i = 1, ..., n + 1.

Consider the (n + 1) × n matrix A = (a_{ij}). Let A′ be a REF of A. Then A′ can have at most n pivots, and so the last row of A′ must be full of zeros. On the other hand, A′ = RA for some invertible (n + 1) × (n + 1) matrix R (which is a product of elementary matrices). Now if (c_1, ..., c_{n+1}) denotes the last row of R, then not all the c_i's are zero, since R is invertible.
Proof (contd.): Also, since the last row of A′ = RA is 0, we see that
\sum_{i=1}^{n+1} c_i a_{ij} = 0   for each j = 1, ..., n.

Multiplying by v_j and summing over j, we obtain

0 = \sum_{j=1}^{n} ( \sum_{i=1}^{n+1} c_i a_{ij} ) v_j = \sum_{i=1}^{n+1} c_i ( \sum_{j=1}^{n} a_{ij} v_j ) = \sum_{i=1}^{n+1} c_i u_i.

This proves that u_1, ..., u_{n+1} are linearly dependent. □

Corollary
For a finite dimensional vector space, the number of elements in any two bases is the same.

Definition
Given a finite dimensional vector space V, the dimension of V is defined to be the number of elements in any basis for V.
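For concrete vectors, the dependence guaranteed by the theorem above can be computed directly: stack the n + 1 vectors as columns and take a null-space vector. A sketch with random vectors (any four vectors in R^3 must be L.D.):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
U = rng.standard_normal((3, 4))   # u1, ..., u4 in R^3, stored as columns
c = null_space(U)[:, 0]           # nontrivial coefficients, guaranteed to exist
print(c)
assert np.allclose(U @ c, 0)      # c1*u1 + c2*u2 + c3*u3 + c4*u4 = 0
```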
Exercises

(i) Show that in any vector space V of dimension n, any subset S such that L(S) = V has at least n elements.
(ii) Let V be a finite dimensional vector space. Suppose S is a maximal linearly independent set (i.e., if you add any more elements of V to S, then it will not remain linearly independent). Show that S is a basis for V.
(iii) Show that a subspace W of a finite dimensional space V is finite dimensional. Further, prove that dim W ≤ dim V and that equality holds iff W = V. (Most of these results are true in the infinite dimensional case also, but this last mentioned result is an exception.)
(iv) Let V and W be vector subspaces of another vector space U. Show that L(V ∪ W) = {v + w : v ∈ V, w ∈ W}. This subspace is denoted by V + W and is called the sum of V and W. Show that V ∩ W is a subspace and that dim(V + W) = dim V + dim W − dim(V ∩ W); a numerical sanity check of this formula is sketched below.
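The dimension formula in exercise (iv) is easy to sanity-check numerically; the following sketch uses a designed pair of subspaces of R^5 whose intersection is known by construction:

```python
import numpy as np

# Designed example in R^5 (our construction): V = span(e1, e2, e3) and
# W = span(e3, e4, e5), so V ∩ W = span(e3) and dim(V ∩ W) = 1.
E = np.eye(5)
BV, BW = E[:, :3], E[:, 2:]                            # bases of V and W, as columns
dim_V = np.linalg.matrix_rank(BV)                      # 3
dim_W = np.linalg.matrix_rank(BW)                      # 3
dim_sum = np.linalg.matrix_rank(np.hstack([BV, BW]))   # dim(V + W) = 5
assert dim_sum == dim_V + dim_W - 1                    # 5 == 3 + 3 - 1
print(dim_V, dim_W, dim_sum)
```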
Examples:
1. dim K = 1 for any field K, considered as a vector space over itself.
2. dim_C C = 1; however, dim_R C = 2.
3. dim R^n = n.
4. dim M_{m,n}(K) = mn.
5. Let Sym_n(K) denote the space of all symmetric n × n matrices with entries in K. What is the dimension of Sym_n(K)? (Answer: n(n + 1)/2.) Indeed, check that

{E_ii : 1 ≤ i ≤ n} ∪ {E_ij + E_ji : i < j} forms a basis.
6. Let Herm_n(C) be the space of all n × n matrices A with complex entries such that A = A* (:= Ā^T). This is a vector space over R and not over C. (Why?) What is the value of dim_R[Herm_n(C)]? (Answer: n^2.) Indeed,

{E_ii : 1 ≤ i ≤ n} ∪ {E_ij + E_ji : i < j} ∪ {ι(E_ij − E_ji) : i < j} forms a basis, where ι denotes the imaginary unit.
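The count in example 5 can be verified numerically: flatten the proposed basis matrices and check that the family has n(n + 1)/2 members and full rank. A sketch over R with n = 4 (the Herm_n basis can be checked similarly after separating real and imaginary parts):

```python
import numpy as np

def E(i, j, n):
    M = np.zeros((n, n)); M[i, j] = 1.0
    return M

n = 4
basis = [E(i, i, n) for i in range(n)] \
      + [E(i, j, n) + E(j, i, n) for i in range(n) for j in range(i + 1, n)]
rows = np.array([M.ravel() for M in basis])   # one flattened matrix per row
print(len(basis), n * (n + 1) // 2)           # 10 10: the predicted count
print(np.linalg.matrix_rank(rows))            # 10: the family is L.I., hence a basis
```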
4. Linear Transformations

Definition
Let V, W be any two vector spaces over K. By a linear map (or a linear transformation) f : V → W we mean a function f satisfying f(α_1 v_1 + α_2 v_2) = α_1 f(v_1) + α_2 f(v_2) for all v_i ∈ V and α_i ∈ K.

Remark
(i) f(\sum_{i=1}^{k} α_i v_i) = \sum_{i=1}^{k} α_i f(v_i).
(ii) If {v_1, ..., v_k} is a basis for a finite dimensional vector space V, then every element of V is a linear combination of these v_i, say v = \sum_{i=1}^{k} α_i v_i. Therefore f(v) = \sum_{i=1}^{k} α_i f(v_i). Thus it follows that f is completely determined by its values on a basis of V, i.e., if f and g are two linear maps such that f(v_i) = g(v_i) for all i = 1, ..., k, then f = g.
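Remark (ii) is the idea behind matrix representations: list the images of the basis vectors as the columns of a matrix A, and then f(v) is A times the coordinate vector of v. A small sketch with a made-up f:

```python
import numpy as np

# A made-up f : R^3 -> R^2, specified only by its values on the standard basis.
f_e1, f_e2, f_e3 = np.array([1., 0.]), np.array([1., 1.]), np.array([0., 2.])
A = np.column_stack([f_e1, f_e2, f_e3])   # columns = images of the basis vectors

alpha = np.array([2., -1., 3.])           # coordinates of v = 2e1 - e2 + 3e3
f_v = A @ alpha                           # = 2f(e1) - f(e2) + 3f(e3), by remark (i)
print(f_v)                                # [1. 5.]
```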
Remark
(iii) If V has a basis {v_1, ..., v_k}, then for each ordered k-tuple (w_1, ..., w_k) of elements of W we obtain a unique linear transformation f : V → W by choosing f(v_i) = w_i for all i, and vice versa.

Example: Consider the space C^r = C^r[a, b], r ≥ 1. To each f ∈ C^r, consider its derivative f′ ∈ C^{r−1}. This defines a function D : C^r → C^{r−1} by D(f) = f′. We know that D is a linear map, i.e., D(αf + βg) = αD(f) + βD(g). Now for each f ∈ C^{r−1} define I(f) ∈ C^r by

I(f)(x) = \int_a^x f(t) dt.

Then I is also a linear map. Moreover, we have D ◦ I = Id. Can you say I ◦ D = Id? Is D a one-one map?
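On the subspace of polynomials these two maps can be tried out with numpy's coefficient arrays; the sketch below takes a = 0, so that I(f)(x) = ∫_0^x f(t) dt. It exhibits D ◦ I = Id while I ◦ D loses the constant term, which answers both questions for this subspace:

```python
from numpy.polynomial import polynomial as P

f = [5., 3., 2.]            # f(t) = 5 + 3t + 2t^2, coefficients low-to-high

D = lambda c: P.polyder(c)  # differentiation
I = lambda c: P.polyint(c)  # integration from a = 0 (integration constant 0)

print(D(I(f)))              # [5. 3. 2.]   D ∘ I = Id
print(I(D(f)))              # [0. 3. 2.]   the constant term is lost: I ∘ D != Id
# So D is not one-one: every constant polynomial is mapped to 0.
```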


Let us write

Dk := D ◦ D ◦ ... ◦ D (k factors).

Thus Dk : Cr → Cr−k is the map defined by Dk(f) = f (k), where f (k) denotes the k-th derivative of f (r > k). Given real numbers a0, ..., ak with ak = 1, consider the map f = a0 Id + a1 D + ... + ak Dk. Then show that f : Cr → Cr−k is a linear map. Determining the zeros of this linear map is precisely the problem of solving the homogeneous linear differential equation of order k:

y (k) + ak−1 y (k−1) + ... + a1 y (1) + a0 y = 0.

Exercise: On the vector space P[x] of all polynomials in one variable, determine all linear maps φ : P[x] → P[x] having the property φ(fg) = f φ(g) + g φ(f) and φ(x) = 1.

15/43
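Aside (an illustration, not from the notes; SymPy assumed): the operator a0 Id + a1 D + ... + ak Dk can be coded directly, and evaluating it at a candidate function is the same as testing a solution of the differential equation.

    import sympy as sp

    x = sp.symbols('x')

    def apply_op(coeffs, g):
        # apply a0*g + a1*g(1) + ... + ak*g(k) for coeffs = [a0, ..., ak]
        return sum(c * sp.diff(g, x, i) for i, c in enumerate(coeffs))

    coeffs = [-1, 0, 1]                   # the operator D◦D - Id, i.e. y(2) - y
    print(apply_op(coeffs, sp.exp(x)))    # 0: e^x is a zero of this linear map
    print(apply_op(coeffs, sp.exp(2*x)))  # 3*exp(2*x): e^(2x) is not a solution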


Definition A linear transformation f : V → W is called an isomorphism if it is invertible, i.e., if there exists g : W → V such that g ◦ f = IdV and f ◦ g = IdW. If there exists an isomorphism f : V → W, then we say that V and W are isomorphic to each other.

Definition Let f : V → W be a linear transformation. Define

R(f) := f(V) := {f(v) ∈ W : v ∈ V}, N(f) := {v ∈ V : f(v) = 0}.

One can easily check that R(f) and N(f) are vector subspaces of W and V respectively. They are called, respectively, the range and the null space of f.

16/43
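Aside (an illustration, not from the notes; SymPy assumed): for the linear map f(x) = Ax determined by a matrix A, R(f) is the column space of A and N(f) is its null space, and both can be computed mechanically.

    import sympy as sp

    A = sp.Matrix([[1, 2, 3],
                   [2, 4, 6]])  # every column is a multiple of (1, 2)

    print(A.columnspace())  # basis of R(f): just the vector (1, 2)
    print(A.nullspace())    # basis of N(f): two vectors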


Lemma Let f : V → W be a linear transformation.
(a) Suppose f is injective and S ⊂ V is linearly independent. Then f(S) is linearly independent.
(b) Suppose f is onto and S spans V. Then f(S) spans W.
(c) Suppose S is a basis for V and f is an isomorphism. Then f(S) is a basis for W.

Proof: (a) Let a1f(v1) + ... + akf(vk) = 0, where vi ∈ S. So f(a1v1 + ... + akvk) = 0. Since f is injective, we have a1v1 + ... + akvk = 0, and since S is linearly independent, a1 = ... = ak = 0.
(b) Given w ∈ W, pick v ∈ V such that f(v) = w. Now L(S) = V implies that we can write v = a1v1 + ... + anvn with vi ∈ S. But then w = f(v) = a1f(v1) + ... + anf(vn) ∈ L(f(S)).
(c) Put (a) and (b) together. □

17/43
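Aside (a numeric illustration of part (c), not from the notes; NumPy assumed): an invertible 3 × 3 matrix sends the standard basis of R3 to its own columns, and the image is again a basis.

    import numpy as np

    A = np.array([[2., 1., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])

    assert np.linalg.matrix_rank(A) == 3      # f = A is an isomorphism of R^3
    image = A @ np.eye(3)                     # columns f(e1), f(e2), f(e3)
    assert np.linalg.matrix_rank(image) == 3  # the image is again a basis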


Theorem Let V and W be any two vector spaces of dimension n. Then V and W are isomorphic to each other; conversely, any two isomorphic finite dimensional vector spaces have the same dimension.

Proof: Pick bases A and B for V and W respectively. Then A and B have the same number of elements. Let f : A → B be any bijection. By the above discussion, f extends to a linear map f : V → W. If g : B → A is the inverse of f : A → B, then g also extends to a linear map.

Since g ◦ f = Id on A, it follows that g ◦ f = IdV on the whole of V. Likewise f ◦ g = IdW. The converse follows from part (c) of the previous lemma. □

Remark Because of the above theorem, any vector space of dimension n over K is isomorphic to Kn.

18/43
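Aside (a sketch of the Remark, not from the notes): the isomorphism with Kn is just the coordinate map relative to a chosen basis. For instance, polynomials of degree < 3 correspond to R3 via the basis {1, x, x2}; the helper names below are illustrative only.

    def to_coords(p):
        """p is a polynomial encoded as a dict {power: coefficient}."""
        return [p.get(0, 0.0), p.get(1, 0.0), p.get(2, 0.0)]

    def from_coords(c):
        return {i: ci for i, ci in enumerate(c) if ci != 0}

    p = {0: 5.0, 2: -1.0}                  # the polynomial 5 - x^2
    assert to_coords(p) == [5.0, 0.0, -1.0]
    assert from_coords(to_coords(p)) == p  # g ◦ f = Id on this example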


Remark We have seen that the study of 'linear transformations on Euclidean spaces' can be converted into the 'study of matrices'. It follows that the study of 'linear transformations on finite dimensional vector spaces' can also be converted into the 'study of matrices'.

Exercises:
(1) Clearly a bijective linear transformation is invertible. Show that the inverse is also linear.

19/43


(2) Let V be a finite dimensional vector space and f : V → V be a linear map. Prove that the following are equivalent:
(i) f is an isomorphism.
(ii) f is surjective.
(iii) f is injective.
(iv) there exists g : V → V such that g ◦ f = IdV.
(v) there exists h : V → V such that f ◦ h = IdV.

(3) Let A and B be any two n × n matrices with AB = In. Show that both A and B are invertible and that they are inverses of each other.

Proof: If f and g denote the corresponding linear maps, then f ◦ g = Id : Rn → Rn. By exercise (2) above, f is an isomorphism and f ◦ g = g ◦ f = Id. Hence AB = In = BA, which means A = B−1. □

20/43
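Aside (a numeric sanity check of exercise (3), not from the notes; NumPy assumed): build B from the one-sided condition AB = In by solving a linear system, then observe that BA = In comes for free.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))    # invertible with probability 1
    B = np.linalg.solve(A, np.eye(4))  # B chosen so that A @ B = I4

    assert np.allclose(A @ B, np.eye(4))
    assert np.allclose(B @ A, np.eye(4))  # the other product is I4 as well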


Rank and Nullity

Definition Let f : V → W be a linear transformation of finite dimensional vector spaces. By the rank of f we mean the dimension of the range of f, i.e., rk(f) = dim f(V) = dim R(f). By the nullity of f we mean the dimension of the null space, i.e., n(f) = dim N(f).

Theorem (Rank and Nullity Theorem): The rank and nullity of a linear transformation f : V → W on a finite dimensional vector space V add up to the dimension of V:

rk(f) + n(f) = dim V.

21/43
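Aside (an illustration, not from the notes; SymPy assumed): the theorem is easy to check mechanically for a matrix map x ↦ Ax on V = Q5.

    import sympy as sp

    A = sp.Matrix([[1, 0, 2, 0, 1],
                   [0, 1, 1, 1, 0],
                   [1, 1, 3, 1, 1]])  # third row = first row + second row

    rank = A.rank()                  # rk(f) = 2
    nullity = len(A.nullspace())     # n(f) = 3
    assert rank + nullity == A.cols  # 2 + 3 = 5 = dim V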


Proof: Suppose dim V = n. Let S = {v1, v2, ..., vk} be a basis of N(f). We can extend S to a basis S′ = {v1, v2, ..., vk, w1, w2, ..., wn−k} of V. We will show that

T = {f(w1), f(w2), ..., f(wn−k)}

is a basis of R(f). Observe that f(S′) = T ∪ {0}, since each vi lies in N(f); hence, by part (b) of the previous lemma, T spans f(V) = R(f). Suppose

β1f(w1) + ... + βn−kf(wn−k) = 0.

Then f(β1w1 + ... + βn−kwn−k) = 0. Thus β1w1 + ... + βn−kwn−k ∈ N(f). Hence there are scalars α1, α2, ..., αk such that

α1v1 + α2v2 + ... + αkvk = β1w1 + β2w2 + ... + βn−kwn−k.

22/43


Rearranging, α1v1 + ... + αkvk − β1w1 − ... − βn−kwn−k = 0, so by the linear independence of {v1, v2, ..., vk, w1, w2, ..., wn−k} all the coefficients vanish; in particular β1 = β2 = ... = βn−k = 0. Hence T is linearly independent, and therefore it is a basis of R(f). Consequently rk(f) = n − k = dim V − n(f), which proves the theorem. □

The trace of a square matrix is defined as the sum of its diagonal entries, i.e., for A = ((aij)) ∈ Mn,

trace(A) = a11 + a22 + ... + ann.

23/43
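Aside (an illustration, not from the notes; NumPy assumed): the trace is computed by summing the diagonal, and a quick check is consistent with the fact that trace is itself a linear map on Mn.

    import numpy as np

    A = np.array([[1., 2.], [3., 4.]])
    B = np.array([[0., 1.], [1., 0.]])

    assert np.trace(A) == 1. + 4.  # sum of the diagonal entries
    assert np.isclose(np.trace(2*A + 3*B), 2*np.trace(A) + 3*np.trace(B))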