
Unit 2: Projections and subspaces

Juan Luis Melero and Eduardo Eyras
September 2018

Contents

1 Operations and definitions
  1.1 Inner product (scalar product)
  1.2 Norm of a vector
  1.3 Distance between vectors
  1.4 Angle between two vectors
  1.5 Orthogonal projection

2 Gram-Schmidt orthogonalization
  2.1 Process of orthogonalization
  2.2 Example of orthogonal basis in R2
  2.3 Example of orthogonal basis in R3

3 Vector subspaces
  3.1 Vector space projection

4 Exercises

5 R practical
  5.1 Inner product
  5.2 Norm of a vector
  5.3 Distance between two vectors
  5.4 Angle between two vectors
  5.5 Orthogonal projection
  5.6 Gram-Schmidt orthogonalization

1 Operations and definitions

1.1 Inner product (scalar product)

The inner product (also called dot product or scalar product) is an operation that takes two vectors of the same dimension and returns a scalar (real) value. We define an inner product as:

$$\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}, \qquad (\vec{u},\vec{v}) \mapsto \langle \vec{u},\vec{v} \rangle \in \mathbb{R}$$

with the following properties:

1. $\langle \vec{u},\vec{v} \rangle = \langle \vec{v},\vec{u} \rangle$ (commutative property)

2. $\langle \vec{u},\vec{v} + \vec{w} \rangle = \langle \vec{u},\vec{v} \rangle + \langle \vec{u},\vec{w} \rangle$ (distributive property with respect to vector addition)

3. $c\,\langle \vec{u},\vec{v} \rangle = \langle c\vec{u},\vec{v} \rangle = \langle \vec{u},c\vec{v} \rangle \;\; \forall c \in \mathbb{R}$ (associative property with respect to scalar multiplication)

4. $\langle \vec{u},\vec{u} \rangle \geq 0$ (non-negative property)

5. $\langle \vec{u},\vec{u} \rangle = 0 \iff \vec{u} = \vec{0}$

The standard definition of a scalar product in $\mathbb{R}^n$ is the following:

$$\vec{u} = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}, \quad \vec{v} = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}, \qquad \langle \vec{u},\vec{v} \rangle = \vec{u}^{\,t}\vec{v} = \begin{pmatrix} u_1 & \dots & u_n \end{pmatrix} \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = u_1 v_1 + \cdots + u_n v_n$$

1.2 Norm of a vector

The norm of a vector can be considered as its length. The concept of distance exists in any number of dimensions $n$, and we use the norm of a vector to define it. The norm is defined as the square root of the inner product of a vector with itself:

$$\text{Norm} = \|\vec{u}\| = \sqrt{\langle \vec{u},\vec{u} \rangle}$$

The norm of a vector has the following properties:

1. $\|\vec{u}\| = 0 \iff \vec{u} = \vec{0}$ (only the vector $\vec{0}$ has length 0)

2. $\|\vec{u}\| = 1 \iff \langle \vec{u},\vec{u} \rangle = 1$ (definition of a unit vector, i.e. a vector of length 1)

3. $\|c\vec{u}\| = |c|\,\|\vec{u}\|$

4. $\forall \vec{v} \in V,\ \|\vec{v}\| \neq 0 \implies \dfrac{1}{\|\vec{v}\|}\vec{v}$ is a unit vector.

Proof of property 3:

$$\|c\vec{u}\| = \sqrt{\langle c\vec{u},c\vec{u} \rangle} = \sqrt{c^2 \langle \vec{u},\vec{u} \rangle} = |c|\sqrt{\langle \vec{u},\vec{u} \rangle} = |c|\,\|\vec{u}\|$$

Proof of property 4:

$$\left\| \frac{1}{\|\vec{v}\|}\vec{v} \right\| = \sqrt{\left\langle \frac{1}{\|\vec{v}\|}\vec{v},\, \frac{1}{\|\vec{v}\|}\vec{v} \right\rangle} = \sqrt{\frac{1}{\|\vec{v}\|^2}\langle \vec{v},\vec{v} \rangle} = \frac{1}{\|\vec{v}\|}\,\|\vec{v}\| = 1$$
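As a quick illustration of property 4, the following R sketch (the example vector is an arbitrary choice, not taken from the text) normalizes a vector and checks that the result has norm 1:

#Example vector (arbitrary choice)
> v <- c(3, 4)

#Normalize: divide the vector by its norm
> v_unit <- v / sqrt(sum(v*v))
> v_unit
[1] 0.6 0.8

#Check that the normalized vector has norm 1
> sqrt(sum(v_unit*v_unit))
[1] 1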

The length (modulus) of a vector in $\mathbb{R}^n$ can be written as:

$$\|\vec{u}\| = \sqrt{\langle \vec{u},\vec{u} \rangle} = \sqrt{\vec{u}^{\,t}\vec{u}} = \sqrt{\begin{pmatrix} u_1 & \dots & u_n \end{pmatrix} \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}} = \sqrt{u_1 u_1 + \cdots + u_n u_n} = \left( \sum_{i=1}^{n} u_i^2 \right)^{1/2}$$

This is called the $L_2$ (Euclidean) norm, but there are other possible definitions of a norm:

$$L_p \text{ norm:} \quad \|\vec{u}\|_p = \left( \sum_{i=1}^{n} |u_i|^p \right)^{1/p}$$

$$L_1 \text{ norm:} \quad \|\vec{u}\|_1 = \sum_{i=1}^{n} |u_i|$$

$$L_\infty \text{ norm:} \quad \|\vec{u}\|_\infty = \max_i \{|u_i|\}$$
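These alternative norms are easy to compute in R. The following minimal sketch (the vector and the value of $p$ are arbitrary choices) computes them alongside the Euclidean norm:

#Define an example vector
> u <- c(1, 0, 3, 2)

#L2 (Euclidean) norm
> sqrt(sum(u^2))
[1] 3.741657

#Lp norm, here with p = 3
> p <- 3
> (sum(abs(u)^p))^(1/p)
[1] 3.301927

#L1 norm
> sum(abs(u))
[1] 6

#L-infinity norm
> max(abs(u))
[1] 3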

1.3 Distance between vectors

As mentioned before, the norm allows us to define a distance in a vector space. The mathematical definition of the distance $d$ between two vectors $\vec{u},\vec{v}$ is the following:

$$d : V \times V \to \mathbb{R}, \qquad (\vec{u},\vec{v}) \mapsto d(\vec{u},\vec{v}) = \|\vec{u} - \vec{v}\| \in \mathbb{R}$$

A distance has the following properties:

1. $d(\vec{u},\vec{v}) \geq 0$ (non-negative)

2. $d(\vec{u},\vec{v}) = d(\vec{v},\vec{u})$ (symmetric)

3. $d(\vec{u},\vec{v}) = 0 \iff \vec{u} = \vec{v}$ (identity)

4. $d(\vec{u},\vec{v}) + d(\vec{v},\vec{w}) \geq d(\vec{u},\vec{w})$ (triangle inequality)

To prove the triangle inequality, we first need the Cauchy-Schwarz inequality:

$$|\langle \vec{u},\vec{v} \rangle| \leq \|\vec{u}\| \cdot \|\vec{v}\|$$

Proof:

Consider $\vec{u},\vec{v} \in V$, $\vec{u},\vec{v} \neq \vec{0}$, and $\lambda = \dfrac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}$.

We define a new vector $\vec{z} = \vec{u} - \lambda\vec{v}$. Then $0 \leq \|\vec{z}\|^2 = \langle \vec{z},\vec{z} \rangle$ (this is always true for any vector).

$$0 \leq \langle \vec{u} - \lambda\vec{v}, \vec{u} - \lambda\vec{v} \rangle = \langle \vec{u},\vec{u} \rangle - \lambda\langle \vec{v},\vec{u} \rangle - \lambda\langle \vec{u},\vec{v} \rangle + \lambda^2\langle \vec{v},\vec{v} \rangle = \|\vec{u}\|^2 - 2\lambda\langle \vec{u},\vec{v} \rangle + \lambda^2\|\vec{v}\|^2 = \|\vec{u}\|^2 - 2\frac{\langle \vec{u},\vec{v} \rangle^2}{\|\vec{v}\|^2} + \frac{\langle \vec{u},\vec{v} \rangle^2}{\|\vec{v}\|^2} = \|\vec{u}\|^2 - \frac{\langle \vec{u},\vec{v} \rangle^2}{\|\vec{v}\|^2}$$

$$\|\vec{u}\|^2 \geq \frac{\langle \vec{u},\vec{v} \rangle^2}{\|\vec{v}\|^2} \;\Longrightarrow\; \|\vec{u}\|\,\|\vec{v}\| \geq |\langle \vec{u},\vec{v} \rangle|$$
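As a quick numerical illustration (using the same example vectors as in the R practical at the end of this unit), the inequality can be checked directly in R:

#Define two example vectors
> u <- c(1, 0, 3, 2)
> v <- c(-1, 3, 2, 0)

#Absolute value of the inner product
> abs(sum(u*v))
[1] 5

#Product of the norms
> sqrt(sum(u*u)) * sqrt(sum(v*v))
[1] 14

#Indeed, 5 <= 14, as the inequality requires.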

Proof of the triangle inequality: $\|\vec{u} + \vec{v}\| \leq \|\vec{u}\| + \|\vec{v}\|$

$$\|\vec{u} + \vec{v}\|^2 = \langle \vec{u} + \vec{v}, \vec{u} + \vec{v} \rangle = \langle \vec{u},\vec{u} \rangle + 2\langle \vec{u},\vec{v} \rangle + \langle \vec{v},\vec{v} \rangle = \|\vec{u}\|^2 + \|\vec{v}\|^2 + 2\langle \vec{u},\vec{v} \rangle \leq \|\vec{u}\|^2 + \|\vec{v}\|^2 + 2\|\vec{u}\|\,\|\vec{v}\| = (\|\vec{u}\| + \|\vec{v}\|)^2$$

Example in $\mathbb{R}^2$: $\vec{a}_1 = (2, 2)$, $\vec{b}_1 = (1, 2.5)$, $\vec{a}_2 = \vec{a}_1 - \vec{b}_1 = (1, -0.5)$

$$d(\vec{a}_1, \vec{b}_1) = \|\vec{a}_1 - \vec{b}_1\| = \sqrt{\left\langle \vec{a}_1 - \vec{b}_1,\, \vec{a}_1 - \vec{b}_1 \right\rangle} = \sqrt{(2-1)^2 + (2-2.5)^2} = \sqrt{1^2 + 0.5^2} = \sqrt{1.25}$$

Another way to do this is with the vector "difference", in this case $\vec{a}_2$:

$$d(\vec{a}_1, \vec{b}_1) = \|\vec{a}_1 - \vec{b}_1\| = \|\vec{a}_2\| = \sqrt{\langle \vec{a}_2, \vec{a}_2 \rangle} = \sqrt{\langle (1,-0.5), (1,-0.5) \rangle} = \sqrt{1 + 0.25} = \sqrt{1.25}$$

1.4 Angle between two vectors

We define the angle between two vectors as:

$$\cos\theta = \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{u}\|\,\|\vec{v}\|} \;\;\rightarrow\;\; \theta = \arccos\left( \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{u}\|\,\|\vec{v}\|} \right)$$

The relationships and properties of the inner product with the angle are:

$$\langle \vec{u},\vec{v} \rangle = 0 \iff \theta = \frac{\pi}{2} \quad \text{(orthogonal vectors)}$$

$$\langle \vec{u},\vec{v} \rangle = \|\vec{u}\|\,\|\vec{v}\| \iff \theta = 0 \quad \text{(parallel vectors)}$$

$$\langle \vec{u},\vec{v} \rangle = -\|\vec{u}\|\,\|\vec{v}\| \iff \theta = \pi \quad \text{(anti-parallel vectors)}$$

Using the properties of the angle between two vectors, the Pythagorean theorem can be proven:

$$\text{if } \vec{u} \perp \vec{v} \implies \|\vec{u} + \vec{v}\|^2 = \|\vec{u}\|^2 + \|\vec{v}\|^2$$

$$\|\vec{u} + \vec{v}\|^2 = \langle \vec{u} + \vec{v}, \vec{u} + \vec{v} \rangle = \langle \vec{u},\vec{u} \rangle + 2\langle \vec{u},\vec{v} \rangle + \langle \vec{v},\vec{v} \rangle = \|\vec{u}\|^2 + \|\vec{v}\|^2 + 2\langle \vec{u},\vec{v} \rangle = \|\vec{u}\|^2 + \|\vec{v}\|^2$$

(since $\vec{u} \perp \vec{v} \implies \langle \vec{u},\vec{v} \rangle = 0$)
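A quick numerical check of the theorem in R (the pair of orthogonal vectors is an arbitrary choice, not taken from the text):

#Two orthogonal example vectors: <u,v> = 1*2 + 2*(-1) = 0
> u <- c(1, 2)
> v <- c(2, -1)

#Squared norm of the sum
> sum((u+v)*(u+v))
[1] 10

#Sum of the squared norms
> sum(u*u) + sum(v*v)
[1] 10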

With the angle definition we can also prove the Cauchy-Schwarz inequality in a simpler way:

$$\cos\theta = \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{u}\|\,\|\vec{v}\|} \;\;\rightarrow\;\; \langle \vec{u},\vec{v} \rangle = \|\vec{u}\|\,\|\vec{v}\|\cos\theta$$

Since $|\cos\theta| \leq 1 \;\rightarrow\; |\langle \vec{u},\vec{v} \rangle| \leq \|\vec{u}\|\,\|\vec{v}\|$

1.5 Orthogonal projection

Consider a vector space $V$ over the real numbers $\mathbb{R}$. Given two vectors $\vec{u},\vec{v} \in V$ with different directions, we can define an orthogonal projection of $\vec{u}$ onto $\vec{v}$ as:

$$\mathrm{Proj}_{\vec{v}}(\vec{u}) = \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\,\vec{v}$$

In Figure 1 we can see a projection in $\mathbb{R}^2$.

Figure 1: Projection of $\vec{u}$ onto $\vec{v}$ in $\mathbb{R}^2$.

We can interpret the projection as the component of $\vec{u}$ that lies in the direction of $\vec{v}$.

There are some properties related to the projection:

1. The length of $\vec{v}$ does not contribute to the definition of the projection.

2. $\vec{v}$ marks the direction of the projection.

Proof of property 1:

$$\mathrm{Proj}_{\vec{v}}(\vec{u}) = \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\,\vec{v} = \frac{\|\vec{u}\|\,\|\vec{v}\|\cos\theta}{\|\vec{v}\|^2}\,\vec{v} = \|\vec{u}\|\cos\theta\,\frac{\vec{v}}{\|\vec{v}\|}$$

$\frac{\vec{v}}{\|\vec{v}\|}$ is a unit vector for any $\vec{v}$. Hence, only $\|\vec{u}\|$, $\cos\theta$, and the direction of $\vec{v}$ contribute to the definition.

Proof of property 2: $\frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}$ is a scalar, i.e. a real number. Therefore, we can rewrite the projection as

$$\mathrm{Proj}_{\vec{v}}(\vec{u}) = \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\,\vec{v} = \alpha\vec{v}$$

Theorem: $(\vec{u} - \mathrm{Proj}_{\vec{v}}(\vec{u})) \perp \mathrm{Proj}_{\vec{v}}(\vec{u})$

Proof:

$$\langle \vec{u} - \mathrm{Proj}_{\vec{v}}(\vec{u}),\, \mathrm{Proj}_{\vec{v}}(\vec{u}) \rangle = \left\langle \vec{u} - \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\vec{v},\; \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\vec{v} \right\rangle = \left\langle \vec{u},\, \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\vec{v} \right\rangle - \left\langle \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\vec{v},\, \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\vec{v} \right\rangle =$$

$$= \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\langle \vec{u},\vec{v} \rangle - \frac{\langle \vec{u},\vec{v} \rangle^2}{\|\vec{v}\|^4}\langle \vec{v},\vec{v} \rangle = \frac{1}{\|\vec{v}\|^2}\langle \vec{u},\vec{v} \rangle^2 - \frac{1}{\|\vec{v}\|^4}\|\vec{v}\|^2\langle \vec{u},\vec{v} \rangle^2 = 0$$

As mentioned above, $\mathrm{Proj}_{\vec{v}}(\vec{u}) = \alpha\vec{v}$. Using the condition $(\vec{u} - \mathrm{Proj}_{\vec{v}}(\vec{u})) \perp \mathrm{Proj}_{\vec{v}}(\vec{u}) \rightarrow \langle \alpha\vec{v} - \vec{u}, \alpha\vec{v} \rangle = 0$, we can recover the definition of the orthogonal projection:

$$0 = \langle \alpha\vec{v} - \vec{u}, \alpha\vec{v} \rangle = \alpha^2\langle \vec{v},\vec{v} \rangle - \alpha\langle \vec{u},\vec{v} \rangle \;\rightarrow\; \alpha\left(\alpha\langle \vec{v},\vec{v} \rangle - \langle \vec{u},\vec{v} \rangle\right) = 0$$

$$\alpha = 0 \quad \text{or} \quad \alpha = \frac{\langle \vec{u},\vec{v} \rangle}{\langle \vec{v},\vec{v} \rangle} \qquad\Longrightarrow\qquad \mathrm{Proj}_{\vec{v}}(\vec{u}) = \frac{\langle \vec{u},\vec{v} \rangle}{\langle \vec{v},\vec{v} \rangle}\,\vec{v}$$

In general, given a basis of orthogonal vectors $B = \{\vec{w}_1, \dots, \vec{w}_n\}$, the representation of a vector in this basis is given by the projections:

$$\vec{u} = \sum_i \mathrm{Proj}_{\vec{w}_i}(\vec{u}) = \sum_i \frac{\langle \vec{u},\vec{w}_i \rangle}{\|\vec{w}_i\|^2}\,\vec{w}_i$$

Example: consider the following basis in $\mathbb{R}^2$, $B = \left\{ \vec{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \vec{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\}$. A vector $\vec{u} = \begin{pmatrix} a \\ b \end{pmatrix}$ can be written as:

$$\vec{u} = \mathrm{Proj}_{\vec{v}_1}(\vec{u}) + \mathrm{Proj}_{\vec{v}_2}(\vec{u}) = \frac{a+b}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + \frac{a-b}{2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

which gives us the representation in the basis $B$ as:

$$\mathrm{Rep}_B(\vec{u}) = \begin{pmatrix} \dfrac{a+b}{2} \\[6pt] \dfrac{a-b}{2} \end{pmatrix}$$
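The following R sketch (with an arbitrarily chosen vector and the basis of the example above) shows how the coordinates in an orthogonal basis are obtained from the projection formula:

#Orthogonal basis vectors from the example
> v1 <- c(1, 1)
> v2 <- c(1, -1)

#An arbitrary vector, e.g. (a, b) = (3, 7)
> u <- c(3, 7)

#Coordinates in the basis: <u,vi> / ||vi||^2
> c1 <- sum(u*v1) / sum(v1*v1)
> c2 <- sum(u*v2) / sum(v2*v2)
> c(c1, c2)
[1]  5 -2

#Check that c1*v1 + c2*v2 recovers u
> c1*v1 + c2*v2
[1] 3 7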

2 Gram-Schmidt orthogonalization

The orthogonal projection suggests that we can decompose any vector into orthogonal components. Moreover, a set of mutually orthogonal, non-zero vectors is linearly independent. Hence, such a set can form a basis.

Proposition: a set of mutually orthogonal, non-zero vectors is linearly independent.

Proof: consider a set of $n$ mutually orthogonal, non-zero vectors $\{\vec{u}_1, \dots, \vec{u}_n\}$ in the vector space $\mathbb{R}^n$, and consider a linear combination of these vectors of the form:

$$\lambda_1\vec{u}_1 + \cdots + \lambda_n\vec{u}_n = \vec{0}$$

Taking the scalar product with any of the vectors $\vec{u}_j$ and knowing that they are mutually orthogonal:

$$\langle \vec{u}_i, \vec{u}_j \rangle = 0, \quad \forall i \neq j$$

Only the term in $\vec{u}_j$ survives, and we arrive at:

$$\lambda_j\langle \vec{u}_j, \vec{u}_j \rangle = \lambda_j\|\vec{u}_j\|^2 = 0$$

Since all the vectors are non-zero, we conclude that $\lambda_j = 0$ for any $j$, so the vectors are linearly independent.

Orthogonalization is a process by which we can obtain an orthogonal basis from a non-orthogonal one. Gram-Schmidt orthogonalization is the general procedure used to obtain an orthogonal basis.

2.1 Process of orthogonalization

The process consists in taking a vector as a reference and constructing, using the projection, a vector orthogonal to it. That is, starting from a set of $n$ non-orthogonal vectors $\{\vec{v}_1, \dots, \vec{v}_n\}$, we want to arrive at a set of $n$ mutually orthogonal vectors $\{\vec{u}_1, \dots, \vec{u}_n\}$, such that $\langle \vec{u}_i, \vec{u}_j \rangle = 0$ for $i \neq j$. Recall that $\vec{u} - \mathrm{Proj}_{\vec{v}}(\vec{u})$ is orthogonal to $\vec{v}$. Therefore, we iteratively subtract from each vector its components along the orthogonal vectors already obtained in the previous iterations.

$$\vec{u}_1 = \vec{v}_1$$

$$\vec{u}_2 = \vec{v}_2 - \mathrm{Proj}_{\vec{u}_1}(\vec{v}_2) \;\rightarrow\; \langle \vec{u}_2, \vec{u}_1 \rangle = 0$$

$$\vec{u}_3 = \vec{v}_3 - \mathrm{Proj}_{\vec{u}_1}(\vec{v}_3) - \mathrm{Proj}_{\vec{u}_2}(\vec{v}_3) \;\rightarrow\; \langle \vec{u}_3, \vec{u}_1 \rangle = 0,\; \langle \vec{u}_3, \vec{u}_2 \rangle = 0$$

$$\vdots$$

$$\vec{u}_n = \vec{v}_n - \mathrm{Proj}_{\vec{u}_1}(\vec{v}_n) - \cdots - \mathrm{Proj}_{\vec{u}_{n-1}}(\vec{v}_n) \;\rightarrow\; \langle \vec{u}_n, \vec{u}_1 \rangle = 0, \dots, \langle \vec{u}_n, \vec{u}_{n-1} \rangle = 0$$

Example: take the vectors $\vec{u} = (1, 2)$ and $\vec{v} = (3, 1)$. Using the vector $\vec{v}$ as reference, calculate a vector $\vec{u}'$ orthogonal to $\vec{v}$.

Define

$$\vec{u}' = \vec{u} - \mathrm{Proj}_{\vec{v}}(\vec{u}) = \vec{u} - \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\,\vec{v} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} - \frac{\begin{pmatrix} 1 & 2 \end{pmatrix}\begin{pmatrix} 3 \\ 1 \end{pmatrix}}{\|\vec{v}\|^2}\begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} - \frac{5}{10}\begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} -1/2 \\ 3/2 \end{pmatrix}$$

The vector $\vec{u}' = \begin{pmatrix} -1/2 \\ 3/2 \end{pmatrix}$ is perpendicular to $\vec{v}$.
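The recursion above can also be written as a short, general function in R. This is a minimal sketch (the function name gram_schmidt and the use of a list of vectors are our own choices, not part of the unit); it reproduces, up to rounding, the orthogonal bases computed in the next two examples:

#A general Gram-Schmidt sketch: takes a list of vectors and
#returns a list of mutually orthogonal vectors
> gram_schmidt <- function(vectors) {
+   ortho <- list()
+   for (v in vectors) {
+     u <- v
+     for (w in ortho) {
+       u <- u - (sum(v*w) / sum(w*w)) * w   #subtract the projection of v onto w
+     }
+     ortho[[length(ortho) + 1]] <- u
+   }
+   ortho
+ }

#Example call with the basis used in Section 2.3
> B_ortho <- gram_schmidt(list(c(1, 1, 1), c(0, 2, 0), c(1, 0, 3)))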

2.2 Example of orthogonal basis in R2

Consider the basis

$$B = \{\vec{v}_1, \vec{v}_2\}, \quad \vec{v}_1 = \begin{pmatrix} 4 \\ 2 \end{pmatrix}, \quad \vec{v}_2 = \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \quad \mathrm{Span}(B) = \mathbb{R}^2$$

$B$ is a basis for $\mathbb{R}^2$: its vectors are linearly independent, but they are not orthogonal. We want to find an orthogonal basis. First, we select a vector as reference, in this case $\vec{u}_1 = \vec{v}_1 = \begin{pmatrix} 4 \\ 2 \end{pmatrix}$. For the second vector, we subtract from $\vec{v}_2$ the part in the direction of $\vec{v}_1$, that is, the projection of $\vec{v}_2$ onto $\vec{v}_1$ (or $\vec{u}_1$).

$$\vec{u}_2 = \vec{v}_2 - \mathrm{Proj}_{\vec{u}_1}(\vec{v}_2) = \begin{pmatrix} 1 \\ 3 \end{pmatrix} - \frac{\langle \vec{u}_1, \vec{v}_2 \rangle}{\|\vec{u}_1\|^2}\,\vec{u}_1 = \begin{pmatrix} 1 \\ 3 \end{pmatrix} - \frac{4 \cdot 1 + 2 \cdot 3}{4^2 + 2^2}\begin{pmatrix} 4 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \end{pmatrix} - \frac{10}{20}\begin{pmatrix} 4 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \end{pmatrix}$$

The new basis

$$B' = \{\vec{u}_1, \vec{u}_2\}, \quad \vec{u}_1 = \begin{pmatrix} 4 \\ 2 \end{pmatrix}, \quad \vec{u}_2 = \begin{pmatrix} -1 \\ 2 \end{pmatrix}, \quad \mathrm{Span}(B') = \mathbb{R}^2$$

is orthogonal, since $\vec{u}_1$ and $\vec{u}_2$ are perpendicular:

$$\langle \vec{u}_1, \vec{u}_2 \rangle = \begin{pmatrix} 4 & 2 \end{pmatrix}\begin{pmatrix} -1 \\ 2 \end{pmatrix} = 4 \cdot (-1) + 2 \cdot 2 = 0$$

2.3 Example of orthogonal basis in R3

Let's do the same for a basis of the vector space $\mathbb{R}^3$, which involves three vectors that we have to make mutually perpendicular.

Consider the basis

 1   0   1  3 B = {~v1,~v2,~v3} ,~v1 =  1  ,~v2 =  2  ,~v3 =  0  Span(B) = R 1 0 3

We want to transform this basis into an orthogonal basis $B' = \{\vec{u}_1, \vec{u}_2, \vec{u}_3\}$.

 1  ~u1 = ~v1 =  1  1  0   1   −2/3  2 ~u = ~v − P roj (~v ) = 2 − 1 = 4/3 2 2 ~u1 2   3     0 1 −2/3

For the third vector, we have to subtract from $\vec{v}_3$ its projections onto $\vec{u}_1$ and $\vec{u}_2$.

 1   1   1 

~u3 = ~v3−P roj~u1 (~v3)−P roj~u2 (~v3) =  0 −P roj~u1  0 −P roj~u2  0  = 3 3 3

 1   1   −2/3   −1  4 = 0 − 1 + 4/3 = 0   3       3 1 −2/3 1 We found an orthogonal basis B0.

 1   −2/3   −1  0 0 3 B = {~u1, ~u2, ~u3} , ~u1 =  1  , ~u2 =  4/3  , ~u3 =  0  Span(B ) = R 1 −2/3 1

3 Vector subspaces

A vector subspace $U$ is a subset of a vector space $V$, $U \subset V$, such that $(U, +, \cdot)$ is a vector space under the same addition of vectors and multiplication by scalars:

1. $\forall \vec{u},\vec{v} \in U \implies \vec{u} + \vec{v} \in U$

2. $\forall \vec{u} \in U, \forall \lambda \in \mathbb{R} \implies \lambda\vec{u} \in U$

In other words, the subset is a vector space and is closed under the same operations as its container vector space. If a property is true for $(V, +, \cdot)$, it will also be true for $(U, +, \cdot)$. The only restriction is that $\vec{u},\vec{v}, \dots \in U$.

Example:

 1   0    3 S =  0  ,  1  , Span(S) ⊂ R  0 0  S is a basis for a vector subspace:

$$U = \{(x, y, 0),\; x, y \in \mathbb{R}\}$$

We can see that it fulfills the conditions to be a vector subspace:

$$\vec{u} = (x_1, y_1, 0)$$

$$\vec{v} = (x_2, y_2, 0)$$

$$\vec{u} + \vec{v} = (x_1 + x_2,\, y_1 + y_2,\, 0) \in U$$

$$\lambda\vec{u} = (\lambda x_1,\, \lambda y_1,\, 0) \in U$$

Hence, $U$ is a vector subspace.

3.1 Vector space projection

If $W$ is an $m$-dimensional subspace of a vector space $V$ with an inner product $\langle \cdot,\cdot \rangle$ defined, then it is possible to project vectors from $V$ onto $W$. That is, instead of projecting a vector onto a single line, we project vectors onto a whole subspace.

Note that since every vector in $W$ is also a vector in $V$, vectors in $W$ also have a representation in $V$.

For example, consider $W$ to be just one single axis (an $\mathbb{R}$ vector space), e.g. the $x$-axis in the $\mathbb{R}^2$ plane. The projection of a vector $P = (x, y) \in \mathbb{R}^2$ onto $W$ is of the form $\mathrm{Proj}_W(P) = (x, 0)$. This is an orthogonal projection.

In general, when the subspace $W$ has an orthonormal basis $B = \{\vec{w}_1, \dots, \vec{w}_m\}$, we can define the orthogonal projection of any vector $\vec{u} \in V$ onto the subspace $W$ as:

$$\mathrm{Proj}_W(\vec{u}) = \sum_{j=1}^{m} \langle \vec{u}, \vec{w}_j \rangle\, \vec{w}_j \;\in\; W$$
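As an illustration, the following R sketch (the subspace, its orthonormal basis and the projected vector are arbitrary choices) projects a vector of $\mathbb{R}^3$ onto the $xy$-plane, whose orthonormal basis is $\{(1,0,0), (0,1,0)\}$:

#Orthonormal basis of the subspace W (the xy-plane in R^3)
> w1 <- c(1, 0, 0)
> w2 <- c(0, 1, 0)

#A vector of R^3 to be projected
> u <- c(2, -1, 5)

#Orthogonal projection onto W: sum over j of <u,wj> wj
> proj <- sum(u*w1)*w1 + sum(u*w2)*w2
> proj
[1]  2 -1  0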

4 Exercises

Ex. 1 — Consider $L$, the set of all points on a line through the origin of $\mathbb{R}^2$. Show that $L$ is a subspace of $\mathbb{R}^2$. Recall that $L$ can be represented as $L = \{(x, y) \in \mathbb{R}^2 \mid ax + by = 0\}$ for fixed $a, b \in \mathbb{R}$.

Ex. 2 — Consider the set of all points on a line that does not pass through the origin. Explain why this is not a subspace of $\mathbb{R}^2$.

Ex. 3 — Consider two vectors $u = (u_1, \dots, u_n)$, $v = (v_1, \dots, v_n)$ in the vector space $\mathbb{R}^n$. Using the definition of the scalar (inner) product and the norm of a vector, show the following properties for $\mathbb{R}^n$:
1. $\|cv\| = |c| \cdot \|v\|$
2. $\frac{v}{\|v\|}$ is a unit vector
3. All the vectors of the canonical basis in $\mathbb{R}^n$ are unit vectors.
4. The distance between two vectors, $d(u, v) = \|u - v\|$, is the Euclidean distance.

Ex. 4 — Consider the vectors $u = (1, 2, 0)$, $v = (-1, 4, 1)$ in $\mathbb{R}^3$.
1. Calculate the distance $d(u, v)$.
2. Calculate unit vectors from them.

Ex. 5 — Consider the vectors $u = (1, 1)$, $v = (2, -2)$, $w = (2, 2) \in \mathbb{R}^2$. Using the scalar product, determine whether these vectors are orthogonal or parallel to each other.

Ex. 6 — Given two vectors in a vector space, $u, v \in V$, using the definition of the norm and the properties of the scalar product, show that $\|u + v\|^2 = \|u\|^2 + \|v\|^2$ if $u$ and $v$ are orthogonal, i.e. $u \perp v$ (Pythagorean theorem).

Ex. 7 — Using the vectors $u = (3, -2)$ and $v = (4, 6)$, verify the Pythagorean theorem (if $u$ and $v$ are orthogonal $\implies \|u + v\|^2 = \|u\|^2 + \|v\|^2$).

Ex. 8 — Let u = (2, −1, 1). Find all the vectors that are orthogonal to u.

Ex. 9 — Consider the two vectors $u = (1, -1, 0)$ and $v = (0, 1, -1)$. Verify the Cauchy-Schwarz inequality: $|\langle u, v \rangle| \leq \|u\| \cdot \|v\|$.

Ex. 10 — Given the definition of the orthogonal projection of a vector $u$ onto a vector $v$, show that $(u - \mathrm{Proj}_v(u)) \perp \mathrm{Proj}_v(u)$. That is, the orthogonal projection of a vector decomposes the vector into two orthogonal parts.

Ex. 11 — Given the vector $u = (a, b, c)$, $u \in \mathbb{R}^3$, calculate the orthogonal projections of $u$ onto the vectors of the canonical basis.

Ex. 12 — Consider the vector space $\mathbb{R}^2$ with the basis $B = \{\beta_1, \beta_2\} = \left\{ \begin{pmatrix} 4 \\ 2 \end{pmatrix}, \begin{pmatrix} 1 \\ 3 \end{pmatrix} \right\}$.
1. Confirm that $B$ is a basis (i.e. $B$ is a linearly independent set and $\mathrm{Span}(B) = \mathbb{R}^2$).
2. Verify that this is not an orthogonal basis (the basis vectors are not orthogonal to each other).
3. Can you find an orthogonal basis from them using the definition of the orthogonal projection (Gram-Schmidt orthogonalization)?

Ex. 13 — Consider the following basis for $\mathbb{R}^3$:

$$B = \{\beta_1, \beta_2, \beta_3\} = \left\{ \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 3 \end{pmatrix} \right\}$$

1. Verify that this is not an orthogonal basis.
2. Transform this basis into an orthogonal basis using the definition of the orthogonal projection (Gram-Schmidt orthogonalization).

5 R practical

5.1 Inner product

The inner product is calculated in R by summing the element-wise products of the two vectors. The command sum() performs the addition, and a*b multiplies the components of the two vectors.

#Define vectors
> u <- c(1, 0, 3, 2)
> v <- c(-1, 3, 2, 0)

#Calculate the scalar product
> sum(u*v)
[1] 5

5.2 Norm of a vector

To calculate the norm of a vector we follow the formula

$$\|\vec{u}\| = \sqrt{\langle \vec{u},\vec{u} \rangle} = \sqrt{\sum_{i=1}^{n} u_i^2}$$

That is, we calculate the square root of the inner product of a vector with itself.

#Define a vector
> u <- c(1, 0, 3, 2)

#Calculate the norm
> sqrt(sum(u*u))
[1] 3.741657

5.3 Distance between two vectors

We defined the distance between two vectors as the norm of their difference. We can compute it in two ways: either calculating the vector "difference" first and then its norm, or calculating the norm of the difference of the two vectors directly.

#Define the vectors
> u <- c(1, 0, 3, 2)
> v <- c(-1, 3, 2, 0)

#In the first way, calculate the vector "subtraction",
#then calculate the norm.
> w <- u - v
> sqrt(sum(w*w))
[1] 4.242641

#In the second way, we can calculate the norm
#of the subtraction of the vectors directly.
> sqrt(sum((u-v)*(u-v)))
[1] 4.242641

5.4 Angle between two vectors

To calculate the angle between two vectors we follow the formula:

$$\theta = \arccos\left( \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{u}\|\,\|\vec{v}\|} \right)$$

#Introduce the vectors
> u <- c(1, 0, 3, 2)
> v <- c(-1, 3, 2, 0)

#Calculate the angle with the formula
> a <- acos(sum(u*v) / (sqrt(sum(u*u)) * sqrt(sum(v*v))))
> a
[1] 1.205589

Note: by default, the angle is given in the International System unit, which is radians (rad).
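If the angle is preferred in degrees, the radian value can be converted by hand (this conversion is a small addition to the example above):

#Convert the angle from radians to degrees
> a * 180 / pi
[1] 69.07517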

5.5 Orthogonal projection

The projection of a vector onto another may be done step by step in R, using it as a calculator. However, for $\mathbb{R}^2$ vectors there is a package to perform orthogonal projections, called LearnGeom, which you have to install first. This package projects a vector onto a line, which is created from the vector you want to project onto.

#Install the package, if not installed yet
> install.packages("LearnGeom")

#Once installed, load the package
> library("LearnGeom")

#Define the vectors
> u <- c(1, 2)
> v <- c(3, 1)

#Create the line from the origin to v
> Line <- CreateLinePoints(c(0,0), v)

#Project the vector u onto the line
> projection <- ProjectPoint(u, Line)
> projection
  X   Y
1.5 0.5
#This is the result of the projection.

As we can see, the projection (1.5, 0.5) is half the vector $\vec{v}$. This matches the property that the projection is linearly dependent on the vector you project onto.

For vectors in higher dimensions, we have to do it step by step, with the formula:

$$\mathrm{Proj}_{\vec{v}}(\vec{u}) = \frac{\langle \vec{u},\vec{v} \rangle}{\|\vec{v}\|^2}\,\vec{v}$$

#Define the vectors
> u <- c(1, 2, 0)
> v <- c(3, 1, 0)

#Calculate the projection
> projection <- (sum(u*v) / sum(v*v))*v
> projection
[1] 1.5 0.5 0.0

5.6 Gram-Schmidt orthogonalization

As with the orthogonal projection, we can do it step by step or with a package in R. Let's first do it step by step, checking the results against the examples above.

In $\mathbb{R}^2$:

#Introduce the vectors
> v1 <- c(4, 2)
> v2 <- c(1, 3)

#Transform the vector v1 into u1
> u1 <- v1

#Orthogonalize v2
> u2 <- v2 - (sum(u1*v2) / sum(u1*u1))*u1
> u2
[1] -1 2

#You can now show that they are orthogonal
> sum(u1*u2)
[1] 0

In $\mathbb{R}^3$:

#Introduce the vectors
> v1 <- c(1, 1, 1)
> v2 <- c(0, 2, 0)
> v3 <- c(1, 0, 3)

#Transform the vector v1 into u1
> u1 <- v1

#Orthogonalize v2
> u2 <- v2 - (sum(u1*v2) / sum(u1*u1))*u1
> u2
[1] -0.6666667 1.3333333 -0.6666667

#Orthogonalize v3
> u3 <- v3 - (sum(u1*v3) / sum(u1*u1))*u1 - (sum(u2*v3) / sum(u2*u2))*u2
> u3

[1] -1 0 1

#You can now show that they are orthogonal
> sum(u1*u2)
[1] 0
> sum(u1*u3)
[1] 0
> sum(u2*u3)
[1] 0

There is a package called "Practical Numerical Math Functions", pracma, which contains a command to perform Gram-Schmidt orthogonalization directly: gramSchmidt. Given a matrix whose columns are the vectors of a basis, it returns another matrix whose columns form an orthonormal basis (an orthonormal basis is a basis in which all the vectors are orthogonal, i.e. perpendicular, and have norm 1), based on the vectors you provided (among other things that are not important now). Let's try the package with the same examples.

In $\mathbb{R}^2$:

#Install the package, if not installed yet
> install.packages("pracma")

#Once installed, load the package
> library("pracma")

#Define the vectors in the matrix
> m <- matrix(c(4, 2, 1, 3), 2, 2)
> m
     [,1] [,2]
[1,]    4    1
[2,]    2    3

#Perform the Gram-Schmidt orthogonalization
> gs <- gramSchmidt(m)

#The orthonormal matrix is stored in gs$Q
> gs$Q
          [,1]       [,2]
[1,] 0.8944272 -0.4472136
[2,] 0.4472136  0.8944272

#You can also check that they are orthonormal
> sum(gs$Q[,1]*gs$Q[,2])
[1] 0
> sum(gs$Q[,1]*gs$Q[,1])
[1] 1

In $\mathbb{R}^3$:

#Install the package if not installed yet
> install.packages("pracma")

#Load the package if installed
> library("pracma")

#Introduce the vectors in the matrix
> m <- matrix(c(1, 1, 1, 0, 2, 0, 1, 0, 3), 3, 3)
> m
     [,1] [,2] [,3]
[1,]    1    0    1
[2,]    1    2    0
[3,]    1    0    3

#Perform the Gram-Schmidt orthogonalization
> gs <- gramSchmidt(m)

#The orthonormal matrix is stored in gs$Q
> gs$Q
          [,1]       [,2]          [,3]
[1,] 0.5773503 -0.4082483 -7.071068e-01
[2,] 0.5773503  0.8164966             0
[3,] 0.5773503 -0.4082483  7.071068e-01

#You can show that they are orthonormal
> sum(gs$Q[,1]*gs$Q[,2])
[1] 0
> sum(gs$Q[,1]*gs$Q[,3])
[1] 0

> sum(gs$Q[,2]*gs$Q[,3])
[1] 0
> sum(gs$Q[,1]*gs$Q[,1])
[1] 1
> sum(gs$Q[,2]*gs$Q[,2])
[1] 1
> sum(gs$Q[,3]*gs$Q[,3])
[1] 1
