Linear Algebra - Final Exam Review


1. Show that $\mathrm{Row}(A) \perp \mathrm{Null}(A)$.

SOLUTION: We can write matrix-vector multiplication in terms of the rows of the matrix $A$. If $A$ is $m \times n$, then:
$$Ax = \begin{bmatrix} \mathrm{Row}_1(A) \\ \mathrm{Row}_2(A) \\ \vdots \\ \mathrm{Row}_m(A) \end{bmatrix} x = \begin{bmatrix} \mathrm{Row}_1(A)\,x \\ \mathrm{Row}_2(A)\,x \\ \vdots \\ \mathrm{Row}_m(A)\,x \end{bmatrix}$$
Each of these products is the "dot product" of a row of $A$ with the vector $x$. To show the desired result, let $x \in \mathrm{Null}(A)$. Then each of the products shown in the equation above must be zero, since $Ax = 0$, so that $x$ is orthogonal to each row of $A$. Since the rows form a spanning set for the row space, $x$ is orthogonal to every vector in the row space.

2. Let $A$ be invertible. Show that, if $v_1, v_2, v_3$ are linearly independent vectors, so are $Av_1, Av_2, Av_3$. NOTE: It should be clear from your answer that you know the definition.

SOLUTION: We need to show that the only solution to
$$c_1 Av_1 + c_2 Av_2 + c_3 Av_3 = 0$$
is the trivial solution. Factoring out the matrix $A$,
$$A(c_1 v_1 + c_2 v_2 + c_3 v_3) = 0$$
Think of this in the form $A\hat{x} = 0$. Since $A$ is invertible, the only solution to this is $\hat{x} = 0$, which implies that the only solution to the equation above is the solution to
$$c_1 v_1 + c_2 v_2 + c_3 v_3 = 0$$
which is (only) the trivial solution, since the vectors are linearly independent. (NOTE: Notice that if the original vectors had been linearly dependent, this last equation would have had non-trivial solutions.)

3. Find the line of best fit for the data:

x | 0 1 2 3
y | 1 1 2 2

SOLUTION: Let $A$ be the matrix formed by a column of the $x$ data and a column of ones. Then we form the normal equations $A^T A c = A^T y$ and solve:
$$A^T A = \begin{bmatrix} 14 & 6 \\ 6 & 4 \end{bmatrix}, \qquad A^T y = \begin{bmatrix} 11 \\ 6 \end{bmatrix}$$
The solution is $\hat{c} = (A^T A)^{-1} A^T y = \frac{1}{10}[4, 9]^T$, so the slope is $2/5$ and the intercept is $9/10$.

4. Let $A = \begin{bmatrix} -3 & 0 \\ 0 & 0 \end{bmatrix}$. (a) Is $A$ orthogonally diagonalizable? If so, orthogonally diagonalize it! (b) Find the SVD of $A$.

SOLUTION: For part (a), the matrix $A$ is symmetric, so it is orthogonally diagonalizable. It is also a diagonal matrix, so the eigenvalues are $\lambda = -3$ and $\lambda = 0$. The eigenvectors are the standard basis vectors, so $P = I$ and $D = A$.

For the SVD, the eigenvalues of $A^T A$ are 9 and 0, so the singular values are 3 and 0. The column space is spanned by $[1, 0]^T$, as is the row space. We also see that
$$Av_1 = \sigma_1 u_1: \qquad \begin{bmatrix} -3 \\ 0 \end{bmatrix} = 3 \begin{bmatrix} -1 \\ 0 \end{bmatrix}$$
This brings up a good point: you may use either
$$U = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}, \quad V = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
or the reverse:
$$U = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad V = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$
This ambiguity in the $\pm 1$ signs is something we don't run into with the usual diagonalization.

5. Let $V$ be the vector space spanned by the functions $1, t, t^2$ on the interval $[-1, 1]$. Use Gram-Schmidt to find an orthonormal basis, if we define the inner product:
$$\langle f(t), g(t) \rangle = \int_{-1}^{1} 2 f(t) g(t)\, dt$$

SOLUTION: Let $v_1 = 1$ (which is not normalized; we'll normalize later). Then
$$v_2 = t - \mathrm{Proj}_{v_1}(t) = t - \frac{\int_{-1}^{1} 2t\, dt}{\int_{-1}^{1} 2\, dt} = t - 0 = t$$
$$v_3 = t^2 - \mathrm{Proj}_{v_1}(t^2) - \mathrm{Proj}_{v_2}(t^2) = t^2 - \frac{\int_{-1}^{1} 2t^2\, dt}{\int_{-1}^{1} 2\, dt} \cdot 1 - \frac{\int_{-1}^{1} 2t^3\, dt}{\int_{-1}^{1} 2t^2\, dt} \cdot t$$
We note that the integral of any odd function over $[-1, 1]$ is zero, so that last term drops:
$$v_3 = t^2 - \frac{4/3}{4} \cdot 1 = t^2 - \frac{1}{3}$$
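As a quick numerical sanity check of the normal-equations arithmetic in problem 3, here is a short NumPy sketch (the code and variable names are my additions, not part of the original review):

```python
import numpy as np

# Data from problem 3.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 2.0, 2.0])

# Design matrix: the x data in the first column, ones in the second.
A = np.column_stack([x, np.ones_like(x)])

# Solve the normal equations A^T A c = A^T y directly.
c = np.linalg.solve(A.T @ A, A.T @ y)
print(A.T @ A)   # [[14. 6.] [6. 4.]]
print(A.T @ y)   # [11. 6.]
print(c)         # [0.4 0.9] -- slope 2/5, intercept 9/10

# Cross-check against the built-in least-squares solver.
c_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(c, c_lstsq))  # True
```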
6. Suppose that $u, v, w$ are linearly independent vectors in $\mathbb{R}^n$. Determine whether the set $\{u + v,\ u - v,\ u - 2v + w\}$ is also linearly independent.

SOLUTION: We want to show that the only solution to the following equation is the trivial solution:
$$C_1(u + v) + C_2(u - v) + C_3(u - 2v + w) = 0$$
Regrouping this using the original vectors, we have:
$$(C_1 + C_2 + C_3)u + (C_1 - C_2 - 2C_3)v + C_3 w = 0$$
Since these vectors are linearly independent, each coefficient must be zero:
$$C_1 + C_2 + C_3 = 0, \qquad C_1 - C_2 - 2C_3 = 0, \qquad C_3 = 0$$
from which we get that $C_1 = C_2 = C_3 = 0$ is the only solution, so the set is linearly independent.

7. Let $v_1, \ldots, v_p$ be orthonormal. If $x = c_1 v_1 + c_2 v_2 + \cdots + c_p v_p$, then show that $\|x\|^2 = |c_1|^2 + \cdots + |c_p|^2$. (Hint: write the norm squared as the dot product.)

SOLUTION: Compute $x \cdot x$, and use the property that the vectors $v_i$ are orthonormal:
$$x \cdot x = (c_1 v_1 + c_2 v_2 + \cdots + c_p v_p) \cdot (c_1 v_1 + c_2 v_2 + \cdots + c_p v_p)$$
Since all dot products of the form $c_i c_k\, v_i \cdot v_k$ with $i \neq k$ are zero, the dot product simplifies to
$$c_1^2\, v_1 \cdot v_1 + c_2^2\, v_2 \cdot v_2 + \cdots + c_p^2\, v_p \cdot v_p$$
And since the vectors are normalized, this gives the result:
$$\|x\|^2 = c_1^2 + \cdots + c_p^2$$
(We don't need the magnitudes here, since we're working with real numbers.)

8. Short answer:

(a) If $\|u + v\|^2 = \|u\|^2 + \|v\|^2$, then $u, v$ are orthogonal.

SOLUTION: True; this is the Pythagorean Theorem.

(b) Let $H$ be the subset of vectors in $\mathbb{R}^3$ consisting of those vectors whose first element is the sum of the second and third elements. Is $H$ a subspace?

SOLUTION: One way of showing that a subset is a subspace is to show that it can be represented as the span of some set of vectors. In this case,
$$\begin{bmatrix} a + b \\ a \\ b \end{bmatrix} = a \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + b \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
Because $H$ is the span of the given vectors, it is a subspace.

(c) Explain why the image of a linear transformation $T : V \to W$ is a subspace of $W$.

SOLUTION: Maybe "Prove" would have been better than "Explain", since we want to go through the three parts:
i. $0 \in T(V)$, since $0 \in V$ and $T(0) = 0$.
ii. Let $u, v$ be in $T(V)$. Then there are $x, y$ in $V$ so that $T(x) = u$ and $T(y) = v$. Since $V$ is a vector space, $x + y \in V$, and therefore $T(x + y) = T(x) + T(y) = u + v$, so that $u + v \in T(V)$.
iii. Let $u \in T(V)$; show that $cu \in T(V)$ for all scalars $c$. If $u \in T(V)$, there is an $x$ in $V$ so that $T(x) = u$. Since $V$ is a vector space, $cx \in V$, and by linearity $T(cx) = cT(x) = cu$, so $cu \in T(V)$.
(OK, that probably should not have been in the short answer section.)

(d) Is the following matrix diagonalizable? Explain.
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 5 & 8 \\ 0 & 0 & 13 \end{bmatrix}$$
SOLUTION: Yes. The matrix is triangular, so its eigenvalues are the diagonal entries $1, 5, 13$. These are all distinct, so the corresponding eigenvectors are linearly independent.

(e) If the column space of an $8 \times 4$ matrix $A$ is 3-dimensional, give the dimensions of the other three fundamental subspaces. Given these numbers, is it possible that the mapping $x \mapsto Ax$ is one-to-one? Onto?

SOLUTION: If the column space is 3-dimensional, so is the row space. Therefore the null space (as a subspace of $\mathbb{R}^4$) is 1-dimensional, and the null space of $A^T$ is 5-dimensional. Since the null space contains more than the zero vector, $Ax = 0$ has non-trivial solutions, so the matrix mapping is not one-to-one. Since the column space is a 3-dimensional subspace of $\mathbb{R}^8$, the mapping cannot be onto.

(f) i. Suppose matrix $Q$ has orthonormal columns. Must $Q^T Q = I$?
SOLUTION: Yes, $Q^T Q = I$.
ii. True or False: If $Q$ is $m \times n$ with $m > n$, then $QQ^T = I$.
SOLUTION: False. Since $m > n$, $QQ^T$ has rank at most $n < m$, so it cannot be the identity; instead, $QQ^T$ is the projection matrix that takes a vector $x$ and projects it onto the column space of $Q$.
iii. Suppose $Q$ is an orthogonal matrix. Prove that $\det(Q) = \pm 1$.
SOLUTION: If $Q$ is orthogonal, then $Q^T Q = I$, and if we take determinants of both sides, we get
$$(\det(Q))^2 = 1$$
Therefore, the determinant of $Q$ is $\pm 1$.
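The facts in 8(f) are easy to verify numerically. Here is a minimal NumPy sketch (my addition; it builds a tall matrix with orthonormal columns via a reduced QR factorization, which is one standard way to get such a $Q$):

```python
import numpy as np

# A 4x2 matrix Q with orthonormal columns, built via (reduced) QR.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))

# (f)i: Q^T Q is the 2x2 identity.
print(np.allclose(Q.T @ Q, np.eye(2)))  # True

# (f)ii: Q Q^T is NOT the 4x4 identity (its rank is at most 2) ...
P = Q @ Q.T
print(np.allclose(P, np.eye(4)))        # False
# ... but it is the orthogonal projection onto Col(Q):
print(np.allclose(P @ P, P))            # True: P is idempotent
print(np.allclose(P @ Q, Q))            # True: P fixes Col(Q)
```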
9. Find a basis for the null space, row space and column space of $A$, if
$$A = \begin{bmatrix} 1 & 1 & 2 & 2 \\ 2 & 2 & 5 & 5 \\ 0 & 0 & 3 & 3 \end{bmatrix}$$
SOLUTION: A basis for the column space is the set containing the first and third columns of $A$. A basis for the row space is $\{[1, 1, 0, 0]^T, [0, 0, 1, 1]^T\}$. A basis for the null space of $A$ is $\{[-1, 1, 0, 0]^T, [0, 0, -1, 1]^T\}$.

10. Find an orthonormal basis for $W = \mathrm{Span}\{x_1, x_2, x_3\}$ using Gram-Schmidt (you might wait until the very end to normalize all vectors at once):
$$x_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 2 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$$
SOLUTION: Using Gram-Schmidt (before normalization, which is OK if doing by hand), we get
$$v_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ 2 \\ -1 \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 3 \\ 1 \\ 1 \\ -1 \end{bmatrix}$$
A numerical check of this computation appears below.

11.
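To double-check the Gram-Schmidt computation in problem 10, here is a short NumPy sketch (my addition; it defers normalization, as the solution suggests, so the printed vectors match $v_1, v_2, v_3$ only up to scaling):

```python
import numpy as np

# Columns are x1, x2, x3 from problem 10.
X = np.array([[0., 0., 1.],
              [0., 1., 1.],
              [1., 1., 1.],
              [1., 2., 1.]])

# Gram-Schmidt without normalization: subtract from each x_j its
# projections onto the previously computed orthogonal vectors.
vs = []
for j in range(X.shape[1]):
    v = X[:, j].copy()
    for w in vs:
        v = v - (w @ X[:, j]) / (w @ w) * w
    vs.append(v)

for v in vs:
    print(v)
# [0. 0. 1. 1.]
# [0. 1. -0.5 0.5]          -> scale by 2 to get [0, 2, -1, 1]
# [1. 0.333  0.333 -0.333]  -> scale by 3 to get [3, 1, 1, -1]
```

Up to the scalings noted in the comments, these agree with the hand computation above.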