Midterm Review Notes for Applied Linear Algebra
Midterm: Oct 28, 2019, in class

The midterm will focus on testing you on the more theoretical aspects of applied linear algebra. These are concepts that were covered on your quizzes and problem sets, but on the exam you will be asked to show your logic and work in a way that may not have been required for the problem sets. When showing something is true, make sure you state why you can take certain steps. I have provided hints on the practice questions for what assumptions you need in order to show or demonstrate what the question asked for.

You should know the definitions of and how to check:

1. whether a subset of vectors forms a subspace
2. linear independence of vectors
3. range, basis, column space, row space, nullspace, and rank
4. properties of a norm
5. conditioning of Ax = b
6. orthogonal & orthonormal vectors
7. Cauchy-Schwarz and triangle inequalities
8. orthogonality of a matrix

You should know the geometric interpretations of (be able to draw in 2-D):

1. linearly independent vectors
2. normalized vectors
3. orthonormal vectors
4. vector projections
5. induced matrix norms for the 2-norm

You should know the procedure for:

1. matrix multiplication
2. row reduction in terms of elementary matrices (you won't have to do a full matrix)
3. finding the range, basis, column space, row space, nullspace, and rank of a matrix
4. finding the LU decomposition
5. solving Ax = b given PA = LU
6. using elementary matrices to perform Gaussian elimination and form L, perform row exchanges, and multiply a row by a constant.

You may need to use the following properties in your demonstrations:

1. (AB)^T = B^T A^T
2. det(AB) = det(A) det(B)
3. the determinant of an upper or lower triangular matrix is the product of the diagonal entries.

Here are some practice questions:

1. Find a basis for the row space of the following matrix. What is the rank of A? Find a basis for the column space of A.

           [ 1  0  1  0 ]
       A = [ 0  1  0  1 ]
           [ 1  1  1  1 ]
           [ 0  0  1  1 ]

2. How many solutions would Ax = b have (for problem 1) if

            [ 1 ]
       b1 = [ 1 ] ?
            [ 1 ]
            [ 1 ]

   What if

            [ 1 ]
       b2 = [ 0 ] ?
            [ 1 ]
            [ 1 ]

3. By changing one entry of A (problem 1), create a new matrix B that has a unique solution for Bx = b (problem 2, either b).

4. Find a basis for the subspace S ⊆ R^3 defined by the equation x + 2y + 3z = 0. Verify that y1 = [−1, 1, 1]^T ∈ S and find a basis for S that includes y1.

5. If v1, v2, and v3 are linearly independent, for what values of c are the vectors v2 − v1, cv3 − v2, and v1 − v3 linearly independent? Hint: To show this, you will need to use the definition of linear independence twice and some algebraic manipulation.

6. The left nullspace of a matrix A is the set of all vectors x such that x^T A = 0.

   (a) Find the left nullspace of

               [  1   6  2 ]
           A = [ −1   3  0 ]
               [ −2  15  2 ]

       Hint: Find the equations through matrix multiplication for a vector x, then reformulate into a linear system you know how to deal with.

   (b) Find the nullspace of A^T.

   (c) Show that if A is an n × n matrix, the left nullspace of A is equal to the nullspace of A^T. Hint: You will only need the definition of nullspace and the useful property of transposes ((AB)^T = B^T A^T). To prove this you need to show both inclusions (i.e., the left nullspace of A is contained in the nullspace of A^T, and the nullspace of A^T is contained in the left nullspace of A).

   (A numerical check for questions 1, 2, and 6 is sketched after the question list.)
(Note: to make the exam feasible I will give you most matrices in reduced row echelon form. However, I may ask that you reduce at least one by hand. So you should decide how much time you want to spend practicing row reduction vs. using Matlab.)

7. Verify the Cauchy-Schwarz and triangle inequalities for the vectors

           [  1 ]           [  2 ]
       x = [ −1 ],      y = [  3 ]
           [  2 ]           [ −4 ]

8. Draw a sketch of two linearly independent vectors in R^2 that are normalized, but not orthogonal. Be precise in giving the vectors length and direction.

9. Draw a sketch illustrating a vector projection of one vector onto another. Label, with formulas, the distance projected, the direction of the projection, and the projected vector itself. Your formulas should only be functions of the original vectors (not the angle between them).

10. Show that if x and y are vectors in R^n, then ⟨Ax, y⟩ = ⟨x, A^T y⟩. (Hint: you should only need the definition of the inner product, the property of the transpose of a matrix product, and the rules of matrix multiplication.)

11. Show that if A and B are orthogonal matrices, then AB and BA are orthogonal matrices. (Hint: You should only need the definition of orthogonal matrices and the rule for the transpose of a matrix product.)

12. (6.25) Show that if P is an orthogonal matrix, and x and y are vectors in R^n, then ⟨Px, Py⟩ = ⟨x, y⟩. (Hint: You should only need the definition of orthogonal matrices, the vector/matrix multiplication formulation of the inner product, and the rule for transposes of products of matrices.)

13. Show that ||x||_1 is a vector norm by verifying Def. 7.1 in the text. (Hint: you should only need Def. 7.1, algebra with summations, and an upper bound on the absolute value as used in the proof of the triangle inequality in Theorem 7.1.)

14. Show that for I an n × n identity matrix:

    (a) ||I||_2 = 1
    (b) ||I||_F = √n

15. Draw a conceptual sketch of the matrix induced 2-norm, as in Fig. 7.8. Label ||x||_2 = 1, ||Ax||_2, and ||A||_2 for some A. What is the semi-major axis?

16. If you have PA = LU, how can you calculate the determinant of A?

17. List the steps to find x given PA = LU for Ax = b. (A numerical sketch of this procedure appears after the question list.)

18. What two elementary row matrices E21(t1) and E32(t2) put

            [ 2  1  0 ]
        A = [ 6  4  2 ]
            [ 0  3  5 ]

    into upper triangular form, E32(t2)E21(t1)A = U? Calculate L using these matrices (use the properties of inverses and multiplication of elementary matrices).

19. Construct a 4 × 4 matrix that will take the 3rd column of another 4 × 4 matrix, A, multiply it by 4, and subtract it from the 2nd column of A, through matrix multiplication.

20. Take the inverse of the elementary matrix you constructed in the previous problem.

21. Construct a 3 × 3 matrix P1 that swaps the 2nd and 3rd rows of another 3 × 3 matrix.

22. What is the inverse of P1? Show whether P1 is an orthogonal matrix.
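The questions above are meant for hand computation, but a short numerical check can catch arithmetic slips. The following Python/NumPy sketch is not part of the original notes (which assume Matlab as the course tool); it assumes NumPy and SciPy are available and uses the matrices from questions 1, 2, and 6 as written above. It computes the rank of A, tests whether Ax = b is consistent by comparing rank([A | b]) with rank(A), and finds the left nullspace as the nullspace of A^T.

# Optional numerical check for questions 1, 2, and 6 (Python + NumPy/SciPy).
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Question 1: the rank of A (bases for the row/column space come from the
# pivot rows/columns you identify during hand row reduction).
print("rank(A) =", np.linalg.matrix_rank(A))

# Question 2: Ax = b is consistent exactly when rank([A | b]) = rank(A);
# a consistent system with rank(A) < 4 then has infinitely many solutions.
for b in ([1, 1, 1, 1], [1, 0, 1, 1]):
    Ab = np.column_stack([A, np.array(b, dtype=float)])
    print("b =", b, "consistent:",
          np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A))

# Question 6: the left nullspace of A is the nullspace of A^T, so an
# (orthonormal) basis for it is null_space(A.T).
A6 = np.array([[ 1,  6, 2],
               [-1,  3, 0],
               [-2, 15, 2]], dtype=float)
print("left nullspace basis of the question 6 matrix:")
print(null_space(A6.T))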
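Similarly, questions 16-18 can be spot-checked with SciPy's LU routine. This sketch is again not part of the notes; the right-hand side b below is an arbitrary illustrative choice. It factors the matrix from question 18, solves Ax = b by the two triangular solves of question 17, and recovers det(A) from the diagonal of U as in question 16. Note that scipy.linalg.lu returns A = Pm L U, so the P of the notes corresponds to Pm^T.

# Optional numerical check for questions 16-18 (Python + NumPy/SciPy).
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2., 1., 0.],
              [6., 4., 2.],
              [0., 3., 5.]])
b = np.array([1., 2., 3.])   # illustrative right-hand side, not from the notes

# scipy.linalg.lu returns Pm, L, U with A = Pm @ L @ U, so PA = LU holds
# for the permutation matrix P = Pm.T used in the notes.
Pm, L, U = lu(A)
P = Pm.T

# Question 17: given PA = LU, solve Ax = b with two triangular solves:
#   L y = P b  (forward substitution), then  U x = y  (back substitution).
y = solve_triangular(L, P @ b, lower=True)
x = solve_triangular(U, y)
print("x =", x, " residual:", A @ x - b)

# Question 16: det(P) det(A) = det(L) det(U), det(L) = 1, and det(U) is the
# product of its diagonal entries, so det(A) = prod(diag(U)) / det(P),
# where det(P) = +1 or -1 depending on the number of row swaps.
print("det(A) from LU:", np.prod(np.diag(U)) / np.linalg.det(P))
print("np.linalg.det(A):", np.linalg.det(A))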