Learning Canonical Correlations of Paired Tensor Sets via Tensor-to-Vector Projection

Total Pages: 16

File Type: PDF, Size: 1020 KB

Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence

Learning Canonical Correlations of Paired Tensor Sets via Tensor-to-Vector Projection

Haiping Lu
Institute for Infocomm Research, Singapore
[email protected]

Abstract

Canonical correlation analysis (CCA) is a useful technique for measuring relationship between two sets of vector data. For paired tensor data sets, we propose a multilinear CCA (MCCA) method. Unlike existing multilinear variations of CCA, MCCA extracts uncorrelated features under two architectures while maximizing paired correlations. Through a pair of tensor-to-vector projections, one architecture enforces zero-correlation within each set while the other enforces zero-correlation between different pairs of the two sets. We take a successive and iterative approach to solve the problem. Experiments on matching faces of different poses show that MCCA outperforms CCA and 2D-CCA, while using much fewer features. In addition, the fusion of two architectures leads to performance improvement, indicating complementary information.

1 Introduction

Canonical correlation analysis (CCA) [Hotelling, 1936] is a well-known method for analyzing the relations between two sets of vector-valued variables. It assumes that the two data sets are two views of the same set of objects and projects them into low dimensional spaces where each pair is maximally correlated subject to being uncorrelated with other pairs. CCA has various applications such as information retrieval [Hardoon et al., 2004], multi-label learning [Sun et al., 2008; Rai and Daumé III, 2009] and multi-view learning [Chaudhuri et al., 2009; Dhillon et al., 2011].

Many real-world data are multi-dimensional, represented as tensors rather than vectors. The number of dimensions is the order of a tensor. Each dimension is a mode. Second-order tensors include matrix data such as 2-D images. Examples of third-order tensors are video sequences, 3-D images, and web graph mining data organized in three modes of source, destination and text [Kolda and Bader, 2009; Kolda and Sun, 2008].

To deal with multi-dimensional data effectively, researchers have attempted to learn features directly from tensors [He et al., 2005]. The motivation is to preserve data structure in feature extraction, obtain more compact representation, and process big data more efficiently, without reshaping tensors into vectors. Many multilinear subspace learning (MSL) algorithms [Lu et al., 2011] generalize linear algorithms such as principal component analysis and linear discriminant analysis to second/higher-order tensors [Ye et al., 2004a; 2004b; Yan et al., 2005; Tao et al., 2007; Lu et al., 2008; 2009a; 2009b]. There have also been efforts to generalize CCA to tensor data. To the best of our knowledge, there are two approaches.

One approach analyzes the relationship between two tensor data sets rather than vector data sets. The two-dimensional CCA (2D-CCA) [Lee and Choi, 2007] is the first work along this line to analyze relations between two sets of image data without reshaping into vectors. It was further extended to local 2D-CCA [Wang, 2010], sparse 2D-CCA [Yan et al., 2012], and 3-D CCA (3D-CCA) [Gang et al., 2011]. Under the MSL framework, these CCA extensions are using the tensor-to-tensor projection (TTP) [Lu et al., 2011].

Another approach studies the correlations between two data tensors with one or more shared modes, because CCA can be viewed as taking two data matrices as input. This idea was first introduced in [Harshman, 2006]. Tensor CCA (TCCA) [Kim and Cipolla, 2009] was proposed later with two architectures for matching (3-D) video volume data of the same dimensions. The single-shared-mode and joint-shared-mode TCCAs can be viewed as transposed versions of 2D-CCA and classical CCA, respectively. They apply canonical transformations to the non-shared one/two axes of 3-D data, which can be considered as partial TTPs under MSL. Following TCCA, the multiway CCA in [Zhang et al., 2011] uses partial projections to find correlations between second-order and third-order tensors.

However, none of the multilinear extensions above takes an important property of CCA into account, i.e., CCA extracts uncorrelated features for different pairs of projections. To close this gap, we propose a multilinear CCA (MCCA) algorithm for learning canonical correlations of paired tensor data sets with uncorrelated features under two architectures. It belongs to the first approach mentioned above.

MCCA uses the tensor-to-vector projection (TVP) [Lu et al., 2009a], which will be briefly reviewed in Sec. 2. We follow the CCA derivation in [Anderson, 2003] to successively solve for elementary multilinear projections (EMPs) to maximize pairwise correlations while enforcing same-set or cross-set zero-correlation constraint in Sec. 3. Finally, we evaluate the performance on facial image matching in Sec. 4. To reduce notation complexity, we present our work here only for second-order tensors, i.e., matrices, as a special case, although we have developed MCCA for general tensors of any order following [Lu et al., 2011].

2 Preliminary of CCA and TVP

2.1 CCA and zero-correlation constraint

CCA captures the correlations between two sets of vector-valued variables that are assumed to be different representations of the same set of objects [Hotelling, 1936]. For two paired data sets $x_m \in \mathbb{R}^I$, $y_m \in \mathbb{R}^J$, $m = 1, \ldots, M$, $I \neq J$ in general, CCA finds paired projections $\{u_{x_p}, u_{y_p}\}$, $u_{x_p} \in \mathbb{R}^I$, $u_{y_p} \in \mathbb{R}^J$, $p = 1, \ldots, P$, such that the canonical variates $w_p$ and $z_p$ are maximally correlated. The $m$th elements of $w_p$ and $z_p$ are denoted as $w_{p_m}$ and $z_{p_m}$, respectively:

$w_{p_m} = u_{x_p}' x_m$, $\quad z_{p_m} = u_{y_p}' y_m$,   (1)

where $u'$ denotes the transpose of $u$. In addition, for $p \neq q$ ($p, q = 1, \ldots, P$), $w_p$ and $w_q$, $z_p$ and $z_q$, and $w_p$ and $z_q$, respectively, are all uncorrelated. Thus, CCA produces uncorrelated features both within each set and between different pairs of the two sets.

The solutions $\{u_{x_p}, u_{y_p}\}$ for this CCA problem can be obtained as eigenvectors of two related eigenvalue problems. However, a straightforward generalization of these results to tensors [Lee and Choi, 2007; Gang et al., 2011] does not give uncorrelated features while maximizing the paired correlations (to be shown in Figs. 3(b) and 3(c)). Therefore, we derive MCCA following the more principled approach of solving CCA in [Anderson, 2003], where CCA projection pairs are obtained successively through enforcing the zero-correlation constraints.
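The zero-correlation property of Sec. 2.1 can be checked numerically. The sketch below is an illustrative aside rather than code from the paper: it estimates CCA with the standard whitened cross-covariance SVD instead of the successive derivation of [Anderson, 2003] that the paper follows, and then verifies that the canonical variates are uncorrelated within a set and across non-matching pairs. The toy data, function name, and regularization constant are assumptions made for the demo.

```python
import numpy as np

def cca(X, Y, P):
    """Sample-based CCA: rows of X (M x I) and Y (M x J) are paired samples.

    Returns Ux (I x P), Uy (J x P), the canonical variates W = X @ Ux and
    Z = Y @ Uy, and the leading canonical correlations.
    """
    X = X - X.mean(axis=0)                    # zero-mean, as assumed in the paper
    Y = Y - Y.mean(axis=0)
    M = X.shape[0]
    Cxx = X.T @ X / M + 1e-8 * np.eye(X.shape[1])   # small ridge for stability
    Cyy = Y.T @ Y / M + 1e-8 * np.eye(Y.shape[1])
    Cxy = X.T @ Y / M
    # Whiten each view, then take the SVD of the whitened cross-covariance.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T   # whitening: Wx' Cxx Wx = I
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, svals, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    Ux = Wx @ U[:, :P]
    Uy = Wy @ Vt.T[:, :P]
    return Ux, Uy, X @ Ux, Y @ Uy, svals[:P]

# Toy check of the zero-correlation constraints in Sec. 2.1.
rng = np.random.default_rng(0)
Xv = rng.standard_normal((200, 10))
Yv = Xv @ rng.standard_normal((10, 8)) + 0.5 * rng.standard_normal((200, 8))
Ux, Uy, W, Z, rho = cca(Xv, Yv, P=3)
print(np.round(W.T @ W / 200, 3))   # ~ identity: w_p and w_q uncorrelated for p != q
print(np.round(W.T @ Z / 200, 3))   # ~ diag(rho): correlation only within matched pairs
```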
2.2 TVP: tensor space to vector subspace

To extract uncorrelated features from tensor data directly, we use the TVP in [Lu et al., 2009a]. The TVP of a matrix $X \in \mathbb{R}^{I_1 \times I_2}$ to a vector $r \in \mathbb{R}^P$ consists of $P$ EMPs $\{u_p, v_p\}$, $u_p \in \mathbb{R}^{I_1}$, $v_p \in \mathbb{R}^{I_2}$, $p = 1, \ldots, P$. It is a mapping from the original tensor space $\mathbb{R}^{I_1} \otimes \mathbb{R}^{I_2}$ into a vector subspace $\mathbb{R}^P$.
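To make the TVP concrete, here is a small hedged sketch (not from the paper): each EMP $\{u_p, v_p\}$ maps the matrix to one scalar of the form $u_p' X v_p$, and stacking all $P$ EMPs is equivalent to $\mathrm{diag}(U' X V)$, the form that reappears in Eq. (4) below. The random $u_p, v_p$ are placeholders for the projections MCCA learns.

```python
import numpy as np

def tvp(X, U, V):
    """Tensor-to-vector projection of a matrix X (I1 x I2).

    U (I1 x P) and V (I2 x P) hold the P elementary multilinear
    projections (EMPs) {u_p, v_p}; the pth output feature is u_p' X v_p.
    """
    return np.einsum('ip,ij,jp->p', U, X, V)   # same as np.diag(U.T @ X @ V)

rng = np.random.default_rng(1)
I1, I2, P = 32, 24, 5
X = rng.standard_normal((I1, I2))
U = rng.standard_normal((I1, P))              # placeholder EMPs; MCCA learns these
V = rng.standard_normal((I2, P))

r = tvp(X, U, V)
print(r.shape)                                # (5,) -- a P-dimensional feature vector
print(np.allclose(r, np.diag(U.T @ X @ V)))   # True: matches diag(U' X V)
```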
3.1 Problem formulation through TVP

In (second-order) MCCA, we consider two matrix data sets with $M$ samples each: $X_m \in \mathbb{R}^{I_1 \times I_2}$ and $Y_m \in \mathbb{R}^{J_1 \times J_2}$, $m = 1, \ldots, M$. $I_1$ and $I_2$ may not equal $J_1$ and $J_2$, respectively. We assume that their means have been subtracted so that they have zero mean, i.e., $\sum_{m=1}^{M} X_m = 0$ and $\sum_{m=1}^{M} Y_m = 0$. We are interested in paired projections $\{(u_{x_p}, v_{x_p}), (u_{y_p}, v_{y_p})\}$, $u_{x_p} \in \mathbb{R}^{I_1}$, $v_{x_p} \in \mathbb{R}^{I_2}$, $u_{y_p} \in \mathbb{R}^{J_1}$, $v_{y_p} \in \mathbb{R}^{J_2}$, $p = 1, \ldots, P$, for dimensionality reduction to vector spaces to reveal correlations between them. The $P$ projection pairs form matrices $U_x = [u_{x_1}, \ldots, u_{x_P}]$, $V_x = [v_{x_1}, \ldots, v_{x_P}]$, $U_y = [u_{y_1}, \ldots, u_{y_P}]$, and $V_y = [v_{y_1}, \ldots, v_{y_P}]$. The matrix sets $\{X_m\}$ and $\{Y_m\}$ are projected to $\{r_m \in \mathbb{R}^P\}$ and $\{s_m \in \mathbb{R}^P\}$, respectively, through TVP (3) as

$r_m = \mathrm{diag}(U_x' X_m V_x)$, $\quad s_m = \mathrm{diag}(U_y' Y_m V_y)$.   (4)

We can write the projection above according to EMP (2) as

$r_{m_p} = u_{x_p}' X_m v_{x_p}$, $\quad s_{m_p} = u_{y_p}' Y_m v_{y_p}$,   (5)

where $r_{m_p}$ and $s_{m_p}$ are the $p$th elements of $r_m$ and $s_m$, respectively. We then define two coordinate vectors $w_p \in \mathbb{R}^M$ and $z_p \in \mathbb{R}^M$ for the $p$th EMP, where their $m$th elements are $w_{p_m} = r_{m_p}$ and $z_{p_m} = s_{m_p}$. Denote $R = [r_1, \ldots, r_M]$, $S = [s_1, \ldots, s_M]$, $W = [w_1, \ldots, w_P]$, and $Z = [z_1, \ldots, z_P]$. We have $W = R'$ and $Z = S'$. Here, $w_p$ and $z_p$ are analogous to the canonical variates in CCA [Anderson, 2003].

We propose MCCA with two architectures. Architecture I: $w_p$ and $z_p$ are maximally correlated, while for $q \neq p$ ($p, q = 1, \ldots, P$), $w_q$ and $w_p$, and $z_q$ and $z_p$, respectively, are uncorrelated. Architecture II: $w_p$ and $z_p$ are maximally correlated, while for $q \neq p$ ($p, q = 1, \ldots, P$), $w_p$ and $z_q$ are uncorrelated. Figure 1 is a schematic diagram showing MCCA and its two different architectures.

We extend the sample-based estimation of canonical correlations and variates in CCA to the multilinear case for matrix data using Architecture I.

Definition 1. Canonical correlations and variates for matrix data (Architecture I). For two matrix data sets with $M$ samples each, $\{X_m \in \mathbb{R}^{I_1 \times I_2}\}$ and $\{Y_m \in \mathbb{R}^{J_1 \times J_2}\}$, $m = 1, \ldots, M$, the $p$th pair of canonical variates is the pair …
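As a rough numerical companion to Eqs. (4)-(5) and Architecture I (again a hedged sketch under assumed toy data, not the authors' implementation), the snippet below projects two paired matrix sets through TVPs and measures the correlation between the coordinate vectors $w_p$ and $z_p$ that MCCA maximizes. Random projections stand in for the $U_x, V_x, U_y, V_y$ that the successive, iterative procedure of Sec. 3 would learn; the helper names are hypothetical.

```python
import numpy as np

def mcca_features(Xs, Ys, Ux, Vx, Uy, Vy):
    """TVP features, Eq. (4): r_m = diag(Ux' X_m Vx), s_m = diag(Uy' Y_m Vy).

    Stacking the r_m as rows gives the paper's W = R' (M x P), whose pth
    column is the coordinate vector w_p; likewise Z = S' for the z_p.
    """
    W = np.stack([np.diag(Ux.T @ X @ Vx) for X in Xs])
    Z = np.stack([np.diag(Uy.T @ Y @ Vy) for Y in Ys])
    return W, Z

def paired_correlations(W, Z):
    """Correlation between matched coordinate vectors w_p and z_p."""
    Wc = W - W.mean(axis=0)
    Zc = Z - Z.mean(axis=0)
    return np.array([
        Wc[:, p] @ Zc[:, p] / (np.linalg.norm(Wc[:, p]) * np.linalg.norm(Zc[:, p]))
        for p in range(W.shape[1])
    ])

# Toy paired data: Y_m is a linearly transformed, noisy view of X_m.
rng = np.random.default_rng(2)
M, (I1, I2), (J1, J2), P = 100, (20, 15), (12, 10), 3
A, B = rng.standard_normal((J1, I1)), rng.standard_normal((I2, J2))
Xs = [rng.standard_normal((I1, I2)) for _ in range(M)]
Ys = [A @ X @ B + 0.1 * rng.standard_normal((J1, J2)) for X in Xs]

# Placeholder projections; MCCA would instead learn them EMP by EMP under
# the Architecture I same-set zero-correlation constraints.
Ux, Vx = rng.standard_normal((I1, P)), rng.standard_normal((I2, P))
Uy, Vy = rng.standard_normal((J1, P)), rng.standard_normal((J2, P))

W, Z = mcca_features(Xs, Ys, Ux, Vx, Uy, Vy)
print(paired_correlations(W, Z))   # values in [-1, 1]; MCCA maximizes these
```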