Notation, Definitions, Glossary


Appendix F: Notation, Definitions, Glossary
(Dattorro, Convex Optimization & Euclidean Distance Geometry 2ε, Mεβoo, v2018.09.21)

b : scalar or column vector (italic abcdefghijklmnopqrstuvwxyz)
b_i : ith entry of vector b = [b_i, i=1…n], or ith vector from a list {b_j, j=1…n}, or ith iterate of vector b
b_{i:j} or b(i:j) : truncated vector comprising the ith through jth entry of vector b (294)
b_{k(i:j)} or b_{i:j,k} : truncated vector comprising the ith through jth entry of vector b_k
b^T : vector transpose or row vector
b^H : complex conjugate transpose b*^T
A : matrix (italic ABCDEFGHIJKLMNOPQRSTUVWXYZ)
A^T : matrix transpose [A_ij] ← [A_ji], a linear operator. Regarding A as a linear operator, A^T is its adjoint.
A^{-T} : matrix transpose of inverse, and vice versa: (A^{-1})^T = (A^T)^{-1} (confer p.489 no.47)
A^{T1} : first of various transpositions of a cubix or quartix A (p.555, p.559)
A^{-1} : inverse of matrix A
A† : Moore-Penrose pseudoinverse of matrix A (§E)
A_ij or A(i,j) : ijth entry of matrix A, or rank-1 matrix a_i a_j^T (§4.10)
A(i,j) : A is a function of i and j
A_i : ith matrix from a set, or ith principal submatrix (1329), or ith iterate of A
A(i,:) or A_{i:} : ith row of matrix A
A(:,j) or A_{:j} : jth column of matrix A [185, §1.1.8]
A(i:j, k:ℓ) or A_{i:j,k:ℓ} : submatrix; ith through jth row and kth through ℓth column of A
√ : positive square root
◦√x : entrywise positive square root of vector x
ℓ√ : positive ℓth root
A^{1/2} and √A : A^{1/2} is any matrix such that A^{1/2}A^{1/2} = A. For A ∈ S^n_+, √A ∈ S^n_+ is unique and √A √A = A. [59, §1.2] (§A.5.1.4)
◦√D = [√d_ij] : absolute distance matrix (1517) or Hadamard positive square root: D = ◦√D ◦ ◦√D
thin : a skinny matrix; meaning, more rows than columns. When there are more equations than unknowns, we say that the system Ax = b is overdetermined. [185, §5.3]
wide : a fat matrix; meaning, more columns than rows: underdetermined
hollow : matrix having 0 main diagonal
A : some set (calligraphic ABCDEFGHIJKLMNOPQRSTUVWXYZ)
A : set of vectors or matrices (blackboard ABCDEFGHIJKLMNOPQRSTUVWXYZ)
F : discrete Fourier transform (932) (Euler Fraktur ABCDEFGHIJKLMNOPQRSTUVWXYZ)
F(C ∋ A) : smallest face (176) that contains element A of set C
G(K) : generators (§2.13.4.2.1) of set K; any collection of points and directions whose hull constructs K
L_ν : level set (571)
L_ν : sublevel set (575)
L^ν : superlevel set (669)
L : Lagrangian (516)
E_t : member of elliptope E (1258) parametrized by scalar t
E : elliptope (1237)
E : elementary matrix
E_ij : member of standard orthonormal basis for symmetric (62) or symmetric hollow (78) matrices
id est : from the Latin, meaning "that is"
e.g. : exempli gratia, from the Latin, meaning "for sake of example"
sic : from the Latin, meaning "so" or "thus" or "in this manner"; something meant as written
videlicet : from the Latin, meaning "it is permitted to see"
ibidem : from the Latin, meaning "in the same place"
no. : number, from the Latin numero
vs. : versus, from the Latin
a.i. : affinely independent (§2.4.2.3)
c.i. : conically independent (§2.10)
l.i. : linearly independent
w.r.t : with respect to
a.k.a : also known as
re : real part
im : imaginary part
ı : √−1
⊆ ⊇ : subset, superset
⊂ ⊃ : proper subset, proper superset
∩ ∪ : intersection, union
∈ : membership; element belongs to, or element is a member of
∋ : membership; contains, as in C ∋ y (C contains element y)
Ä : such that
∃ : there exists
∴ : therefore
∀ : for all, or over all
& : (ampersand) and
& : (ampersand italic) and
∝ : proportional to
∞ : infinity
≡ : equivalent to
≜ : defined equal to, equal by definition
≈ : approximately equal to
≃ : isomorphic to or with
≅ : congruent to or with
x/y : Hadamard quotient, as in, for x, y ∈ Rⁿ, x/y ≜ [x_i/y_i, i=1…n] ∈ Rⁿ
◦ : Hadamard product of matrices: x ◦ y ≜ [x_i y_i, i=1…n] ∈ Rⁿ (§D.1.2.2, §A.1.1)
⊗ : Kronecker product of matrices (§D.1.2.1, §A.1.1)
⊕ : vector sum of sets X = Y ⊕ Z where every element x ∈ X has unique expression x = y + z with y ∈ Y and z ∈ Z [349, p.19]; then the summands are algebraic complements. X = Y ⊕ Z ⇒ X = Y + Z. Now assume Y and Z are nontrivial subspaces: X = Y + Z = Y ⊕ Z ⇔ Y ∩ Z = 0 [350, §1.2] [123, §5.8]. Each element from a vector sum (+) of subspaces has unique expression (⊕) when a basis from each subspace is linearly independent of bases from all the other subspaces.
⊖ : likewise, unique vector difference of sets
⊞ : orthogonal vector sum of sets X = Y ⊞ Z where every element x ∈ X has unique orthogonal expression x = y + z with y ∈ Y, z ∈ Z, and y ⊥ z [372, p.51]. X = Y ⊞ Z ⇒ X = Y + Z. If Z ⊆ Y^⊥ then X = Y ⊞ Z ⇔ X = Y ⊕ Z [123, §5.8]. If Z = Y^⊥ then the summands are orthogonal complements.
± : plus or minus, or plus and minus
\ : as in \A, means logical not A, or relative complement of set A; id est, \A = {x | x ∉ A}; e.g., B\A ≜ {x ∈ B | x ∉ A} ≡ B ∩ \A
⇒ ⇐ : sufficient and necessary; implies and is implied by; e.g., A is sufficient: A ⇒ B, A is necessary: A ⇐ B, A ⇒ B ⇔ \A ⇐ \B, A ⇐ B ⇔ \A ⇒ \B, if A then B, if B then A, A only if B, B only if A
⇔ : if and only if (iff), or corresponds with, or necessary and sufficient, or logical equivalence
is : plural "are"; as in "A is B" means A ⇒ B; conventional usage of the English language imposed by logicians
⇏ ⇍ : insufficient and unnecessary; does not imply and is not implied by; e.g., A is insufficient: A ⇏ B, A is unnecessary: A ⇍ B, A ⇏ B ⇔ \A ⇍ \B, A ⇍ B ⇔ \A ⇏ \B
← : is replaced with; substitution, assignment
→ : goes to, or approaches, or maps to
t → 0⁺ : t goes to 0 from above; meaning, from the positive [230, p.2]
⋯ : as in 1⋯1, means ones in a row; or [s₁ ⋯ s_N] means continuation, a matrix whose columns are s_i for i=1…N; or, as in n(n−1)(n−2)⋯1, means continuation of a product
i=1…N : meaning, i is a sequence of successive integers beginning with 1 and ending with N; id est, 1…N = 1:N
: : as in f : Rⁿ → Rᵐ, meaning f is a mapping; or a sequence of successive integers specified by bounds, as in i:j = i…j (if j < i then the sequence is descending)
f : real function or multidimensional function, a.k.a operator
f : M → R : meaning f is a mapping from ambient space M to ambient R, not necessarily denoting either domain or range
| : as in f(x)|_{x∈C}, means with the condition(s), or such that, or evaluated for; as in {f(x) | x ∈ C}, means evaluated for each and every x belonging to set C
g|_{x_p} : expression g evaluated at x_p
∥ : parallel
A, B : as in, for example, A, B ∈ S^N means A ∈ S^N and B ∈ S^N
(a, b) : open interval between a and b in R, or variable pair perhaps of disparate dimension
[a, b] : closed interval or line segment between a and b in R
( ) : hierarchal, parenthetical, optional
(n k) : binomial coefficient on Z² [260]:
  for n < 0 : (n k) ≜ 1 if k = 0; (−1)^k (−n+k−1)!/(k!(−n−1)!) if k > 0; (−1)^{n−k} (−k−1)!/((n−k)!(−n−1)!) if k ≤ n; 0 if n < k < 0
  for n ≥ 0 : (n k) ≜ 0 if k < 0 or k > n; n!/(k!(n−k)!) otherwise
! : factorial; id est, for integer n > 0, n! ≜ n(n−1)(n−2)⋯1, (−n)! ≜ ∞, 0! ≜ 1
{ } : curly braces denote a set or list; e.g., {Xa | a ⪰ 0}, the set comprising Xa evaluated for each and every a ⪰ 0 where membership of a to some space is implicit, a union; or {0,1}ⁿ represents a binary vector of dimension n
⟨ ⟩ : angle brackets denote vector inner-product (35) (40)
[ ] : matrix or vector, or quote insertion, or citation
[d_ij] : matrix whose ijth entry is d_ij
[x_i] : vector whose ith entry is x_i
x_p : particular value of x
x^0 : particular instance of x, or initial value of a sequence x^i
x₁ : first entry of vector x, or first element of a list {x_i}
x_ε : extreme point
x⁺ : vector x whose negative entries are replaced with 0: x⁺ = ½(x + |x|) (546); nonnegative part of x, or clipped vector x
x⁻ : x⁻ ≜ ½(x − |x|); nonpositive part of x; x = x⁺ + x⁻
x̌ : known data
x⋆ : optimal value of variable x; optimal ⇒ feasible
x* : complex conjugate, or dual variable, or extreme direction of dual cone
f* : convex conjugate function f*(s) = sup{⟨s, x⟩ − f(x) | x ∈ dom f}
Px or P x : projection of point x on set C; P is operator or idempotent matrix
P_k x : projection of point x on set C_k, or on range of implicit vector
δ(A) : (a.k.a diag(A), §A.1) vector made from the main diagonal of A if A is a matrix; otherwise, diagonal matrix made from vector A
δ²(A) ≡ δ(δ(A)) : for vector or diagonal matrix Λ, δ²(Λ) = Λ
δ(A)² = δ(A)δ(A) : where A is a vector
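Several of these conventions translate directly into code. Below is a minimal NumPy sketch, added here for illustration (the glossary itself prescribes no software), exercising the Hadamard product ◦ and quotient, the Kronecker product ⊗, the pseudoinverse A†, the nonnegative part x⁺ = ½(x + |x|), the δ operator, and the unique positive semidefinite square root √A:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # A in S^2_+, invertible
x = np.array([2.0, -5.0, 3.0])
y = np.array([1.0, 4.0, -2.0])

# Hadamard product x ◦ y = [x_i y_i] and Hadamard quotient x/y = [x_i / y_i]
hadamard = x * y
quotient = x / y

# Kronecker product ⊗
K = np.kron(np.eye(2), A)

# Moore-Penrose pseudoinverse A†; coincides with A^{-1} when A is invertible
A_pinv = np.linalg.pinv(A)
assert np.allclose(A_pinv, np.linalg.inv(A))

# nonnegative part x+ = (x + |x|)/2 and nonpositive part x- = (x - |x|)/2,
# so x = x+ + x- as the glossary states
x_plus = (x + np.abs(x)) / 2
x_minus = (x - np.abs(x)) / 2
assert np.allclose(x, x_plus + x_minus)

# δ(·): main diagonal of a matrix argument, diagonal matrix of a vector
# argument; δ²(A) = δ(δ(A)) yields a diagonal matrix from a matrix
d = np.diag(A)      # vector from matrix
D = np.diag(d)      # matrix from vector, so D = δ²(A) here

# √A for A in S^n_+: the unique PSD matrix with √A √A = A
w, V = np.linalg.eigh(A)
sqrtA = V @ np.diag(np.sqrt(w)) @ V.T
assert np.allclose(sqrtA @ sqrtA, A)
```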
Recommended publications
  • Realizing Euclidean Distance Matrices by Sphere Intersection
    Realizing Euclidean distance matrices by sphere intersection. Jorge Alencar (a,*), Carlile Lavor (b), Leo Liberti (c). (a) Instituto Federal de Educação, Ciência e Tecnologia do Sul de Minas Gerais, Inconfidentes, MG, Brazil; (b) University of Campinas (IMECC-UNICAMP), 13081-970, Campinas-SP, Brazil; (c) CNRS LIX, École Polytechnique, 91128 Palaiseau, France.
    Abstract: This paper presents the theoretical properties of an algorithm to find a realization of a (full) n × n Euclidean distance matrix in the smallest possible embedding dimension. Our algorithm performs linearly in n, and quadratically in the minimum embedding dimension, which is an improvement w.r.t. other algorithms.
    Keywords: distance geometry, sphere intersection, Euclidean distance matrix, embedding dimension. 2010 MSC: 00-01, 99-00.
    1. Introduction. Euclidean distance matrices and sphere intersection have a strong mathematical importance [1, 2, 3, 4, 5, 6], in addition to many applications, such as navigation problems, molecular and nanostructure conformation, network localization, robotics, as well as other problems of distance geometry [7, 8, 9]. Before defining precisely the problem of interest, we need the formal definition for Euclidean distance matrices. Let D be an n × n symmetric hollow (i.e., with zero diagonal) matrix with non-negative elements. We say that D is a (squared) Euclidean Distance Matrix (EDM) if there are points x1, x2, …, xn ∈ R^K (for a positive integer K) such that D(i, j) = D_ij = ‖xi − xj‖², for i, j ∈ {1, …, n}, where ‖·‖ denotes the Euclidean norm. The smallest K for which such a set of points exists is called the embedding dimension of D, denoted by dim(D).
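The definitions above are easy to exercise. Here is a minimal NumPy sketch, assuming classical multidimensional scaling rather than the authors' sphere-intersection algorithm; the helper names edm and realize are illustrative, not from the paper. It builds a squared EDM from points and recovers a realization, reading dim(D) off the rank of the centered Gram matrix:

```python
import numpy as np

def edm(X):
    """Squared EDM D_ij = ||x_i - x_j||^2 for points stored as rows of X."""
    sq = np.sum(X**2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

def realize(D, tol=1e-9):
    """Recover points realizing EDM D by classical multidimensional scaling;
    the rank of the centered Gram matrix equals dim(D)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # geometric centering matrix
    G = -0.5 * J @ D @ J                   # Gram matrix of centered points
    w, V = np.linalg.eigh(G)
    keep = w > tol                          # positive eigenvalues carry the geometry
    return V[:, keep] * np.sqrt(w[keep])    # rows are realizing points

X = np.random.default_rng(0).normal(size=(6, 3))
D = edm(X)
Y = realize(D)
print(Y.shape[1])                # embedding dimension: 3
assert np.allclose(edm(Y), D)    # Y realizes D (up to rigid motion of X)
```

Note that this eigendecomposition route costs on the order of n³, which is exactly the kind of cost the paper's linear-in-n algorithm is designed to avoid; the sketch only makes the definitions concrete.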
  • 1 Euclidean Vector Space and Euclidean Affine Space
    Profesora: Eugenia Rosado. E.T.S. Arquitectura. Euclidean Geometry.
    1 Euclidean vector space and Euclidean affine space
    1.1 Scalar product. Euclidean vector space. Let V be a real vector space.
    Definition. A scalar product is a map (denoted by a dot ·) V × V → R, (u, v) ↦ u · v, satisfying the following axioms:
    1. commutativity: u · v = v · u
    2. distributivity: u · (v + w) = u · v + u · w
    3. (λu) · v = λ(u · v), for every scalar λ
    4. u · u ≥ 0, for every u ∈ V
    5. u · u = 0 if and only if u = 0
    Definition. Let V be a real vector space and let · be a scalar product. The pair (V, ·) is said to be a Euclidean vector space.
    Example. The map defined as follows, V × V → R, (u, v) ↦ u · v = x1x2 + y1y2 + z1z2, where u = (x1, y1, z1) and v = (x2, y2, z2), is a scalar product as it satisfies the five properties of a scalar product. This scalar product is called the standard (or canonical) scalar product. The pair (V, ·) where · is the standard scalar product is called the standard Euclidean space.
    1.1.1 Norm associated to a scalar product. Let (V, ·) be a real Euclidean vector space. Definition. The norm associated to the scalar product is the map V → R, u ↦ ‖u‖ = √(u · u).
    1.1.2 Unitary and orthogonal vectors. Orthonormal basis. Let (V, ·) be a real Euclidean vector space. Definition.
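A short NumPy sketch of these definitions, assuming the standard scalar product on R³; the function names scalar_product and norm are illustrative, not from the notes:

```python
import numpy as np

def scalar_product(u, v):
    """Standard (canonical) scalar product on R^3: x1*x2 + y1*y2 + z1*z2."""
    return float(np.dot(u, v))

def norm(u):
    """Norm associated to the scalar product: ||u|| = sqrt(u . u)."""
    return np.sqrt(scalar_product(u, u))

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])
w = np.array([0.0, 1.0, -1.0])

# spot-check the axioms numerically
assert scalar_product(u, v) == scalar_product(v, u)      # commutativity
assert np.isclose(scalar_product(u, v + w),
                  scalar_product(u, v) + scalar_product(u, w))  # distributivity
assert scalar_product(u, u) >= 0                          # positivity
print(norm(u))   # sqrt(1 + 4 + 4) = 3.0
```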
  • Stat 5102 Notes: Regression
    Stat 5102 Notes: Regression. Charles J. Geyer. April 27, 2007.
    In these notes we do not use the "upper case letter means random, lower case letter means nonrandom" convention. Lower case normal weight letters (like x and β) indicate scalars (real variables). Lowercase bold weight letters (like x and β) indicate vectors. Upper case bold weight letters (like X) indicate matrices.
    1 The Model. The general linear model has the form
        y_i = Σ_{j=1}^p β_j x_ij + e_i    (1.1)
    where i indexes individuals and j indexes different predictor variables. Explicit use of (1.1) makes theory impossibly messy. We rewrite it as a vector equation
        y = Xβ + e,    (1.2)
    where y is a vector whose components are y_i, where X is a matrix whose components are x_ij, where β is a vector whose components are β_j, and where e is a vector whose components are e_i. Note that y and e have dimension n, but β has dimension p. The matrix X is called the design matrix or model matrix and has dimension n × p.
    As always in regression theory, we treat the predictor variables as nonrandom. So X is a nonrandom matrix, β is a nonrandom vector of unknown parameters. The only random quantities in (1.2) are e and y.
    As always in regression theory the errors e_i are independent and identically distributed mean zero normal. This is written as a vector equation e ∼ Normal(0, σ²I), where σ² is another unknown parameter (the error variance) and I is the identity matrix. This implies y ∼ Normal(µ, σ²I), where µ = Xβ.
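A minimal NumPy sketch of the model y = Xβ + e and its least-squares fit; the simulation setup (sample size, true β, noise scale) is invented for illustration and is not from the notes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 3
X = rng.normal(size=(n, p))          # design (model) matrix, n x p
beta = np.array([2.0, -1.0, 0.5])    # true coefficient vector, dimension p
e = rng.normal(scale=0.1, size=n)    # iid mean-zero normal errors
y = X @ beta + e                      # the model (1.2): y = Xβ + e

# least-squares estimate of β (a numerically stable solve of the
# normal equations X^T X β = X^T y)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                       # close to [2, -1, 0.5]
```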
  • Glossary of Linear Algebra Terms
    Inner Product Spaces and the Gram-Schmidt Process. A. Havens.
    1. The Dot Product and Orthogonality. 1.1. Review of the Dot Product. We first recall the notion of the dot product, which gives us a familiar example of an inner product structure on the real vector spaces Rⁿ. This product is connected to the Euclidean geometry of Rⁿ, via lengths and angles measured in Rⁿ. Later, we will introduce inner product spaces in general, and use their structure to define general notions of length and angle on other vector spaces.
    Definition 1.1. The dot product of real n-vectors in the Euclidean vector space Rⁿ is the scalar product · : Rⁿ × Rⁿ → R given by the rule (u, v) = (Σ_{i=1}^n u_i e_i, Σ_{i=1}^n v_i e_i) ↦ Σ_{i=1}^n u_i v_i. Here B_S := (e₁, …, eₙ) is the standard basis of Rⁿ. With respect to our conventions on basis and matrix multiplication, we may also express the dot product as the matrix-vector product uᵗv = [u₁ … uₙ][v₁; …; vₙ]. It is a good exercise to verify the following proposition.
    Proposition 1.1. Let u, v, w ∈ Rⁿ be any real n-vectors, and s, t ∈ R be any scalars. The Euclidean dot product (u, v) ↦ u · v satisfies the following properties. (i) The dot product is symmetric: u · v = v · u. (ii) The dot product is bilinear: (su) · v = s(u · v) = u · (sv), and (u + v) · w = u · w + v · w.
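A small NumPy sketch checking the dot product as a matrix product, plus a classical Gram-Schmidt step (the process named in the title); gram_schmidt is an illustrative helper under the usual linear-independence assumption, not code from the notes:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize linearly independent
    vectors using the Euclidean dot product."""
    basis = []
    for v in vectors:
        # subtract the components of v along the basis built so far
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

u = np.array([3.0, 1.0])
v = np.array([2.0, 2.0])

# the dot product expressed as the matrix product u^t v
assert np.dot(u, v) == u @ v == 8.0

e1, e2 = gram_schmidt([u, v])
assert np.isclose(np.dot(e1, e2), 0.0)   # orthogonal
assert np.isclose(np.dot(e1, e1), 1.0)   # unit length
```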
  • Vector Differential Calculus
    KamiWaAi: Interactive 3D Sketching with Java Based on the Cl(4,1) Conformal Model of Euclidean Space. Submitted (Feb. 28, 2003) to: Advances in Applied Clifford Algebras, http://redquimica.pquim.unam.mx/clifford_algebras/
    Eckhard M. S. Hitzer, Dept. of Mech. Engineering, Fukui Univ., Bunkyo 3-9-, 910-8507 Fukui, Japan. Email: [email protected], homepage: http://sinai.mech.fukui-u.ac.jp/
    Abstract: This paper introduces the new interactive Java sketching software KamiWaAi, recently developed at the University of Fukui. Its graphical user interface enables the user, without any knowledge of either mathematics or computer science, to do full three dimensional "drawings" on the screen. The resulting constructions can be reshaped interactively by dragging their points over the screen. The programming approach is new. KamiWaAi implements geometric objects like points, lines, circles, spheres, etc. directly as software objects (Java classes) of the same name. These software objects are geometric entities mathematically defined and manipulated in a conformal geometric algebra, combining the five dimensions of origin, three space and infinity. Simple geometric products in this algebra represent geometric unions, intersections, arbitrary rotations and translations, projections, distance, etc. To ease the coordinate free and matrix free implementation of this fundamental geometric product, a new algebraic three level approach is presented. Finally, details about the Java classes of the new GeometricAlgebra software package and their associated methods are given. KamiWaAi is available for free internet download.
    Key Words: Geometric Algebra, Conformal Geometric Algebra, Geometric Calculus Software, GeometricAlgebra Java Package, Interactive 3D Software, Geometric Objects
    1. Introduction. The name "KamiWaAi" of this new software is the Romanized form of the expression in verse sixteen of chapter four, as found in the Japanese translation of the first Letter of the Apostle John, which is part of the New Testament, i.e.
  • Linearizing the Word Problem in (Some) Free Fields (arXiv:1701.03378v1 [math.RA], 12 Jan 2017)
    Linearizing the Word Problem in (some) Free Fields. Konrad Schrempf*. January 13, 2017.
    Abstract: We describe a solution of the word problem in free fields (coming from non-commutative polynomials over a commutative field) using elementary linear algebra, provided that the elements are given by minimal linear representations. It relies on the normal form of Cohn and Reutenauer and can be used more generally to (positively) test rational identities. Moreover we provide a construction of minimal linear representations for the inverse of non-zero elements.
    Keywords: word problem, minimal linear representation, linearization, realization, admissible linear system, rational series. AMS Classification: 16K40, 16S10, 03B25, 15A22.
    Introduction. Free (skew) fields arise as universal objects when it comes to embedding the ring of non-commutative polynomials, that is, polynomials in (a finite number of) non-commuting variables, into a skew field [Coh85, Chapter 7]. The notion of "free fields" goes back to Amitsur [Ami66]. A brief introduction can be found in [Coh03, Section 9.3]; for details we refer to [Coh95, Section 6.4]. In the present paper we restrict the setting to commutative ground fields, as a special case. See also [Rob84]. In [CR94], Cohn and Reutenauer introduced a normal form for elements in free fields in order to extend results from the theory of formal languages. In particular they characterize minimality of linear representations in terms of linear independence of
    *Contact: [email protected], Department of Discrete Mathematics (Noncommutative Structures), Graz University of Technology.
  • Unsupervised and Supervised Principal Component Analysis: Tutorial
    Unsupervised and Supervised Principal Component Analysis: Tutorial. Benyamin Ghojogh ([email protected]) and Mark Crowley ([email protected]), Department of Electrical and Computer Engineering, Machine Learning Laboratory, University of Waterloo, Waterloo, ON, Canada.
    Abstract: This is a detailed tutorial paper which explains the Principal Component Analysis (PCA), Supervised PCA (SPCA), kernel PCA, and kernel SPCA. We start with projection, PCA with eigendecomposition, PCA with one and multiple projection directions, properties of the projection matrix, reconstruction error minimization, and we connect to auto-encoder. Then, PCA with singular value decomposition, dual PCA, and kernel PCA are covered. SPCA using both scoring and the Hilbert-Schmidt independence criterion is explained. Kernel SPCA using both direct and dual approaches is then introduced.
    … subspace and manifold learning (Friedman et al., 2009). This method, which is also used for feature extraction (Ghojogh et al., 2019b), was first proposed by (Pearson, 1901). In order to learn a nonlinear sub-manifold, kernel PCA was proposed, first by (Schölkopf et al., 1997; 1998), which maps the data to a high dimensional feature space hoping to fall on a linear manifold in that space. The PCA and kernel PCA are unsupervised methods for subspace learning. To use the class labels in PCA, supervised PCA was proposed (Bair et al., 2006) which scores the features of X and reduces the features before applying PCA. This type of SPCA was mostly used in bioinformatics (Ma & Dai, 2011).
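A compact NumPy sketch of unsupervised PCA via eigendecomposition of the sample covariance, the first method the tutorial covers; pca here is an illustrative reduction to essentials, not the authors' code:

```python
import numpy as np

def pca(X, k):
    """PCA by eigendecomposition: project centered data onto the k
    eigenvectors of the covariance matrix with largest eigenvalues."""
    Xc = X - X.mean(axis=0)              # center the data
    C = Xc.T @ Xc / (len(X) - 1)         # sample covariance matrix
    w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
    U = V[:, ::-1][:, :k]                # top-k principal directions
    return Xc @ U, U                     # scores and loadings

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.3]])
scores, U = pca(X, 1)
print(scores.shape, U.shape)             # (200, 1) (2, 1)
```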
  • Left Eigenvector of a Stochastic Matrix
    Advances in Pure Mathematics, 2011, 1, 105-117. doi:10.4236/apm.2011.14023. Published online July 2011 (http://www.SciRP.org/journal/apm).
    Left Eigenvector of a Stochastic Matrix. Sylvain Lavallée, Département de mathématiques, Université du Québec à Montréal, Montréal, Canada. E-mail: [email protected]. Received January 7, 2011; revised June 7, 2011; accepted June 15, 2011.
    Abstract: We determine the left eigenvector of a stochastic matrix M associated to the eigenvalue 1 in the commutative and the noncommutative cases. In the commutative case, we see that the eigenvector associated to the eigenvalue 0 is (N_1, …, N_n), where N_i is the ith principal minor of N = M − I_n, where I_n is the identity matrix of dimension n. In the noncommutative case, this eigenvector is (P_1^{-1}, …, P_n^{-1}), where P_i is the sum, in the a_ij, of the corresponding labels of nonempty paths starting from i and not passing through i in the complete directed graph associated to M.
    Keywords: Generic Stochastic Noncommutative Matrix, Commutative Matrix, Left Eigenvector Associated to the Eigenvalue 1, Skew Field, Automata.
    1. Introduction. It is well known that 1 is one of the eigenvalues of a stochastic matrix (i.e. the sum of the elements of each row is equal to 1) and its associated right eigenvector is the vector (1, 1, …, 1)^T. … stochastic free field and that the vector (P_1^{-1}, …, P_n^{-1}) is fixed by our matrix; moreover, the sum of the P_i^{-1} is equal to 1, hence they form a kind of noncommutative limiting probability. These results have been proved in [1] but the proof …
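The commutative-case claim is easy to check numerically. A NumPy sketch, assuming a concrete 3×3 row-stochastic M of my choosing, comparing the left eigenvector for eigenvalue 1 with the normalized principal minors of N = M − I_n:

```python
import numpy as np

M = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])   # row-stochastic: each row sums to 1

# left eigenvector for eigenvalue 1: solve v M = v, i.e. M^T v = v
w, V = np.linalg.eig(M.T)
v = np.real(V[:, np.argmin(np.abs(w - 1))])
v = v / v.sum()                    # normalize to a probability vector

# principal minors of N = M - I (delete row and column i, take det)
N = M - np.eye(3)
minors = np.array([np.linalg.det(np.delete(np.delete(N, i, 0), i, 1))
                   for i in range(3)])

print(v)                           # [0.2326 0.3721 0.3953] approximately
print(minors / minors.sum())       # identical up to floating-point error
```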
  • Decomposition of Singular Matrices into Idempotents
    Decomposition of Singular Matrices into Idempotents. Adel Alahmadi, S. K. Jain, and André Leroy.
    Abstract: In this paper we provide concrete constructions of idempotents to represent typical singular matrices over a given ring as a product of idempotents and apply these factorizations for proving our main results. We generalize works due to Laffey ([12]) and Rao ([3]) to the noncommutative setting and fill in the gaps in the original proof of Rao's main theorems (cf. [3], Theorems 5 and 7, and [4]). We also consider singular matrices over Bézout domains as to when such a matrix is a product of idempotent matrices.
    1. Introduction and definitions. It was shown by Howie [10] that every mapping from a finite set X to itself with image of cardinality ≤ card X − 1 is a product of idempotent mappings. Erdős [7] showed that every singular square matrix over a field can be expressed as a product of idempotent matrices, and this was generalized by several authors to certain classes of rings, in particular, to division rings and Euclidean domains [12]. Turning to singular elements let us mention two results: Rao [3] characterized, via continued fractions, singular matrices over a commutative PID that can be decomposed as a product of idempotent matrices, and Hannah-O'Meara [9] showed, among other results, that for a right self-injective regular ring R, an element a is a product of idempotents if and only if R r.ann(a) = l.ann(a) R = R(1 − a)R. The purpose of this paper is to provide concrete constructions of idempotents to represent typical singular matrices over a given ring as a product of idempotents and to apply these factorizations for proving our main results.
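A tiny numerical instance of Erdős' theorem quoted above: the singular matrix [[0,1],[0,0]] written as a product of two idempotents. The particular factors are a worked example of mine, not taken from the paper:

```python
import numpy as np

# a singular 2x2 matrix and two idempotent factors for it
A  = np.array([[0, 1], [0, 0]])
E1 = np.array([[1, 1], [0, 0]])   # idempotent: E1 @ E1 == E1
E2 = np.array([[0, 0], [0, 1]])   # idempotent: E2 @ E2 == E2

assert np.array_equal(E1 @ E1, E1) and np.array_equal(E2 @ E2, E2)
assert np.array_equal(E1 @ E2, A)            # A = E1 E2
print(np.linalg.det(A.astype(float)))        # 0.0: A is singular, as required
```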
  • Basics of Linear Algebra
    Basics of Linear Algebra. Jos and Sophia.
    Vectors. Linear algebra definition: a list of numbers with a magnitude and a direction. Magnitude: for a = [4, 3], |a| = sqrt(4² + 3²) = 5. Direction: the angle the vector points. Computer science definition: a list of numbers. Example: Heights = [60, 68, 72, 67].
    Dot product of vectors. Formula: a · b = |a| × |b| × cos(θ). Definition: multiplication of two vectors which results in a scalar value. In the diagram: |a| is the magnitude (length) of vector a, |b| is the magnitude of vector b, and θ is the angle between a and b.
    Matrix. Definition: a) a matrix is an arrangement of numbers into rows and columns; b) a matrix is an m × n array of scalars from a given field F. The individual values in the matrix are called entries. Matrix dimensions: the number of rows and columns of the matrix, in that order.
    Multiplication of matrices. Result matrix dimensions, notation (row, column): the columns of the 1st matrix must equal the rows of the 2nd matrix; the result matrix has the number of rows of the 1st matrix and the number of columns of the 2nd matrix. Example: (1, 2, 3) · (7, 9, 11) = 1×7 + 2×9 + 3×11 = 58. Ex. 3×4 · 5×3: the dot product does not work. Ex. 5×3 · 3×4: the dot product does work; the result is 5×4.
    Dot product application: ray tracing program. Quickly create an image with lower quality, then a "refinement rendering pass" occurs, removing the jagged edges; the dot product is used to calculate the intersection between a ray and a sphere and to measure the length to the intersection points. Application: forward propagation, where input matrix × weighted matrix = prediction matrix. http://immersivemath.com/ila/ch03_dotproduct/ch03.html#fig_dp_ray_tracer
    Projections. One important use of dot products is in projections.
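The slide computations, reproduced in NumPy so they can be run (a sketch of mine, not part of the slides):

```python
import numpy as np

# magnitude of a = [4, 3]: sqrt(4^2 + 3^2) = 5
a = np.array([4.0, 3.0])
print(np.linalg.norm(a))                 # 5.0

# dot product from the slides: (1,2,3) . (7,9,11) = 1*7 + 2*9 + 3*11 = 58
print(np.dot([1, 2, 3], [7, 9, 11]))     # 58

# matrix multiplication dimensions: a 5x3 times a 3x4 gives a 5x4 result;
# a 3x4 times a 5x3 is undefined because the inner dimensions differ
A = np.ones((5, 3))
B = np.ones((3, 4))
print((A @ B).shape)                     # (5, 4)
```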
  • A Guided Tour to the Plane-Based Geometric Algebra PGA
    A Guided Tour to the Plane-Based Geometric Algebra PGA. Leo Dorst, University of Amsterdam. Version 1.15, July 6, 2020.
    Planes are the primitive elements for the constructions of objects and operators in Euclidean geometry. Triangulated meshes are built from them, and reflections in multiple planes are a mathematically pure way to construct Euclidean motions. A geometric algebra based on planes is therefore a natural choice to unify objects and operators for Euclidean geometry. The usual claims of 'completeness' of the GA approach lead us to hope that it might contain, in a single framework, all representations ever designed for Euclidean geometry, including normal vectors, directions as points at infinity, Plücker coordinates for lines, quaternions as 3D rotations around the origin, and dual quaternions for rigid body motions; and even spinors.
    This text provides a guided tour to this algebra of planes PGA. It indeed shows how all such computationally efficient methods are incorporated and related. We will see how the PGA elements naturally group into blocks of four coordinates in an implementation, and how this more complete understanding of the embedding suggests some handy choices to avoid extraneous computations. In the unified PGA framework, one never switches between efficient representations for subtasks, and this obviously saves any time spent on data conversions.
    Relative to other treatments of PGA, this text is rather light on the mathematics. Where you see careful derivations, they involve the aspects of orientation and magnitude. These features have been neglected by authors focussing on the mathematical beauty of the projective nature of the algebra.
  • Mathematics Study Guide
    Mathematics Study Guide. Matthew Chesnes. The London School of Economics. September 28, 2001.
    1 Arithmetic of N-Tuples
    • Vectors are specified by direction and length. The length of a vector is called its magnitude or "norm." For example, for x = (x1, x2), the norm of x is ‖x‖ = √(x1² + x2²).
    • Generally, for a vector x = (x1, x2, x3, …, xn), ‖x‖ = √(Σ_{i=1}^n xi²).
    • Vector order: consider two vectors x, y. Then x > y iff xi ≥ yi for all i and xi > yi for some i; x >> y iff xi > yi for all i.
    • Convex sets: a set is convex if whenever it contains x′ and x″, it also contains the line segment (1 − α)x′ + αx″.
    2 Vector Space Formulations in Economics
    • We expect a consumer to have a complete (preference) order over all consumption bundles x in his consumption set. If he prefers x to x′, we write x ≻ x′. If he's indifferent between x and x′, we write x ∼ x′. Finally, if he weakly prefers x to x′, we write x ⪰ x′.
    • The set {x ∈ X : x ⪰ x̂ ∀ x̂ ∈ X} is a convex set. It is all bundles of goods that make the consumer at least as well off as with his current bundle.
    3 Complex Numbers
    • Define complex numbers as ordered pairs such that the first element in the vector is the real part of the number and the second is complex. Thus, the real number −1 is denoted by (−1, 0). A complex number, 2 + 3i, can be expressed (2, 3).
    • Define multiplication on the complex numbers as Z · Z′ = (a, b) · (c, d) = (ac − bd, ad + bc).
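A NumPy sketch of two computations from the guide: the norm of an n-tuple and complex multiplication on ordered pairs; the names norm and cmul are illustrative, not from the guide:

```python
import numpy as np

def norm(x):
    """||x|| = sqrt of the sum of squared entries."""
    return np.sqrt(np.sum(np.asarray(x, dtype=float) ** 2))

def cmul(z, w):
    """Complex multiplication on ordered pairs:
    (a, b) * (c, d) = (a*c - b*d, a*d + b*c)."""
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

print(norm([3, 4]))           # 5.0
print(cmul((2, 3), (2, 3)))   # (-5, 12), i.e. (2 + 3i)^2 = -5 + 12i
```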