Computer Vision I - Projective Geometry and a Camera

Carsten Rother, 22/12/2014
Computer Vision I: Image Formation Process

Roadmap for the next five lectures
• Appearance-based matching (sec. 4.1)
• How do we get an RGB image? (sec. 2.2-2.3)
  • The human eye
  • The camera and the image formation model
• Projective geometry - basics (sec. 2.1.1-2.1.4)
• Geometry of a single camera (sec. 2.1.5, 2.1.6)
  • Pinhole camera
  • Lens effects
• Geometry of two cameras (sec. 7.2)
• Robust geometry estimation for two cameras (sec. 6.1.4)
• Multi-view 3D reconstruction (sec. 7.3-7.4)

Some Basics
• Real coordinate space R², example: (2, 3)ᵀ
• Real coordinate space R³, example: (2, 3, 1)ᵀ
• Operations we need are:
  • scalar product: x·y = xᵀy = x₁y₁ + x₂y₂ + x₃y₃, where x = (x₁, x₂, x₃)ᵀ
  • cross/vector product: x × y = [x]× y, where [x]× = [[0, −x₃, x₂], [x₃, 0, −x₁], [−x₂, x₁, 0]]

Euclidean Space
• The Euclidean space R² or R³ has angles and distances defined
• Angle Θ between two vectors x and y (measured at the origin): cos Θ = xᵀy / (‖x‖ ‖y‖)
• Length of the vector x: ‖x‖ = √(xᵀx)
• Distance between x and y: ‖x − y‖

Projective Space
• A 2D point in real coordinate space, e.g. (1, 2)ᵀ ∈ R², has 2 DoF (degrees of freedom)
• A 3D point in real coordinate space, e.g. (1, 2, 3)ᵀ ∈ R³, has 3 DoF
• Definition: a point in 2-dimensional projective space P² is defined as p = (x, y, w)ᵀ ∈ P², such that all vectors (kx, ky, kw)ᵀ for all k ≠ 0 define the same point p in P² (equivalence classes)
• Sometimes written as: (1, 2, 1)ᵀ ~ (2, 4, 2)ᵀ
• We simply write: (1, 2, 1)ᵀ = (2, 4, 2)ᵀ

Projective Space - visualization
A point in P² is a ray in R³ that goes through the origin: all rays through (0, 0, 0) define points in P². (Figure: a ray (x, y, w)ᵀ through the origin pierces the plane w = 1; the plane w = 0 is drawn separately.)

Projective Space
• All points in P² are given by R³ \ {(0, 0, 0)ᵀ}
• A point (x, y, w)ᵀ ∈ P² has 2 DoF (3 elements, but the norm of the vector can be set to 1)

From R² to P² and back
• From R² to P²: p̃ = (x, y)ᵀ ∈ R² becomes p = (x, y, 1)ᵀ ∈ P²
  - p̃ is a point in inhomogeneous coordinates, p a point in homogeneous coordinates
  - we sometimes write p̃ for inhomogeneous coordinates
• From P² to R²: (x, y, w)ᵀ ∈ P² becomes (x/w, y/w)ᵀ ∈ R² for w ≠ 0 (what does it mean if w = 0?)
• We can do this transformation with all primitives (a small code sketch of the point conversion follows below)

From R² to P² and back: Example
p̃ = (3, 2)ᵀ ∈ R² becomes p = (3, 2, 1)ᵀ = (4.5, 3, 1.5)ᵀ = (6, 4, 2)ᵀ ∈ P², and back: p̃ = (6/2, 4/2)ᵀ = (3, 2)ᵀ ∈ R². (Figure: (6, 4, 2) and (3, 2, 1) lie on the same ray, which meets the plane w = 1, i.e. the space R², in (3, 2, 1).)

Things we would like to have and do
We look at these operations in R²/R³ and P²/P³.
Primitives:
• Points
• Lines
• Conics (quadrics in 3D)
• (Planes in 3D)
Operations with primitives:
• intersection
• tangent
Transformations:
• rotation
• translation
• projective
• ...

Why bother about P²?
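To make the R² ↔ P² conversion above concrete, here is a minimal NumPy sketch (not part of the original lecture; the helper names to_homogeneous and to_inhomogeneous are mine). It lifts an inhomogeneous point to homogeneous coordinates, maps it back by dividing by w, and treats w = 0 as a point at infinity:

```python
import numpy as np

def to_homogeneous(p):
    """Lift an inhomogeneous 2D point (x, y) to homogeneous coordinates (x, y, 1)."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def to_inhomogeneous(p):
    """Map a homogeneous point (x, y, w) back to R^2 by dividing by w.

    A point with w == 0 is an ideal point (point at infinity) and has no
    representation in R^2, so None is returned for it.
    """
    x, y, w = np.asarray(p, dtype=float)
    if np.isclose(w, 0.0):
        return None
    return np.array([x / w, y / w])

p = to_homogeneous([3.0, 2.0])            # -> [3. 2. 1.]
print(to_inhomogeneous(2.0 * p))          # scaling by k != 0 is the same point -> [3. 2.]
print(to_inhomogeneous([1.0, 2.0, 0.0]))  # a point at infinity -> None
```

Scaling the homogeneous 3-vector by any non-zero k leaves the recovered inhomogeneous point unchanged, which is exactly the equivalence class in the definition above.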
• All primitives, operations and transformations are defined in R² and in P²
• Advantage of P²:
  • Many transformations and operations can be written more compactly (e.g. as linear transformations)
  • We will introduce new special "primitives" that are useful when dealing with "parallelism"
(Figure: two parallel lines in R². This example will come later in detail; in P² they meet in a "point at infinity".)

Points at infinity
• Points with coordinates (x, y, 0)ᵀ are called ideal points or points at infinity
• (x, y, 0)ᵀ ∈ P² is not defined in R², since w = 0
(Figure: in the ray picture these are the rays lying in the plane w = 0, which never pierce the plane w = 1.)

Lines in R²
• For lines in the coordinate space R² we can write l = (n_x, n_y, d)ᵀ, with n = (n_x, n_y)ᵀ the normal vector and ‖n‖ = 1
• A line has 2 DoF
• A point (x, y) lies on l if: n_x x + n_y y + d = 0
• The normal can also be encoded with an angle θ: n = (cos θ, sin θ)ᵀ

Lines in P²
• Points in P²: x = (x, y, w)ᵀ
• Lines in P²: l = (a, b, c)ᵀ (again an equivalence class: (a, b, c) = (ka, kb, kc) for all k ≠ 0), hence also 2 DoF
• All points (x, y, w)ᵀ on the line (a, b, c)ᵀ satisfy: ax + by + cw = 0
  (this is the equation of a plane in R³ with normal (a, b, c)ᵀ going through (0, 0, 0))

Converting lines between P² and R²
• From P² to R²: l = (ka, kb, kc)ᵀ; choose k such that ‖(ka, kb)‖ = 1
• From R² to P²: l = (n_x, n_y, d)ᵀ is already a line in P²

Line at infinity
• There is a "special" line, called the line at infinity: (0, 0, 1)ᵀ
• All points at infinity (x, y, 0)ᵀ lie on the line at infinity (0, 0, 1)ᵀ: x·0 + y·0 + 0·1 = 0
(Figure: in the ray picture, the line at infinity corresponds to the plane w = 0.)

A line defined by two points in P²
• The line through two points x and x′ is given as: l = x × x′ (a small code sketch follows below)
• Proof: xᵀ(x × x′) = x′ᵀ(x × x′) = 0, since the cross product is orthogonal to both vectors. This is the same as xᵀl = x′ᵀl = 0. Hence the line l goes through the points x and x′.

The intersection of two lines in P²
• The intersection of two lines l and l′ is the point x = l × l′
• Proof: lᵀ(l × l′) = l′ᵀ(l × l′) = 0, since the cross product is orthogonal to both vectors. This is the same as lᵀx = l′ᵀx = 0. Hence the point x lies on the lines l and l′.
• Note that this "theorem" and its proof are very similar to the previous ones; we only interchanged the meaning of points and lines.

Duality of points and lines
• Note lᵀx = xᵀl = 0 (x and l are "interchangeable")
• Duality theorem: to any theorem of 2D projective geometry there corresponds a dual theorem, which may be derived by interchanging the roles of points and lines in the original theorem.
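The two cross-product constructions (line through two points, intersection of two lines) are easy to check numerically. The following is a small NumPy sketch, not lecture code; line_through and intersect are hypothetical helper names, and the example lines and points are mine:

```python
import numpy as np

def line_through(x1, x2):
    """Line through two homogeneous points: l = x1 x x2 (cross product)."""
    return np.cross(x1, x2)

def intersect(l1, l2):
    """Intersection point of two homogeneous lines: x = l1 x l2 (cross product)."""
    return np.cross(l1, l2)

x1 = np.array([0.0, 0.0, 1.0])    # the origin of R^2, in homogeneous coordinates
x2 = np.array([1.0, 1.0, 1.0])    # the point (1, 1)
l = line_through(x1, x2)
print(l)                          # -> [-1.  1.  0.], i.e. the line y = x (up to scale)
print(x1 @ l, x2 @ l)             # both points satisfy x^T l = 0  -> 0.0 0.0

l2 = np.array([1.0, 0.0, -2.0])   # the line x = 2
print(intersect(l, l2))           # -> [-2. -2. -1.], i.e. the point (2, 2) after dividing by w
```

By duality the same np.cross call serves both constructions; only the interpretation of the 3-vectors as points or lines changes.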
• The intersection of two lines l and l′ is the point x = l × l′.
• The line through two points x and x′ is the line l = x × x′.

Parallel lines meet in a point at infinity
• Example: l = (1, 0, 1)ᵀ (the line x = −1) and l′ = (2, 0, 1)ᵀ (the line x = −1/2) are parallel in R² (the plane w = 1) and have no intersection there. (Figure: the two vertical lines pass through (−1, 0, 1), (−1, 1, 1) and (−1/2, 0, 1), (−1/2, 1, 1) respectively.)
• Their intersection in P²: l × l′ = [l]× l′ = [[0, −1, 0], [1, 0, −1], [0, 1, 0]] (2, 0, 1)ᵀ = (0, 1, 0)ᵀ, a point at infinity.

2D conic ("Kegelschnitt")
• Conics are shapes that arise when a plane intersects a cone
• In compact form: xᵀC x = 0, where C = [[a, b/2, d/2], [b/2, c, e/2], [d/2, e/2, f]]
• This can be written in inhomogeneous coordinates as: ax² + bxy + cy² + dx + ey + f = 0, where x̃ = (x, y)ᵀ
• C has 5 DoF, since it is unique only up to scale: xᵀC x = k xᵀC x = xᵀ(kC) x = 0
• Properties: l is tangent to C at a point x if l = Cx

Example: 2D conic
• A circle of radius r: x² + y² − r² = 0
• A parabola: −x² + y = 0

Define a conic with five points
• Given 5 points (xᵢ, yᵢ, 1), each one gives a linear constraint on the conic parameters: (xᵢ², xᵢyᵢ, yᵢ², xᵢ, yᵢ, 1) (a, b, c, d, e, f)ᵀ = 0
• Stacking the 5 constraints gives a 5 × 6 matrix. Its 1D null-space gives the conic up to scale.
• Compute the null-space with Gaussian elimination or SVD (to come later); a small sketch follows below.

2D Transformations
2D transformations in R², definitions:
• Euclidean: translation + rotation
• Similarity: Euclidean + scaling
• Affine: similarity + shearing
• Projective: arbitrary linear transform in homogeneous coordinates

2D transformations of points
• 2D transformations in homogeneous coordinates:
  (x′, y′, w′)ᵀ = [[a, b, d], [e, f, h], [i, j, l]] (x, y, 1)ᵀ, where the 3 × 3 matrix is the transformation matrix
• Example, translation:
  (x′, y′, 1)ᵀ = [[1, 0, t_x], [0, 1, t_y], [0, 0, 1]] (x, y, 1)ᵀ in homogeneous coordinates, which is (x′, y′)ᵀ = (x, y)ᵀ + (t_x, t_y)ᵀ in inhomogeneous coordinates
• Advantage of homogeneous coordinates (going into P²): translation becomes a matrix multiplication

2D transformations of points (of a square)
• (Table of the transformation hierarchy applied to a square, from Hartley & Zisserman, page 44; it also shows two special points on the line at infinity.)
• Here r_ij is a 2 × 2 rotation matrix with 1 DoF, which can be written as [[cos Θ, −sin Θ], [sin Θ, cos Θ]]

2D transformations of lines and conics
All points move: x′ = Hx. Then:
1) a line (defined by points) moves: l′ = (H⁻¹)ᵀ l
2) a conic (defined by points) moves: C′ = (H⁻¹)ᵀ C H⁻¹
Proof of 1): assume x₁ and x₂ lie on l, i.e. x₁ᵀl = x₂ᵀl = 0. Then (Hxᵢ)ᵀ(H⁻¹)ᵀl = xᵢᵀHᵀH⁻ᵀl = xᵢᵀl = 0, so the transformed points lie on l′ = (H⁻¹)ᵀ l. (Numerical checks for both rules are sketched below.)
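As a sketch of the five-point construction above (not lecture code; the example points are mine, chosen on the unit circle), each point contributes one row (x², xy, y², x, y, 1) and the conic parameters are read off the null-space of the stacked 5 × 6 matrix via SVD:

```python
import numpy as np

def conic_from_points(pts):
    """Conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 through five points.

    Each point (x, y) contributes one row (x^2, x*y, y^2, x, y, 1); the
    parameter vector (a, b, c, d, e, f) spans the 1D null-space of the
    stacked 5x6 matrix, taken here as the last right singular vector.
    """
    A = np.array([[x * x, x * y, y * y, x, y, 1.0] for x, y in pts])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

def conic_matrix(params):
    """Arrange (a, b, c, d, e, f) into the symmetric 3x3 matrix C."""
    a, b, c, d, e, f = params
    return np.array([[a,     b / 2, d / 2],
                     [b / 2, c,     e / 2],
                     [d / 2, e / 2, f    ]])

# Five points on the unit circle x^2 + y^2 - 1 = 0 (chosen for illustration)
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0),
       (np.sqrt(0.5), np.sqrt(0.5))]
C = conic_matrix(conic_from_points(pts))

x = np.array([-np.sqrt(0.5), np.sqrt(0.5), 1.0])   # a sixth circle point, homogeneous
print(np.isclose(x @ C @ x, 0.0))                   # -> True: it satisfies x^T C x = 0
```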
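Finally, the transformation rules x′ = Hx, l′ = (H⁻¹)ᵀ l and C′ = (H⁻¹)ᵀ C H⁻¹ can be verified numerically: incidence (xᵀl = 0 and xᵀC x = 0) must be preserved. A minimal sketch, again with NumPy and with an H, line, and conic chosen by me for illustration:

```python
import numpy as np

theta = np.deg2rad(30.0)
H = np.array([[np.cos(theta), -np.sin(theta), 2.0],   # rotation by 30 degrees ...
              [np.sin(theta),  np.cos(theta), 1.0],   # ... plus a translation (2, 1)
              [0.0,            0.0,           1.0]])

x = np.array([1.0, 1.0, 1.0])     # homogeneous point (1, 1)
l = np.array([-1.0, 1.0, 0.0])    # the line y = x, which passes through x
C = np.diag([1.0, 1.0, -2.0])     # the circle x^2 + y^2 - 2 = 0, which passes through x

H_inv = np.linalg.inv(H)
x_new = H @ x                     # points move with H
l_new = H_inv.T @ l               # lines move with (H^-1)^T
C_new = H_inv.T @ C @ H_inv       # conics move with (H^-1)^T C H^-1

print(np.isclose(l_new @ x_new, 0.0))          # incidence x'^T l' = 0 is preserved -> True
print(np.isclose(x_new @ C_new @ x_new, 0.0))  # x'^T C' x' = 0 is preserved -> True
```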