Fundamentals of Media Processing

Lecturers: 池畑 諭 (Prof. IKEHATA Satoshi), 児玉 和也 (Prof. KODAMA Kazuya)
Support: 佐藤 真一 (Prof. SATO Shinichi), 孟 洋 (Prof. MO Hiroshi)

Course Overview (15 classes in total)
• Classes 1–10: Machine Learning, by Prof. Satoshi Ikehata
• Classes 11–15: Signal Processing, by Prof. Kazuya Kodama
Grading will be based on the final report.

Textbook: "Deep Learning", Chapters 1–9 (out of 20) — an introduction to a broad range of topics in deep learning, covering mathematical and conceptual background, deep learning techniques used in industry, and research perspectives.
• Due to my background, I will mainly talk about images.
• I will introduce some applications beyond this book.

Schedule
• 10/16 (Today): Introduction — Chap. 1
Basics of Machine Learning (maybe for beginners):
• 10/23: Basic mathematics (1) (linear algebra, probability, numerical computation) — Chap. 2, 3, 4
• 10/30: Basic mathematics (2) (linear algebra, probability, numerical computation) — Chap. 2, 3, 4
• 11/6: Machine Learning Basics (1) — Chap. 5
• 11/13: Machine Learning Basics (2) — Chap. 5
Basics of Deep Learning:
• 11/20: Deep Feedforward Networks — Chap. 6
• 11/27: Regularization and Deep Learning — Chap. 7
• 12/4: Optimization for Training Deep Models — Chap. 8
CNN and its Application:
• 12/11: Convolutional Neural Networks and Its Application (1) — Chap. 9 and more
• 12/18: Convolutional Neural Networks and Its Application (2) — Chap. 9 and more

Linear Algebra

Scalars, Vectors, Matrices and Tensors
• Scalar: $a = 1$
• Vector (1-D): $\boldsymbol{a} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} = (0, 1)^T$
• Matrix (2-D): $A = \begin{pmatrix} 0 & 1 \\ 2 & 3 \end{pmatrix} \in \mathbb{R}^{2 \times 2}$, $A^T = \begin{pmatrix} 0 & 2 \\ 1 & 3 \end{pmatrix}$, with entries $A_{1,1} = 0$, $A_{1,2} = 1$, $A_{2,1} = 2$, $A_{2,2} = 3$
• Tensor (N-D): $A_{i,j,k,\ldots}$

Multiplying Matrices and Vectors
• $\boldsymbol{x}^T \boldsymbol{y} = \boldsymbol{y}^T \boldsymbol{x}$
• $A(B + C) = AB + AC$
• $(AB)^T = B^T A^T$
• $\boldsymbol{x} \cdot \boldsymbol{x} = \boldsymbol{x}^T \boldsymbol{x} = \|\boldsymbol{x}\|_2^2$
• Unit vector: $\|\boldsymbol{x}\|_2 = 1$
• Orthogonal vectors: $\boldsymbol{x}^T \boldsymbol{y} = 0$
• Symmetric matrix: $A = A^T$
• Inverse matrix: $A A^{-1} = I$
• Orthogonal matrix: $A^T A = A A^T = I$
• Trace operator: $\operatorname{Tr}(A) = \sum_i A_{i,i}$
• L1 norm: $\|\boldsymbol{x}\|_1 = \sum_i |x_i|$
• L2 norm: $\|\boldsymbol{x}\|_2 = \left(\sum_i |x_i|^2\right)^{1/2}$
• L∞ norm: $\|\boldsymbol{x}\|_\infty = \max_i |x_i|$
• L0 "norm": $\|\boldsymbol{x}\|_0$ = number of nonzero entries
• Frobenius norm: $\|A\|_F = \sqrt{\sum_{i,j} A_{i,j}^2} = \sqrt{\operatorname{Tr}(A A^T)}$

Matrix Decomposition
◼ Decomposing a matrix shows us information about its functional properties that is not obvious from the representation of the matrix itself.
⚫ Eigendecomposition
⚫ Singular value decomposition
⚫ LU decomposition
⚫ QR decomposition
⚫ Jordan decomposition
⚫ Cholesky decomposition
⚫ Schur decomposition
⚫ Rank factorization
⚫ And more…
◼ These differ in the matrices they are applicable to, the type of decomposition, and their applications.

Eigendecomposition
⚫ Eigenvector and eigenvalue: $A\boldsymbol{v} = \lambda\boldsymbol{v}$, where $\boldsymbol{v}$ is an eigenvector and $\lambda$ its eigenvalue.
⚫ Eigendecomposition of a diagonalizable matrix: $A = V \operatorname{diag}(\boldsymbol{\lambda}) V^{-1}$, where the columns of $V$ are the eigenvectors and $\boldsymbol{\lambda}$ is the vector of eigenvalues.
⚫ Every real symmetric matrix has a real, orthogonal eigendecomposition $A = Q \Lambda Q^T$.
⚫ A matrix whose eigenvalues are all positive is positive definite; one whose eigenvalues are all positive or zero is positive semidefinite.
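The following is a minimal NumPy sketch of the decomposition above (not from the slides; the matrix $A$ is an arbitrary symmetric example). `np.linalg.eigh` is NumPy's routine specialized for symmetric input; `np.linalg.eig` handles general square matrices.

```python
import numpy as np

# Arbitrary symmetric matrix: eigh returns real eigenvalues and an
# orthogonal Q, so that A = Q diag(lambda) Q^T (the A = Q Λ Q^T form above).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, Q = np.linalg.eigh(A)

assert np.allclose(Q @ np.diag(lam) @ Q.T, A)   # reconstruction A = Q Λ Q^T
assert np.allclose(Q.T @ Q, np.eye(2))          # Q is orthogonal
print(lam)  # [1.38..., 3.61...] -> all positive, so A is positive definite
```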
What Eigenvalues and Eigenvectors Are
◼ Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word "eigen", meaning "proper" or "characteristic". Originally used to study the principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.
◼ In the shear-mapping figure (see the link below), the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of the shear mapping because its direction does not change, and since its length is also unchanged, its eigenvalue is 1.
https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors

Calculation of the Eigendecomposition
⚫ Simply solve a linear equation: $A\boldsymbol{v} = \lambda\boldsymbol{v} \;\Rightarrow\; (A - \lambda I)\boldsymbol{v} = 0$.
⚫ $(A - \lambda I)$ must not have an inverse, otherwise the only solution is $\boldsymbol{v} = 0$:
$$\det(A - \lambda I) = 0 \;\Rightarrow\; p(\lambda) = (\lambda - \lambda_1)^{n_1} (\lambda - \lambda_2)^{n_2} (\lambda - \lambda_3)^{n_3} \cdots$$
⚫ Once the $\lambda$s are calculated, the eigenvectors are obtained by solving $(A - \lambda I)\boldsymbol{v} = 0$.

Singular Value Decomposition (SVD)
⚫ Similar to the eigendecomposition, but the matrix need not be square. The singular value decomposition of a real matrix $A$ is $A = U D V^T$, with singular vectors and values satisfying $A\boldsymbol{v} = \sigma\boldsymbol{u}$ and $A^T\boldsymbol{u} = \sigma\boldsymbol{v}$.
⚫ The left-singular vectors of $A$ are the eigenvectors of $AA^T$.
⚫ The right-singular vectors of $A$ are the eigenvectors of $A^TA$.
⚫ The SVD is useful for inverting a non-square matrix (the pseudoinverse), for rank approximation (e.g., finding the nearest matrix of rank k), and for solving homogeneous systems.

Solving Linear Systems
◼ Number of equations = number of unknowns (determined):
$$a + 2b + 3c = 3, \quad 2a - b + c = 2, \quad 5a + b - 2c = 1$$
$$\underbrace{\begin{pmatrix} 1 & 2 & 3 \\ 2 & -1 & 1 \\ 5 & 1 & -2 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} a \\ b \\ c \end{pmatrix}}_{\boldsymbol{x}} = \underbrace{\begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix}}_{\boldsymbol{y}} \;\Rightarrow\; \boldsymbol{x} = A^{-1}\boldsymbol{y}$$
◼ Number of equations < number of unknowns (underdetermined):
$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & -1 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix} \;\Rightarrow\; \boldsymbol{x} = ?$$
◼ Number of equations > number of unknowns (overdetermined), adding $8a - b + 6c = 0$:
$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & -1 & 1 \\ 5 & 1 & -2 \\ 8 & -1 & 6 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \\ 1 \\ 0 \end{pmatrix} \;\Rightarrow\; \boldsymbol{x} = ?$$

Moore–Penrose Pseudoinverse
◼ The SVD gives the pseudoinverse of $A$ as $A^+ = V D^+ U^T$, where $A = U D V^T$ and $D^+_{ii} = 1 / D_{ii}$ for the nonzero entries.
◼ The solution to any of these linear systems is given by $\boldsymbol{x} = A^+ \boldsymbol{y}$.
◼ If the linear system is:
⚫ Determined: this is the same as the inverse.
⚫ Underdetermined: this gives us the solution with the smallest norm $\|\boldsymbol{x}\|$.
⚫ Overdetermined: this gives us the solution with the smallest error $\|A\boldsymbol{x} - \boldsymbol{y}\|$.

The Homogeneous System
◼ We may want to solve the homogeneous system, defined as $A\boldsymbol{x} = 0$.
◼ If $\det(A^TA) = 0$, nontrivial numerical solutions can be found with the SVD or the eigendecomposition:
$$A\boldsymbol{x} = 0 \;\Rightarrow\; A^TA\boldsymbol{x} = 0, \qquad A^TA\boldsymbol{v} = \lambda\boldsymbol{v}$$
⚫ Take the eigenvector corresponding to the smallest eigenvalue (≈ 0),
⚫ or, equivalently, the column of $V$ corresponding to the smallest singular value in $A = U D V^T$.

Example: Structure-from-Motion
https://www.youtube.com/watch?v=i7ierVkXYa8
• Relative camera poses can be computed from point correspondences between images.
• Given the camera poses $(R, T)$ and the camera intrinsics $K$, the 3D point $X$ is recovered by triangulation:
$$\boldsymbol{x}_1 = K_1 [R_1 \mid T_1] \tilde{X}, \qquad \boldsymbol{x}_2 = K_2 [R_2 \mid T_2] \tilde{X} \;\Rightarrow\; AX = 0 \quad \text{(a homogeneous system)}$$
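To make the pseudoinverse and homogeneous-system recipes concrete, here is a minimal NumPy sketch (not from the slides) using the example system above, where the fourth equation $8a - b + 6c = 0$ makes it overdetermined:

```python
import numpy as np

A = np.array([[1.,  2.,  3.],
              [2., -1.,  1.],
              [5.,  1., -2.],
              [8., -1.,  6.]])
y = np.array([3., 2., 1., 0.])

# Determined (3x3): the pseudoinverse coincides with the ordinary inverse.
x_det = np.linalg.pinv(A[:3]) @ y[:3]
assert np.allclose(x_det, np.linalg.solve(A[:3], y[:3]))

# Underdetermined (2 equations, 3 unknowns): x = A+ y is the
# minimum-norm solution among the infinitely many solutions.
x_under = np.linalg.pinv(A[:2]) @ y[:2]

# Overdetermined (4 equations, 3 unknowns): x = A+ y is the
# least-squares solution minimizing ||Ax - y||_2.
x_over = np.linalg.pinv(A) @ y

# Homogeneous system Ax = 0: the right-singular vector (row of V^T)
# with the smallest singular value minimizes ||Ax|| over unit vectors.
U, D, Vt = np.linalg.svd(A)
x0 = Vt[-1]
print(np.linalg.norm(A @ x0))  # exactly 0 only if A is rank-deficient
```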
Probability and Information Theory

Why Probability?
Most real problems are not deterministic. Is this a picture of a dog or a cat?

Three Possible Sources of Uncertainty
⚫ Inherent stochasticity in the system (e.g., a randomly shuffled deck of cards)
⚫ Incomplete observability (e.g., the Monty Hall problem)
⚫ Incomplete modeling (e.g., a robot that only sees a discretized space)
⚫ A random variable is a variable that can take on different values randomly, e.g., $x \in \mathrm{x}$.

Monty Hall Problem

Discrete Case: the Probability Mass Function $P(x)$
⚫ The domain of $P$ must be the set of all possible states of $\mathrm{x}$.
• $\forall x \in \mathrm{x},\; 0 \le P(x) \le 1$. An impossible event has probability 0, and no state can be less probable than that. Likewise, an event that is guaranteed to happen has probability 1, and no state can have a greater chance of occurring.
• $\sum_{x \in \mathrm{x}} P(x) = 1$. We refer to this property as being normalized. Without this property, we could obtain probabilities greater than one by computing the probability of one of many events occurring.
⚫ Example: for a fair die, $P(x) = 1/6$ for each outcome $x \in \{1, 2, 3, 4, 5, 6\}$.

Continuous Case: the Probability Density Function $p(x)$
⚫ The domain of $p$ must be the set of all possible states of $\mathrm{x}$.
• $\forall x \in \mathrm{x},\; p(x) \ge 0$. Note that we do not require $p(x) \le 1$.
• $\int p(x)\,dx = 1$.
• A density does not give the probability of a specific state directly: summing density values over individual points, e.g. $p(0.0001) + p(0.0002) + \cdots$, can easily exceed 100%. Probabilities come from integrating $p$ over a set.
⚫ Example: what is the probability that a value selected uniformly at random from $[0, 1]$ is greater than 0.5? (Answer: $\int_{0.5}^{1} 1\,dx = 0.5$.)

Marginal Probability
◼ The probability distribution over a subset of the variables:
⚫ Discrete: $\forall x \in \mathrm{x},\; P(x) = \sum_y P(x, y)$
⚫ Continuous: $p(x) = \int p(x, y)\,dy$

Table: statistics about the students
                 Male    Female
Tokyo             0.4       0.3
Outside Tokyo     0.1       0.2

Q. How often do students come from Tokyo? (Marginalize over sex: $0.4 + 0.3 = 0.7$.)

Conditional Probability
◼ The probability of an event, given that some other event has happened:
⚫ $P(y \mid x) = P(y, x) / P(x)$
⚫ $P(y, x) = P(y \mid x) P(x)$
◼ Example: the boy-and-girl problem. Mr. Jones has two children. One is a girl. What is the probability that the other is a boy?
• Each child is either male or female.
• Each child has the same chance of being male as of being female.
• The sex of each child is independent of the sex of the other.

The four equally likely possibilities are [Boy/Boy, Boy/Girl, Girl/Boy, Girl/Girl].
⚫ $P(y)$: the probability that "the other is a boy"
⚫ $P(x)$: the probability that "one is a girl"
⚫ $P(y \mid x)$: the probability that "the other is a boy" given that "one is a girl"
⚫ $P(y, x)$: the probability that "one is a girl and the other is a boy"
$$P(y \mid x) = \frac{P(y, x)}{P(x)} = \frac{2/4}{3/4} = \frac{2}{3}$$

Chain Rule of Probability
⚫ $P(x^{(1)}, \ldots, x^{(n)}) = P(x^{(1)}) \prod_{i=2}^{n} P(x^{(i)} \mid x^{(1)}, \ldots, x^{(i-1)})$
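As a sanity check on the conditional-probability example, and on the Monty Hall problem mentioned earlier, a short Python sketch (illustrative only, not part of the lecture):

```python
import random
from itertools import product

# Boy/girl problem: enumerate the four equally likely families.
families = list(product(["Boy", "Girl"], repeat=2))    # BB, BG, GB, GG
one_girl = [f for f in families if "Girl" in f]        # P(x)    = 3/4
girl_and_boy = [f for f in one_girl if "Boy" in f]     # P(y, x) = 2/4
print(len(girl_and_boy) / len(one_girl))               # P(y|x)  = 2/3

# Monty Hall: after the host opens a goat door, switching wins
# exactly when the first pick was wrong, i.e. with probability 2/3.
trials = 100_000
wins = sum(random.randrange(3) != random.randrange(3)  # car vs. first pick
           for _ in range(trials))
print(wins / trials)  # ~0.667
```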