Determinantal Point Processes in Randomized Numerical Linear Algebra


Michał Dereziński and Michael W. Mahoney

Michał Dereziński is a postdoctoral research scientist in the department of statistics at the University of California at Berkeley. His email address is mderezin@berkeley.edu.

Michael W. Mahoney is an associate professor in the department of statistics at the University of California at Berkeley and a research scientist at the International Computer Science Institute. His email address is mmahoney@stat.berkeley.edu.

A complete list of references is available in the technical report version of this paper: https://arxiv.org/pdf/2005.03185.pdf.

Communicated by Notices Associate Editor Reza Malek-Madani.

For permission to reprint this article, please contact: [email protected].

DOI: https://doi.org/10.1090/noti2202

1. Introduction

Matrix problems are central to much of applied mathematics, from traditional scientific computing and partial differential equations to statistics, machine learning, and artificial intelligence. Generalizations and variants of matrix problems are central to many other areas of mathematics, via more general transformations and algebraic structures, nonlinear optimization, infinite-dimensional operators, etc.

Randomized Numerical Linear Algebra (RandNLA) is an area which uses randomness, most notably random sampling and random projection methods, to develop improved algorithms for ubiquitous matrix problems. It began as a niche area in theoretical computer science about fifteen years ago [11], and since then the area has exploded. Much of the work in RandNLA has been propelled by recent developments in machine learning, artificial intelligence, and large-scale data science, and RandNLA both draws upon and contributes back to both pure and applied mathematics.

A seemingly different topic, but one which has a long history in pure and applied mathematics, is that of Determinantal Point Processes (DPPs). A DPP is a stochastic point process, the probability distribution of which is characterized by subdeterminants of some matrix. Such processes were first introduced to model the distribution of fermions at thermal equilibrium [19]. In the context of random matrix theory, DPPs emerged as the eigenvalue distribution for standard random matrix ensembles, and they are of interest in other areas of mathematics such as graph theory, combinatorics, and quantum mechanics [17]. More recently, DPPs have also attracted significant attention within machine learning and statistics as a tractable probabilistic model that is able to capture a balance between quality and diversity within data sets and that admits efficient algorithms for sampling, marginalization, conditioning, etc. [18]. This resulted in practical application of DPPs in experimental design, recommendation systems, stochastic optimization, and more.
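To make the determinantal structure concrete, here is a minimal numerical sketch (added for illustration; it is not code from the article). It enumerates an L-ensemble DPP over a tiny ground set, where the probability of selecting a subset 푆 is proportional to the subdeterminant det(퐋_푆) of a positive semidefinite kernel matrix 퐋; the particular kernel, the NumPy implementation, and the brute-force enumeration are all illustrative assumptions.

```python
import itertools
import numpy as np

def l_ensemble_probabilities(L):
    """Enumerate P(S) proportional to det(L_S) over all subsets S of {0, ..., n-1}.

    Brute force is feasible only for tiny n; it illustrates the definition of a
    determinantal distribution, not an efficient DPP sampling algorithm.
    """
    n = L.shape[0]
    subsets, weights = [], []
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            det_S = np.linalg.det(L[np.ix_(S, S)]) if S else 1.0  # det of the empty minor is 1
            subsets.append(S)
            weights.append(det_S)
    weights = np.array(weights)
    # Sanity check: the normalizer of an L-ensemble equals det(L + I).
    assert np.isclose(weights.sum(), np.linalg.det(L + np.eye(n)))
    return subsets, weights / weights.sum()

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 5))
L = B @ B.T                                           # a generic 3 x 3 positive semidefinite kernel
subsets, probs = l_ensemble_probabilities(L)
sample = subsets[rng.choice(len(subsets), p=probs)]   # one exact draw from the DPP
print(sample, {str(S): round(p, 3) for S, p in zip(subsets, probs)})
```

Subsets whose minor of 퐋 is nearly singular receive low probability, which is the "diversity" property mentioned above; practical DPP samplers avoid the exponential enumeration used here (the article turns to algorithmic aspects in Section 5).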
Until very recently, DPPs have had little if any presence within RandNLA. However, recent work has uncovered deep connections between these two topics. The purpose of this article is to provide an overview of RandNLA, with an emphasis on discussing and highlighting these connections with DPPs. In particular, we will show how random sampling with a DPP leads to new kinds of unbiased estimators for the classical RandNLA task of least squares regression, enabling a more refined statistical and inferential understanding of RandNLA algorithms. We will also demonstrate that a DPP is, in some sense, an optimal randomized method for low-rank approximation, another ubiquitous matrix problem. Finally, we also discuss how a standard RandNLA technique, called leverage score sampling, can be derived as the marginal distribution of a DPP, as well as the algorithmic consequences this has for efficient DPP sampling.

We start (in Section 2) with a brief review of a prototypical RandNLA algorithm, focusing on the ubiquitous least squares problem and highlighting key aspects that will put in context the recent work we will review. In particular, we discuss the trade-offs between standard sampling methods from RandNLA, including uniform sampling, norm-squared sampling, and leverage score sampling. Next (in Section 3), we introduce the family of DPPs, highlighting some important subclasses and the basic properties that make them appealing for RandNLA. Then (in Section 4), we describe the fundamental connections between certain classes of DPPs and the classical RandNLA tasks of least squares regression and low-rank approximation, as well as the relationship between DPPs and the RandNLA method of leverage score sampling. Finally (in Section 5), we discuss the algorithmic aspects of both leverage scores and DPPs. We conclude (in Section 6) by briefly mentioning several other connections between DPPs and RandNLA, as well as a recently introduced class of random matrices, called determinant preserving, which has proven useful in this line of research.

2. RandNLA: Randomized Numerical Linear Algebra

Much of the early work in numerical linear algebra (NLA) focused on deterministic algorithms. However, Monte Carlo sampling approaches demonstrated that randomness inside the algorithm is a powerful computational resource which can lead to both greater efficiency and robustness to worst-case data [12, 20]. The success of RandNLA methods has been proven in many domains, e.g., when randomized least squares solvers such as Blendenpik or LSRN have outperformed the established high performance computing software LAPACK or other methods in parallel/distributed environments, respectively, or when RandNLA methods have been used in conjunction with traditional scientific computing solvers for low-rank approximation problems.

In a typical RandNLA setting, we are given a large dataset in the form of a real-valued matrix, say 퐗 ∈ ℝ^{푛×푑}, and our goal is to compute quantities of interest quickly. To do so, we efficiently downsize the matrix using a randomized algorithm, while approximately preserving its inherent structure, as measured by some objective. In doing so, we obtain a new matrix 퐗˜ (often called a sketch of 퐗) which is either smaller or sparser than the original matrix. This procedure can often be represented by matrix multiplication, i.e., 퐗˜ = 퐒퐗, where 퐒 is called the sketching matrix. Many applications of RandNLA follow the sketch-and-solve paradigm: instead of performing a costly operation on 퐗, we first construct 퐗˜ (the sketch); we then perform the expensive operation (more cheaply) on the smaller 퐗˜ (the solve); and we use the solution from 퐗˜ as a proxy for the solution we would have obtained from 퐗. Here, cost often means computational time, but it can also be communication or storage space or even human work.

Many different approaches have been established for randomly downsizing data matrices 퐗 (see [12, 20] for a detailed survey). While some methods randomly zero out most of the entries of the matrix, most randomly keep only a small subset of rows and/or columns. In either case, however, the choice of randomness is crucial in preserving the structure of the data. For example, if the data matrix 퐗 contains a few dominant entries/rows/columns (e.g., as measured by their absolute value or norm or some other “importance” score), then we should make sure that our sketch is likely to retain the information they carry. This leads to data-dependent sampling distributions that will be the focus of our discussion. However, data-oblivious sketching techniques, where the random sketching matrix 퐒 is independent of 퐗 (typically called a “random rotation” or a “random projection,” even if it is not precisely a rotation or projection in the linear algebraic sense), have also proven very successful [16, 24]. Among the most common examples of such random transformations are i.i.d. Gaussian matrices, fast Johnson-Lindenstrauss transforms, and count sketches, all of which provide different trade-offs between efficiency and accuracy. These “data-oblivious random projections” can be interpreted either in terms of the Johnson-Lindenstrauss lemma or as a preconditioner for the “data aware random sampling” methods we discuss.
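As a concrete illustration of the sketch-and-solve paradigm and of a data-oblivious sketch, the following minimal example (added here; not code from the article) solves an overdetermined least squares problem from a Gaussian sketch 퐗˜ = 퐒퐗. The sketch size 푘 and the use of a dense i.i.d. Gaussian 퐒 are illustrative assumptions; in practice a fast Johnson-Lindenstrauss transform or count sketch would typically replace the dense 퐒.

```python
import numpy as np

def sketch_and_solve_lstsq(X, y, k, rng=None):
    """Approximate argmin_w ||Xw - y|| using a data-oblivious Gaussian sketch S.

    S has i.i.d. N(0, 1/k) entries, so E[S^T S] = I and ||SXw - Sy||^2 is an
    unbiased estimate of ||Xw - y||^2 for every fixed w.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, _ = X.shape
    S = rng.standard_normal((k, n)) / np.sqrt(k)                     # k x n sketching matrix
    X_sketch, y_sketch = S @ X, S @ y                                # the "sketch" step
    w_sketch, *_ = np.linalg.lstsq(X_sketch, y_sketch, rcond=None)   # the "solve" step, on a k x d problem
    return w_sketch

rng = np.random.default_rng(1)
n, d = 100_000, 20                                                   # tall matrix: n >> d
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
w_exact, *_ = np.linalg.lstsq(X, y, rcond=None)
w_approx = sketch_and_solve_lstsq(X, y, k=500, rng=rng)
print(np.linalg.norm(w_approx - w_exact) / np.linalg.norm(w_exact))
```

Note that multiplying by a dense Gaussian 퐒 already costs on the order of 푘푛푑 operations, which is why structured data-oblivious transforms, or the data-dependent row sampling discussed next, are preferred when speed is the goal.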
Figure 1 (two panels, with dimensions labeled 푛 and 푑 and the sketch labeled 퐗˜; left: “Rank-preserving sketch (e.g., Projection DPP)”; right: “Low-rank approximation (e.g., L-ensemble DPP)”). Two RandNLA settings which are differentiated by whether the sketch (matrix 퐗˜) aims to preserve the rank of 퐗 (left) or obtain a low-rank approximation (right). In Section 4 we associate each setting with a different DPP.

Most RandNLA techniques can be divided into one of two settings, depending on the dimensionality or aspect ratio of 퐗, and on the desired size of the sketch (see Figure 1).

To illustrate the types of guarantees achieved by RandNLA methods on the least squares task, we will focus on row sampling, i.e., sketches consisting of a small random subset of the rows of 퐗, in the case that 푛 ≫ 푑. Concretely, the considered meta-strategy is to draw random i.i.d. row indices 푗_1, …, 푗_푘 from {1, …, 푛}, with each index distributed according to (푝_1, …, 푝_푛), and then solve the subproblem formed from those indices:

    퐰̂ = argmin_퐰 ‖퐗˜퐰 − 퐲˜‖² = 퐗˜† 퐲˜,    (2)

where 퐱˜_푖^⊤ = (1/√(푘푝_{푗_푖})) 퐱_{푗_푖}^⊤ and 푦˜_푖 = (1/√(푘푝_{푗_푖})) 푦_{푗_푖}, for 푖 = 1, …, 푘, denote the 푖th row of 퐗˜ and entry of 퐲˜, respectively. The rescaling is introduced to account for the biases caused by nonuniform sampling. We consider the following standard sampling distributions (illustrated in the code sketch below):

1. Uniform: 푝_푖 = 1/푛 for all 푖.
2. Squared norms: 푝_푖 = ‖퐱_푖‖²/‖퐗‖²_퐹, where ‖ ⋅ ‖_퐹 is the Frobenius (Hilbert-Schmidt) norm.
3. Leverage scores: 푝_푖 = 푙_푖/푑 for 푙_푖 = ‖퐱_푖‖²_{(퐗^⊤퐗)^{−1}} (the 푖th leverage score of 퐗), where ‖퐯‖_퐀 = √(퐯^⊤퐀퐯).

Both squared norm and leverage score sampling are standard data-dependent sampling methods in RandNLA.
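To make the row-sampling meta-strategy and the rescaling in (2) concrete, here is a minimal sketch (added for illustration; not code from the article) that forms the rescaled subproblem for each of the three distributions above. The leverage scores are obtained as squared row norms of the orthogonal factor in a QR decomposition of 퐗, which is one standard way to compute the 푙_푖; the problem sizes and the sketch size 푘 are arbitrary illustrative choices.

```python
import numpy as np

def leverage_scores(X):
    """l_i = x_i^T (X^T X)^{-1} x_i, computed as squared row norms of Q from X = QR."""
    Q, _ = np.linalg.qr(X, mode="reduced")
    return np.sum(Q**2, axis=1)               # sums to d for a full-rank X

def sampled_lstsq(X, y, p, k, rng):
    """Draw k i.i.d. rows from distribution p, rescale by 1/sqrt(k p_j) as in (2), then solve."""
    n = X.shape[0]
    J = rng.choice(n, size=k, replace=True, p=p)
    scale = 1.0 / np.sqrt(k * p[J])
    X_tilde = scale[:, None] * X[J]           # rows x~_i = x_{j_i} / sqrt(k p_{j_i})
    y_tilde = scale * y[J]
    w, *_ = np.linalg.lstsq(X_tilde, y_tilde, rcond=None)
    return w

rng = np.random.default_rng(0)
n, d, k = 50_000, 10, 400
X = rng.standard_normal((n, d)) * np.linspace(1, 20, n)[:, None]   # rows of very unequal norm
y = X @ rng.standard_normal(d) + rng.standard_normal(n)

distributions = {
    "uniform": np.full(n, 1.0 / n),
    "squared norms": np.sum(X**2, axis=1) / np.sum(X**2),
    "leverage scores": leverage_scores(X) / X.shape[1],
}
w_exact, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, p in distributions.items():
    w = sampled_lstsq(X, y, p, k, rng)
    print(f"{name:>15s}: relative error {np.linalg.norm(w - w_exact) / np.linalg.norm(w_exact):.3f}")
```

On data whose rows have very unequal importance, uniform sampling can easily miss the few dominant rows, which is precisely the failure mode that the norm-squared and leverage score distributions are designed to avoid.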