Mathematical Methods


Mathematical Methods
LECTURE NOTES
IMPERIAL COLLEGE LONDON, DEPARTMENT OF COMPUTING
Marc Deisenroth, Mahdi Cheraghchi
Version: December 1, 2017

Contents

1 Course Overview
2 Sequences
  2.1 The Convergence Definition
  2.2 Illustration of Convergence
  2.3 Common Converging Sequences
  2.4 Combinations of Sequences
  2.5 Sandwich Theorem
  2.6 Ratio Tests for Sequences
  2.7 Proof of Ratio Tests
  2.8 Useful Techniques for Manipulating Absolute Values
  2.9 Properties of Real Numbers
3 Series
  3.1 Geometric Series
  3.2 Harmonic Series
  3.3 Series of Inverse Squares
  3.4 Common Series and Convergence
  3.5 Convergence Tests
  3.6 Absolute Convergence
  3.7 Power Series and the Radius of Convergence
  3.8 Proofs of Ratio Tests
4 Power Series
  4.1 Basics of Power Series
  4.2 Maclaurin Series
  4.3 Taylor Series
  4.4 Taylor Series Error Term
  4.5 Deriving the Cauchy Error Term
  4.6 Power Series Solution of ODEs
5 Linear Algebra
  5.1 Linear Equation Systems
  5.2 Groups
    5.2.1 Definitions
    5.2.2 Examples
  5.3 Matrices
    5.3.1 Matrix Multiplication
    5.3.2 Inverse and Transpose
    5.3.3 Multiplication by a Scalar
    5.3.4 Compact Representations of Linear Equation Systems
  5.4 Gaussian Elimination
    5.4.1 Example: Solving a Simple Linear Equation System
    5.4.2 Elementary Transformations
    5.4.3 The Minus-1 Trick for Solving Homogeneous Equation Systems
    5.4.4 Applications of Gaussian Elimination in Linear Algebra
  5.5 Vector Spaces
    5.5.1 Examples
    5.5.2 Vector Subspaces
  5.6 Linear Independence
  5.7 Generating Sets, Basis and Dimension
    5.7.1 Generating Sets
    5.7.2 Basis
    5.7.3 Rank
  5.8 Intersection of Subspaces
  5.9 Linear Mappings
    5.9.1 Image and Kernel (Null Space)
    5.9.2 Matrix Representation of Linear Mappings
    5.9.3 Basis Change
  5.10 Determinants
  5.11 Eigenvalues
    5.11.1 Characteristic Polynomial
    5.11.2 Example: Eigenspace Computation
    5.11.3 Eigenvalues in Practical Applications
  5.12 Diagonalization
    5.12.1 Applications
    5.12.2 Cayley-Hamilton Theorem
  5.13 Inner Products
    5.13.1 Lengths, Distances, Orthogonality
    5.13.2 Applications of Inner Products
  5.14 Orthogonal Projections
    5.14.1 Projection onto 1-Dimensional Subspaces (Lines)
    5.14.2 Projection onto General Subspaces
    5.14.3 Applications of Projections
  5.15 Affine Subspaces
    5.15.1 Intersection of Affine Subspaces
  5.16 Affine Mappings
  5.17 Rotations
    5.17.1 Rotations in the Plane
    5.17.2 Rotations in Three Dimensions
    5.17.3 Rotations in n Dimensions
    5.17.4 Properties of Rotations

Preface

These lecture notes are intended for undergraduate students of computer science at Imperial College London and cover basic mathematical concepts that are required in other courses. The lecture notes are based on Jeremy Bradley's material from 2014/15, but the course has been partially restructured since then. Many people have helped improve these lecture notes by providing comments, suggestions and corrections. In particular, we want to thank Romain Barnoud, Ruhi Choudhury and Tony Field for valuable feedback and additions.

Marc Deisenroth and Mahdi Cheraghchi
August 2017

Chapter 1: Course Overview

This introductory course covers background that is essential to many other courses in the Department of Computing. We will approximately split this course into two parts: Analysis and Linear Algebra. In Analysis, we will cover topics including power series and convergence of sequences and series. The part on Linear Algebra will cover linear equation systems (and their solution), linear mappings and their matrix representations, vector spaces, eigenvalues, affine mappings, and applications, e.g., projections and rotations.

There are many courses in the department that require an understanding of mathematical concepts. These courses range from optimization, operations research and complexity to machine learning, robotics, graphics and computer vision. The department offers additional courses that cover other and more advanced mathematical concepts: Probability and Statistics (2nd year), Computational Techniques (2nd year), Mathematics for Inference and Machine Learning (4th year). Figures 1.1 and 1.2 give an overview of the relevance of this course to other courses in the Department of Computing. Figure 1.1 details the topics from this course that are relevant in other courses, whereas Figure 1.2 shows which courses in Years 2-4 rely on material covered in this introductory course.

[Figure 1.1: Overview of the relevance of topics covered in this course (Linear Equation Systems, Groups, Eigenvalues, Linear Mappings, Affine Mappings, Power Series, Convergence, Sequences/Series) to other courses in the Department of Computing.]

[Figure 1.2: Overview of courses (ordered by year) in the Department of Computing that rely on topics covered in this course.]

Chapter 2: Sequences

2.1 The Convergence Definition

Given a sequence of real numbers $a_1, a_2, \dots$, we would like to have a precise definition of what it means for this sequence to converge to a limit $l$. You may already
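The excerpt breaks off here. For orientation, this is the standard form of the definition the section is building up to, stated from standard analysis (the notes' own wording is cut off in this extract):

A sequence $(a_n)_{n \geq 1}$ of real numbers converges to the limit $l \in \mathbb{R}$ if
\[
\forall \epsilon > 0 \;\; \exists N \in \mathbb{N} \;\text{ such that }\; |a_n - l| < \epsilon \quad \text{for all } n > N .
\]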
Recommended publications
  • Algorithms in Linear Algebraic Groups (arXiv:2003.06292v1 [math.GR], 12 Mar 2020)
    ALGORITHMS IN LINEAR ALGEBRAIC GROUPS
    SUSHIL BHUNIA, AYAN MAHALANOBIS, PRALHAD SHINDE, AND ANUPAM SINGH

    ABSTRACT. This paper presents some algorithms in linear algebraic groups. These algorithms solve the word problem and compute the spinor norm for orthogonal groups. This gives us an algorithmic definition of the spinor norm. We compute the double coset decomposition with respect to a Siegel maximal parabolic subgroup, which is important in computing infinite-dimensional representations for some algebraic groups.

    1. INTRODUCTION

    The spinor norm was first defined by Dieudonné and Kneser using Clifford algebras. Wall [21] defined the spinor norm using bilinear forms. These days, to compute the spinor norm, one uses the definition of Wall. In this paper, we develop a new definition of the spinor norm for split and twisted orthogonal groups. Our definition of the spinor norm is rich in the sense that it is algorithmic in nature: one can now compute the spinor norm using a Gaussian elimination algorithm that we develop in this paper. This paper can be seen as an extension of our earlier work in the book chapter [3], where we described Gaussian elimination algorithms for orthogonal and symplectic groups in the context of public key cryptography. In computational group theory, one always looks for algorithms to solve the word problem. For a group $G$ defined by a set of generators $X$, i.e. $\langle X \rangle = G$, the problem is to write $g \in G$ as a word in $X$; we say that this is the word problem for $G$ (for details, see [18, Section 1.4]). Brooksbank [4] and Costi [10] developed algorithms similar to ours for classical groups over finite fields.
  • Orthogonal Reduction (MATH 5330: Computational Methods of Linear Algebra, Lecture 9)
    MATH 5330: Computational Methods of Linear Algebra
    Lecture 9: Orthogonal Reduction
    Xianyi Zeng, Department of Mathematical Sciences, UTEP

    1 The Row Echelon Form

    Our target is to solve the normal equation:
    \[ A^{\mathsf T} A x = A^{\mathsf T} b , \tag{1.1} \]
    where $A \in \mathbb{R}^{m \times n}$ is arbitrary; we have shown previously that this is equivalent to the least squares problem:
    \[ \min_{x \in \mathbb{R}^n} \lVert Ax - b \rVert . \tag{1.2} \]
    As $A^{\mathsf T} A \in \mathbb{R}^{n \times n}$ is symmetric positive semi-definite, we can try to compute the Cholesky decomposition such that $A^{\mathsf T} A = L L^{\mathsf T}$ for some lower-triangular matrix $L \in \mathbb{R}^{n \times n}$. One problem with this approach is that we're not fully exploring our information; particularly, in the Cholesky decomposition we treat $A^{\mathsf T} A$ as a single entity in ignorance of the information about $A$ itself. In particular, the structure of $A^{\mathsf T} A$ motivates us to study a factorization $A = QE$, where $Q \in \mathbb{R}^{m \times m}$ is orthogonal and $E \in \mathbb{R}^{m \times n}$ is to be determined. Then we may transform the normal equation to:
    \[ E^{\mathsf T} E x = E^{\mathsf T} Q^{\mathsf T} b , \tag{1.3} \]
    where the identity $Q^{\mathsf T} Q = I_m$ (the identity matrix in $\mathbb{R}^{m \times m}$) is used. This normal equation is equivalent to the least squares problem with $E$:
    \[ \min_{x \in \mathbb{R}^n} \lVert Ex - Q^{\mathsf T} b \rVert . \tag{1.4} \]
    Because orthogonal transformation preserves the $L_2$-norm, (1.2) and (1.4) are equivalent to each other. Indeed, for any $x \in \mathbb{R}^n$:
    \[
    \lVert Ax - b \rVert^2 = (b - Ax)^{\mathsf T}(b - Ax) = (b - QEx)^{\mathsf T}(b - QEx) = [Q(Q^{\mathsf T} b - Ex)]^{\mathsf T}[Q(Q^{\mathsf T} b - Ex)]
    = (Q^{\mathsf T} b - Ex)^{\mathsf T} Q^{\mathsf T} Q (Q^{\mathsf T} b - Ex) = (Q^{\mathsf T} b - Ex)^{\mathsf T}(Q^{\mathsf T} b - Ex) = \lVert Ex - Q^{\mathsf T} b \rVert^2 .
    \]
    Hence the target is to find an $E$ such that (1.3) is easier to solve.
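    To see the point of the factorization numerically, here is a brief NumPy sketch (an illustration added here, not from Zeng's notes): the normal-equation route and the QR route give the same least-squares solution, but the QR route never forms $A^{\mathsf T}A$. The sizes and random data are arbitrary choices for the demo.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 6, 3                      # arbitrary demo sizes: m equations, n unknowns
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    # Route 1: normal equations A^T A x = A^T b (squares the condition number).
    x_normal = np.linalg.solve(A.T @ A, A.T @ b)

    # Route 2: A = QR with orthonormal Q; solve the triangular system R x = Q^T b.
    # (NumPy's reduced QR; the lecture's E plays the role of R here.)
    Q, R = np.linalg.qr(A)
    x_qr = np.linalg.solve(R, Q.T @ b)

    print(np.allclose(x_normal, x_qr))   # True: both solve min ||Ax - b||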
  • ED16 Abstracts
    ED16 Abstracts

    IP1: Mathematical Modeling with Elementary School-Aged Students
    Modeling, a cyclic process by which mathematicians develop and use mathematical tools to represent, understand, and solve real-world problems, provides important learning opportunities for school students. Modeling opportunities in secondary schools are apparent, but what about in the younger grades? Two questions are critical in mathematical modeling in K-5 settings. (1) How should opportunities for modeling in K-5 settings be constructed and carried out? (2) What are the tasks of teaching when engaging elementary students in mathematical modeling? In this talk I will present a framework for teaching mathematical modeling in elementary classrooms and provide illustrations of its use by elementary grades teachers.
    Elizabeth A. Burroughs, Montana State University, [email protected]

    [...] industrial problems. She will give examples of some industrial research problems that student teams have tackled, discuss some of the skills that are needed to be successful working with and in industry, discuss the challenges that one may face when working with corporate partners, and present a summary of some of the lessons that have been learned over the years in these collaborations.
    Suzanne L. Weekes, Worcester Polytechnic Institute (WPI), [email protected]

    IP2: Graduate Student Education in Computational Mathematics and Scientific Computing
    Abstract not available at time of publication.

    IP5: Title Not Available at Time of Publication
    Abstract not available at time of publication.
    Philip Uri Treisman, The University of Texas at Austin, [email protected]

    CP1: Regime Switching Models and the Mental Accounting Framework
  • Implementation of Gaussian-Elimination
    International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume-5, Issue-11, April 2016

    Implementation of Gaussian-Elimination
    Awatif M.A. Elsiddieg

    Abstract: Gaussian elimination is an algorithm for solving systems of linear equations; it can also be used to find the rank of any matrix, and we use Gauss-Jordan elimination to find the inverse of a non-singular square matrix. This work gives basic concepts in section (1), shows what pivoting is, and gives an implementation of Gaussian elimination to solve a system of linear equations. In section (2) we find the rank of any matrix. In section (3) we use Gaussian elimination to find the inverse of a non-singular square matrix, and we compare the method with the Gauss-Jordan method. In section (4), for a practical implementation of the method, we inherit the computation features of Gaussian elimination and use programs in Matlab software.

    Keywords: Gaussian elimination, algorithm, Gauss, Jordan, method, computation, features, programs in Matlab, software.

    II. PIVOTING

    The objective of pivoting is to make an element above or below a leading one into a zero. The "pivot" or "pivot element" is an element on the left-hand side of a matrix that you want the elements above and below to be zero. Normally, this element is a one. If you can find a book that mentions pivoting, it will usually tell you that you must pivot on a one. If you restrict yourself to the three elementary row operations, then this is a true statement. However, if you are willing to combine the second and third elementary row operations, you come up
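    As a concrete companion to the pivoting discussion, a minimal Python sketch of Gaussian elimination with partial pivoting (my own illustration; the paper's actual programs are in Matlab, and the test system below is a standard textbook example, not taken from the paper):

    import numpy as np

    def gauss_solve(A, b):
        # Gaussian elimination with partial pivoting, then back substitution.
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):
            # Pivot: bring the largest |entry| of column k into the pivot position.
            p = k + int(np.argmax(np.abs(A[k:, k])))
            if p != k:
                A[[k, p]] = A[[p, k]]
                b[[k, p]] = b[[p, k]]
            # Eliminate the entries below the pivot.
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Back substitution on the resulting upper-triangular system.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
    b = np.array([8.0, -11.0, -3.0])
    print(gauss_solve(A, b))   # [ 2.  3. -1.]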
  • Naïve Gaussian Elimination — Jamie Trahan, Autar Kaw, Kevin Martin, University of South Florida, [email protected]
    nbm_sle_sim_naivegauss.nb

    Naïve Gaussian Elimination
    Jamie Trahan, Autar Kaw, Kevin Martin
    University of South Florida, United States of America
    [email protected]

    Introduction

    One of the most popular numerical techniques for solving simultaneous linear equations is the Naïve Gaussian Elimination method. The approach is designed to solve a set of n equations with n unknowns, [A][X] = [C], where [A] (n×n) is a square coefficient matrix, [X] (n×1) is the solution vector, and [C] (n×1) is the right hand side array. Naïve Gauss consists of two steps:

    1) Forward Elimination: In this step, the unknown is eliminated in each equation starting with the first equation. This way, the equations are "reduced" to one equation and one unknown in each equation.

    2) Back Substitution: In this step, starting from the last equation, each of the unknowns is found.

    To learn more about Naïve Gauss Elimination as well as the pitfalls of the method, click here. A simulation of the Naïve Gauss Method follows.

    Section 1: Input Data

    Below are the input parameters to begin the simulation. This is the only section that requires user input. Once the values are entered, Mathematica will calculate the solution vector [X].

    Number of equations, n:

    n = 4

    n×n coefficient matrix, [A]:

    A = {{1, 10, 100, 1000}, {1, 15, 225, 3375}, {1, 20, 400, 8000}, {1, 22.5, 506.25, 11391}}; A // MatrixForm

    n×1 right hand side array, [RHS]:

    RHS = {227.04, 362.78, 517.35, 602.97}; RHS // MatrixForm

    Section 2: Naïve Gaussian Elimination Method

    The following sections divide Naïve Gauss elimination into two steps: 1) Forward Elimination, 2) Back Substitution. To conduct Naïve Gauss Elimination, Mathematica will join the [A] and [RHS] matrices into one augmented matrix, [C], that will facilitate the process of forward elimination.
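    The same two steps in plain Python/NumPy, run on the exact system entered above (a sketch added here for comparison; the original simulation is the Mathematica notebook itself):

    import numpy as np

    A = np.array([[1, 10, 100, 1000],
                  [1, 15, 225, 3375],
                  [1, 20, 400, 8000],
                  [1, 22.5, 506.25, 11391]], dtype=float)
    b = np.array([227.04, 362.78, 517.35, 602.97])

    n = len(b)
    U, c = A.copy(), b.copy()

    # Step 1: forward elimination -- naive, i.e. no pivoting, so it assumes
    # every diagonal entry stays nonzero (the method's known pitfall).
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]
            U[i, k:] -= m * U[k, k:]
            c[i] -= m * c[k]

    # Step 2: back substitution, starting from the last equation.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

    print(x)                      # the solution vector [X]
    print(np.allclose(A @ x, b))  # True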
  • Linear Programming
    Linear Programming
    Lecture 1: Linear Algebra Review

    Outline:
    1. Linear Algebra Review
    2. Block Structured Matrices
    3. Gaussian Elimination Matrices
    4. Gauss-Jordan Elimination (Pivoting)

    Matrices in $\mathbb{R}^{m \times n}$: a matrix $A \in \mathbb{R}^{m \times n}$ can be written entrywise, by columns, or by rows:
    \[
    A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}
    = \begin{pmatrix} a_{\bullet 1} & a_{\bullet 2} & \cdots & a_{\bullet n} \end{pmatrix}
    = \begin{pmatrix} a_{1\bullet} \\ a_{2\bullet} \\ \vdots \\ a_{m\bullet} \end{pmatrix},
    \]
    and its transpose swaps the two viewpoints:
    \[
    A^{\mathsf T} = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix}
    = \begin{pmatrix} a_{\bullet 1}^{\mathsf T} \\ a_{\bullet 2}^{\mathsf T} \\ \vdots \\ a_{\bullet n}^{\mathsf T} \end{pmatrix}
    = \begin{pmatrix} a_{1\bullet}^{\mathsf T} & a_{2\bullet}^{\mathsf T} & \cdots & a_{m\bullet}^{\mathsf T} \end{pmatrix}.
    \]
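    To make the row/column notation concrete, a tiny NumPy illustration (added here; the slides themselves contain no code): $a_{i\bullet}$ is row $i$, $a_{\bullet j}$ is column $j$, and transposition exchanges the two.

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])            # A in R^{2x3}

    row_2 = A[1, :]                      # a_{2.} -- second row of A
    col_3 = A[:, 2]                      # a_{.3} -- third column of A

    # Rows of A are columns of A^T, and columns of A are rows of A^T.
    print(np.array_equal(A.T[:, 1], row_2))   # True
    print(np.array_equal(A.T[2, :], col_3))   # True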
  • 2006-2007 Annual Report, Department of Mathematics, College of Arts & Sciences, Drexel University
    2006-2007 Annual Report
    Department of Mathematics, College of Arts & Sciences, Drexel University

    TABLE OF CONTENTS

    PEOPLE
      Message From the Department Head
      Tenure Track Faculty
      Auxiliary Faculty
      Visiting Faculty & Fellows
      Emeritus Faculty
      Staff
      Teaching Assistants & Research Assistants
      New Faculty Profiles

    AWARDS / HIGHLIGHTS
      Faculty Awards
      Faculty Grants
      Faculty Appointments / Conference Organizations
      Faculty Publications / Patents
      Faculty Presentations
      Faculty New Course Offerings
      Honors Day Awards
      Graduate Student Day Awards
      Albert Herr Teaching Assistant Award
      Student Presentations
      Bachelor of Science Degrees Awarded
      Master of Science Degrees Awarded
      Doctor of Philosophy Degree Awarded

    DEPARTMENT
      Colloquium
      Analysis Seminars
      Dean's Seminar
      Departmental Committees
      Mathematics Resource Center
      Departmental Outreach
      Financial Mathematics Advisory Board

    EVENTS
      Student Activities
      SIAM Chapter / Math Bytes
      Departmental Social Events

    GIVING
      Support the Mathematics Department

    MESSAGE FROM THE DEPARTMENT HEAD

    Dear Alumni and Friends,
    It has been another great year for our department. It is wonderful to have award-winning faculty in the department. Dr. Pavel Grinfeld received the College of Arts and Sciences Excellence Award for both his research and teaching accomplishments, and Ms. Marna Mozeff Hartmann received the Barbara G. Hornum Award for Excellence in Teaching for her dedication to teaching and advising our undergraduates. Also several of our students have been recognized. Teaching Assistants Emek Kose and Jason Aran have won two of Drexel's Teaching Assistant Excellence Awards.
  • Gaussian Elimination and LU Decomposition (Supplement for MA511)
    GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)
    D. ARAPURA

    Gaussian elimination is the go-to method for all basic linear algebra classes, including this one. We now summarize the main ideas.

    1. Matrix multiplication

    The rule for multiplying matrices is, at first glance, a little complicated. If $A$ is $m \times n$ and $B$ is $n \times p$ then $C = AB$ is defined and of size $m \times p$ with entries
    \[ c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} . \]
    The main reason for defining it this way will be explained when we talk about linear transformations later on.

    THEOREM 1.1.
    (1) Matrix multiplication is associative, i.e. $A(BC) = (AB)C$. (Henceforth, we will drop parentheses.)
    (2) If $A$ is $m \times n$ and $I_{m \times m}$ and $I_{n \times n}$ denote the identity matrices of the indicated sizes, $A I_{n \times n} = I_{m \times m} A = A$. (We usually just write $I$ and the size is understood from context.)

    On the other hand, matrix multiplication is almost never commutative; in other words, generally $AB \neq BA$ when both are defined. So we have to be careful not to inadvertently use it in a calculation. Given an $n \times n$ matrix $A$, the inverse, if it exists, is another $n \times n$ matrix $A^{-1}$ such that
    \[ A A^{-1} = I , \qquad A^{-1} A = I . \]
    A matrix is called invertible or nonsingular if $A^{-1}$ exists. In practice, it is only necessary to check one of these.

    THEOREM 1.2. If $B$ is $n \times n$ and $AB = I$ or $BA = I$, then $A$ is invertible and $B = A^{-1}$.

    Proof. The proof of invertibility of $A$ will need to wait until we talk about determinants.
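    The entry formula translates directly into code; a short Python sketch (an illustration, not from the supplement) that also shows the non-commutativity warning in action:

    def matmul(A, B):
        # C = AB for m-by-n A and n-by-p B: c_ij = sum_{k=1}^n a_ik * b_kj.
        m, n, p = len(A), len(B), len(B[0])
        assert all(len(row) == n for row in A), "inner dimensions must agree"
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
                for i in range(m)]

    A = [[1, 2], [3, 4]]
    B = [[0, 1], [1, 0]]
    print(matmul(A, B))   # [[2, 1], [4, 3]]
    print(matmul(B, A))   # [[3, 4], [1, 2]] -- AB != BA in general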
  • Numerical Analysis Lecture Notes — Peter J. Olver
    Numerical Analysis Lecture Notes
    Peter J. Olver

    4. Gaussian Elimination

    In this part, our focus will be on the most basic method for solving linear algebraic systems, known as Gaussian Elimination in honor of one of the all-time mathematical greats — the early nineteenth century German mathematician Carl Friedrich Gauss. As the father of linear algebra, his name will occur repeatedly throughout this text. Gaussian Elimination is quite elementary, but remains one of the most important algorithms in applied (as well as theoretical) mathematics. Our initial focus will be on the most important class of systems: those involving the same number of equations as unknowns — although we will eventually develop techniques for handling completely general linear systems. While the former typically have a unique solution, general linear systems may have either no solutions or infinitely many solutions. Since physical models require existence and uniqueness of their solution, the systems arising in applications often (but not always) involve the same number of equations as unknowns. Nevertheless, the ability to confidently handle all types of linear systems is a basic prerequisite for further progress in the subject. In contemporary applications, particularly those arising in numerical solutions of differential equations, in signal and image processing, and elsewhere, the governing linear systems can be huge, sometimes involving millions of equations in millions of unknowns, challenging even the most powerful supercomputer. So, a systematic and careful development of solution techniques is essential. Section 4.5 discusses some of the practical issues and limitations in computer implementations of the Gaussian Elimination method for large systems arising in applications.
  • Math 304 Linear Algebra — Highlights: Basis and Dimension, Row Space and Column Space
    Highlights — Math 304 Linear Algebra
    Harold P. Boas, [email protected]
    June 12, 2006

    From Wednesday: basis and dimension; transition matrix for change of basis.
    Today: row space and column space; the rank-nullity property.

    Example. Let
    \[ A = \begin{pmatrix} 1 & 2 & 1 & 2 & 1 \\ 2 & 4 & 3 & 7 & 1 \\ 4 & 8 & 5 & 11 & 3 \end{pmatrix}. \]
    Three-part problem. Find a basis for
    - the row space (the subspace of $\mathbb{R}^5$ spanned by the rows of the matrix),
    - the column space (the subspace of $\mathbb{R}^3$ spanned by the columns of the matrix),
    - the nullspace (the set of vectors $x$ such that $Ax = 0$).
    We can analyze all three parts by Gaussian elimination, even though row operations change the column space.

    Example continued. Row reduction gives
    \[
    \begin{pmatrix} 1 & 2 & 1 & 2 & 1 \\ 2 & 4 & 3 & 7 & 1 \\ 4 & 8 & 5 & 11 & 3 \end{pmatrix}
    \xrightarrow[R_3 - 4R_1]{R_2 - 2R_1}
    \begin{pmatrix} 1 & 2 & 1 & 2 & 1 \\ 0 & 0 & 1 & 3 & -1 \\ 0 & 0 & 1 & 3 & -1 \end{pmatrix}
    \xrightarrow{R_3 - R_2}
    \begin{pmatrix} 1 & 2 & 1 & 2 & 1 \\ 0 & 0 & 1 & 3 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}
    \xrightarrow{R_1 - R_2}
    \begin{pmatrix} 1 & 2 & 0 & -1 & 2 \\ 0 & 0 & 1 & 3 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.
    \]
    Notice that in each step, the second column is twice the first column; also, the second column is the sum of the third and fifth columns. Row operations change the column space but preserve linear relations among the columns. The final row-echelon form shows that the column space has dimension equal to 2, and the first and third columns are linearly independent.
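    All three answers can be checked mechanically; here is a short SymPy snippet (added here, not part of Boas's handout) that reproduces the row reduction above:

    from sympy import Matrix

    A = Matrix([[1, 2, 1, 2, 1],
                [2, 4, 3, 7, 1],
                [4, 8, 5, 11, 3]])

    R, pivots = A.rref()       # reduced row echelon form, as computed above
    print(R)
    print(pivots)              # (0, 2): the first and third columns are pivots
    print(A.rank())            # 2 = dimension of row space and of column space
    print(len(A.nullspace()))  # 3 = 5 - 2, by the rank-nullity property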
  • Philosophical Magazine A
    Philosophical Magazine A. Publication details, including instructions for authors and subscription information: http://www.informaworld.com/smpp/title~content=t713396797

    Towards thermodynamics of elastic electric conductors
    M. Grinfeld (The Educational Testing Service, Princeton, New Jersey, USA) and P. Grinfeld (Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA)

    To cite this article: Grinfeld, M. and Grinfeld, P. (2001) 'Towards thermodynamics of elastic electric conductors', Philosophical Magazine A, 81:5, 1341-1354. Online publication date: 1 May 2001.
    DOI: 10.1080/01418610108214445. URL: http://dx.doi.org/10.1080/01418610108214445
  • Applications of Symbolic Computation to the Calculus of Moving Surfaces (PhD Thesis, Drexel University)
    Applications of Symbolic Computation to the Calculus of Moving Surfaces

    A Thesis Submitted to the Faculty of Drexel University by Mark W. Boady in partial fulfillment of the requirements for the degree of Doctor of Philosophy, June 2016. © Copyright 2016 Mark W. Boady. All Rights Reserved.

    Dedications

    To my lovely wife, Petrina. Without your endless kindness, love, and patience, this thesis never would have been completed. To my parents, Bill and Charlene, you have loved and supported me in every endeavor. I am eternally grateful to be your son. To my in-laws, Jerry and Annelie, your endless optimism and assistance helped keep me motivated throughout this project.

    Acknowledgments

    I would like to express my sincere gratitude to my supervisors Dr. Jeremy Johnson and Dr. Pavel Grinfeld. Dr. Johnson's patience and insight provided a safe harbor during the seemingly endless days of debugging and testing code. I cannot thank him enough for improvements in my writing, coding, and presenting skills. This thesis could not have existed without Dr. Grinfeld's pioneering research. His vast knowledge and endless enthusiasm propelled us forward at every roadblock. A special thanks also goes out to the members of my thesis committee: Dr. Bruce Char, Dr. Dario Salvucci, and Dr. Hoon Hong. Without their valuable input, this work would not have been fully realized. They consistently helped to keep this project focused while I was buried in the details of development. All the members of the Drexel Computer Science Department have provided endless help in my development as a teacher and researcher.