M.Sc. in Mathematical Modelling and Scientific Computing
Dissertation Projects
December 2019

Contents

1 Numerical Analysis Projects
  1.1 Operator Preconditioning and Accuracy of Linear System Solutions
  1.2 Computing Common Zeros of Trivariate Functions
  1.3 Univariate Optimisation for Smooth Functions
  1.4 Algorithms for Least-Squares and Underdetermined Linear Systems Based on CholeskyQR
  1.5 Tensor and Matrix Eigenvalue Perturbation Theory
  1.6 Chebfun Dissertation Topics
  1.7 Rational Functions Dissertation Topics
  1.8 Low-Rank Plus Sparse Matrix Model and its Application
  1.9 Efficient Exemplar Selection for Representation Learning
  1.10 Numerical Solution of a Problem in Electrochemistry
  1.11 Nonconvex Optimisation
2 Biological and Medical Application Projects
  2.1 Localisation of Turing Patterning in Discrete Heterogeneous Media
  2.2 Distributed Delay in Reaction-Diffusion Systems
  2.3 Aspects of Flagellate Swimmer Dynamics and Mechanics
  2.4 Learning Time Dependence with Minimal Time Information
  2.5 Surrogate Modelling for Particle Infiltration into Tumour Spheroids
  2.6 Mathematical Modelling of Polyelectrolyte Hydrogels for Application in Regenerative Medicine
3 Physical Application Projects
  3.1 Evaporation-Driven Instabilities in Complex Fluids
  3.2 Phase Separation in Swollen Hydrogels
  3.3 Pattern Formation in Polymers
4 Data Science
  4.1 Computational Topology for Analysing Neuronal Branching Models and Data
5 Research Interests of Academic Staff

1 Numerical Analysis Projects

1.1 Operator Preconditioning and Accuracy of Linear System Solutions

Supervisors: Prof Yuji Nakatsukasa and Dr Carolina Urzua Torres
Contact: [email protected] and [email protected]

Preconditioning is widely recognised as a crucial technique for improving the convergence of iterative (usually Krylov) methods for linear systems Ax = b.
A major body of research in numerical linear algebra is devoted to devising preconditioners for certain classes of matrices. One aspect that is seldom discussed is that if A is ill-conditioned, then even if a good preconditioner M is used, the final (relative) accuracy achievable is usually limited by the condition number of the original linear system, O(u κ₂(A)), where u is the unit roundoff. This is because applying A or M involves errors of that order.

Another, more recent, line of work is operator preconditioning (e.g. [1, 3]) in the context of solving differential equations (which usually lead to a large linear system). Here the idea is to precondition the problem at the operator level, reducing it to one involving a well-conditioned differential operator, which is then discretised to give Ãx̃ = b̃. This procedure hints that the effect of numerical errors could be significantly reduced, and hence the accuracy would no longer be limited by u κ₂(A) but by u κ₂(Ã), which can be much better, potentially as small as O(u); a similar effect is described in [2].

The goal of this work is to explore operator preconditioning in terms of solution accuracy. The project could involve a subset of the following: (i) analysing the properties of the linear systems and solutions (forward and backward errors), (ii) roundoff error analysis, (iii) examining various equations and preconditioners, and (iv) implementing the preconditioners. The study might also lead to the design of good preconditioners at the matrix level.

Exposure to basic numerical linear algebra and differential equations would be helpful for the project.

References

[1] R. Hiptmair. Operator preconditioning. Computers and Mathematics with Applications, 52(5):699–706, 2006.
[2] S. Olver and A. Townsend. A fast and well-conditioned spectral method. SIAM Rev., 55(3):462–489, 2013.
[3] C. Urzua-Torres, R. Hiptmair, and C. Jerez-Hanckes.
Optimal operator preconditioning for Galerkin boundary element methods on 3D screens. SIAM J. Numer. Anal., 2019.

1.2 Computing Common Zeros of Trivariate Functions

Supervisor: Prof Yuji Nakatsukasa
Contact: [email protected]

Chebfun can deal with functions in 1D, 2D, and 3D. Its functionalities in 1D are mostly complete and highly sophisticated. In higher dimensions, and in particular 3D, the situation is sometimes different. Among the key missing operations in Chebfun3 is roots(f,g,h)—finding the common zeros of three trivariate functions, which generically have zero-dimensional solution sets consisting of disjoint points. The 1D version roots(f) is remarkably efficient and reliable, based on domain subdivision and colleague eigenvalue problems. The 2D version was developed in [1], which also uses these two ideas, together with resultant methods to eliminate variables.

The goal of this project is to develop and implement a Chebfun3 roots(f,g,h) algorithm and command. It is envisaged that a successful algorithm will involve a mixture of mathematical analysis and numerical/computational techniques. In particular, conditioning and numerical stability will need to be investigated carefully, in light of the analysis in [2]. Since there is little software publicly available for trivariate rootfinding, a successful completion would be an important contribution to multivariate polynomial computing.

Background in approximation theory would be very helpful. The Part C course "Approximation of Functions" is highly recommended.

References

[1] Y. Nakatsukasa, V. Noferini, and A. Townsend. Computing the common zeros of two bivariate functions via Bézout resultants. Numer. Math., 129:181–209, 2015.
[2] V. Noferini and A. Townsend. Numerical instability of resultant methods for multidimensional rootfinding. SIAM J. Numer. Anal., 54(2):719–743, 2016.
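The 1D colleague-matrix approach mentioned above (interpolate in Chebyshev points, then read roots off an eigenvalue problem) can be sketched outside Chebfun as well. The following is a minimal NumPy illustration, not Chebfun itself and not part of the project description; the function `cheb_roots` and its tolerances are choices made here for the example.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_roots(f, deg=50):
    """Real roots of a smooth f on [-1, 1] via a Chebyshev proxy:
    interpolate f in Chebyshev points, trim negligible trailing
    coefficients, then compute the colleague-matrix eigenvalues."""
    coeffs = C.chebinterpolate(f, deg)          # Chebyshev coefficients of the interpolant
    coeffs = C.chebtrim(coeffs, tol=1e-13)      # drop noise-level trailing terms
    r = C.chebroots(coeffs)                     # eigenvalues of the colleague matrix
    r = r[np.isreal(r)].real                    # keep real roots...
    return np.sort(r[np.abs(r) <= 1 + 1e-10])   # ...lying in the domain

# Example: cos(5*pi*x) has 10 zeros in [-1, 1], at odd multiples of 0.1
roots = cheb_roots(lambda x: np.cos(5 * np.pi * x))
```

Chebfun's roots(f) additionally subdivides the interval so that each piece needs only a modest-degree interpolant; the trivariate problem in this project asks for an analogue of this machinery in 3D.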
1.3 Univariate Optimisation for Smooth Functions

Supervisor: Prof Yuji Nakatsukasa
Contact: [email protected]

Finding the (global) minimum of a univariate function, min_x f(x), is a classical problem in continuous optimisation. It arises naturally when a search direction has been found and one looks for an appropriate step size, e.g. in line search algorithms. For example, in the currently very popular stochastic gradient descent method, the step size is usually fixed; however, an optimal step size can significantly outperform fixed and suboptimal choices.

When the function is convex (or unimodal), bisection or golden section search are established techniques for finding the global minimum. Another powerful approach is to approximate the function with a polynomial and minimise the polynomial. This technique is used in Chebfun (together with interval subdivision as necessary), but appears not to be widely used elsewhere. We expect this approach to have a number of advantages: (i) it can easily deal with nonconvex functions, (ii) it can find local minima/maxima, and (iii) the convergence should be fast (fewer samples are needed) if f is smooth, e.g. analytic.

The goals of this project include exploring these aspects, analysing convergence, and developing efficient implementations. Extensions can be considered, such as using rational approximation or minimising bivariate functions.

The student should have some prior exposure to basic optimisation [1], and ideally approximation theory [2]. The Part C courses "Continuous Optimisation" and "Approximation of Functions" are recommended.

References

[1] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, New York, second edition, 1999.
[2] L. N. Trefethen. Approximation Theory and Approximation Practice. SIAM, Philadelphia, 2013.
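The polynomial-proxy idea above can be sketched in a few lines: approximate f by a Chebyshev interpolant p, take the real roots of p' as candidate interior extrema, and compare them with the endpoints. This is a minimal NumPy sketch under the project's stated assumptions (f smooth on a fixed interval, here [-1, 1]); the name `cheb_minimise` and the degree and tolerance are choices made for the example, not Chebfun's implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_minimise(f, deg=100):
    """Global minimum of a smooth f on [-1, 1] via a polynomial proxy:
    interpolate f by a Chebyshev polynomial p, locate the real roots of p'
    (candidate interior extrema), and compare against the endpoints."""
    p = C.chebinterpolate(f, deg)
    dp = C.chebder(p)                               # coefficients of p'
    crit = C.chebroots(C.chebtrim(dp, tol=1e-13))   # colleague-matrix roots of p'
    crit = crit[np.isreal(crit)].real
    candidates = np.concatenate(([-1.0, 1.0], crit[np.abs(crit) <= 1]))
    vals = C.chebval(candidates, p)
    i = np.argmin(vals)
    return candidates[i], vals[i]

# Nonconvex example with several local minima in [-1, 1]
x_min, f_min = cheb_minimise(lambda x: np.sin(8 * x) + 0.5 * x**2)
```

Note how the same call also exposes every local extremum (all roots of p'), which is advantage (ii) in the list above; convergence speed then rests on how quickly the Chebyshev coefficients of f decay.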
1.4 Algorithms for Least-Squares and Underdetermined Linear Systems Based on CholeskyQR

Supervisor: Prof Yuji Nakatsukasa
Contact: [email protected]

CholeskyQR, based on RᵀR = AᵀA (Cholesky factorization) and Q = AR⁻¹, is an algorithm for computing the QR factorization of matrices, which is very attractive in speed, especially on parallel computing architectures. It is unfortunately numerically unstable, and hence has not been widely used. However, the instability can be reduced significantly by techniques such as repeating and shifting [1, 2].

The QR factorization is used in various problems in numerical linear algebra, including least-squares problems and underdetermined linear systems. This project aims to explore the use of CholeskyQR in these contexts:

(1) We observe that when CholeskyQR is used for least-squares problems (together with a Krylov solver; a neat mixture of direct and iterative linear algebra methods), we arrive at a robust and efficient solver, achieving up to 5× speedup. Establishing backward stability, however, remains an important open problem, which we hope to address here. Other possible directions include exploring HPC implementations and its use in applications.

(2) Similarly, CholeskyQR can be used to find the minimal-norm solution of an underdetermined linear system. This line of work is at a young stage and many directions are possible: proving stability, refining the algorithm, implementation, and applications (e.g. compressed sensing).

A good background in linear algebra is desirable (e.g. the Part C course Numerical Linear Algebra).

References

[1] T. Fukaya, R. Kannan, Y. Nakatsukasa, Y. Yamamoto, and Y. Yanagisawa. Shifted CholeskyQR for computing the QR factorization of ill-conditioned matrices. arXiv:1809.11085, 2018.
[2] Y. Yamamoto, Y. Nakatsukasa, Y. Yanagisawa, and T. Fukaya. Roundoff error analysis of the CholeskyQR2 algorithm. Electron. Trans. Numer. Anal., 44:306–326, 2015.
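The basic step and the "repeating" remedy of [2] fit in a few lines. The following is a minimal NumPy sketch, not an HPC implementation; the well-conditioned random test matrix is an assumption of the example, since a single CholeskyQR step already breaks down once κ₂(A) approaches u⁻¹ᐟ² (the regime addressed by shifting in [1]).

```python
import numpy as np

def cholesky_qr(A):
    """One CholeskyQR step: R is the Cholesky factor of A^T A, Q = A R^{-1}."""
    G = A.T @ A
    R = np.linalg.cholesky(G).T         # upper triangular, G = R^T R
    Q = np.linalg.solve(R.T, A.T).T     # Q = A R^{-1} via a triangular solve
    return Q, R

def cholesky_qr2(A):
    """CholeskyQR2 [2]: repeating the step once restores orthogonality of Q
    to working precision, provided A is not too ill-conditioned."""
    Q1, R1 = cholesky_qr(A)
    Q, R2 = cholesky_qr(Q1)
    return Q, R2 @ R1

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 50))     # tall-skinny, well-conditioned test case
Q, R = cholesky_qr2(A)
orth_err = np.linalg.norm(Q.T @ Q - np.eye(50))   # near machine precision here
resid = np.linalg.norm(Q @ R - A) / np.linalg.norm(A)
```

The speed advantage comes from the step being dominated by the Gram product AᵀA and one triangular solve, both of which parallelise and communicate far better than Householder QR.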
1.5 Tensor and Matrix Eigenvalue Perturbation Theory