Randomized Linear Algebra for Model Order Reduction
Oleg Balabanov
To cite this version: Oleg Balabanov. Randomized linear algebra for model order reduction. Numerical Analysis [math.NA]. École Centrale de Nantes; Universitat Politècnica de Catalunya, 2019. English. NNT: 2019ECDN0034. tel-02444461v2

HAL Id: tel-02444461
https://tel.archives-ouvertes.fr/tel-02444461v2
Submitted on 14 Feb 2020

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

DOCTORAL THESIS OF École Centrale de Nantes, COMUE Université Bretagne Loire
Doctoral School No. 601: Mathematics and Information and Communication Sciences and Technologies
Specialty: Mathematics and their Interactions

By Oleg BALABANOV

Randomized linear algebra for model order reduction

Thesis presented and defended at École Centrale de Nantes on 11 October 2019
Research unit: Laboratoire de Mathématiques Jean Leray, UMR CNRS 6629
Thesis No.:

Reviewers (rapporteurs):
Bernard HAASDONK, Professor, University of Stuttgart
Tony LELIÈVRE, Professor, École des Ponts ParisTech

Jury:
President: Albert COHEN, Professor, Sorbonne Université
Examiners: Christophe PRUD'HOMME, Professor, Université de Strasbourg
Laura GRIGORI, Research Director, Inria Paris, Sorbonne Université
Marie BILLAUD-FRIESS, Associate Professor, École Centrale de Nantes
Thesis advisor: Anthony NOUY, Professor, École Centrale de Nantes
Thesis co-advisor: Núria PARÉS MARINÉ, LaCàN, Universitat Politècnica de Catalunya, Spain

ÉCOLE CENTRALE DE NANTES, Nantes, France
UNIVERSITAT POLITÈCNICA DE CATALUNYA, Barcelona, Spain

Randomized linear algebra for model order reduction
by Oleg Balabanov

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Doctoral School of Mathematics and STIC, École Centrale de Nantes, and the Department of Civil and Environmental Engineering, Universitat Politècnica de Catalunya
September 2019

To beloved mom and dad.

Abstract

Solutions to high-dimensional parameter-dependent problems are in great demand in contemporary applied science and engineering. Standard approximation methods for parametric equations can require computational resources that grow exponentially with the dimension of the parameter space, a phenomenon typically referred to as the curse of dimensionality. To break the curse of dimensionality, one has to appeal to nonlinear methods that exploit the structure of the solution map, such as projection-based model order reduction methods. This thesis proposes novel methods based on randomized linear algebra to enhance the efficiency and stability of projection-based model order reduction methods for solving parameter-dependent equations. Our methodology relies on random projections (or random sketching). Instead of operating with high-dimensional vectors, we first efficiently project them into a low-dimensional space.
The reduced model is then constructed, efficiently and in a numerically stable way, from the projections of the reduced approximation space and the spaces of the associated residuals. Our approach enables drastic computational savings on essentially any modern computational architecture. For instance, it can reduce the number of flops and the memory consumption, and improve the efficiency of the data flow (characterized by scalability or communication costs). It can be employed to improve the efficiency and numerical stability of classical Galerkin and minimal residual methods. It can also be used for efficient estimation of the error and for post-processing of the solution of the reduced order model. Furthermore, random sketching makes computationally feasible a dictionary-based approximation method, in which for each parameter value the solution is approximated in a subspace whose basis is selected from a dictionary of vectors. We also address the efficient construction (using random sketching) of parameter-dependent preconditioners that can be used to improve the quality of Galerkin projections or for effective error certification for problems with ill-conditioned operators. For all proposed methods we provide precise conditions on the random sketch that guarantee accurate and stable estimations with a user-specified probability of success. A priori estimates for determining the sizes of the random matrices are provided, as well as a more effective adaptive procedure based on a posteriori estimates.

Key words: model order reduction, parameter-dependent equations, random sketching, subspace embedding, reduced basis, dictionary-based approximation, preconditioner

Acknowledgements

This work is the result of a fruitful four-year collaboration with Anthony Nouy, my advisor, teacher and, simply, a brilliant mind. It was a blessing to have the chance to work with him.
I would like to express my sincere gratitude for his guidance, his patience and, especially, our many discussions, every single one of which was instructive and fascinating. Anthony's exceptional vision and ideas have led to very interesting discoveries, partially presented in this manuscript. A significant contribution to this work was also made by my co-supervisor Núria Parés Mariné. Our few yet productive meetings led to a better presentation and polishing of the methodology. The discussions with Núria have also resulted in many interesting ideas and open questions, which lie outside the scope of the present manuscript and are to be investigated in the future. In addition to Anthony and Núria, I would also like to thank Pedro Díez for all his guidance and help since the beginning of my scientific career. I would additionally like to thank professors Bernard Haasdonk, Tony Lelièvre, Albert Cohen, Christophe Prud'homme, Laura Grigori and Marie Billaud-Friess for kindly agreeing to serve on my defence committee. Another word of gratitude goes to my colleagues at GeM and LMJL at Centrale Nantes, and at LaCàN at the Polytechnic University of Catalonia, for enriching my life in the lab with cheerful debates and discussions. Special thanks go to Daria Koliesnikova. Finally, I am infinitely grateful to my family for their unconditional support during this difficult path. My loving, kind and mindful parents, Balabanov Ihor and Balabanova Tetyana, were always there for me. My brothers Valeriy and Oleksandr never stopped boosting my brain with clever riddles and new interesting ideas. Thanks also to all my friends for their support.

Contents

1 Introduction and overview of the literature
  1.1 Projection-based model reduction
    1.1.1 Semi-discrete framework
    1.1.2 State-of-the-art model reduction
    1.1.3 Empirical interpolation method
    1.1.4 Partitioning of the parameter domain
    1.1.5 Minimal residual methods
    1.1.6 Parameter-dependent preconditioners for MOR
    1.1.7 Recent advances and the limitations of model reduction
  1.2 Randomized linear algebra
    1.2.1 Random sketching: a least-squares problem
    1.2.2 Randomized linear algebra for MOR
  1.3 Computational architectures
  1.4 Contributions and overview of the manuscript
2 Random sketching for Galerkin methods and error estimation
  2.1 Introduction
    2.1.1 Main contributions
  2.2 Projection-based model order reduction methods
    2.2.1 Galerkin projection
    2.2.2 Error estimation
    2.2.3 Primal-dual correction
    2.2.4 Reduced basis generation
  2.3 Random sketching
    2.3.1 ℓ2-embeddings
    2.3.2 Data-oblivious embeddings
  2.4 ℓ2-embeddings for projection-based MOR
    2.4.1 Galerkin projection
    2.4.2 Error estimation
    2.4.3 Primal-dual correction
    2.4.4 Computing the sketch
    2.4.5 Efficient evaluation of the residual norm
  2.5 Efficient reduced basis generation
    2.5.1 Greedy algorithm
    2.5.2 Proper Orthogonal Decomposition
  2.6 Numerical examples
    2.6.1 3D thermal block
    2.6.2 Multi-layered acoustic cloak
  2.7 Conclusions and perspectives
  2.8 Appendix
  2.9 Supplementary material
3 Random sketching for minres methods and dictionary-based approximation