Leadership Computing in the Age of Cloud


Dan Stanzione
Executive Director, Texas Advanced Computing Center
Associate Vice President for Research, The University of Texas at Austin
Amazon Seminar, November 2019

THE BASIC OUTLINE

What the heck is TACC?
What is Frontera (and why do you care)?
What sort of stuff needs to run on Frontera?
What sort of stuff would be better off on AWS?
What do future science workflows look like, and why will they run where they run?

WHAT IS TACC?

The Texas Advanced Computing Center, at UT Austin, is a (primarily) NSF-funded center that provides and applies large-scale computing resources to the open science community.
[Images: Grendel, 1993; Frontera, 2019]

TACC AT A GLANCE - 2019

Personnel: 185 staff (~70 PhD).
Facilities: 12 MW data center capacity; two office buildings, three data centers, two visualization facilities, and a chilling plant.
Systems and services: >7 billion compute hours per year; >5 billion files, >100 petabytes of data; NSF Frontera (Track 1), Stampede2 (XSEDE flagship), Jetstream (cloud), and Chameleon (cloud testbed) systems.
Usage: >15,000 direct users in >4,000 projects, >50,000 web/portal users; user demand is 8x available system time; thousands of training/outreach participants annually.

MODERN COMPUTATIONAL SCIENCE

Simulation: computationally query our *mathematical models* of the world.
Machine Learning/AI: computationally query our *data sets* (depending on technique, also called deep learning).
Analytics: computationally analyze our *experiments* (driven by instruments that produce lots of digital information).
I would argue that modern science and engineering combine all three.

TACC LAUNCHED IN JUNE 2001 AFTER EXTERNAL REVIEW

In 2001: a budget of $600k, a staff of 12 (some shared), and a 50 GF computing resource (1/200,000th of the current system).

RAPID GROWTH FROM THEN TO NOW...

2003 - First terascale Linux cluster for open science (#26)
2004 - NSF funding to join the TeraGrid
2006 - UT System partnership to provide Lonestar-3 (#12)
2007 - $59M NSF award - the largest in UT history - to deploy Ranger, the world's largest open system (#4)
2008 - Funding for new vis software and launch of revamped visualization lab
2009 - $50M iPlant Collaborative award (largest NSF bioinformatics award) moves a major component to TACC; life sciences group launched
In 2009, we reached 65 employees.

NOW, A WORLD LEADER IN CYBERINFRASTRUCTURE

2010 - TACC becomes a core partner (1 of 4) in XSEDE, the TeraGrid replacement
2012 - Stampede replaces Ranger with a new $51.5M NSF award
2013 - iPlant is renewed, expanded to $100M
2015 - Wrangler, the first data-intensive supercomputer, is deployed
2015 - Chameleon cloud is launched
2015 - DesignSafe, the cyberinfrastructure for natural hazard engineering, is launched
2016 - Stampede-2 awarded, the largest academic system in the United States, 2017-2021
2019 - Frontera

HPC DOESN'T LOOK LIKE IT USED TO.
HPC-enabled Jupyter notebooks: narrative analytics and exploration environment.
Web portal: data management and accessible batch computing.
Event-driven data processing: extensible end-to-end framework to integrate planning, experimentation, validation, and analytics.

From batch processing and single simulations of many MPI tasks - to that, plus new modes of computing: automated workflows, users who avoid the command line, reproducibility and data reuse, collaboration, and end-to-end data management.
• Simulation, where we have models
• Machine learning, where we have data or incomplete models
And most things are a blend of most of these.

SUPPORTING AN EVOLVING CYBERINFRASTRUCTURE

Success in computational/data-intensive science and engineering takes more than systems. Modern cyberinfrastructure requires many modes of computing, many skillsets, and many parts of the scientific workflow: data lifecycle, reproducibility, sharing and collaboration, event-driven processing, APIs, etc.
Our team and software investments are larger than our system investments.
Advanced interfaces - web front ends, REST APIs, vis/VR/AR.
Algorithms - partnerships with ICES @ UT to shape future systems, applications, and libraries.

FRONTERA SYSTEM --- PROJECT

A new, NSF-supported project to do three things:
Deploy a system in 2019 for the largest problems scientists and engineers currently face.
Support and operate this system for 5 years.
Plan a potential phase 2 system, with 10x the capabilities, for the future challenges scientists will face.

Frontera is the #5 ranked system in the world - and the fastest at any university in the world. It is the highest-ranked Dell system ever and the fastest primarily Intel-based system. Frontera and Stampede2 are #1 and #2 among US universities (and Lonestar5 is still in the top 10). On the current Top 500 list, TACC provides 77% of *all* performance available to US universities.

FRONTERA IS A GREAT MACHINE - AND MORE THAN A MACHINE

A LITTLE ON HARDWARE AND INFRASTRUCTURE

"Main" compute partition: 8,008 nodes.
Node: dual-socket, 192 GB, HDR-100 InfiniBand interface, local drive.
Processor: Intel 8280 "Cascade Lake" (Intel 2nd-generation Xeon Scalable); 28 cores; 2.7 GHz clock "rate" (sometimes); 6 DIMM channels, 2933 MHz DIMMs.
Core count +15%, clock rate +30%, memory bandwidth +15% vs. Skylake.
Why? They are universal, and not experimental.

FRONTERA SYSTEM --- INFRASTRUCTURE

Frontera consumes almost 6 megawatts of power at peak (measured HPL power): 59+ kW/rack, 5,400 kW from compute nodes.
Direct water cooling of primary compute racks (CoolIT/DellEMC); oil immersion cooling (GRC); solar and wind inputs.
[Images: TACC machine room; chilled water plant]

INTERCONNECT

Mellanox HDR, fat-tree topology: 8,008 nodes = 88 x 91, i.e., 91 compute racks of 88 nodes each.
Mellanox ASICs have 40 HDR ports; chassis switches have 800 ports.
Each rack is divided in half, each half with its own top-of-rack (TOR) switch: 44 compute nodes at HDR-100 = 22 HDR ports, plus 18 uplink 200 Gb HDR ports - 3 lines (600 Gb) to each of 6 core switches.
No oversubscription in the higher layers of the tree (11:9 in the rack). No oversubscription to storage, DTNs, or service nodes (all connected to all 6 core switches).
8,200+ cards, 182 TOR switches, 6 core switches, 50 miles of cable.
Good news: 8,008 compute nodes use only 3,276 fibers to connect to the core.
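The fat-tree arithmetic above is easy to check. Here is a minimal sketch in C that recomputes the TOR switch count, the fiber total to the core, and the in-rack oversubscription ratio, assuming only the node, port, and switch counts quoted on this slide:

```c
#include <stdio.h>

int main(void) {
    /* Figures quoted on the interconnect slide. */
    const int racks          = 91;
    const int nodes_per_half = 44;  /* each half-rack has its own TOR switch */
    const int hdr_ports_down = 22;  /* 44 HDR-100 links pair into 22 HDR (200 Gb) ports */
    const int hdr_ports_up   = 18;  /* 3 x 200 Gb lines to each of 6 core switches */

    int tor_switches = racks * 2;
    int nodes        = tor_switches * nodes_per_half;
    int core_fibers  = tor_switches * hdr_ports_up;

    printf("nodes:          %d\n", nodes);         /* 8008 */
    printf("TOR switches:   %d\n", tor_switches);  /* 182  */
    printf("fibers to core: %d\n", core_fibers);   /* 3276 */
    printf("in-rack oversubscription: %d:%d\n",
           hdr_ports_down / 2, hdr_ports_up / 2);  /* 11:9 */
    return 0;
}
```

The printed values match the slide: 182 TOR switches x 18 uplinks each gives the 3,276 fibers, and 22 down-ports against 18 up-ports reduces to the quoted 11:9 ratio.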
FILESYSTEMS

Lustre, POSIX, and that's it.
Disk: 50 PB. Flash: 3 PB.
We have come to believe that most users' codes accessing the filesystem look like this:

```c
while (1) {
    fork();
    fopen();
    fclose();  // optional
}
```

```
mpirun -np 80000 kill_the_filesystem
```

FILESYSTEMS

We no longer need to scale filesystem size to scale bandwidth. The size of the filesystem is mostly there to support concurrent users - bandwidth is the limit for the individual user (or IOPS, for pathological ones). So we aren't going to build one big filesystem any more:
/home1, /home2, /home3
/scratch1, /scratch2, /scratch3 (initial assignment round-robin)
Flash will be a separate filesystem with some clever name, like /flash. This will require you to request access, or to be identified by our analytics as maxing out a filesystem.
Roughly 100 GB/s to each scratch, 1.2 TB/s to /flash.
The code on the previous slide can trash, at most, 1/7th of the available filesystems. (Seriously, we have put in some tools to limit those; we may ask you to use a library we have that wraps open() and limits the number of calls per second.)
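The slide does not show TACC's actual wrapper library, but the usual mechanism for this kind of throttling is an LD_PRELOAD interposer around the libc open() call. The following is an illustrative sketch only: the libc symbols are real, but the fixed-window policy, the 100-calls-per-second limit, and the backoff interval are assumptions made up for the example.

```c
/* Illustrative sketch only -- not TACC's actual library, and not thread-safe.
 * Build: gcc -shared -fPIC -o libopenlimit.so openlimit.c -ldl
 * Run:   LD_PRELOAD=./libopenlimit.so ./your_app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <time.h>
#include <unistd.h>

#define MAX_OPENS_PER_SEC 100  /* assumed limit, for illustration */

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...) = NULL;
    static time_t window = 0;
    static int count = 0;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    /* Simple fixed-window throttle: once the per-second budget is spent,
     * back off before issuing the real call. */
    time_t now = time(NULL);
    if (now != window) { window = now; count = 0; }
    if (++count > MAX_OPENS_PER_SEC)
        usleep(100000);  /* 100 ms */

    if (flags & O_CREAT) {  /* the mode argument is only present with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode_t mode = va_arg(ap, mode_t);
        va_end(ap);
        return real_open(path, flags, mode);
    }
    return real_open(path, flags);
}
```

A production version would also interpose open64(), fopen(), and friends, and coordinate the budget across threads and ranks; the sketch just shows why a misbehaving code sees its open() rate capped rather than crashing the filesystem.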
WHY DO WE HAVE COMMERCIAL CLOUD PARTNERSHIPS ON FRONTERA?

AWS is part of the Frontera project! Cloud/HPC is not, in my opinion, an either/or question. It's OK to have more than one tool in the toolbox. We want to utilize the strengths of the commercial cloud, hence we are partnering in three areas:
• Long-term data publication
• Access to unique and ever-changing hardware (you deploy faster than we do!)
• Hybrid workflows stitched together via web services (more on this later)

WHAT KINDS OF THINGS REALLY NEED TO RUN ON FRONTERA?

CENTER FOR THE PHYSICS OF LIVING CELLS
Aleksei Aksimentiev, University of Illinois at Urbana-Champaign
• The nuclear pore complex serves as a gatekeeper, regulating the transport of biomolecules in and out of the nucleus of a biological cell.
• To uncover the mechanism of such selective transport, the Aksimentiev lab at UIUC constructed a computational model of the complex.
• The team simulated the model using memory-optimized NAMD 2.13, 8 tasks/node, MPI+SMP (a sketch of this hybrid pattern follows at the end of this section).
• Ran on up to 7,780 nodes on Frontera - one of the largest biomolecular simulations ever performed.
• Scaled close to linearly on up to half of the machine.
• The team plans to build a new system twice as large to take advantage of large runs.

FRONTIERS OF COARSE-GRAINING
Gregory Voth, University of Chicago
• Mature HIV-1 capsid proteins self-assemble into large fullerene-cone structures.
• These capsids enclose the infective genetic material of the virus and transport viral DNA from virion particles into the nucleus of newly infected cells.
• On Frontera, Voth's team simulated a viral capsid containing RNA and stabilizing cellular factors in full atomic detail for over 500 ns - the first molecular simulations of HIV capsids that contain biological components of the virus within the capsid.
• The team ran on 4,000 nodes on Frontera.
• Measured the response of the capsid to molecular components, such as genetic cargo and cellular factors, that affect the stability of the capsid.

"State-of-the-art supercomputing resources like Frontera are an invaluable resource for researchers. Molecular processes that determine the chemistry of life are often interconnected and difficult to probe in isolation. Frontera enables large-scale simulations that examine these processes, and this type of science simply cannot be performed on smaller supercomputing resources."
- Alvin Yu, Postdoctoral Scholar in Voth Group

LATTICE GAUGE THEORY AT THE INTENSITY FRONTIER
Carlton DeTar, University of Utah
• Ab initio numerical simulations of quantum chromodynamics (QCD) help obtain precise predictions for the strong-interaction environment of the decays of mesons that contain a heavy bottom quark.
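The "MPI+SMP" mode mentioned in the NAMD runs above combines message passing between tasks with shared-memory threading within each task. NAMD itself is built on Charm++, so the sketch below is not its actual code; it is a minimal, generic C illustration of the hybrid pattern (a few MPI ranks per node, each driving many threads), with the rank and node counts in the comments chosen only as examples:

```c
/* Minimal MPI+OpenMP hybrid sketch (illustrative only).
 * Build: mpicc -fopenmp hybrid.c -o hybrid
 * Run:   mpirun -np 64 ./hybrid   (e.g., 8 tasks/node across 8 nodes)
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request threading support so each MPI task can host OpenMP threads. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each rank fans out into shared-memory threads for node-local work,
     * while MPI handles communication between tasks. */
    #pragma omp parallel
    {
        #pragma omp single
        printf("rank %d of %d: %d threads\n",
               rank, nranks, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```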