Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory


Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory
Seminario EEUU/Venezuela de Computación de Alto Rendimiento 2000
US/Venezuela Workshop on High Performance Computing 2000 (WoHPC 2000)
April 4-6, 2000, Puerto La Cruz, Venezuela
http://www.cs.utk.edu/~dongarra/

♦ Look at trends in HPC
Ø Top500 statistics
♦ NetSolve
Ø Example of grid middleware
♦ Performance on today's architectures
Ø ATLAS effort
♦ Tools for performance evaluation
Ø Performance API (PAPI)

The TOP500:
- Started in 6/93 by Jack Dongarra, Hans W. Meuer and Erich Strohmaier
- Basis for analyzing the HPC market
- Quantify observations
- Detection of trends (market, architecture, technology)

- Listing of the 500 most powerful computers in the world
- Yardstick: Rmax from LINPACK (Ax=b, dense problem, TPP performance)
- Updated twice a year: at SC'xy in November, at the Mannheim meeting in June
- All data available from www.top500.org

♦ Original classes of machines:
Ø Sequential
Ø SMPs
Ø MPPs
Ø SIMDs
♦ Two new classes:
Ø Beowulf-class systems
Ø Clusters of SMPs and DSMs
Ø Requires additional terminology: "Constellation"
♦ A way of tracking trends:
Ø in performance
Ø in the market
Ø in classes of HPC systems (architecture, technology)

Constellation = cluster of clusters:
♦ An ensemble of N nodes, each comprising p computing elements
♦ The p elements are tightly bound: shared memory (e.g. SMP, DSM)
♦ The N nodes are loosely coupled, i.e. distributed memory
♦ p is greater than N
♦ The distinction is which layer gives us the most power through parallelism
Example: the 4 TF Blue Pacific SST: 3 x 480 4-way SMP nodes, 3.9 TF peak performance, 2.6 TB memory, 2.5 Tb/s bisectional bandwidth, 62 TB disk, 6.4 GB/s delivered I/O bandwidth

[Figure: Performance (Gflop/s) of the fastest computers, 1991-1999: Cray Y-MP (8), Fujitsu VP-2600, NEC SX-3 (4), TMC CM-2 (2048), TMC CM-5 (1024), Fujitsu VPP-500 (140), Intel Paragon (6788), Hitachi CP-PACS (2040), Intel ASCI Red (9152), SGI ASCI Blue Mountain (5040), ASCI Blue Pacific SST (5808), Intel ASCI Red Xeon (9632)]

TOP10 of the TOP500 (rank, manufacturer/computer, Rmax [GF/s], installation site, country, year, area of installation, # processors):
1. Intel ASCI Red, 2379.6, Sandia National Labs, Albuquerque, USA, 1999, Research, 9632
2. IBM ASCI Blue-Pacific SST (IBM SP 604E), 2144, Lawrence Livermore National Laboratory, USA, 1999, Research, 5808
3. SGI ASCI Blue Mountain, 1608, Los Alamos National Lab, USA, 1998, Research, 6144
4. SGI T3E 1200, 891.5, Government, USA, 1998, Classified, 1084
5. Hitachi SR8000, 873.6, University of Tokyo, Japan, 1999, Academic, 128
6. SGI T3E 900, 815.1, Government, USA, 1997, Classified, 1324
7. SGI Origin 2000, 690.9, Los Alamos National Lab/ACL, USA, 1999, Research, 2048
8. Cray/SGI T3E 900, 675.7, Naval Oceanographic Office, Bay Saint Louis, USA, 1999, Weather Research, 1084
9. SGI T3E 1200, 671.2, Deutscher Wetterdienst, Germany, 1999, Weather Research, 812
10. IBM SP Power3, 558.13, UCSD/San Diego Supercomputer Center (IBM/Poughkeepsie), USA, 1999, Research, 1024

[Figure: TOP500 performance growth, Jun-93 to Nov-99, log scale from 100 Mflop/s to 100 Tflop/s: Sum, N=1 and N=500 lines; Nov-99 Sum = 50.97 TF/s, N=1 = Intel ASCI Red at Sandia (2.38 TF/s), N=500 = Sun HPC 10000 at Merrill Lynch (33.1 GF/s); N=1 milestones include TMC CM-5, Fujitsu 'NWT' at NAL, Intel XP/S140 at Sandia, Hitachi/Tsukuba CP-PACS/2048 and Intel ASCI Red; more than 100 systems exceed 100 Gflop/s; growth is faster than Moore's law]
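As a reminder of where the Rmax yardstick comes from: LINPACK times the solution of a dense system Ax=b and converts that time into a rate using the conventional LU operation count 2/3 n^3 + 2 n^2. A minimal sketch in C (the n and time values are illustrative assumptions, not measurements from the list):

```c
#include <stdio.h>

/* Convert a timed LINPACK Ax=b solve into a Gflop/s rate using the
   conventional dense LU operation count 2/3*n^3 + 2*n^2. */
static double linpack_gflops(double n, double seconds)
{
    double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
    return flops / seconds / 1.0e9;
}

int main(void)
{
    /* Hypothetical run: n = 10,000 unknowns solved in 120 seconds. */
    printf("Rmax-style rate: %.2f Gflop/s\n", linpack_gflops(10000.0, 120.0));
    return 0;
}
```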
[Figure: Extrapolation of TOP500 performance, Jun-93 to Nov-09 (GFlop/s, log scale): Sum, N=1, N=10 and N=500 trend lines, with the 1 TFlop/s and 1 PFlop/s levels marked (ASCI, Earth Simulator); entry at 1 TFlop/s around 2005 and 1 PFlop/s around 2010]

[Figure: Architectures of the 500 systems, Jun-93 to Nov-99: single processor, SIMD, MPP, SMP, cluster and constellation counts]

[Figure: Manufacturers, Jun-93 to Nov-99: Intel, TMC, "Japan Inc", Cray, SGI, IBM, Convex/HP, Sun and others; annotation: Tera -> Cray Inc ($15M)]

[Figure: Chip technology, Jun-93 to Nov-99: Inmos Transputer, Intel, SUN, MIPS, HP, IBM, Alpha and other]

[Figure: Installation type, Jun-93 to Nov-99: Classified, Vendor, Academic, Industry, Research]

♦ Clustering of shared memory machines for scalability
Ø Emergence of PC commodity systems
Ø Pentium/Alpha based, Linux or NT driven
Ø "Supercomputer performance at mail-order prices"
Ø Beowulf-class systems (Linux + PCs)
Ø Distributed shared memory (clusters of processors connected)
Ø Shared address space with deep memory hierarchy
♦ Efficiency of message passing and data parallel programming
Ø Helped by standards efforts such as PVM, MPI, OpenMP and HPF (a minimal MPI sketch follows the cluster list below)
♦ Many of the machines run as single-user environments
♦ Pure COTS

Clusters in the TOP500 (rank, manufacturer/computer, Rmax [GF/s], installation site, country, year, area of installation, # processors):
33. Sun HPC 450 Cluster, 272.1, Sun, Burlington, USA, 1999, Vendor, 720
34. Compaq AlphaServer SC, 271.4, Compaq Computer Corp., Littleton, USA, 1999, Vendor, 512
44. Self-made Cplant Cluster, 232.6, Sandia National Laboratories, USA, 1999, Research, 580
169. Self-made Alphleet Cluster, 61.3, Institute of Physical and Chemical Res. (RIKEN), Japan, 1999, Research, 140
265. Self-made Avalon Cluster, 48.6, Los Alamos National Lab/CNLS, USA, 1998, Research, 140
351. Siemens hpcLine Cluster, 41.45, Universitaet Paderborn/PC2, Germany, 1999, Academic, 192
454. Self-made Parnass2 Cluster, 34.23, University of Bonn/Applied Mathematics, Germany, 1999, Academic, 128
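The MPI standard mentioned above is a large part of what makes these commodity clusters programmable in a portable way. A minimal sketch, assuming only standard MPI calls (the ring-exchange pattern is an illustrative example, not taken from the slides):

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal MPI sketch: each rank sends its id one step around a ring
   and receives its predecessor's id. Compile with mpicc; run with
   e.g. mpirun -np 4 ./ring */
int main(int argc, char **argv)
{
    int rank, size, token;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;

    /* Combined send/receive avoids deadlock on the ring. */
    MPI_Sendrecv(&rank, 1, MPI_INT, next, 0,
                 &token, 1, MPI_INT, prev, 0,
                 MPI_COMM_WORLD, &status);

    printf("rank %d of %d received token %d\n", rank, size, token);
    MPI_Finalize();
    return 0;
}
```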
NetSolve motivation:
♦ Allow networked resources to be integrated into the desktop.
♦ Not just hardware, but also make software resources available.
♦ Locate and "deliver" software or solutions to the user in a directly usable and "conventional" form.
♦ Part of the motivation: software maintenance.

♦ Three basic scenarios:
Ø Client, servers and agents anywhere on the Internet
Ø Client, servers and agents on an intranet
Ø Client, server and agent on the same machine
♦ "Blue collar" grid-based computing:
Ø Users can set things up themselves; no "su" required
Ø Doesn't require deep knowledge of network programming
♦ Focus on Matlab users:
Ø OO language whose objects are matrices
Ø One of the most popular desktop systems for numerical computing: 400K users

• No knowledge of networking involved
• Hides the complexity of numerical software
• Computation location transparency
• Provides access to virtual libraries:
  • Component of a grid-based framework
  • Central management of library resources
  • User not concerned with having the most up-to-date version
  • Automatic tie to the Netlib repository in the project
• Provides synchronous or asynchronous calls (user-level parallelism)

Calling NetSolve from Matlab:
>> define sparse matrix A
>> define rhs
>> [x, its] = netsolve('itmeth', 'petsc', A, rhs, 1.e-6)
and from Fortran:
call NETSL('DGESV()', NSINFO, N, 1, A, MAX, IPIV, B, MAX, INFO)
Asynchronous calls are also available.

Computational server:
• Various software resources installed on various hardware resources
• Configurable and extensible:
  • Framework to easily add arbitrary software
  • Many numerical libraries being integrated by the NetSolve team
  • Much software being integrated by users

Agent:
• Gateway to the computational servers
• Performs load balancing among the resources

The NetSolve agent predicts the execution times and sorts the servers. The prediction for a server is based on:
• Its distance over the network (latency and bandwidth, with statistical averaging)
• Its performance (LINPACK benchmark)
• Its workload (a quick estimate from cached data - is the workload out of date?)
• The problem size and the algorithm complexity
http://www.cs.utk.edu/netsolve

University of Tennessee's grid prototype: Scalable Intracampus Research Grid (SInRG)

Request sequencing for sequence(A, B, E):
command1(A, B) -> intermediate output C
command2(A, C) -> intermediate output D
command3(D, E) -> result F
The client sends inputs A, B and E; the intermediate outputs C and D stay on the server side:
netsl_begin_sequence( );
netsl("command1", A, B, C);
netsl("command2", A, C, D);
netsl("command3", D, E, F);
netsl_end_sequence(C, D);

NetSolve client-proxy interfaces tie the client to several back-end systems:
• NetSolve proxy -> NetSolve agent, servers and services
• Globus proxy -> GRAM, GASS and MDS services
• Ninf proxy -> Ninf services
• Legion proxy -> Legion services

♦ Kerberos is used to maintain access control lists and manage access to computational resources.
♦ NetSolve properly handles authorized and non-authorized components together in the same system.
♦ In use by the DoD Modernization program at the Army Research Lab.

Farming: multiple requests to a single problem.
♦ Previous solution: many calls to netslnb( );  /* non-blocking */
♦ New solution: a single call to netsl_farm( );

SCIRun torso defibrillator application (a bit in the future):
♦ x = netsolve('linear system', A, b)
Ø NetSolve examines the user's data (matrix type: sparse or dense; solver: direct or iterative) together with the resources available (in the grid sense) and makes decisions on the best time to solution dynamically.
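To make the agent's server ranking concrete, here is a hedged sketch of the kind of prediction described above: transfer time from latency and bandwidth, plus compute time from the algorithm's operation count, the server's benchmark speed and its workload. The structure fields, formula and server values are illustrative assumptions, not NetSolve's actual internals:

```c
#include <stdio.h>

/* Hypothetical server record mirroring the factors the agent uses. */
typedef struct {
    const char *name;
    double latency_s;   /* network latency to the client (seconds)   */
    double bandwidth;   /* bandwidth to the client (bytes/second)    */
    double mflops;      /* LINPACK benchmark performance             */
    double workload;    /* 0.0 = idle ... 1.0 = saturated            */
} Server;

/* Predicted time = data transfer time + compute time inflated by load. */
static double predict_time(const Server *s, double bytes, double flops)
{
    double transfer = s->latency_s + bytes / s->bandwidth;
    double compute  = flops / (s->mflops * 1.0e6 * (1.0 - s->workload));
    return transfer + compute;
}

int main(void)
{
    /* Two made-up servers: a nearby busy workstation, a remote idle MPP. */
    Server servers[] = {
        { "nearby-ws",  0.001, 100.0e6,  400.0, 0.50 },
        { "remote-mpp", 0.050,  10.0e6, 2000.0, 0.10 },
    };
    double bytes = 8.0 * 1000.0 * 1000.0;            /* 1000x1000 dense matrix */
    double flops = (2.0/3.0) * 1.0e9 + 2.0 * 1.0e6;  /* LU solve for n = 1000  */

    int best = 0;
    double best_t = predict_time(&servers[0], bytes, flops);
    for (int i = 1; i < 2; i++) {
        double t = predict_time(&servers[i], bytes, flops);
        if (t < best_t) { best_t = t; best = i; }
    }
    printf("agent would choose %s (predicted %.3f s)\n",
           servers[best].name, best_t);
    return 0;
}
```

With these made-up numbers the remote MPP wins despite the larger transfer cost, which is exactly the trade-off the agent's sorting is meant to capture.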