The Open Grid Forum: History, Introduction and Process
Alan Sill
VP of Standards, OGF
Director, NSF Cloud and Autonomic Computing Center
Senior Scientist, High Performance Computing Center
Adjunct Professor of Physics, Texas Tech University
Open Grid Forum 44, May 21-22, 2015, EGI Conference, Lisbon, Portugal
© 2015 Open Grid Forum, OGF 44 - EGI Conference, Lisbon, Portugal, May 18-22, 2015

About the Open Grid Forum
Open Grid Forum (OGF) is a global organization operating in the areas of cloud, grid and related forms of advanced distributed computing. The OGF community pursues these topics through an open process for the development, creation and promotion of relevant specifications and use cases. OGF actively engages partners and participants throughout the international arena through an open forum with open processes to champion architectural blueprints related to cloud and grid computing. The resulting specifications and standards enable pervasive adoption of advanced distributed computing techniques for business and research worldwide.

History and Background
• Began in 2001 as an organization to promote the advancement of distributed computing worldwide.
• Grid Forum --> Global Grid Forum --> GGF + Enterprise Grid Alliance --> formation of OGF in 2006.
• Its mandate is to take on all forms of distributed computing and to promote cooperation, information exchange, and best practices in use and standardization.
• OGF is best known for a series of important computing, security and network standards that form the basis for major science- and business-based distributed computing (BES, GridFTP, DRMAA, JSDL, RNS, GLUE, UR, etc.).
• We also develop cloud, networking and data standards (OCCI, DFDL, WS-Agreement, NSI/NML, etc.) in wide use.
• Cooperative work agreements with other SDOs are in place.
OGF Standards
OGF has an extensive set of applicable standards related to advanced distributed grid and cloud computing and the associated storage management and network operation:
- Managing the trust eco-system (CA operations, AuthN/AuthZ)
- Job submission and workflow management (JSDL, BES)
- Network management (NSI, NML, NMC, NM)
- Federated identity management (FedSec-CG)
- Virtual organizations (VOMS)
- Secure, fast multi-party data transfer (GridFTP, SRM)
- Service agreements (WS-Agreement, WS-Agreement Negotiation)
- Data format description (DFDL)
- Cloud computing interfaces (OCCI)
- Distributed resource management APIs (DRMAA, SAGA, etc.)
- Firewall traversal (FiTP)
- Many others under development

OGF HPC Standards In Use In Industry
• DRMAA: Distributed Resource Management Application API — Grid Engine: Univa; Open Grid Scheduler: open source; TORQUE and related products: Adaptive Computing; PBS Works: Altair Engineering; GridWay: DSA Research; HTCondor: U. of Wisconsin / Red Hat.
• OGSA® Basic Execution Service Version 1.0 and BES HPC Profile — BES++ for LSF/SGE/PBS: Platform Computing; Windows HPC Server 2008: Microsoft Corporation; PBS Works (client only): Altair Engineering.
• JSDL: Job Submission Description Language (family of specifications) — BES++ for LSF/SGE/PBS and Platform LSF: Platform Computing; Windows HPC Server: Microsoft Corporation; PBS Works and PBS Pro: Altair Engineering; Tivoli Workload Scheduler: IBM Corporation.
• WS-Agreement (family of specifications) — ElasticLM License-as-a-Service: ElasticLM; BEinGrid SLA Negotiator, LM-Architecture and Framework: multiple partners; BREIN SLA Management Framework: multiple partners; WSAG4J, Web Services Agreement for Java (framework implementation): Fraunhofer SCAI.
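To make the job-description role concrete, the sketch below shows a minimal JSDL document of the kind the products above consume. The namespaces follow the JSDL specification (GFD.56); the job name, executable, argument and output filename are purely illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <!-- Illustrative job name -->
      <jsdl:JobName>example-job</jsdl:JobName>
    </jsdl:JobIdentification>
    <jsdl:Application>
      <!-- POSIX extension: what to run and where output goes -->
      <jsdl-posix:POSIXApplication>
        <jsdl-posix:Executable>/bin/echo</jsdl-posix:Executable>
        <jsdl-posix:Argument>hello</jsdl-posix:Argument>
        <jsdl-posix:Output>stdout.txt</jsdl-posix:Output>
      </jsdl-posix:POSIXApplication>
    </jsdl:Application>
    <jsdl:Resources>
      <!-- Request exactly one CPU -->
      <jsdl:TotalCPUCount>
        <jsdl:Exact>1.0</jsdl:Exact>
      </jsdl:TotalCPUCount>
    </jsdl:Resources>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
```

Because schedulers as different as LSF, PBS and Windows HPC Server accept this same description, a job written once can be submitted to any conforming BES endpoint.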
OCCI working group: http://occi-wg.org

Starting Point: OGF Documents
http://ogf.org/documents

Public Comment Process
http://redmine.ogf.org/projects/editor-pubcom/boards/

OGF Document Types
• Informational: informs the community about a useful idea or set of ideas.
• Experimental: informs the community about a useful experiment, testbed or implementation of an idea or set of ideas.
• Community Practice: informs the community of a common practice or process, with the objective of influencing the community and/or documenting its current practices.
• Recommendations: publishes a specification, analogous to an Internet Standards-track document. Recommendations are initially designated as "proposed" and, following further experience and review, may become full recommendations.
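As one example of a Recommendations-track specification in everyday use, OCCI defines a RESTful HTTP rendering for managing cloud resources. A request to create a compute resource might look like the following sketch; the host name and attribute values are illustrative, while the Category scheme and `occi.compute.*` attribute names come from the OCCI Infrastructure specification:

```http
POST /compute/ HTTP/1.1
Host: cloud.example.org
Content-Type: text/occi
Category: compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"
X-OCCI-Attribute: occi.compute.cores=2
X-OCCI-Attribute: occi.compute.memory=4.0
```

A conforming server responds with the location of the newly created resource, which the client can then query, act on (start/stop), or delete through the same uniform interface.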
• Further information, including guidance and advice, is contained in GFD.152: http://ogf.org/documents/GFD.152.pdf

EGI International Presence
• 361,300 CPU cores across 53 countries (1.44 M jobs/day)
• Storage: 235 PB disk (+69% yearly increase), 176 PB tape (+32% yearly increase)

Standards-Based International Collaboration
EGI Federated Cloud: a successful standards-based international federated cloud infrastructure.
• Members: 70 individuals, 40 institutions, 13 countries (updated July 2014)
• Participating sites include: Cyfronet, FZJ, OeRC, EGI.eu, CESNET, GWDG, TUD, CNRS IN2P3, KTH, Masaryk, INFN, FCTSG, CETA, CESGA, IGI, SARA, RADICAL, IFCA, STFC, SZTAKI, BSC, GRNET, Imperial, DANTE, LMU, IPHC, IISAS, SixSq, 100%IT, IFAE, SRCE
• Technologies: OpenStack, OpenNebula, StratusLab, CloudStack (in evaluation), Synnefo, WNoDeS
• Stakeholders: 23 Resource Providers, 10 Technology Providers, 7 User Communities, 4 Liaisons
• Standards: OCCI (control), OVF (images), X.509 (authN), CDMI (storage, under development)
Credit: David Wallom, Chair, EGI Federated Cloud Task Force

Federated Cloud Architecture
Standards used to enable federation:
• OCCI: VM image management
• OVF: VM image format
• GLUE2: resource discovery and description
• X.509: authentication
• CDMI: storage (under development)
• Others in development
Federation operation and monitoring:
• Information system (BDII)
• Monitoring (SAM)
• Accounting (APEL)
• AAI (Perun)
Each cloud site (academic or commercial) runs its own hypervisor stack (e.g. OpenStack, OpenNebula, EmotiveCloud, Okeanos) and exposes the FedCloud interfaces; domain-specific services, user interfaces and virtual organisations are layered on top.
Open to new members: www.egi.eu. Join as a user, or as an IaaS/PaaS/SaaS service provider: http://go.egi.eu/cloud

Example: Worldwide LHC Computing Grid
• ~450,000 CPU cores
• ~430 PB storage
• Typical data transfer rate: ~12 GByte/sec
• Total worldwide grid capacity across all grids and VOs: ~2x WLCG

XSEDE: The Next Generation of US National Supercomputing Infrastructure
The Role of Standards for Risk Reduction and Inter-operation in XSEDE (LSN-MAGIC Meeting, February 22, 2012)
Cloud and grid standards now power some of the largest academic supercomputing infrastructures in the world!
XSEDE Services Layer: simple services combined in many ways. Examples (not a complete list):
• Resource Namespace Service 1.1
• OGSA Basic Execution Service
• OGSA WSRF BP (metadata and notification)
• OGSA-ByteIO
• GridFTP
• JSDL, BES, BES HPC Profile
• WS-Trust Secure Token Services
• WS-I BSP for transport of credentials
• ... (more than we have room to cover here)
Basic message: XSEDE represents best-of-breed engagement of open computing standards with the US cyberinfrastructure.

US National Cyberinfrastructure Grids
XSEDE's goals: extend the impact of cyberinfrastructure; prepare the current and next generation; promote an open, robust, collaborative and innovative ecosystem; and, in collaboration with other CI groups and projects, adopt, create and disseminate technical knowledge, expertise and support services.
Systems include: Blacklight (shared memory), Trestles (4k Xeon cores, 160 GB SSD/flash), Blue Waters (leadership class), FutureGrid, Gordon (data intensive, 64 TB memory, 300 TB flash), Yellowstone (geosciences), Darter (24k cores), Nautilus (visualization), SuperMIC (380 nodes, 1 PF: Ivy Bridge, Xeon Phi, GPU), Wrangler (data analytics), Stampede (460K cores with Xeon Phi; upgrade in 2015), Keeneland (CPU/GPGPU), Comet ("long-tail science", 47k cores / 2 PF), Maverick (visualization, data analytics), Open Science Grid (high throughput, 124 sites), plus ACI-REF campus sharing and NSF Cloud (shared).
Over 13 million service units/day (about 3 million core hours/day) were typically delivered as of 2014 across all XSEDE supercomputing sites, totaling about 1.6 billion core hours per year.
Credit: Irene Qualters, US National Science Foundation

Why Open Standards? (LSN-MAGIC Meeting, February 22, 2012)
• Risk reduction
• Best-of-breed mix-and-match
• Allows innovation/competition at more interesting layers
• Facilitates interoperation with other infrastructures
Takeaway message: the use of standards permits XSEDE to interoperate with other infrastructures, reduces risks (including vendor lock-in), and allows us to focus on higher-level capabilities and less on the mundane.
Credit: Andrew Grimshaw

Open Science Grid: 600k - 800k jobs/day, distributed across 124 sites. Open Science Grid currently consists of over 124 geographical sites, operating on a wide variety of computing systems.

OGF Cooperative Agreements In Place as of May 2015
OGF and IEEE: OGF co-sponsors activities at many IEEE conferences; pursuing engagements with P2301 & P2302.