Software Defined Networking for Big-Data Science


Software Defined Networking for big-data science
Eric Pouyoul, Chin Guok, Inder Monga (presenting)
SRS presentation, November 15th, Supercomputing 2012

Acknowledgements
Many folks at ESnet who helped with the deployment and planning:
• Sanjay Parab (CMU), Brian Tierney, John Christman, Mark Redman, Patrick Dorn, among other ESnet NESG/OCS folks
Ciena collaborators:
• Rodney Wilson, Marc Lyonnais, Joshua Foster, Bill Webb
SRS team:
• Andrew Lee, Srini Seetharaman
DOE ASCR research funding has made this work possible.

Lawrence Berkeley National Laboratory | U.S. Department of Energy | Office of Science

ESnet: World's Leading Science Network
[Map: ESnet backbone topology showing 100G IP hubs, a 4x10G IP hub, DOE lab sites (PNNL, LBNL, SLAC, ANL, FNAL, BNL, PPPL, ORNL, JLAB, and others), and major R&E and international peering connections including CANARIE, Internet2, NLR, GÉANT, NORDUnet, CERN/LHCONE, and GLORIAD.]

Opportunities for innovation (1)
Elephant flows: 'big-data' movement for science, end-to-end
Opportunity (2): Global multi-domain collaborations like the LHC
• Detector: 1 PB/s
• Level 1 and 2 triggers: O(1-10) meters
• Level 3 trigger: O(10-100) meters
• LHC Tier 0 (CERN Computer Center): deep archive, sends data to Tier 1 centers at ~50 Gb/s (25 Gb/s ATLAS, 25 Gb/s CMS) over 500-10,000 km
• LHC Tier 1 data centers feed Tier 2 analysis centers (universities/physics groups), interconnected by the LHC Open Network Environment (LHCONE) and the LHC Optical Private Network (LHCOPN). (Source: Bill Johnston)

CERN → T1 distances (miles / km):
• France: 350 / 565
• Italy: 570 / 920
• UK: 625 / 1000
• Netherlands: 625 / 1000
• Germany: 700 / 1185
• Spain: 850 / 1400
• Nordic: 1300 / 2100
• USA – New York: 3900 / 6300
• USA – Chicago: 4400 / 7100
• Canada – BC: 5200 / 8400
• Taiwan: 6100 / 9850

Software-Defined Networking

What is Software-Defined Networking? (as defined by Scott Shenker, October 2011)
http://opennetsummit.org/talks/shenker-tue.pdf
"The ability to master complexity is not the same as the ability to extract simplicity"
"Abstractions key to extracting simplicity"
"SDN is defined precisely by these three abstractions: distribution, forwarding, configuration"

Fundamental network abstraction: an end-to-end circuit
• Wavelength, PPP, MPLS, L2TP, GRE, NSI-CS...
• A to Z, via switching points, store and forward, transformation...
• Simple, point-to-point, provisionable
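The circuit abstraction above — a point-to-point, provisionable path between an A and a Z endpoint — can be made concrete with a small sketch. The `Circuit` class and `provision` function below are illustrative only, not the actual OSCARS or NSI-CS API:

```python
from dataclasses import dataclass, field

@dataclass
class Circuit:
    # A provisionable point-to-point circuit between two endpoints (A and Z).
    a_end: str
    z_end: str
    bandwidth_gbps: float
    path: list = field(default_factory=list)  # intermediate switching points

def provision(circuit, link_capacity_gbps):
    """Admit the circuit only if every hop along the path has spare capacity."""
    hops = [circuit.a_end, *circuit.path, circuit.z_end]
    links = list(zip(hops, hops[1:]))
    ok = all(link_capacity_gbps.get(link, 0) >= circuit.bandwidth_gbps
             for link in links)
    if ok:  # reserve the bandwidth on each hop
        for link in links:
            link_capacity_gbps[link] -= circuit.bandwidth_gbps
    return ok

capacity = {("LBNL", "DENVER"): 100.0, ("DENVER", "BNL"): 100.0}
c = Circuit("LBNL", "BNL", bandwidth_gbps=40.0, path=["DENVER"])
print(provision(c, capacity))        # True: both hops have 100G spare
print(capacity[("LBNL", "DENVER")])  # 60.0 remaining after reservation
```

The point of the abstraction is that the user only names A, Z, and a bandwidth; the switching points in between are the provider's concern.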
New network abstraction: "WAN Virtual Switch"
• Simple, multipoint, programmable
• Configuration abstraction:
  • Expresses desired behavior
  • Hides implementation on physical infrastructure
• It is not only about the concept, but the implementation

Simple example: one virtual switch per collaboration
[Map: the same ESnet backbone topology, with a single WAN virtual switch overlaid that connects ALCF, OLCF, and NERSC.]

Programmability
• The site domain's OpenFlow controller speaks the OF protocol to the WAN virtual switch in the WAN domain
• Exposes a 'flow' programming interface leveraging the standard OF protocol

"Programmable" by end-sites
• Program flows: Science Flow 1, Science Flow 2, Science Flow 3
• End-site applications (App 1, App 2) and OF controllers at each site program their flows across the WAN virtual switch, over the multi-domain wide area network
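The flow programming the slides describe amounts to installing prioritized match/action entries on the virtual switch. The sketch below models that idea in miniature; the field names and `FlowTable` API are illustrative, not the actual OpenFlow wire protocol:

```python
def matches(match, packet):
    # A flow entry matches when every specified field equals the packet's value;
    # an empty match ({}) is a wildcard that matches any packet.
    return all(packet.get(k) == v for k, v in match.items())

class FlowTable:
    def __init__(self):
        self.entries = []  # (priority, match, action), highest priority first

    def add_flow(self, priority, match, action):
        self.entries.append((priority, match, action))
        self.entries.sort(key=lambda e: -e[0])

    def lookup(self, packet):
        for _, match, action in self.entries:
            if matches(match, packet):
                return action
        return "packet_in"  # table miss: punt the packet to the controller

table = FlowTable()
# Science Flow 1: steer a data-transfer flow out a dedicated port.
table.add_flow(200, {"ip_dst": "10.0.2.7", "tcp_dst": 5000}, "output:2")
table.add_flow(100, {}, "output:1")  # default path for everything else

print(table.lookup({"ip_dst": "10.0.2.7", "tcp_dst": 5000}))  # output:2
print(table.lookup({"ip_dst": "10.0.9.9", "tcp_dst": 22}))    # output:1
```

This is what makes the virtual switch "programmable by end-sites": a site's controller decides which science flows take which paths, without knowing the WAN's physical topology.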
Many collaborations, many virtual switches
• Multiple WAN virtual switches coexist over the same multi-domain wide area network

SRS demonstration physical topology
• Ciena 5410 @ BNL, @ Ciena booth, @ ANL (with DTNs)
• SRS Brocade @ SCinet
• NEC IP8800 @ LBL (with DTNs)
• Interconnected by OSCARS virtual circuits
DTNs: Data Transfer Nodes

Virtual switch implementation: mapping the abstract model to the physical
Create a virtual switch:
• Specify edge OF ports
• Specify backplane topology and bandwidth
• Policy constraints like flowspace
• Store the switch into a topology service
Virtual: a single SRS virtual switch with edge ports A, B, C, D. Physical: several OF switches, each carrying one or more of those edge ports.

WAN virtual switch: deploying it as a service
• Policy/isolation of customer OF control: the end-site customer's OF controller connects through FlowVisor via the OpenFlow API
• Virtual switch application software stack: virtual switch controller and OSCARS client
• Infrastructure software, slicing and provisioning: FlowVisor, OSCARS, and the OF switches of the wide area network

Example of ping across the WAN virtual switch
1. H1 pings H2
2. Packet_in to the controller
3. Need to ARP
4. ARP request flooded
5. ARP response from H2
6. Virtual MAC addresses learned and mapped to the real flow
7. flow_mod installed
8. Ping reaches H2

What does this mean for networking?
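The eight-step ping walkthrough above is essentially learning-switch logic in the controller: on a packet_in it learns the source MAC's port, floods while the destination is unknown (the ARP steps), and installs a flow_mod once it is known. The sketch below is a simplified model of that logic, not a real OpenFlow controller:

```python
class LearningController:
    def __init__(self):
        self.mac_to_port = {}      # learned MAC-address-to-port table
        self.installed_flows = []  # flow_mods pushed down to the switch

    def packet_in(self, src_mac, dst_mac, in_port):
        # Step 6: learn (or refresh) where the source MAC lives.
        self.mac_to_port[src_mac] = in_port
        if dst_mac in self.mac_to_port:
            out_port = self.mac_to_port[dst_mac]
            # Step 7: install a flow so later packets bypass the controller.
            self.installed_flows.append((dst_mac, out_port))
            return f"output:{out_port}"
        # Steps 3-4: destination unknown, flood (carries the ARP request).
        return "flood"

ctrl = LearningController()
print(ctrl.packet_in("mac-H1", "mac-H2", in_port=1))  # flood (ARP request)
print(ctrl.packet_in("mac-H2", "mac-H1", in_port=3))  # output:1 (ARP response)
print(ctrl.packet_in("mac-H1", "mac-H2", in_port=1))  # output:3 (ping flows)
```

After the flow_mod is installed, subsequent ping packets are forwarded entirely in the dataplane, which is why only the first packet of a flow touches the controller.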
• User OpenFlow SDN controller: the customer/user control plane, with policy and isolation
• WAN virtual switch network service interface: a programmable service provisioning plane over the multi-domain network
• Legacy and OpenFlow control plane: the end-to-end dataplane
• Creation of a programmable network provisioning layer
• Sits on top of the "network OS"

Summary
• Powerful network abstraction (analogous to the files/storage abstraction in computing)
• Benefits:
  • Simplicity for the end-site
    • Works with an off-the-shelf, open-source controller
    • Topology simplification
  • Generic code for the network provider
    • The virtual switch can be layered over optical, routed, or switched network elements
    • OpenFlow support is needed on edge devices only; the core stays the same
  • Programmability for applications
    • Allows end-sites to innovate and use the WAN effectively

Future work
• Harden the architecture and software implementation
  • Move from experiment to test service
• Verify scaling of the model
  • Using virtual machines and other emulation environments
• Automation and intelligent provisioning
  • Work over multi-domain
  • Wizards for provisioning
  • Dynamic switch backplane
• Create recurring abstractions
  • Virtual switch in the campus
  • How do we deal with a "network" of virtual switches?

Questions: please contact imonga at es.net
www.es.net
Thank you!

Backup slide: Computer virtualization