IntraChip Optical Networks for a Future Supercomputer-on-a-Chip

Slide 1 – Title
IntraChip Optical Networks for a Future Supercomputer-on-a-Chip
Jeffrey Kash, IBM Research
29 February 2008, © 2008 IBM Corporation

Slide 2 – Acknowledgements
• IBM Research: Yurii Vlasov, Clint Schow, Will Green, Fengnian Xia, Jose Moreira, Eugen Schenfeld, Jose Tierno, Alexander Rylyakov
• Columbia University: Keren Bergman, Luca Carloni, Rick Osgood
• Cornell University: David Albonesi, Alyssa Apsel, Michal Lipson, Jose Martinez
• UC Santa Barbara: Daniel Blumenthal, John Bowers

Slide 3 – Outline
• Optics in today's HPCs
• Trends in microprocessor design – multi-core designs for power efficiency
• Vision for future IntraChip Optical Networks (ICON) – 3D stack of logic, memory, and global optical interconnects
• Required devices and processes – low power and small footprint

Slide 4 – Today's high-performance server clusters: racks are mainly electrically connected, but going optical
• Real systems are 10-100s of server racks plus several racks of switches (photos: NEC Earth Simulator during installation, all copper; IBM Federation Switch for ASCI Purple at LLNL, backside of a switch rack – copper cabling is bulky, heavy, hard to bend, and blocks air cooling; optical cabling is very organized but more expensive)
• Rack-to-rack interconnects (≤100 m) are now moving to optics; interconnects within racks (≤5 m) are still primarily copper
• Over time, optics will increasingly replace copper at shorter and shorter distances, moving systems from all-electrical toward all-optical
  – Backplane and card interconnects (≤1 m) come after rack-to-rack
  – The trend will accelerate as bitrates in the media increase (2.5 Gb/s → 5 Gb/s → 10 Gb/s → 20 Gb/s?) and costs come down (target ~$1/Gb/s)
• Example: Snap 12 module, 12 Tx or Rx channels at 2.5 Gb/s, placed at the back of the rack

Slide 5 – Beyond bitrate, density is a major driver of optics
• Connectors: HM-Zd 10 Gb/s connector, 40 differential pairs in a 25 mm width, versus MT fiber ferrule, 48 fibers (extendable to 72 or 96) in a 7 mm width
• Cables: high-speed copper cabling versus fiber ribbon
• On-package interconnect: electrical transmission lines on a ~400 μm scale versus 35 × 35 μm optical waveguides on a 62.5 μm pitch
• But optics must be packaged deep within the system to achieve these density improvements (a rough comparison based on these figures is worked out in the code sketch after the packaging slide)

Slide 6 – Packaging of optical interconnects is critical
• Better to put optics close to the logic rather than at the card edge
  – Avoids the distortion, power, and cost of an electrical link on each end of the optical link
  – Breaks through the pin-count limitation of multi-chip modules (MCMs)
• Optics on-MCM: the opto module (laser + driver IC) sits on the ceramic carrier, with only ~1.7-2 cm of traces plus ~1 cm of flex to the fiber; operation to >15 Gb/s, no equalization required
• Optics on-card: the opto module sits near the NIC and an optical bulkhead connector, with >12.5 cm of traces on the organic card (with or without via stubs); operation at 10 Gb/s, bandwidth limited by the number of pins, equalization required
• Colgan et al., "Direct integration of dense parallel optical interconnects on a first level package for high-end servers," Proc. 55th ECTC, vol. 1, pp. 228-233, 31 May-3 June 2005.
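As a rough illustration of the density argument, here is a minimal Python sketch using only the connector and waveguide figures quoted on the density slide; the "channels per mm" framing is mine, not a number from the talk.

```python
# Edge density of the two connectors on the density slide:
# HM-Zd 10 Gb/s connector: 40 differential pairs in a 25 mm width
# MT fiber ferrule: 48 fibers in a 7 mm width (extendable to 72 or 96)
hmzd_pairs, hmzd_width_mm = 40, 25
mt_fibers, mt_width_mm = 48, 7

copper_per_mm = hmzd_pairs / hmzd_width_mm          # 1.6 channels/mm
fiber_per_mm = mt_fibers / mt_width_mm              # ~6.9 channels/mm
print(fiber_per_mm / copper_per_mm)                 # optics ~4x denser at the connector

# On-package the gap grows: 35 x 35 um waveguides on a 62.5 um pitch
# versus electrical transmission lines on a ~400 um scale.
print(400 / 62.5)                                   # ~6.4x denser waveguide pitch
```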
Slide 7 – Current architecture: electronic packet switching
• The current architecture (electronic switch chips, interconnected by electrical or optical links, in multi-stage networks; photo: the central switch racks of MareNostrum, Barcelona Supercomputing Center) works well now
  – Scalable bandwidth and application-optimized cost (multiple switches in parallel)
  – Modular building blocks (many identical switch chips and links)
• ...but it becomes challenging in the future
  – Switch-chip throughput stresses the hardest aspects of chip design: I/O and packaging
  – Multi-stage networks will require multiple E-O-E conversions: an N-stage Exabyte/s network means N × Exabytes/s of cost and N × Exabytes/s of power (a rough sketch of this scaling appears after the Moore's-law discussion below)

Slide 8 – Possible new architecture: optical circuit switching
(Optics is not electronics; maybe a different architecture can use it better.)
• All-optical packet switches are hard, e.g. the IBM/Corning OSMOSIS project
  – Expensive, and required a complex electrical control network
  – No optical memory or optical logic
  – Probably not cost-competitive against electronic packet switches, even in 2015-2020
• But optical circuit switches (OCS, ~10 millisecond switching time) are available today
  – Several technologies (MEMS, piezo-, thermo-, ...)
  – Low power: OCS power is essentially zero compared to an electronic switch, and there is no extra O-E-O conversion
  – But they require single-mode optics
• MEMS-based OCS hardware is commercially available (Calient, Glimmerglass, ...): ~20 ms switching time, <100 W (concept figures: one input fiber steered among output fibers by a 2-axis MEMS mirror, one channel shown)
• In ~2015, silicon photonics could bring the switching time to ~1 ns; does 6 orders of magnitude make the approach more suitable to general-purpose computing?

Slide 9 – Outline (repeated; next section: trends in microprocessor design)

Slide 10 – Chip MultiProcessors (CMPs)
• IBM Cell, Sun Niagara, Intel Montecito, ... (note that the processors on the chip are not identical)
• IBM Cell parameters (the bandwidth figures are checked in the short sketch after the Moore's-law slide):
  – Technology process: 90 nm SOI with low-κ dielectrics and 8 metal layers of copper interconnect
  – Chip area: 235 mm²
  – Number of transistors: ~234M
  – Operating clock frequency: 4 GHz
  – Power dissipation: ~100 W
  – Power dissipation due to global interconnect: 30-50%
  – Intra-chip, inter-core communication bandwidth: 1.024 Tb/s at 2 Gb/s per lane (four shared buses, each 128 bits of data + 64 bits of address)
  – I/O communication bandwidth: 0.819 Tb/s (includes external memory)

Slide 11 – ...but perhaps a hierarchical design with several cores grouped into a supercore will emerge ~2017
• Multiple "supercores" on a chip
• Electrical communication within a supercore
• Optical communication between supercores
• (After Moray McLaren, HP Labs)

Slide 12 – Theme: how to continue getting exponential performance increases over time from silicon ICs (a Moore's-Law extension) even though CMOS scaling by itself is no longer enough
• The lever is communications and architecture: an increased number of processors, rather than the uniprocessor performance to which the original Moore's-Law curve applied
• (Figure: performance (log) versus time (linear), showing transistor count and uniprocessor performance, with tera-scale today, peta-scale ~2012, and exa-scale ~2017; can Si photonics provide this performance increase?)
• Reference point: the IBM Cell processor has 9 processors and ~200 GFLOPS, with ~100 GB/s of on- and off-chip bandwidth (0.5 B/FLOP)
• Bandwidth requirements must scale with system performance, at roughly 1 Byte/FLOP
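As a quick check of the Cell numbers and of the ~1 Byte/FLOP scaling rule, here is a minimal Python sketch; the inputs are the figures quoted on the slides, and the exa-scale line simply applies the stated rule at 10^18 FLOP/s.

```python
# Cell intra-chip bus bandwidth, from the CMP slide:
# four shared buses, 128 data bits each, 2 Gb/s per lane (data bits only).
buses, data_bits_per_bus, gbps_per_lane = 4, 128, 2
print(buses * data_bits_per_bus * gbps_per_lane / 1000)  # 1.024 Tb/s, as tabulated

# Bytes per FLOP for today's Cell: ~100 GB/s of on/off-chip BW at ~200 GFLOPS.
print(100 / 200)                                          # 0.5 B/FLOP

# At the stated ~1 Byte/FLOP requirement, an exa-scale machine (1e18 FLOP/s,
# ~2017 on the slide) needs on the order of 1e18 B/s, i.e. ~1 EB/s of bandwidth.
bytes_per_flop, exa_flops = 1.0, 1e18
print(bytes_per_flop * exa_flops)                         # 1e18 B/s = 1 EB/s
```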
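The cost and power penalty claimed on the electronic-packet-switching slide scales linearly with the number of stages, because every stage re-terminates the optics at full bandwidth. A minimal sketch follows; the 10 pJ/bit per E-O-E hop and the ~$1/Gb/s figure (borrowed from the earlier link-cost target) are illustrative assumptions, not numbers given for the switch itself.

```python
def n_stage_eoe(n_stages, bw_exabytes_per_s, pj_per_bit, dollars_per_gbps):
    """Power and cost of an n-stage electronic packet-switched network.

    Each stage re-terminates the optics, so the full delivered bandwidth pays
    for an E-O-E conversion (and switch silicon) n_stages times over.
    """
    bits_per_s = bw_exabytes_per_s * 1e18 * 8                      # EB/s -> bit/s
    power_mw = n_stages * bits_per_s * pj_per_bit * 1e-12 / 1e6    # megawatts
    cost = n_stages * (bits_per_s / 1e9) * dollars_per_gbps        # dollars
    return power_mw, cost

# Assumed: 3 stages, 1 EB/s of delivered bandwidth, 10 pJ/bit per hop, $1/Gb/s.
print(n_stage_eoe(n_stages=3, bw_exabytes_per_s=1, pj_per_bit=10, dollars_per_gbps=1))
```

Whatever the per-hop constants, both terms grow linearly with N, which is the argument for taking stages (and conversions) out of the path.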
Slide 13 – Outline (repeated; next section: vision for future IntraChip Optical Networks)

Slide 14 – Inter-core communication trends: network on chip
• Intel Polaris 2007 research chip: 100 million transistors, 80 cores (tiles), 275 mm²
• i.e., 3D integration – so why not go to an optical plane, too?
• Higher bandwidth and lower power with optics?

Slide 15 – Photonics in the multi-core intra-chip communications network: photonics changes the rules
• Optics: modulate and receive an ultra-high-bandwidth data stream once per communication event
  – A broadband optical switch fabric uses very little power, so it is highly scalable
  – Off-chip and on-chip links can use essentially the same technology, so much more off-chip bandwidth is available
• Electronics: buffer, receive, and re-transmit at every switch; off-chip I/O is pin-limited and really power hungry
• (Figure: an array of TX/RX blocks illustrating the on-chip optical network)

Slide 16 – Integration concept: processor system stack
• 3D layer stacking will be prevalent in the 22 nm timeframe; intra-chip optics can take advantage of this technology
• A photonics layer (with its supporting electrical circuits) is more easily integrated with high-performance logic and memory layers
• Layers can be separately optimized for performance and yield
• The stack, tied together by BEOL vertical electrical interconnects: a processor plane with local memory cache, three memory planes, and a photonic network interconnect plane (optical devices, electronic drivers and amplifiers, and the electronic control network) carrying the optical off-chip interconnects

Slide 17 – Vision for silicon photonics: intra-chip optical networks
• Pack ~36 IBM Cell processor "supercores" on a single ~600 mm² die in 22 nm CMOS
  – Each Cell supercore contains 9 cores (PPE + 8 SPEs), for 324 processors on one chip
  – Power and area dramatically lower than today at comparable clock speeds
  – Each supercore is electrically interconnected internally
  – Communication between supercores and off-chip is optical
  – BW between supercores is similar to today's off-Cell BW (i.e., 1-2 Tbps per
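The slide text breaks off mid-sentence above. As a minimal sketch of the numbers that are given, assuming the truncated figure means 1-2 Tb/s of optical bandwidth per supercore (comparable to the 0.819 Tb/s off-chip I/O of today's Cell):

```python
supercores = 36
cores_per_supercore = 9        # PPE + 8 SPEs in each Cell-like supercore
die_area_mm2 = 600             # ~600 mm2 die in 22 nm CMOS

print(supercores * cores_per_supercore)    # 324 processors on one chip
print(die_area_mm2 / supercores)           # ~16.7 mm2 of die area per supercore

# Assumption (the slide is cut off): each supercore gets 1-2 Tb/s of optics.
for tbps_per_supercore in (1, 2):
    # Rough aggregate if each of the 36 supercores injects 1-2 Tb/s: 36-72 Tb/s.
    print(supercores * tbps_per_supercore)
```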