The future of packaging with silicon photonics

By Deborah Patterson [Patterson Group]; Isabel De Sousa, Louis-Marie Achard [IBM Canada, Ltd.]

It has been almost a decade since the introduction of the iPhone, a device that so successfully blended sleek hardware with an intuitive user interface that it effectively jump-started a global shift in the way we now communicate, socialize, manage our lives and fundamentally interact. Today, smartphones and countless other devices allow us to capture, create and communicate enormous amounts of content. The explosion in data, storage and information distribution is driving extraordinary growth in internet traffic and cloud services. The sidebar entitled, "Trends driving data center growth," provides an appreciation for the incredible increase in data generation and its continued growth through 2020.

To process and manage the unabated growth in data traffic, silicon photonics will be used to define new data center architectures. This article discusses the impact that silicon photonics will have on data center technology trends, and on the next-generation microelectronic packaging developments that address optical-to-electrical interconnection as photon and electron conversion moves to the level of the package and microelectronic (logic) chip.

Data center dynamics

The large-scale restructuring of data centers is one of the most dynamic transformations taking place in information technology. The need to re-architect the data center is being propelled by the staggering surge in shared and stored data, along with an increasing demand to effectively interpret the tremendous amounts of content being generated. In addition to the huge growth in data traffic, the infrastructure supporting the Internet of Everything (IoE) will emphasize real-time responsiveness between people and/or objects. The next wave in data processing and data traffic management will require the ability to support cloud computing, cognitive computing and big data analysis, along with the necessary speed and capacity to deliver a timely response.

Optics have traditionally been employed to transmit data over long distances because light can carry considerably more information content (bits) at faster speeds. Optical transmission also becomes more energy efficient than electronic alternatives as transmission length and bandwidth increase. As the need for higher data transfer speeds at greater baud rates and lower power levels intensifies, the trend is for optics to move closer to the die. Optoelectronic interconnect is now being designed to interface directly to the processor, application-specific integrated circuit (ASIC) or field-programmable gate array (FPGA) to support switching, transceiver, signal conditioning, and multiplexer/demultiplexer (Mux/Demux) applications. Figure 1 shows a forecast for silicon photonics adoption through 2025, with data centers dominating initial growth. Silicon photonics is also being developed to support applications as diverse as high-performance computing and optical sensors.

Figure 1: Silicon photonics growth rates will initially be dominated by applications within the data center. SOURCE: Yole Développement, Oct. 2016
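To make the length-and-bandwidth argument concrete, the short sketch below implements a toy energy-per-bit comparison. It is an editorial illustration added here, not a model from the article or its sources: the coefficients (base_pj, pj_per_m, rate_penalty, conversion_pj) are hypothetical placeholders chosen only to show the qualitative behavior, namely that a fixed electro-optic conversion cost is quickly amortized as reach and line rate grow.

def electrical_pj_per_bit(length_m, gbps, base_pj=1.0, pj_per_m=5.0, rate_penalty=0.02):
    """Toy model: copper needs progressively more equalization/retiming energy
    as link length and line rate increase (all coefficients are hypothetical)."""
    return base_pj + length_m * pj_per_m * (1.0 + rate_penalty * gbps)

def optical_pj_per_bit(length_m, conversion_pj=15.0, pj_per_m=0.01):
    """Toy model: optics pay a fixed electro-optic conversion cost up front,
    after which energy per bit is nearly independent of distance."""
    return conversion_pj + length_m * pj_per_m

def crossover_length_m(gbps, step_m=0.1, max_m=1000.0):
    """First link length at which the optical link is cheaper per bit, if any."""
    length = step_m
    while length <= max_m:
        if optical_pj_per_bit(length) < electrical_pj_per_bit(length, gbps):
            return length
        length += step_m
    return None

if __name__ == "__main__":
    for rate in (10, 25, 100):  # line rates in Gb/s
        print(f"{rate:>3} Gb/s: optics cheaper per bit beyond ~{crossover_length_m(rate):.1f} m")

With these made-up numbers the crossover distance shrinks from a few meters at 10Gb/s to under a meter at 100Gb/s, which is the qualitative trend behind the push to move optics closer to the die.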
The data center's need for speed and capacity continues to grow. Figure 2 illustrates forecasted data center traffic through 2019. One of the more notable trends is that almost three-quarters of all data center traffic will originate from within the data center. The recognition of this statistic, compounded by the enormous increase in data traffic, has significantly altered the approach to data center design. Besides upgrading optical cabling, links and other interconnections, the legacy data center, comprised of many off-the-shelf components, is in the process of a complete overhaul that is leading to significant growth and change in how transmit, receive, and switching functions are handled, especially in terms of next-generation Ethernet speeds. In addition, as 5G ramps, high-speed interconnect between data centers and small cells will also come into play. These roadmaps will fuel multi-fiber waveguide-to-chip interconnect solutions, laser development, and the application of advanced multi-chip packaging within the segment.

Figure 2: Data center traffic and bit rates show remarkable growth. The vast majority of data center traffic will reside within the data center. SOURCE: Cisco Global Cloud Index, 2014–2019

The high-end or "Hyperscale" data center is massive in both size and scalability. It provides a single compute architecture made up of small individual servers and peripheral functions such as memory, power management, and networking, all woven together through layers of redundant systems. Hyperscale data centers are represented by companies such as Amazon, Microsoft, Google, and Web 2.0 companies like Facebook. They can house over a hundred thousand to a million servers. Conversely, alternative data center architectures are also under consideration. An example is the build-out of many small data centers linked together through heterogeneous networks. No matter which approach, moving the enormous volume of data within and between data centers will continue to require increased speed and bandwidth with lower latencies and greater power efficiencies. Silicon photonics is positioned to address these fundamental conditions.

Servers will be reconfigured to support higher speeds along with new componentry at 10 Gigabit Ethernet (GbE), 25GbE, 40GbE, 100GbE and 400GbE. Meanwhile, new standards such as 1TbE will eventually be introduced. Figure 3 illustrates typical data center connections, nomenclature, Ethernet speeds and link distances. Because each data center server generation lasts 3-5 years and the infrastructure (buildings) typically lasts 3-5 generations, the technology choices incorporated now will become the legacy infrastructure over the next 10-25 years. Therefore, determining which technologies deliver maximum flexibility, capacity and reduction in total system cost of ownership is non-trivial, and leading-edge development emphasizing concurrent semiconductor and package design is receiving greater visibility.

Data centers are being "disaggregated" so that compute, storage, memory, networking and power conditioning/distribution can be reorganized and redistributed systemically. The signal routing efficiencies, better thermal management, and densification of connections brought forth by well-designed package integration are expected to improve system upgradability, flexibility, and reliability while lowering cost. Silicon photonics will be especially useful in enabling the high communication bandwidth requirements of disaggregated systems.

Transceivers represent the initial high-volume application for silicon photonics as optics migrate as close as possible to the origin of the data. Outside the data center, optical transceivers are used in transport, enterprise, carrier routing and switching markets. Within the data center, transceivers are located with each server. However, they are mounted at the edge of the board, resulting in large distances between the optical components and the processor chip. The IBM research team in Yorktown, NY, has proposed advanced packaging designs in which the silicon photonic die can be integrated directly into the processor module, bypassing today's standard transceiver housings. Integrating the transceiver functions within the silicon photonic die or processor module is an area of concentrated activity.

Ethernet switches, on the other hand, currently rely on electronic interconnect, but are another targeted area of silicon photonic interest. This is because data center architecture is undergoing a fundamental shift away from traditional three-tiered designs for routing information. Power consumption and latency have increased as data traffic volumes have surged. This is due to too many "hops," or handoffs, when routing data between source and destination (controlled by routers and switches). "Leaf-spine" network architecture is replacing the older tiered approach. The leaf-spine network is controlled by ASIC switches. A "leaf" is typically the top-of-rack (ToR) switch that links to all servers within a common tower or rack (shown in Figure 3). The next layer of switches is represented by the "spine." The spine is a higher capacity switch (40 or 100 gigabits per second per link) that connects to leaf switches (across the server racks), to other spine switches, and to the next level of "Core" switches. This is known as a "flat architecture" and it is implemented through high-capacity ASIC switches. The largest ASICs currently offer 32 x 100GbE ports. ASIC switch capacity and port count dictate the size of the leaf-spine network, e.g., the number of servers that can be linked together. Higher speed ports drive lower bandwidth costs per Gb/s, and higher capacity switches drive the total number of links (minimizing the number of data hops, associated latency, and power consumption).
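The relationship between ASIC port count and fabric size can be made concrete with standard two-tier Clos arithmetic. The sketch below is an editorial illustration rather than anything specified in the article: the 32-port devices, the 8-uplink split, the 100GbE speeds, and the assumption of a single link between every leaf and every spine are example values only.

def leaf_spine_capacity(leaf_ports, spine_ports, uplinks_per_leaf,
                        server_speed_gbps, uplink_speed_gbps):
    """Basic limits of a two-tier leaf-spine fabric, assuming one link
    between every leaf switch and every spine switch."""
    downlinks_per_leaf = leaf_ports - uplinks_per_leaf   # ports left for servers
    max_spines = uplinks_per_leaf                        # one uplink per spine
    max_leaves = spine_ports                             # each spine reaches every leaf
    max_servers = max_leaves * downlinks_per_leaf
    # Oversubscription: server-facing bandwidth vs. uplink bandwidth per leaf
    oversubscription = (downlinks_per_leaf * server_speed_gbps) / (
        uplinks_per_leaf * uplink_speed_gbps)
    return max_spines, max_leaves, max_servers, oversubscription

if __name__ == "__main__":
    # Hypothetical example: 32 x 100GbE switch ASICs in both tiers,
    # 8 of the 32 leaf ports reserved as spine uplinks, servers attached at 100GbE.
    spines, leaves, servers, osr = leaf_spine_capacity(
        leaf_ports=32, spine_ports=32, uplinks_per_leaf=8,
        server_speed_gbps=100.0, uplink_speed_gbps=100.0)
    print(f"spines={spines}, leaves={leaves}, max servers={servers}, "
          f"oversubscription={osr:.1f}:1")

Under these assumed numbers the fabric tops out at 8 spines, 32 leaves and 768 server ports at 3:1 oversubscription, and traffic between any two servers crosses at most three switches (leaf, spine, leaf) — the flat behavior that replaces the deeper tiered designs described above.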
In addition to the electrical-domain switching described above, there is significant interest in on-chip switching.

Optoelectronic integration of transceivers and switches

Because optics can transport more data at significantly lower power than electronic transmission, the intent is to drive optoelectronic conversion as close to the chip and microelectronic packaging level as possible. The data remains in optical form – leveraging high optical densities – until it enters the package and interfaces with the silicon photonic die.