
The past, present and future of top data center components
By Stephen J. Bigelow

A photostory

The time traveler’s guide to data center planning

Remember your first server? First virtual cluster? With Moore's Law pushing faster, cheaper and more powerful hardware in each product cycle, it's worth taking a look at how far we've come and what's ahead before tackling data center planning.

It's not all about more, more, more -- tomorrow's data center will focus on synchronizing hardware with its application workload, scaling precisely with the business need, and of course doing more with less overhead and power demand.

Look at how far data center components have come since the first mainframes coexisted with poodle skirts and the advent of rock 'n' roll, and what to expect from the future in servers, mainframes, networking, storage and more.

Courtesy of Express and Star/Thinkstock

Forget the '90s -- workloads demand new types of servers

In just a few decades, servers have gone from large, UNIX-based systems to smaller, generic, standards-based commodity computing platforms.

The types of servers that rule the data center today wouldn't recognize early computing systems. The IBM AS/400 Advanced 36 Model 436 exemplified 1990s server technologies, with one single-chip processor and nearly 18 W power. Today's midrange servers, like the Dell PowerEdge R420, use multicore processors at 80+ W. On-server storage and memory quadrupled and gained resiliency features. Current high-end x86 servers run multiple 10-core processors, hundreds of gigabytes of memory and far more internal storage. Moore's Law marched on from the '90s to today, but that upward trajectory is leveling out.

The next frontier is to match the server architecture to the workload. With raw computing power accelerating more slowly, the principal expectations for tomorrow's enterprise server types are better scalability and efficiency.

Every workload imposes unique computing demands. The complex instruction sets of x86 processors will yield to reduced instruction set computing (RISC) processors for workloads such as Web servers. Reducing the instruction set speeds processor performance while using considerably less energy than commodity servers for the same workload. RISC servers deliver vastly more computing power to workloads when they need it, then scale back. This is a core requirement for scalable cloud computing, and experimental systems like Hewlett-Packard Co.'s Project Moonshot show promise.

Future server technologies will enable a modular paradigm, replacing complete rack or blade systems with independent functional modules for processing, memory, I/O and more. This disaggregated approach allows organizations to change out specific computing elements rather than replacing complete servers.

Chris Dag/Flickr

A look at mainframe technologies through the decades

Mainframes have been a staple of business computing since IBM released the vacuum-tube behemoth 700 series in 1952, and big iron has solidified many concepts. Though the very earliest mainframe technologies were synonymous with incompatibility, the technology is on a continual path to integrating with the rest of the data center.

Core IT principles stem from IBM's transistor-based System/360 business-class mainframe. Hardware-based memory protection prevented user programs from disrupting the OS or other programs, a crucial principle of virtualization. Emulation let newer System/360s run older mainframe programs -- a nod to backward compatibility. The System/360 paved the way to peripheral standardization via a channel scheme for card reader, printer, early tape and disk storage and other device interfaces. Compilers, job queuing -- the list of mainframe firsts goes on.

The trend in modern mainframe technology is to bridge the divide between mainframes and traditional servers. IBM's zEnterprise is based on a System z server combined with an IBM zEnterprise BladeCenter Extension (zBX) to integrate up to 112 blade modules, including WebSphere appliances with x86 or Power blades -- all communicating across redundant 10 Gbps Ethernet ports. Models offer up to 120 zEC12 z/Architecture processor cores and up to 3 TB of memory in a redundant array for improved resiliency.

Operating system and virtualization tools also are opening up from narrow, proprietary beginnings. IBM introduced support for Windows Server 2008 on x86 blades for the zBX, alongside both proprietary and Linux-based operating systems. In 2013, IBM released a version of the System z114 server that runs only Linux on top of the z/VM hypervisor.

Future mainframe systems will blur the lines of hardware, software and systems management. Management software tools will play an enormous role in heterogeneous support. Third-party tools are transforming to control heterogeneous mainframe, x86 and Power infrastructures.

Getty Images/iStockphoto

Ethernet networks accelerate from 0 to 100 Gbps in 40 years

Ethernet networks and Token Ring networking coexisted in data centers past, but Ethernet was efficient and less expensive, scaling well beyond Token Ring's 16 Mbps data rates. And Ethernet is still scaling as users demand more data.

Token Ring network architecture operated like a round robin -- passing a token around a ring of interconnected devices until a node needed to exchange data. Ethernet networks introduced a collision approach; nodes simultaneously competed for access to the wire. The result was a chaotic yet efficient use of the network, and early coaxial cabling was replaced by much less expensive twisted pair cabling. Ethernet was popularized by then-fledgling 3Com from the 1970s into the 1980s. The IEEE standardized Ethernet in the mid-1980s.

As contention for use of the wires and higher-bandwidth applications caused serious latencies, Ethernet speeds grew to 100 Mbps, then to 1 Gbps. Enterprise data centers adopted GigE as a network backbone. Today, GigE is the common standard network adapter for almost every server or endpoint computer. Data center servers often combine GigE with TCP offload engine network adapters to handle high network traffic.

The future holds even faster Ethernet standards. While 10 GigE is primarily used for high-bandwidth applications, expect it to slowly filter down to individual servers and endpoint systems. Speeds of 40 GigE and even 100 GigE are currently standardized in IEEE 802.3ba and several subsequent additions. Those high-speed Ethernet technologies are based primarily on optical fiber (though 40 GigE is possible over Category 8 twisted pair cable) and await broad adoption across the data center, paired with innovations like software-defined networking.

Ian Wilson/Wikimedia Commons
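To make that collision approach concrete, here is a minimal Python sketch of the truncated binary exponential backoff classic shared-medium Ethernet used after a collision. The fixed collision probability and the 10 Mbps slot time are illustrative assumptions, not a model of real traffic.

import random

SLOT_TIME_US = 51.2   # slot time for classic 10 Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16     # a transmitter drops the frame after 16 collisions
BACKOFF_CAP = 10      # the backoff exponent is capped at 10 ("truncated")

def backoff_delay_us(collision_count: int) -> float:
    """Random wait after the nth collision: 0 to 2^n - 1 slot times."""
    exponent = min(collision_count, BACKOFF_CAP)
    return random.randint(0, 2 ** exponent - 1) * SLOT_TIME_US

def send_frame(collision_probability: float = 0.3) -> bool:
    """Simulate one frame's transmission attempts on a shared wire."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if random.random() > collision_probability:
            return True                     # the frame got through
        delay = backoff_delay_us(attempt)
        print(f"Collision {attempt}: backing off {delay:.1f} microseconds")
    return False                            # excessive collisions: frame dropped

The randomized, growing wait is what let that chaotic competition for the wire settle down under load instead of collapsing.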

From punch card beginnings, data centers reach for petabyte storage

The first state-of-the-art aftermarket disk drives wouldn't even store a handful of high-resolution images from a digital camera today. In the future, more sophisticated technologies will bring petabyte storage to the average data center, and disk drives may become obsolete.

Early rotating magnetic drum or disk data storage systems -- the IBM 350 and 353, for example -- worked on IBM mainframes dating back to the 1950s. Disk proved fast and reliable as a storage medium, becoming commercially available in servers and endpoint devices by the mid-1980s.

Today's disk storage systems hit multi-terabyte (TB) capacities with petabytes (PB) just over the horizon. Modern 2U and larger servers easily house four to eight disks, interfacing serially rather than in parallel. Even modest storage arrays like the HP Modular Smart Array 2040 hold 24 disks; full-sized storage systems like an EMC Isilon provide anywhere from 18 TB to 20 PB of storage across thousands of individual disks.

The role of magnetic disk storage will greatly change in tomorrow's enterprises, thanks to solid-state storage. Solid-state disk (SSD) devices like Intel's DC S3500 series and I/O accelerator devices (sometimes called solid-state accelerators) like Fusion-io's ioDrive2 will increasingly take on tier-1 storage tasks for the most demanding enterprise workloads. Future SSDs will offer better reliability and wear-leveling algorithms to maximize the device's working life.

Daniel Gies/Flickr

Alongside time-tested tape, new data backup systems thrive

Data -- customer lists, product designs -- is valuable and needs protection. Disaster recovery and business continuity are fusing together into an overarching enterprise data protection scheme.

Low-cost and long-lasting tape drives have been a staple of data backup systems since the IBM System/360 mainframe days. Remember the venerable 2400 series using spools of half-inch tape writing in seven tracks with data densities up to 600 bits per inch?

Cartridge-style tape -- Travan, Linear Tape-Open (LTO) and others -- eventually superseded reel-to-reel models. LTO designs evolved from 100 GB of uncompressed storage capacity written at 20 MBps in the early 21st century to 2.5 TB of uncompressed capacity written at 160 MBps today. LTO-7 and LTO-8 versions are coming soon, with 6.4 TB and 12.8 TB of uncompressed capacity. Compression during backups can vastly improve the effective capacity per cartridge, as the sketch below shows.

Disk drives have largely replaced tape drives for enterprise data backup, thanks to lower costs (roughly $0.08 per GB in 2010) and the ability to locate files on demand rather than spooling. Disk contents can be replicated easily across a network for redundancy or disaster recovery. Disk storage enables other creative data protection technologies, such as snapshots, essential in today's virtualized data centers.

Storage and network bandwidth are becoming so plentiful, inexpensive and ubiquitous that the data backup function in IT can be outsourced to third-party service providers. Cloud-based backup and disaster recovery as a service, from Zerto, Latisys, Windstream and many others, address how, not what, you back up.

Synchronizing data across data centers in real time is also on the horizon for enterprises. With real-time data duplication, problems in a storage array in one facility won't affect availability.

Matthew Ratzloff/Flickr
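Here is a rough Python sketch of the arithmetic behind those cartridge figures. The 2.5:1 compression ratio is a common LTO planning assumption rather than a figure from the text; real-world ratios vary widely with the data being backed up.

def effective_capacity_tb(native_tb: float, compression_ratio: float = 2.5) -> float:
    """Effective cartridge capacity once backup compression is applied."""
    return native_tb * compression_ratio

def hours_to_fill(native_tb: float, write_mbps: float) -> float:
    """Time to stream one full cartridge at the native (uncompressed) rate."""
    seconds = (native_tb * 1_000_000) / write_mbps  # 1 TB = 10^6 MB
    return seconds / 3600

print(effective_capacity_tb(2.5))   # ~6.25 TB from a 2.5 TB cartridge at 2.5:1
print(hours_to_fill(2.5, 160))      # ~4.3 hours to write it at 160 MBps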

Server room temperature, CRAC revamps on next facility blueprint

Heat kills electronics. This was as true in the days of room-sized vacuum-tube mainframes as it is in modern data centers, yet much has changed about CRAC cooling and ideal server room temperature settings.

Computer room air conditioners (CRACs) are the bedrock of data center cooling, and the emphasis is shifting to operational efficiency and cost reduction in mechanical refrigeration. Enterprise-class CRAC cooling eats up a lot of space, money and electricity. Rather than cooling the whole facility, data centers can implement containment strategies, in particular hot aisle/cold aisle containment. More aggressive containment approaches replace a centralized CRAC with individual units on or at the end of each rack.

IT professionals can stop freezing their servers, too. Elevated server room temperature ranges are taking hold in data centers, with endorsements from the American Society of Heating, Refrigerating and Air-Conditioning Engineers. Warmer IT equipment operating temperatures give cooling systems a break without compromising reliability.

Non-refrigeration cooling methodologies will augment or even replace traditional CRAC cooling. Heat exchangers based on ambient air and water are gaining traction, though many organizations must wait to deploy these capital-intensive options until they build new data centers. Immersion cooling -- which submerges the server into a cooled bath of non-corrosive, non-conductive fluid -- is a quiet alternative to CRAC compressors and fans, and the liquids enable hours of thermal ride-through during a disruption, compared to just minutes when air cooling fails.

Patrick Finnegan/Flickr

Data center PDUs, UPS and SMPS getting smarter than their ancestors

Powering servers and IT systems is often the hardest part about operating a data center. Intelligence is the key to better data center PDU, UPS and SMPS designs.

Every server and device uses an AC-to-DC rectifier with a switched-mode power supply (SMPS) to modulate output, and the technology has really progressed. Early SMPS designs achieved 60% to 70% efficiency, wasting energy and creating heat. Modern SMPS products hit 95% efficiency by using more effective switching frequencies and superior rectification, and because data center managers size the power supply appropriately to the server, storage array or other load.

Early rack power distribution units (PDUs) were little more than multi-outlet power strips. Today's PDU designs use sensors to monitor power use -- often per outlet -- along with network tie-in so that the PDU can report energy use to a centralized monitoring system, such as a data center infrastructure management (DCIM) server. Similarly, the DCIM platform can control each outlet. Future data center PDUs will add intelligence that refines reporting and energy utilization monitoring.

Backup uninterruptible power supply (UPS) systems haven't changed conceptually in decades, converting utility AC to DC to charge batteries, then converting that DC back to AC for the IT equipment. Constant improvements in monitoring and battery technology add reliability and improve efficiency.

The future of data center power heads in many possible directions. Utility power costs, reliability and environmental concerns make alternative energy sources, such as solid oxide fuel cell generators powered by natural gas or biofuels, appealing. Some organizations are flipping the power equation, making utility power the backup and renewables the main power source. Higher utility voltages in new data centers, such as 208 VAC or 240 VAC rather than 120 VAC, will cut down on conversions. Facebook's Open Compute Project is developing standards for a DC power distribution scheme to run data center equipment, eliminating power supplies at each server.

slkoceva/Thinkstock
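To see what those SMPS efficiency figures mean in practice, here is a small Python sketch; the 300 W server load is a hypothetical example, not a figure from the text.

def input_power_w(load_w: float, efficiency: float) -> float:
    """Wall power drawn to deliver load_w to the server's components."""
    return load_w / efficiency

def waste_heat_w(load_w: float, efficiency: float) -> float:
    """Power lost in the supply itself, all of it shed as heat."""
    return input_power_w(load_w, efficiency) - load_w

print(waste_heat_w(300, 0.65))   # ~161.5 W wasted by a 65%-efficient early SMPS
print(waste_heat_w(300, 0.95))   # ~15.8 W wasted by a modern 95%-efficient unit

Every watt saved inside the supply is also a watt the cooling plant never has to remove, so the efficiency gain pays off twice.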

Information technology careers advance alongside data centers

You didn't think every component of the data center could evolve without the people who operate it all changing as well, did you? Information technology careers, much like the mainframes and servers they're based on, look a lot different after 40, 20 or even five years.

Just like your data center equipment, your IT staff should be working smarter, not harder. IT staff who perform manual, repetitive tasks are dinosaurs. Today's data center workforce should think automation first, grunt work second. They should be generalists rather than specialists, ready to work with an interconnected, heterogeneous mix of different vendor products, cloud stacks and third-party tools. Changes to one component in the system will trickle down to unrelated components, so IT pros need to understand the relationships among network and server clusters, the storage tier, application tuning and so on.

New IT positions are popping up, like data scientists who manage big data analytics projects. The influx of mobile devices means enterprise applications have to work on mobile OSes, and data centers must support two or three times as many endpoint devices as they're accustomed to.

No matter which information technology career you choose -- Linux administration, application architecture, security, networking -- be sure to hone your skills for a cloud-based infrastructure. Even the most traditional enterprise data center uses Software as a Service, an Amazon Web Services stack for the developers or another cloud-based architecture as part of its IT operations.

slkoceva/Thinkstock