September 06, 2012

Review: Dell blade servers tip the scales
Dell's M1000e blade system wows with novel new blades, improved management, modular I/O, and 40G out the back

By Paul Venezia | InfoWorld

It seems that Dell has been quite busy, especially where its blade offerings are concerned. Back in 2010, I reviewed the Dell PowerEdge M1000e blade chassis and blade servers [1], along with competing blade systems from HP and IBM [1], and I came away with a very good impression overall. At a lower cost, the Dell blades proved as fast and as manageable as the others, but there weren't as many different types of blade servers to be had compared to the competition. What a difference a couple of years makes.

Back in 2010, Dell had a few varieties of compute blades, but that was it. There were two-CPU or four-CPU blades, but no higher-density blades, virtualization-centric blades, or storage blades. Now, all of those options exist, and they are delivered in the same 10U M1000e chassis.

From a purely hardware perspective, the Dell PowerEdge M1000e is quite a compelling system. The vastly increased number of individual blade options — including the introduction of novel high-density blades such as the PowerEdge M420, the impressive PS-M4110 storage blade, and the MXL 10G/40G blade switch — offers more flexibility and scalability than ever. But Dell has also taken steps to lighten the administrative burden, layering on sleek and functional management tools that add to the M1000e's charm. Integrating Force10 switch management into the mix is a work in progress, and Dell still must face the task of centralizing the management of multiple chassis. In the meantime, Dell has already succeeded in turning out a very well-rounded blade system.

Test Center Scorecard — Dell PowerEdge M1000e Blade System
Performance (20%): 9 | Availability (20%): 9 | Management (20%): 9 | Scalability (20%): 9 | Serviceability (10%): 9 | Value (10%): 9 | Overall score: 9.0 (Excellent)

Little big SAN
Among Dell's debut blade options, the brand-new EqualLogic PS-M4110 storage blade should take center stage. This is a half-height, double-wide blade that houses 14 2.5-inch disks and two redundant controllers, connected to the switching fabric through two internal 10G interfaces, one per controller. Despite its tiny footprint, this is a fully functional EqualLogic iSCSI SAN array with the same capabilities as the full-size PS4100 arrays.

The PS-M4110 storage blade uses the same firmware and drives exactly the same as the outboard array. This means it can be controlled as part of an existing EqualLogic group, which can comprise up to 16 EqualLogic arrays of varying types, assuming that 6100-series arrays are part of the mix. Otherwise, the 4000-series arrays are limited to two per group.
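That grouping rule is easy to trip over when mixing array generations, so here is a minimal Python sketch of the membership check as just described, assuming the PS-M4110 counts as a 4000-series array. The validate_group() helper and the model strings are illustrative only, not part of any Dell tool.

```python
# Minimal sketch of the EqualLogic group-membership rule described above.
# Assumption: the PS-M4110 blade array counts as a 4000-series member.

def validate_group(models):
    """models: list of array model strings, e.g. ["PS6100", "PS-M4110"]."""
    if len(models) > 16:
        return False, "a group can comprise at most 16 arrays"
    has_6100 = any(m.startswith("PS61") for m in models)
    num_4000 = sum(1 for m in models if m.startswith("PS41") or m == "PS-M4110")
    if not has_6100 and num_4000 > 2:
        return False, "without a 6100-series array, only two 4000-series arrays per group"
    return True, "group composition is valid"

print(validate_group(["PS-M4110", "PS4100", "PS4100"]))            # fails: three 4000-series, no 6100
print(validate_group(["PS6100", "PS-M4110", "PS4100", "PS4100"]))  # passes: 6100-series in the mix
```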

The EqualLogic PS-M4110 storage blade opens like a drawer, exposing the 14 hot-swap disks and hot-swap controllers that enter and exit from the top. In the front is a series of LEDs that shows the status of each disk and controller at a glance, and when the drawer is open, each disk and controller has status lights on the top as well.

Code-named Colossus, this little monster can hold up to 14TB of raw storage with 14 1TB SAS disks, or it can be split into a tiering solution with five SSDs and nine SAS disks. Further, up to four PS-M4110 storage arrays can be housed in a single M1000e chassis, taking up eight slots but leaving another eight for compute blades.

[Photo: The EqualLogic PS-M4110 Blade Array holds 14 2.5-inch SAS drives or five SSDs and nine SAS drives. The M1000e enclosure can accommodate four PS-M4110 Blade Arrays, leaving eight slots for compute blades.]

It's important to note that unlike some storage blades offered by other blade vendors, the PS-M4110 is not designed to connect directly to an adjacent blade as a DAS solution. It's a fully functional iSCSI SAN array that connects to the network just like any other iSCSI SAN would.

An abundance of blade options
In addition to the Colossus, Dell has added an array of compute blades to the lineup. The baseline blade would probably be the PowerEdge M520. This is a two-socket, half-height blade designed for general virtualization and business application workloads. It houses two Intel Xeon E5-2400-series CPUs, up to 384GB of RAM with 32GB DIMMs across the 12 DIMM sockets, and four gigabit NICs, plus the option to add up to two mezzanine I/O cards, such as Fibre Channel or 10G Ethernet. Up front are two hot-swap 2.5-inch SAS bays, though the M520 also has dual internal SD cards that can be used to boot embedded hypervisors and remove the need for physical disks.

Like all the other blades, the M520 has an embedded iDRAC remote management card that allows for remote access to the blade's console and provides myriad management capabilities.

Next up is the M520's bigger brother, the M620, which is essentially identical in form but adds horsepower with Intel E5-2600-series CPUs, up to 768GB of RAM, and embedded dual 10G Ethernet interfaces. As with the M520, that I/O can be expanded with one or two mezzanine I/O cards, so you could conceivably have six 10G interfaces, or four 10G and two Fibre Channel or InfiniBand interfaces. Suffice it to say, there's plenty of available I/O.

Kicking things up another notch, we find the M820. This is a full-height, four-socket blade with heavy specs. It runs Intel E5-4600-series CPUs, up to 1.5TB of RAM, and two dual-port 10G interfaces. There are four 2.5-inch hot-swap SAS bays up front, and there's room for four mezzanine I/O cards; you can really pack this blade full of network and storage I/O. The mezzanine cards are not only interchangeable, but they work with all the blades: The same dual-port 10G, Fibre Channel, and InfiniBand cards can be used in all blade models. The M820 is a big-time blade, destined for large, heavily threaded, and RAM-hungry workloads.

One of the more interesting blades is the M610x. This blade is destined for a niche market, as it sports two full-length PCIe expansion slots within the blade, with the card edges exposed at the front. The compute side of the M610x is based on two Intel "Westmere" 5600-series CPUs, up to 192GB of RAM, and two gigabit NICs.

But those PCIe slots set this blade apart. They can support dual PCIe GPUs for VDI deployments, for instance — or any compatible PCIe card, such as RAID controllers and so forth. Since the card edges are accessible from the front of the blade, these blades can be cabled up to external storage arrays. It's not a common requirement, but if you have a need for blades to house PCIe cards, the M610x is right up your alley.

Also in the mix is the M910, which offers four 8-core or 10-core Xeon CPUs, up to 1TB of RAM across 32 sockets, and two 2.5-inch hot-swap drive bays. As with all the other blades, the I/O options are backed by the same mezzanine cards and include 1G, 10G, and Fibre Channel ports, as well as a dual-port InfiniBand module.

On the AMD side, there's the M915 blade. This is another full-height blade, driving four 16-core AMD Opteron CPUs, up to 512GB of RAM, and two 2.5-inch hot-swap disk bays up front. The I/O capacity of this blade is substantial, as you can drive up to a dozen 10G Ethernet ports. The 512GB max for RAM seems a bit low, however.

Finally, the Dell PowerEdge M420 may be the best and most interesting blade of them all. This is a quarter-height, two-socket blade housed in a full-height sleeve that holds four of these little blades vertically. Each M420 has one or two Intel E5-2400-series CPUs and up to 192GB of RAM, but only six DIMM slots — three per CPU — and no hard drive options. The local storage is handled by either two hot-swap 1.8-inch SSDs or the embedded SD cards for hypervisor installations.

[Photo: Quarter-height PowerEdge M420 blade servers allow you to squeeze as many as 64 CPUs, and a whole lot of I/O, into a single M1000e enclosure.]

The M420 has two 10G interfaces built in, and it can handle a single mezzanine I/O card, so you could drop four 10G interfaces in this quarter-height blade. Alternatively, you could have two 10G interfaces and two 8Gb Fibre Channel or InfiniBand interfaces. That's a lot of I/O in a very small package.

Somewhat surprisingly, there are no population restrictions on these blades. You can fit 32 of these little servers in a single chassis. That's 64 CPUs with up to eight cores each, or 512 cores in a single chassis. If you drop the beaucoup bucks to max out the RAM with 32GB DIMMs, you could accompany those cores with more than 6TB of RAM. That's some serious density.
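Those density figures are simple arithmetic, and they check out. Here is a quick sanity check in Python, using only the numbers quoted above (32 quarter-height M420s per chassis, two eight-core CPUs and six 32GB DIMMs per blade):

```python
# Back-of-the-envelope M420 density math, using the figures quoted in the review.
blades_per_chassis = 32   # quarter-height M420s in one M1000e
cpus_per_blade = 2        # Intel Xeon E5-2400 series
cores_per_cpu = 8         # top-end E5-2400 parts
dimms_per_blade = 6       # three DIMM slots per CPU
dimm_size_gb = 32         # maxed out with 32GB DIMMs

cpus = blades_per_chassis * cpus_per_blade                            # 64 CPUs
cores = cpus * cores_per_cpu                                          # 512 cores
ram_tb = blades_per_chassis * dimms_per_blade * dimm_size_gb / 1024   # 6.0TB

print(f"{cpus} CPUs, {cores} cores, {ram_tb:.1f}TB of RAM per chassis")
```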
Breaking out of the box
The M1000e chassis I/O capabilities have grown as rich as the blade options, due in no small part to Dell's acquisition of Force10 Networks [4]. To the basic 1G passthrough, Dell PowerConnect, and Cisco I/O switching modules available previously, Dell has added the Force10 MXL I/O switching module, which boasts 32 internal 10G interfaces and two external 40G interfaces, with two FlexIO module slots for further 10G fiber or copper expansion.

This is undoubtedly a significant advancement for Dell, not the least of which is the capability of 40G uplinks from those switches. Further, up to six of these switches can be stacked, allowing the switching for multiple chassis to be consolidated and centrally managed.

However, the MXL and chassis integration is not yet fully baked. While the switches behave as you'd expect, they represent their internal 10G interfaces generically — such as TenGigabitEthernet 0/1, TenGigabitEthernet 0/2, and so forth — and there's no simple way to map those ports back to the blades they're connected to. If you're looking to configure the 10G port for the second 10G interface in the M620 blade in slot 7, for instance, you will need a chart to figure out which interface that corresponds to on the MXL. When you're faced with configuring four or six 10G interfaces per blade, or a fully loaded chassis composed of 32 M420 blades with 64 10G interfaces, that will get really confusing really quickly.

Tighter integration between the switching modules and the chassis itself is needed to provide those mappings within the switch CLI. Network administrators don't like to have to refer to spreadsheets to find out which port they need to tweak.
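Until that integration arrives, admins are left maintaining exactly the kind of chart the MXL should produce itself. A throwaway Python sketch along these lines could stand in for the spreadsheet. Note that the slot-to-port numbering below is invented for illustration; it is not Dell's actual fabric layout, which you would need to pull from the chassis documentation.

```python
# Hypothetical blade-slot-to-MXL-port chart. The numbering scheme below is
# invented for illustration; consult the chassis port-mapping documentation
# for the real fabric layout.

def mxl_port(slot, nic):
    """Map a half-height blade slot (1-16) and NIC index (1 or 2) to a
    generic MXL interface name, assuming two consecutive ports per slot."""
    return f"TenGigabitEthernet 0/{(slot - 1) * 2 + nic}"

# The example from the text: the second 10G interface of the M620 in slot 7.
print(mxl_port(slot=7, nic=2))   # -> TenGigabitEthernet 0/14 (hypothetical)

# Dump a full chart for a chassis of 16 half-height blades.
for slot in range(1, 17):
    print(f"slot {slot:2d}: " + ", ".join(mxl_port(slot, nic) for nic in (1, 2)))
```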
Aside from this, the Dell PowerConnect M8024 module is available with 16 internal 10G ports and up to eight external ports using the FlexIO slots in the module. There are 4Gb and 8Gb Fibre Channel modules, including the Brocade M5424; two InfiniBand modules supporting either QDR (Quad Data Rate) or DDR (Double Data Rate); and more basic 1G switches. There are also passthrough modules for 10G, 1G, and Fibre Channel ports.

Dell has added Switch Independent Partitioning, or NIC partitioning, which allows the 10G interfaces on each blade to be carved up into four logical interfaces with various QoS and prioritization rules attached to each logical interface. The OS sees several independent interfaces that are all subsets of the 10G interface, allowing administrators to allocate bandwidth to various services at the NIC level. This is a welcome addition that's been missing in previous Dell solutions.
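To make the concept concrete, here is a rough Python model of carving one 10G port into four logical NICs with minimum-bandwidth guarantees. The service names and percentages are invented for the example; actual NIC partitioning is configured in the blade's firmware, not in code like this.

```python
# Illustrative model of NIC partitioning: one 10G interface split into four
# logical NICs, each with a minimum-bandwidth weight. Names and weights are
# invented for the example.

PORT_GBPS = 10
partitions = {
    "management": 10,   # minimum percentage of the 10G port
    "vmotion":    20,
    "iscsi":      30,
    "vm-traffic": 40,
}

assert sum(partitions.values()) == 100, "minimum guarantees must total 100%"

for name, pct in partitions.items():
    print(f"{name:<12} min {PORT_GBPS * pct / 100:.1f}Gbps, "
          f"burstable to {PORT_GBPS}Gbps when the port is otherwise idle")
```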
Blade management en masse
Beyond the management of the new Force10-based switches, the overall management toolset present in the M1000e is quite extensive. Dell has paid much attention to the needs of higher-density blade chassis management and has taken steps to reduce the repetitive tasks associated with blade infrastructure.

By leveraging the iDRAC remote management processors in each blade and the Dell CMC (Chassis Management Controller) tools, the M1000e makes it both simple to perform tedious tasks such as mass BIOS upgrades and very easy to dig into the specific information about each blade. With a single click, you can retrieve a display containing every firmware version across the server, including its installation date; another click brings specific information on every hardware component, from individual DIMMs to what's on the PCI bus. It's extremely handy.

There are also provisions for scheduled hardware inventories and warranty status checks. In addition, the scheduled firmware upgrades can source either a local share or Dell's FTP service for the firmware files to be distributed to the hosts.

The idea is to make this process as seamless as possible, allowing administrators to schedule firmware updates across multiple disparate servers, automating the process of putting a host in maintenance mode (assuming it's a virtualization host), applying the updates, rebooting, and bringing the host back into the cluster. For stand-alone servers, the reboot process is still necessary, but that can be automated as well.

Dell has added a form of multichassis management, in that you can configure the CMC to connect to CMC instances on other chassis and jump to that management console with a single click. This isn't true multichassis management, however; there are no facilities to directly manage multiple chassis from within the same console. But linking the independent management consoles together is a step in the right direction.

[Photo: The Dell Chassis Management Controller puts blade details and alerts right at your fingertips.]

Dell also provides direct VMware integration [5], via a virtual appliance and a plug-in for the vSphere client. The appliance handles the data storage and distribution tasks, and the plug-in allows admins to work within the vSphere client to manage hardware tasks and check various system status elements.

All of this comes via the Dell iDRAC with the base "Express for Blades" license. Unlike the base iDRAC license for rack servers, the base license for blades still permits graphical console access. For years, Dell offered graphical console access via the base iDRAC license, while HP required an advanced license. Now Dell is following HP's lead for rack servers. Fortunately, Dell has left graphical console access under the base iDRAC license for blade servers intact.

This story, "Review: Dell blade servers tip the scales," was originally published at InfoWorld.com. Keep up on the latest developments in computer hardware at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.