Efficient Object Storage Journaling in a Distributed Parallel File System
Presented by Sarp Oral

Sarp Oral, Feiyi Wang, David Dillow, Galen Shipman, Ross Miller, and Oleg Drokin
FAST'10, Feb 25, 2010

A Demanding Computational Environment
• Jaguar XT5: 18,688 nodes, 224,256 cores, 300+ TB memory, 2.3 PFlops
• Jaguar XT4: 7,832 nodes, 31,328 cores, 63 TB memory, 263 TFlops
• Frost (SGI Ice): 128-node institutional cluster
• Smoky: 80-node software development cluster
• Lens: 30-node visualization and analysis cluster

Spider
• Fastest Lustre file system in the world
  – Demonstrated bandwidth of 240 GB/s on the center-wide file system
• Largest-scale Lustre file system in the world
  – Demonstrated stability and concurrent mounts on major OLCF systems: Jaguar XT5, Jaguar XT4, Opteron Dev Cluster (Smoky), and Visualization Cluster (Lens)
  – Over 26,000 clients mounting the file system and performing I/O
• General availability on Jaguar XT5, Lens, Smoky, and GridFTP servers
• Cutting-edge resiliency at scale
  – Demonstrated resiliency features on Jaguar XT5: DM Multipath and Lustre router failover

Designed to Support Peak Performance
[Figure: hourly maximum read and write bandwidth (GB/s) over January 2010; max data rates (hourly) on ½ of the available storage controllers]

Motivations for a Center Wide File System
• Building dedicated file systems for platforms does not scale
  – Storage is often 10% or more of a new system's cost
  – Storage is often not poised to grow independently of the attached machine
  – Storage and compute technology follow different curves
  – Data needs to be moved between different compute islands (e.g., from the simulation platform to the visualization platform)
  – Dedicated storage is only accessible when its machine is available
  – Managing multiple file systems requires more manpower
[Figure: Jaguar XT4, Jaguar XT5, Smoky, Lens, and Ewok sharing data through the SION network and the Spider system]

Spider: A System At Scale
• Over 10.7 PB of RAID 6 formatted capacity
• 13,440 1 TB drives
• 192 Lustre I/O servers
• Over 3 TB of memory (on the Lustre I/O servers)
• Available to many compute systems through the high-speed SION network
  – Over 3,000 IB ports
  – Over 3 miles (5 kilometers) of cables
• Over 26,000 client mounts for I/O
• Peak I/O performance of 240 GB/s
• Current status: in production use on all major OLCF computing platforms
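The formatted-capacity figure above can be sanity-checked from the drive count and the (8+2) RAID 6 tier geometry described on later slides. Below is a minimal Python sketch under stated assumptions: nominal decimal terabytes per drive and no allowance for file system overhead.

```python
# Back-of-the-envelope check of Spider's formatted capacity.
# Assumes decimal terabytes and ignores ldiskfs/Lustre overhead;
# per-drive capacity is the nominal 1 TB quoted on the slide.
TOTAL_DRIVES = 13_440          # 1 TB SATA-II drives
DRIVES_PER_TIER = 10           # RAID 6 (8+2): 8 data + 2 parity drives
DATA_DRIVES_PER_TIER = 8
DRIVE_TB = 1.0

tiers = TOTAL_DRIVES // DRIVES_PER_TIER                    # 1,344 tiers
formatted_pb = tiers * DATA_DRIVES_PER_TIER * DRIVE_TB / 1000.0
print(f"{tiers} RAID 6 tiers, ~{formatted_pb:.2f} PB formatted")
# -> 1344 RAID 6 tiers, ~10.75 PB formatted
```

With these assumptions the result lands just above the 10.7 PB quoted on the slide; the exact figure depends on the drives' real formatted capacity.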
Lustre File System
• Developed and maintained by CFS, then Sun, and now Oracle
• POSIX-compliant, open-source parallel file system, driven by the DOE labs
• A metadata server (MDS) manages the namespace on a metadata target (MDT)
• Object storage servers (OSS) manage object storage targets (OST)
• Each OST manages block devices
  – ldiskfs on the OSTs
  – v1.6: a superset of ext3
  – v1.8+: a superset of ext3 or ext4
• High performance
  – Parallelism by object striping
• Highly scalable
• Tuned for parallel block I/O
[Figure: Lustre clients connected over a high-performance interconnect to the MDS/MDT and to the object storage servers and their targets]

Spider - Overview
• Currently providing high-performance scratch space to all major OLCF platforms
[Figure: 96 DDN S2A9900 controllers (singlets) connect over SAS to 13,440 SATA-II disks organized as 1,344 (8+2) RAID 6 arrays (tiers); 192 Lustre I/O servers attach over 192 4x DDR IB links to the SION InfiniBand network (two core switches, two aggregation switches, and 48 leaf switches), which serves Jaguar XT5, Jaguar XT4, Smoky, and Lens/Everest]

Spider - Speeds and Feeds
[Figure: the end-to-end data path. Aggregate bandwidth is 366 GB/s on the Serial ATA (3 Gbit/s) disk side and 384 GB/s across the InfiniBand (16 Gbit/s) and SeaStar2+ 3D torus (9.6 GB/s per link) stages, with 384 GB/s from the Jaguar XT5 routers and 96 GB/s from the Jaguar XT4 routers]
• Enterprise storage: controllers and large racks of disks are connected via InfiniBand; 48 DataDirect S2A9900 controller pairs with 1 TB drives and four InfiniBand connections per pair
• Storage nodes: run the parallel file system software and manage incoming file system traffic; 192 dual quad-core Xeon servers with 16 GB of RAM each
• SION network: provides connectivity between OLCF resources and primarily carries storage traffic; a 3,000+ port, 16 Gbit/s InfiniBand switch complex
• Lustre router nodes: run the parallel file system client software and forward I/O operations from HPC clients; 192 (XT5) and 48 (XT4) dual-core Opteron nodes with 8 GB of RAM each

Spider - Couplet and Scalable Cluster
• A scalable cluster (SC) combines one DDN S2A9900 couplet (two controllers) with 280 1 TB disks in 5 trays, 4 Dell Lustre I/O servers (OSS nodes), and a 24-port Flextronics IB switch uplinked to the Cisco core switch
• 16 SC units on the floor; 2 racks for each SC
[Figure: a Spider scalable cluster]

Spider - DDN S2A9900 Couplet
[Figure: couplet internals. Two controllers, each with house and UPS power supplies and I/O modules for channels A through H plus P and S, connect through disk expansion modules (DEMs) to disks D1 through D56 spread across disk enclosures 1 through 5]

Spider - DDN S2A9900 (cont'd)
• RAID 6 (8+2): each tier has 8 data drives (channels A through H) and 2 parity drives (channels P and S)
[Figure: disk controller 1 serves tiers 1 through 14 and disk controller 2 serves tiers 15 through 28; tier n consists of disks DnA through DnH plus parity disks DnP and DnS]
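To make the tier and channel naming above concrete, here is a small, purely illustrative Python helper that lists the disks belonging to one (8+2) tier. The DnA..DnS labels follow the slide; the chunk size and the controller's internal data layout are not specified here and are deliberately not modeled.

```python
# Illustrative view of one DDN S2A9900 RAID 6 (8+2) tier, using the
# slide's channel naming (data channels A..H, parity channels P and S).
DATA_CHANNELS = list("ABCDEFGH")   # 8 data drives per tier
PARITY_CHANNELS = ["P", "S"]       # 2 parity drives per tier

def tier_members(tier: int) -> dict[str, list[str]]:
    """Disk labels for one (8+2) tier, e.g. tier 1 -> D1A..D1H plus D1P, D1S."""
    return {
        "data":   [f"D{tier}{ch}" for ch in DATA_CHANNELS],
        "parity": [f"D{tier}{ch}" for ch in PARITY_CHANNELS],
    }

print(tier_members(1))
# {'data': ['D1A', 'D1B', ..., 'D1H'], 'parity': ['D1P', 'D1S']}
```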
Spider - How Did We Get Here?
• A 4-year project
• We didn't just pick up the phone and order a center-wide file system
  – No single vendor could deliver this system
  – Trail blazing was required
• Collaborative effort was key to success
  – ORNL
  – Cray
  – DDN
  – Cisco
  – CFS, Sun, and now Oracle

Spider - Solved Technical Challenges
• Performance
  – Asynchronous journaling
  – Network congestion avoidance
• Scalability
  – 26,000 file system clients
• Fault tolerance design
  – Network
  – I/O servers
  – Storage arrays
• InfiniBand support on XT SIO
[Figure: SeaStar torus congestion under a hard bounce of 7,844 nodes via 48 routers. Combined read/write MB/s and IOPS, as a percentage of observed peak, over 900 seconds of elapsed time; the XT4 bounce occurs at 206 s, followed by RDMA timeouts, bulk timeouts, and OST evictions; I/O returns at 435 s and full I/O resumes at 524 s]

ldiskfs Journaling Overhead
• Even sequential writes exhibit random I/O behavior due to journaling
• 4-8 KB writes observed alongside 1 MB sequential writes on the DDNs
• The DDN S2A9900s are not well tuned for small I/O accesses
• For enhanced reliability, the write-back cache on the DDNs is turned off
• A special file (contiguous block space) is reserved for journaling on ldiskfs
  – Labeled as the journal device
  – Placed at the beginning of the physical disk layout
• Ordered mode
  – After the file data portion is committed to disk, the journal metadata portion needs to be committed as well
  – An extra head seek is needed for every journal transaction commit!

… between two events. The write-back mode thus provides metadata consistency but does not provide any file data consistency. The ordered mode is the default journaling mode in the Linux ext3 file system. In this mode, though only the updated metadata blocks are journaled, the file data is guaranteed to be written to its fixed location on disk before the metadata updates are committed in the journal, thus providing an ordering between the metadata and file data commits on disk. The data journaling mode journals both the metadata and the file data.

5.3.1 Journaling Overhead

The ldiskfs file system by default performs journaling in ordered mode, first writing the data blocks to disk and then the metadata blocks to the journal. The journal is then written to disk and marked as committed. In the worst case, such as appending to a file, this can result in one 16 KB write (on average, covering the bitmap, inode block map, inode, and superblock) and another 4 KB write for the journal commit record for every 1 MB write. These extra small writes cause at least two extra disk head seeks. Due to the poor IOP performance of SATA disks, these additional head seeks and small writes can substantially degrade the aggregate block I/O performance.

ldiskfs Journaling Overhead (Cont'd)
• Block level …
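The per-MB cost described in Section 5.3.1 is easy to put into numbers. The Python sketch below is a rough model only: the streaming bandwidth and seek cost are assumed values for a single SATA drive, not measurements from the paper.

```python
# Rough model of the ordered-mode journaling penalty from Section 5.3.1:
# every 1 MB of file data carries ~16 KB of journaled metadata plus a
# 4 KB commit record, costing at least two extra head seeks. The drive
# parameters below are illustrative assumptions, not measured values.
STREAM_MB_PER_S = 70.0     # assumed sequential bandwidth of one SATA drive
SEEK_S = 0.008             # assumed cost of one extra head seek + small write
EXTRA_SEEKS_PER_MB = 2     # journal metadata write + commit record

def effective_bandwidth(stream_mb_per_s: float) -> float:
    """Sequential MB/s once the per-MB journal seeks are charged."""
    time_per_mb = 1.0 / stream_mb_per_s + EXTRA_SEEKS_PER_MB * SEEK_S
    return 1.0 / time_per_mb

print(f"streaming: {STREAM_MB_PER_S:.0f} MB/s, "
      f"with journal seeks: {effective_bandwidth(STREAM_MB_PER_S):.0f} MB/s")
# -> streaming: 70 MB/s, with journal seeks: 33 MB/s (under these assumptions)
```

Even with generous assumptions the effective per-drive rate roughly halves, which is the commit-time seek penalty that the paper's asynchronous journaling approach is designed to remove.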
