Linux Block IO—Present and Future

Jens Axboe
SuSE
[email protected]

Abstract

One of the primary focus points of 2.5 was fixing up the bit-rotting block layer, and as a result 2.6 now sports a brand new implementation of basically anything that has to do with passing IO around in the kernel, from producer to disk driver. The talk will feature an in-depth look at the IO core system in 2.6 compared to 2.4, looking at performance, flexibility, and added functionality. The rewrite of the IO scheduler API and the new IO schedulers will get a fair treatment as well.

No 2.6 talk would be complete without 2.7 speculations, so I shall try to predict what changes the future holds for the world of Linux block I/O.

1 2.4 Problems

One of the most widely criticized pieces of code in the 2.4 kernels is, without a doubt, the block layer. It has bit rotted heavily and lacks various features or facilities that modern hardware craves. This has led to many evils, ranging from code duplication in drivers to massive patching of block layer internals in vendor kernels. As a result, vendor trees can easily be considered forks of the 2.4 kernel with respect to the block layer code, with all of the problems that this fact brings with it: the 2.4 block layer code base may as well be considered dead, since no one develops against it. Hardware vendor drivers include many nasty hacks and #ifdef's to work in all of the various 2.4 kernels that are out there, which doesn't exactly enhance code coverage or peer review.

The block layer fork didn't just happen for the fun of it, of course; it was a direct result of the various problems observed. Some of these are added features, others are deeper rewrites attempting to solve scalability problems with the block layer core or IO scheduler. In the next sections I will attempt to highlight specific problems in these areas.

1.1 IO Scheduler

The main 2.4 IO scheduler is called elevator_linus, named after the benevolent kernel dictator to credit him for some of the ideas used. elevator_linus is a one-way scan elevator that always scans in the direction of increasing LBA. It manages latency problems by assigning sequence numbers to new requests, denoting how many new requests (either merges or inserts) may pass this one. The latency value depends on the data direction, and is smaller for reads than for writes. Internally, elevator_linus uses a doubly linked list structure (the kernel's struct list_head) to manage the request structures. When queuing a new IO unit with the IO scheduler, the list is walked to find a suitable insertion (or merge) point, yielding an O(N) runtime. That in itself is suboptimal in the presence of large amounts of IO, and to make matters even worse, we repeat this scan if the request free list was empty when we entered the IO scheduler. The latter is not an error condition; it will happen all the time for even moderate amounts of writeback against a queue.
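To make the O(N) insertion concrete, the sketch below shows one way such a one-way ascending-LBA scan with latency counters can be implemented. This is illustrative pseudo-kernel code rather than the actual elevator_linus source; the structure layout and the names elevator_linus_insert and sequence are assumptions made purely for the example.

#include <linux/list.h>

/*
 * Illustrative sketch only, not the real 2.4 elevator_linus code.
 * Each request carries a sequence counter saying how many newer
 * requests may still be queued ahead of it.
 */
struct request {
        struct list_head queuelist;  /* linkage on the scheduler queue */
        unsigned long sector;        /* starting LBA */
        int sequence;                /* remaining "may be passed" budget */
};

static void elevator_linus_insert(struct list_head *queue, struct request *new)
{
        struct request *rq;

        /* O(N): walk the whole queue looking for an ascending-sector slot */
        list_for_each_entry(rq, queue, queuelist) {
                if (rq->sequence <= 0)
                        continue;            /* rq has waited long enough, never insert before it */
                if (new->sector < rq->sector) {
                        /* insert before rq; everything from rq onward gets passed */
                        list_add_tail(&new->queuelist, &rq->queuelist);
                        goto account;
                }
        }
        list_add_tail(&new->queuelist, queue);   /* no slot found, append at the tail */
        return;

account:
        /* every request the new one jumped ahead of loses one unit of latency */
        list_for_each_entry_from(rq, queue, queuelist)
                rq->sequence--;
}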
1.2 struct buffer_head

The main IO unit in the 2.4 kernel is the struct buffer_head. It's a fairly unwieldy structure, used at various kernel layers for different things: caching entity, file system block, and IO unit. As a result, it's suboptimal for any of those roles.

From the block layer point of view, the two biggest problems are the size of the structure and the limitation on how big a data region it can describe. Being limited by the file system's one-block semantics, it can at most describe PAGE_CACHE_SIZE worth of data, which in Linux on x86 hardware means 4KiB. Often it is even worse: raw IO typically uses the soft sector size of a queue (default 1KiB) for submitting IO, which means that queuing e.g. 32KiB of IO will enter the IO scheduler 32 times. To work around this limitation and get at least to a page at a time, a 2.4 hack called vary_io was introduced. A driver advertising this capability acknowledges that it can manage buffer_head's of varying sizes at the same time. File system read-ahead, another frequent submitter of larger-sized IO, has no option but to submit the read-ahead window in units of the page size.

1.3 Scalability

With the limit on buffer_head IO size and the elevator_linus runtime, it doesn't take a lot of thinking to discover obvious scalability problems in the Linux 2.4 IO path. To add insult to injury, the entire IO path is guarded by a single, global lock: io_request_lock. This lock is held during the entire IO queuing operation, and typically also from the other end, when a driver subtracts requests for IO submission. A single global lock is a big enough problem on its own (bigger SMP systems suffer immensely because of cache line bouncing), but add to that the long runtimes and you have a really huge IO scalability problem.

Linux vendors have shipped lock scalability patches for quite some time to get around this problem. The adopted solution is typically to make the queue lock a pointer to a driver-local lock, so the driver has full control of the granularity and scope of the lock. This solution was adopted from the 2.5 kernel, as we'll see later. But this is another case where driver writers often need to differentiate between vendor and vanilla kernels.
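As an illustration of the pointer-based approach, the following sketch shows roughly what it looks like from a driver's perspective with the 2.6-style API, where blk_init_queue() takes the driver's own spinlock. The mydrv_* names are made up for the example, and the code is a simplified sketch rather than an excerpt from a real driver.

#include <linux/blkdev.h>
#include <linux/spinlock.h>
#include <linux/init.h>
#include <linux/errno.h>

static spinlock_t mydrv_lock;                 /* driver-local queue lock */
static struct request_queue *mydrv_queue;

/* Called by the block layer with mydrv_lock (q->queue_lock) held. */
static void mydrv_request_fn(struct request_queue *q)
{
        struct request *rq;

        while ((rq = elv_next_request(q)) != NULL) {
                blkdev_dequeue_request(rq);   /* take it off the scheduler queue */
                /* ... submit rq to the hardware, complete it later ... */
        }
}

static int __init mydrv_init(void)
{
        spin_lock_init(&mydrv_lock);

        /*
         * The driver hands its own lock to the block layer instead of
         * sharing one global io_request_lock, so lock granularity and
         * scope are entirely under driver control.
         */
        mydrv_queue = blk_init_queue(mydrv_request_fn, &mydrv_lock);
        return mydrv_queue ? 0 : -ENOMEM;
}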
1.4 API problems

Looking at the block layer as a whole (including both ends of the spectrum, the producers and consumers of the IO units going through the block layer), it is a typical example of code that has been hacked into existence without much thought to design. When things broke or new features were needed, they were grafted onto the existing mess. No well-defined interface exists between the file system and the block layer, except a few scattered functions. Controlling IO unit flow from IO scheduler to driver was impossible: 2.4 exposes the IO scheduler data structures (the ->queue_head linked list used for queuing) directly to the driver. This fact alone makes it virtually impossible to implement more clever IO scheduling in 2.4. Even the lower-latency work added recently (in the 2.4.20's) was horrible to work with because of this lack of boundaries. Verifying correctness of the code is extremely difficult, and so is peer review, since a reviewer must be intimate with the block layer structures to follow the code.

Another example of the lack of clear direction is partition remapping. In 2.4, it is the driver's responsibility to resolve partition mappings. A given request contains a device and sector offset (i.e. /dev/hda4, sector 128) and the driver must map this to an absolute device offset before sending it to the hardware. Not only does this cause duplicate code in the drivers, it also means the IO scheduler has no knowledge of the real device mapping of a particular request. This adversely impacts IO scheduling whenever partitions aren't laid out in strict ascending disk order, since it causes the IO scheduler to make the wrong decisions when ordering IO.

2 2.6 Block layer

The above observations were the initial kick-off for the 2.5 block layer patches. To solve some of these issues, the block layer needed to be turned inside out, breaking basically anything IO related along the way.

2.1 bio

Given that struct buffer_head was one of the problems, it made sense to start from scratch with an IO unit that would be agreeable to the upper layers as well as the drivers. The main criteria for such an IO unit would be something along the lines of:

4. Must be able to stack easily for IO stacks such as raid and device mappers. This includes full redirect stacking like in 2.4, as well as partial redirections.

Once the primary goals for the IO structure were laid out, the struct bio was born. It was decided to base the layout on a scatter-gather type setup, with the bio containing a map of pages. If the map count was made flexible, items 1 and 2 on the above list were already solved. The actual implementation involved splitting the data container from the bio itself into a struct bio_vec structure. This was mainly done to ease allocation of the structures, so that sizeof(struct bio) was always constant. The bio_vec structure is simply a {page, length, offset} tuple, and the bio can be allocated with room for anything from 1 to BIO_MAX_PAGES of them. Currently Linux defines that as 256 pages, meaning we can support up to 1MiB of data in a single bio for a system with a 4KiB page size. At the time of implementation, 1MiB was a good deal beyond the point where increasing the IO size further no longer yielded better performance or lower CPU usage. It also has the added bonus of making the bio_vec array fit inside a single page, so we avoid higher-order memory allocations (sizeof(struct bio_vec) == 12 on 32-bit, 16 on 64-bit) in the IO path.
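To make the layout concrete, here is a condensed rendering of the two structures as they appear in early 2.6 kernels; many bookkeeping fields of the real struct bio (flags, completion callback, destructor, reference count, the target block device, and so on) are left out.

#include <linux/types.h>      /* sector_t */

struct page;                  /* defined in the kernel's memory-management headers */

/* Condensed view; the real struct bio carries additional fields. */
struct bio_vec {
        struct page     *bv_page;       /* page holding the data */
        unsigned int     bv_len;        /* bytes of data in this segment */
        unsigned int     bv_offset;     /* offset of the data within the page */
};

struct bio {
        sector_t         bi_sector;     /* absolute start sector on the device */
        struct bio      *bi_next;       /* next bio, e.g. when chained on a request */
        unsigned short   bi_vcnt;       /* number of bio_vecs in use */
        unsigned short   bi_idx;        /* current index into bi_io_vec */
        unsigned int     bi_size;       /* total amount of data, in bytes */
        struct bio_vec  *bi_io_vec;     /* the {page, length, offset} vector */
};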
