Informed Prefetching and Caching
Russel Hugo Patterson III

December 1997
CMU-CS-97-204

School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering.

Thesis Committee:
Garth A. Gibson, Chair
Mahadev Satyanarayanan
Daniel P. Siewiorek
F. Roy Carlson, Quantum Corporation

© 1997 by Russel Hugo Patterson III

This research was supported in part by Advanced Research Projects Agency contracts DABT63-93-C-0054 and N00174-96-0002, in part by the Data Storage Systems Center under National Science Foundation grant number ECD-8907068, in part by an IBM Graduate Fellowship, and in part by generous contributions from the member companies of the Parallel Data Consortium: Hewlett-Packard Laboratories, Symbios Logic, Data General, Compaq, IBM Corporation, Seagate Technology, EMC Corporation, Storage Technology Corporation, and Digital Equipment Corporation. The views and conclusions contained in this document are my own and should not be interpreted as representing the official policies, either expressed or implied, of any supporting organization or the U.S. Government.

Keywords: prefetching, caching, file systems, resource management, cache management, TIP, I/O, cost-benefit analysis, economic markets, disk arrays, RAID

For Lee Ann

Abstract

Disk arrays provide the raw storage throughput needed to balance rapidly increasing processor performance. Unfortunately, many important, I/O-intensive applications have serial I/O workloads that do not benefit from array parallelism. The performance of a single disk remains a bottleneck on overall performance for these applications. In this dissertation, I present aggressive, proactive mechanisms that tailor file-system resource management to the needs of I/O-intensive applications.
In particular, I will show how to use application-disclosed access patterns (hints) to expose and exploit I/O parallelism, and to dynamically allocate file buffers among three competing demands: prefetching hinted blocks, caching hinted blocks for reuse, and caching recently used data for unhinted accesses. My approach estimates the impact of alternative buffer allocations on application elapsed time and applies run-time cost-benefit analysis to allocate buffers where they will have the greatest impact. I implemented TIP, an informed prefetching and caching manager, in the Digital UNIX operating system and measured its performance on a 175 MHz Digital Alpha workstation equipped with up to 10 disks running a range of applications. Informed prefetching on a ten-disk array reduces the wall-clock elapsed time of computational physics, text search, scientific visualization, relational database queries, speech recognition, and object linking by 10-84% with an average of 63%. On a single disk, where storage parallelism is unavailable and avoiding disk accesses is most beneficial, informed caching reduces the elapsed time of these same applications by up to 36% with an average of 13% compared to informed prefetching alone. Moreover, applied to multiprogrammed, I/O-intensive workloads, TIP increases overall throughput.

Acknowledgments

First, I would like to thank my advisor, Garth Gibson, for all the help, guidance, and advice he has given me. He has spent many long hours teaching me how to do systems research. Throughout, he has been a tireless advocate of excellence, and this document is much better for it. He has also provided a huge amount of support. I am particularly grateful for the way he anticipated my needs as I started work on cost-benefit resource management and ensured that all the other components, including benchmark applications, programming assistance, and a hardware platform, were in place when I needed them.
He let me focus on what I could do best and, by providing what I could not, guaranteed the success of the project. Beyond that specific support, in founding the Parallel Data Lab, Garth has created a wonderful environment for systems research, and pulled together an impressive group of staff and students. I feel lucky to have been a member of the Lab. For all of this, and for being such a good sport when I couldn’t resist teasing him, I thank him.

I would like to thank Satya for his help and advice throughout. Recently, I had the opportunity to review some notes of our early meetings. I now realize that at the time I only understood a fraction of our discussions. Still, some of it sank in. I thank him for helping me understand the key problem and develop a solution to it.

Thanks are also due to the other members of my thesis committee. I thank Rick Carlson for his patience as I finished up. I look forward to continuing to work with him at Quantum as I make the transition to the real world. I am very grateful to Dan Siewiorek who gallantly joined the committee at the last minute and read the dissertation on short notice.

I’d also like to thank the many faculty who provided early guidance and support including Dave Lambeth, Stan Charap, Onat Menzilcioglu, H.T. Kung, and Sandro Forin. They all patiently endured many long meetings with me and helped me find my way when I was a newcomer to the field of I/O systems research.

I am particularly indebted to Jim Zelenka for his tireless efforts debugging and improving the raw TIP code I threw over the wall to him during the crush to produce the SOSP paper. He has an uncanny ability to track down kernel bugs, which I gave him many opportunities to exercise. I am fortunate to have both benefited and learned from this skill.
I am also deeply grateful to Danner Stodolsky who helped optimize both the first and second TIP prototypes, annotated Sphinx and Gnuld to give hints, and spent many a sleepless night running experiments and helping in other ways to get the SOSP paper out the door. Both Danner and Jim helped with many constructive conversations about the TIP design and implementation. Many thanks to them both.

A number of other students also helped me in my research along the way and I would like to thank each of them. Eka Ginting annotated Postgres to give hints and helped with other aspects of the SOSP paper. Mark Holland set me up with RAIDSim which was the foundation of the simulation environment for TIPTOE. And, Jiawen Su investigated asynchronous name resolution in TIP and ported the first TIP prototype to Digital UNIX.

I would like particularly to thank Andrew Tomkins for taking up the problem of deep prefetching and becoming such an energetic collaborator on the TIP project. His work developing TIPTOE extended TIP in a significant and exciting way. Without him, the OSDI and Sigmetrics papers would never have existed, and the TIP project would not have been nearly as complete.

Writing a dissertation requires more than just research, and I am grateful to the many folks who helped me keep my senses of perspective and humor over the years. Peter Stout, besides answering my endless computer questions, provided a fine example of how it should be done. He also taught me how to drink coffee in large quantities. In the later years, David Steere and Brian Noble were always ready with a humorous diversion even when the slogging seemed slow and endless. Thanks too to Patty Mackiewicz who helped with many practical problems and always looked out for me. But, more importantly, she found a way to make me smile even in the face of adversity.

None of this would have been possible without my parents.
I thank them for the many years of love and support that first prepared me to undertake this work and then so eased my way during the long travail.

Finally, I would like to thank Lee Ann who first encouraged me to go back to school and pursue an advanced degree. That pursuit turned into a task larger than either of us ever imagined and demanded sacrifices neither of us anticipated. Lee Ann generously made those sacrifices and supported me throughout. When times were good, she cheered and shared in my joy. When times were tough, she cheered me up and gave me strength. Lee Ann has been there for me in these and many other ways I’m sure I cannot even imagine. I am deeply and forever grateful. As a small sign of my appreciation, I dedicate this dissertation to you.

Hugo Patterson
Pittsburgh, Pennsylvania
December, 1997

Table of Contents

List of Figures
List of Tables
List of Equations
1 Introduction
2 Asynchrony + Throughput = Low Latency
2.1 Disk drive performance characteristics
2.2 ASAP: the four virtues for I/O workloads
2.2.1 Avoiding accesses avoids latency
2.2.2 Increasing sequentiality increases channel utilization
2.2.3 Asynchrony masks latency
2.2.4 Parallelizing I/O workloads increases array utilization
2.2.5 ASAP summary
2.3 Disclosure hints for aggressive prefetching and I/O parallelism
2.4 Related work
2.5 Conclusions
3 Disclosing I/O Requests in Hints
3.1 Hints that disclose
3.2 The hint interface
3.3 Annotation techniques
3.3.1 In-line hinting
3.3.2 Loop duplication
3.3.3 Loop splitting
3.4 Annotating applications to give hints
3.4.1 Agrep
3.4.2 Gnuld
3.4.3 Postgres
3.4.4 Davidson
3.4.5 XDataSlice
3.4.5.1 XDataSlice organization
3.4.5.2 Extending HDF to disclose hints to TIP