
;login: The USENIX Magazine
April 2009, Volume 34, Number 2

contents

OPINION
Musings  2
Rik Farrow
The Cloud Minders  5
Mark Burgess

OPERATING SYSTEMS
Tapping into Parallelism with Transactional Memory  12
Arrvindh Shriraman, Sandhya Dwarkadas, and Michael L. Scott
Difference Engine  24
Diwaker Gupta, Sangmin Lee, Michael Vrable, Stefan Savage, Alex C. Snoeren, George Varghese, Geoffrey M. Voelker, and Amin Vahdat

PROGRAMMING
Leveraging Legacy Code for Web Browsers  32
John Douceur, Jeremy Elson, Jon Howell, and Jacob R. Lorch, with Rik Farrow
Python 3: The Good, the Bad, and the Ugly  40
David Beazley

HARDWARE TUTORIAL
The Basics of Power  54
Rudi van Drunen

COLUMNS
Practical Perl Tools: Polymorphously Versioned  64
David N. Blank-Edelman
Pete's All Things Sun (PATS): The Sun Virtualization Guide  69
Peter Baer Galvin
iVoyeur: Top 5 2008  74
Dave Josephsen
/dev/random  79
Robert G. Ferrell

BOOK REVIEWS
Book Reviews  81
Elizabeth Zwicky et al.

USENIX NOTES
Tribute to Jay Lepreau, 1952–2008  87
Ellie Young

CONFERENCES
Report on the 8th USENIX Symposium on Operating Systems Design and Implementation (OSDI '08)  89
Report on the Workshop on Supporting Diversity in Systems Research (Diversity '08)  108
Report on the Workshop on Power Aware Computing and Systems (HotPower '08)  110
Report on the First Workshop on I/O Virtualization (WIOV '08)  114

;login: is the official magazine of the USENIX Association.

Editor: Rik Farrow, [email protected]
Managing Editor: Jane-Ellen Long, [email protected]
Copy Editor: David Couzens, [email protected]

;login: (ISSN 1044-6397) is published bi-monthly by the USENIX Association, 2560 Ninth Street, Suite 215, Berkeley, CA 94710. $90 of each member's annual dues is for an annual subscription to ;login:. Subscriptions for nonmembers are $125 per year. Periodicals postage paid at Berkeley, CA, and additional offices.
Production: Casey Henderson, Jane-Ellen Long, Michele Nelson, Jennifer Peterson
Typesetter: Star Type, [email protected]

POSTMASTER: Send address changes to ;login:, USENIX Association, 2560 Ninth Street, Suite 215, Berkeley, CA 94710.
Phone: (510) 528-8649  FAX: (510) 548-5738
http://www.usenix.org  http://www.sage.org

©2009 USENIX Association. USENIX is a registered trademark of the USENIX Association. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. USENIX acknowledges all trademarks herein. Where those designations appear in this publication and USENIX is aware of a trademark claim, the designations have been printed in caps or initial caps.

OPINION

MUSINGS

Rik Farrow
Rik is the Editor of ;login:.
[email protected]

Where was I? Oh yeah, in San Diego again, but this time for OSDI. OSDI and SOSP are the two big operating system conferences in the US, and I've particularly enjoyed being able to attend OSDI. I sat in the second row during all sessions, listening intently. In this issue I've worked with the authors of two OSDI papers to share some musings on what I learned, and I asked researchers at the University of Rochester to write a survey article about transactional memory (TM).

You may not have heard of TM (unless you are old enough to remember Transcendental Meditation), but TM may become very important in both hardware and programming languages in the near future. As Dave Patterson said during his keynote at USENIX Annual Tech in 2008, multicore is here. Multicore processors have not just arrived; they are the clear path forward for increasing processor performance, and TM looks like a good way to make parallel programming techniques accessible to people besides database and systems programmers.

Locks

Mutual exclusion (mutex) locks have been the technique of choice for protecting critical sections of code. You may recall past ;login: articles about removing the "big lock" from the Linux or FreeBSD kernels. The big lock refers to having one lock that ensures that only one thread can be running within the locked code at a time. Having one big lock is inefficient, as it blocks parallel execution of key kernel code.

Over time, systems programmers refined locking by replacing the one big lock with fine-grained locks, that is, locks that protect much smaller code segments. Programmers have to work carefully, because having multiple locks can result in code that deadlocks: one locked section requires access to the mutex for another, already locked, resource, which in turn requires access to the first locked section (a minimal sketch of this hazard appears just below). TM replaces locking with transactions, where the results of an operation are either committed atomically (all at once) or aborted.
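Here is a minimal C sketch of the lock-ordering hazard just described. It is mine, not from any of the articles in this issue, and the lock and function names are purely illustrative: each thread takes one fine-grained lock and then waits for the other's.

  /*
   * A deliberately broken example: each thread holds one fine-grained
   * lock while waiting for the other's, so the program can deadlock.
   * Build with: cc -pthread deadlock.c
   */
  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
  static int counter_a, counter_b;

  static void *worker_one(void *arg)
  {
      (void)arg;
      pthread_mutex_lock(&lock_a);   /* holds A ...             */
      pthread_mutex_lock(&lock_b);   /* ... and waits for B     */
      counter_a++;
      counter_b++;
      pthread_mutex_unlock(&lock_b);
      pthread_mutex_unlock(&lock_a);
      return NULL;
  }

  static void *worker_two(void *arg)
  {
      (void)arg;
      pthread_mutex_lock(&lock_b);   /* holds B ...                    */
      pthread_mutex_lock(&lock_a);   /* ... and waits for A: deadlock  */
      counter_b++;
      counter_a++;
      pthread_mutex_unlock(&lock_a);
      pthread_mutex_unlock(&lock_b);
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker_one, NULL);
      pthread_create(&t2, NULL, worker_two, NULL);
      pthread_join(t1, NULL);        /* may never return */
      pthread_join(t2, NULL);
      printf("%d %d\n", counter_a, counter_b);
      return 0;
  }

Under TM, the two counter updates would instead be declared as a single transaction; the runtime commits or aborts them atomically, and there is no lock-acquisition order to get wrong.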
Within a processor's instruction set there are precious few atomic operations, as these play havoc with instruction pipelines. These operations are useful for implementing mutexes, but not for handling transactions that span the much larger blocks of code found in critical sections of locked code (the sketch at the end of this column makes that limitation concrete).

In Shriraman et al. you will learn about the various hardware, software, and mixed hardware/software approaches to supporting TM. Hardware approaches are faster but inflexible and limited in scope. Software approaches are painfully slow, as they must emulate hardware features, which also adds considerably to the amount of memory involved in a transaction.

As I was reading this article, I found myself wanting to reread Hennessy and Patterson [1] on caches and cache coherence. If you don't have access to this book, Wikipedia has a very decent entry on caches [2]. Many TM approaches rely on tags added to caches for their operation, and the tags themselves are related to cache coherency.

Recall that only registers can be accessed within a single processor clock cycle. Level 1 (L1) caches provide more storage than registers, but accessing their data requires multiple clock cycles. As the caches become larger (L2 and L3 caches), the number of clock cycles increases because of the hardware involved in determining whether a particular cache contains valid data for the desired memory address.

In single-threaded programs and programs that do not explicitly share memory, coherency issues do not arise: only one thread has access to each memory location. But in multi-threaded programs and programs that share memory, the caches associated with a core, such as its L1 cache, will contain data that needs to be consistent with the data found in the rest of the memory system. Cache coherency systems handle this by tagging each cache block with a status such as exclusive, shared, modified, or invalid. Processor-level hardware then manages coherency between the different caches, for example by updating an invalidated cache block when it is accessed.

Shriraman et al.'s favored solution involves extending these coherency mechanisms to support flexible TM. I like this article: it is a thorough survey of the approaches to TM, as well as a clear statement of the issues, such as how TM can give programmers an easier mechanism that avoids deadlock while approaching the performance of fine-grained locks.

Although parallel programming is largely the domain of systems, database, and some gaming programmers, the wider use of multicore processors suggests that more programmers who require high performance will be looking to add parallelism to their code.
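To make the earlier point about atomic instructions concrete, here is a minimal C11 sketch (mine, not from Shriraman et al.; all names are illustrative) of a spinlock built from a single compare-and-swap. The atomic instruction updates only one word, which is enough to hand out mutual exclusion, but it cannot by itself commit or roll back the many loads and stores inside a larger critical section the way a transaction would.

  #include <stdatomic.h>

  /* 0 = free, 1 = held */
  typedef struct { atomic_int held; } spinlock_t;

  static void spin_lock(spinlock_t *l)
  {
      int expected = 0;
      /* Compare-and-swap: atomically change held from 0 to 1, else retry.
       * On failure the CAS writes the current value into expected, so we
       * reset it to 0 before trying again. */
      while (!atomic_compare_exchange_weak(&l->held, &expected, 1))
          expected = 0;
  }

  static void spin_unlock(spinlock_t *l)
  {
      atomic_store(&l->held, 0);
  }

A hardware or hybrid TM system instead leans on the cache-coherence machinery described above, tracking a transaction's read and write sets so that the whole group of updates can be committed or aborted at once.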