EMERY BERGER
Research Statement

Despite years of research in the design and implementation of programming languages, programming remains difficult and error-prone. While high-level programming languages such as Java offer the promise of speeding development and helping programmers avoid errors, they often impose an unacceptable performance cost, either in execution time or memory consumption. The vast majority of today's software applications—from mail servers, database managers, and web servers to nearly all desktop applications—are written in C and C++, two unsafe languages. Unfortunately, these languages leave applications defenseless against a wide range of programmer errors. These errors cause programs to misbehave and crash, and leave them susceptible to attack.

Current architectural trends promise to further exacerbate these problems. Limitations on processor scaling due to energy consumption and heat dissipation mean that newer CPUs will not increase sequential performance, locking in the overhead of higher-level languages. The widespread adoption of multicore architectures means that programmers will be forced to use concurrency to increase performance, but multithreaded programs are notoriously difficult to write and to debug. New demands such as energy efficiency and non-uniform memory access times will pose further challenges to programmers.

My work addresses all of these challenges: increasing the efficiency of high-level languages, making programs written in lower-level languages both safer and faster, and developing new languages that make it easier for programmers to write efficient and correct programs.

1 Contributions

My research agenda focuses on automatically improving application performance, security, and correctness. By working at the runtime system level, especially in the memory manager, it is possible to bring performance and reliability benefits to deployed software without programmer intervention.

My work in this space has had considerable real-world impact. For example, my Hoard scalable memory manager has been downloaded over 40,000 times. It is currently in use by companies such as AOL, Business Objects, Novell, Reuters, and British Telecom, whose telephony servers Hoard sped up by a factor of five. DieHard, a system that automatically improves both reliability and security, has been downloaded over 10,000 times and is currently being evaluated within Microsoft for deployment in an upcoming release of Microsoft Office.

These systems not only perform well empirically but also exhibit provably good properties. For instance, Hoard's worst-case memory consumption is asymptotically equivalent to that of the best possible sequential memory manager, and DieHard's probabilistic algorithms provide quantitative guarantees of its resilience to errors. This is a consistent theme of my work: whenever possible, I develop systems that one can reason about mathematically.

My work also crosses the traditional boundaries between operating systems and runtime systems. I have developed cooperative memory managers that combine operating system and garbage collection (GC) support to avoid paging, the costly shuttling of data between memory and disk. Because disk access is approximately six orders of magnitude slower than main memory access, eliminating paging can dramatically increase performance.
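To make this cooperation concrete, the sketch below illustrates the idea, in the spirit of the bookmarking collector discussed next: before the operating system evicts a page, the runtime records ("bookmarks") the outgoing references on that page, so a later collection can use that in-memory summary instead of touching the evicted page. This is a minimal, self-contained illustration only; every type and name here is invented, and it is not the real kernel interface or collector implementation.

```cpp
// Sketch of OS/GC cooperation to avoid paging. Illustrative only: all names
// and types are invented; this is not the actual kernel interface or collector.
#include <cstddef>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

using PageId = std::size_t;
using ObjectId = std::size_t;

struct Page {
  bool resident = true;
  // Each object on the page, paired with the objects it points to.
  std::vector<std::pair<ObjectId, std::vector<ObjectId>>> objects;
};

class CooperativeCollector {
 public:
  // Invoked by the (simulated) OS just before it evicts a page.
  void onEvictionNotice(PageId pid, Page& page) {
    auto& bookmark = bookmarks_[pid];
    for (const auto& [obj, refs] : page.objects) {
      (void)obj;
      bookmark.insert(refs.begin(), refs.end());  // summarize outgoing references
    }
    page.resident = false;  // the page may now leave physical memory
  }

  // Simplified scan step of a collection: resident pages are scanned directly,
  // while evicted pages contribute only their bookmarked references, so the
  // scan never triggers a page fault. (Roots and transitive marking omitted.)
  void scan(const std::unordered_map<PageId, Page>& heap,
            std::unordered_set<ObjectId>& reachable) const {
    for (const auto& [pid, page] : heap) {
      if (page.resident) {
        for (const auto& [obj, refs] : page.objects) {
          (void)obj;
          reachable.insert(refs.begin(), refs.end());
        }
      } else if (auto it = bookmarks_.find(pid); it != bookmarks_.end()) {
        reachable.insert(it->second.begin(), it->second.end());
      }
    }
  }

 private:
  std::unordered_map<PageId, std::unordered_set<ObjectId>> bookmarks_;
};
```

The essential design point is that, after the eviction notification, the collector's work for an evicted page is a lookup in an in-memory summary rather than a disk access.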
By avoiding paging, the bookmarking collector speeds up Java programs on a modified Linux kernel by 5X to 41X and reduces pause times by 45X to 218X (from minutes to milliseconds).

Another line of my research focuses not on increasing performance but on eliminating the performance degradation that results from running contributory systems. These systems rely on a user community that donates CPU time, memory, and disk space (examples include peer-to-peer backup systems, Condor, and Folding@Home). Because these applications compete with the user by triggering paging or reducing available disk space, many users are reluctant to run them. I have developed operating system support that enables the transparent execution of contributory applications, eliminating this key barrier to their widespread adoption. For example, our transparent memory manager limits the performance impact of paging to below 2% while donating hundreds of megabytes of memory.

While automatic approaches that improve performance or correctness are always desirable, sometimes programmer support is necessary. The second theme of my research focuses on the development of programming languages and software infrastructures that simplify correct and efficient programming. These include the Flux programming language, which lets programmers quickly build deadlock-free concurrent client-server applications from off-the-shelf sequential code. Flux makes it easier to write these applications, and the resulting servers match or exceed the performance of hand-written code.

Finally, I have developed new measurement methodologies for quantitative memory management studies. For example, memory management researchers often condemn special-purpose, "custom" memory managers, while practitioners advocate their use. My work demonstrates that some custom memory managers are either simpler to use or more efficient (up to 44% faster), but consume more space (up to 230% more). I introduced a new memory management abstraction called reaps that captures the performance of custom memory managers while limiting their space consumption. I also developed a measurement infrastructure called oracular memory management that, for the first time, quantifies the cost of using precise garbage collection versus explicit memory management: a good garbage collector can match the performance of explicit memory management in exchange for 3X to 5X more memory.

Roadmap

Table 1 presents a full list of my research contributions to date; for reasons of space, I describe only my key contributions in detail here. Section 2 presents my work on systems that automatically improve performance, security, and correctness. Next, Section 3 describes programming languages and software infrastructures that I have developed to simplify correct and efficient programming. Section 4 then details my quantitative memory management studies. Finally, Section 5 presents planned directions for future work.

2 Transparently Improving Reliability and Performance

2.1 Improving Reliability and Security

Nearly all applications in wide use today are written in unsafe languages such as C and C++, and so are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. These errors can lead to program crashes, security vulnerabilities, and unpredictable behavior.
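To make these error classes concrete, the small fragment below (illustrative only, and not drawn from any of the systems discussed here) exhibits all three. It compiles cleanly, yet its behavior is undefined: each marked line silently corrupts or misreads memory.

```cpp
// Illustrative only: three classic C/C++ memory errors that compile cleanly
// and typically fail silently or unpredictably at run time.
#include <cstring>
#include <iostream>

int main() {
  // 1. Heap buffer overflow: writes past the end of the allocation.
  char* name = new char[8];
  std::strcpy(name, "overflowing");   // 12 bytes (11 chars + NUL) into an 8-byte buffer

  // 2. Dangling pointer: the memory is freed but still used afterwards.
  int* counter = new int(42);
  delete counter;
  std::cout << *counter << "\n";      // reads freed memory

  // 3. Uninitialized read: prints whatever value happened to be in memory.
  int uninitialized;
  std::cout << uninitialized << "\n";

  delete[] name;
  return 0;
}
```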
I have developed a new approach to attacking these problems, based on randomization, replication, and both probabilistic analysis and statistical inference. Systems based on these techniques either allow programs to run correctly in the face of memory errors, or detect and automatically correct memory errors, all with high probability. To my knowledge, this is the first use of randomized algorithms to improve program reliability.

DieHard: Tolerating Errors with Randomization

DieHard [3,4] prevents heap corruption and provides probabilistic guarantees of avoiding memory errors such as dangling pointers and heap buffer overflows. DieHard randomly locates program objects in a heap that is some factor M larger than required (e.g., twice as large). This scattering of objects across memory not only makes some errors unlikely to occur, it also makes it virtually impossible for an attacker to know where vulnerable parts of the program's data are, thwarting known heap-based exploits.

DieHard's random placement has an additional, even more important effect: it quantifiably increases the odds that a program will run correctly despite memory errors. In particular, while DieHard prevents invalid and multiple frees and heap corruption, it probabilistically avoids buffer overflows, dangling pointer errors, and uninitialized reads. These probabilities quantify the likelihood that a program will run correctly despite memory errors, providing probabilistic memory safety.

DieHard works in two modes: standalone and replicated. The standalone version replaces the memory manager with the DieHard randomized memory manager. This randomization increases the odds that buffer overflows will have no effect and reduces the risk of dangling pointers. The replicated version provides greater protection against errors by running several instances of the application simultaneously and voting on their output. Because each replica is randomized differently, each replica will likely produce a different output if it has an error, and some replicas are likely to run correctly despite the error.

Table 1

system                          description                                                     section

Transparently improving performance and reliability
  randomized algorithms
    DieHard [3,4]               tolerates memory errors with high probability (PLDI 2006)       §2.1
    Exterminator [17]           automatically corrects memory errors (PLDI 2007)                §2.1
    Archipelago [16]            tolerates/detects overflows with high probability               §2.1
  cooperative memory management (OS+GC)
    Bookmarking Collection [14] garbage collection without paging (PLDI 2005)                   §2.2
    CRAMM [21, 20]              dynamically adjusts heap to maximize performance (OSDI 2006)    §2.2
  transparency for contributory applications
    TMM [10]                    transparent memory management (USENIX 2006)                     §2.3
    TFS [12, 11]                transparent file system (FAST 2007)                             §2.3
  efficient memory management
    Hoard [1,2]                 scalable concurrent memory manager
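Returning to the random placement strategy that underlies DieHard (Section 2.1 above), the following is a minimal sketch of allocation into an over-provisioned heap of fixed-size slots. It illustrates the idea only: the class name, parameters, and structure are invented, and DieHard's actual allocator is organized quite differently.

```cpp
// Sketch of randomized object placement in an over-provisioned heap, in the
// spirit of the random placement described above. Illustrative names only;
// this is not DieHard's implementation.
#include <cstddef>
#include <random>
#include <vector>

class RandomizedHeap {
 public:
  // 'slots' is the number of fixed-size slots; choosing it to be a factor M
  // larger than the expected number of live objects creates the empty space
  // that makes overflows and premature reuse unlikely to cause damage.
  RandomizedHeap(std::size_t slots, std::size_t slotSize)
      : slotSize_(slotSize), inUse_(slots, false),
        storage_(slots * slotSize), rng_(std::random_device{}()) {}

  void* allocate() {
    std::uniform_int_distribution<std::size_t> pick(0, inUse_.size() - 1);
    // Probe random slots until an empty one is found; with an M-times
    // over-provisioned heap this terminates quickly with high probability.
    for (;;) {
      std::size_t i = pick(rng_);
      if (!inUse_[i]) {
        inUse_[i] = true;
        return &storage_[i * slotSize_];
      }
    }
  }

  void deallocate(void* p) {
    std::size_t i = (static_cast<char*>(p) - storage_.data()) / slotSize_;
    inUse_[i] = false;  // a later allocation is unlikely to land here soon,
                        // which masks many dangling-pointer errors
  }

 private:
  std::size_t slotSize_;
  std::vector<bool> inUse_;
  std::vector<char> storage_;
  std::mt19937_64 rng_;
};
```

In the replicated mode described above, each replica would use its own independently seeded instance of such a heap, so an error that corrupts data in one replica is unlikely to corrupt the same data in another, which is what makes output voting effective.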