Reducing TCB Size by Using Untrusted Components — Small Kernels Versus Virtual-Machine Monitors
To be published in Proceedings of the 11th ACM SIGOPS European Workshop, Leuven, Belgium, 2004

Reducing TCB size by using untrusted components — small kernels versus virtual-machine monitors

Michael Hohmuth, Michael Peter, Hermann Härtig
Technische Universität Dresden, Department of Computer Science, Operating Systems Group
{hohmuth,peter,haertig}@os.inf.tu-dresden.de

Jonathan S. Shapiro
Johns Hopkins University, Department of Computer Science, Systems Research Laboratory
[email protected]

Abstract

Secure systems are best built on top of a small trusted operating system: The smaller the operating system, the easier it can be assured or verified for correctness.

In this paper, we oppose the view that virtual-machine monitors (VMMs) are the smallest systems that provide secure isolation because they have been specifically designed to provide little more than this property. The problem with this assertion is that VMMs typically do not support inter-process communication, complicating the use of untrusted components inside a secure system.

We propose extending traditional VMMs with features for secure message passing and memory sharing to enable the use of untrusted components in secure systems. We argue that moving system components out of the TCB into the untrusted part of the system and communicating with them using IPC reduces the overall size of the TCB.

We argue that many secure applications can make use of untrusted components through trusted wrappers without risking security properties such as confidentiality and integrity.

1 Introduction

What is the architecture of choice for building secure systems? Proponents of virtual-machine monitors (VMMs) are quick to point out that VMM technology leads to the smallest, simplest systems that are the easiest to assure. Certainly, a superficial analysis and historic evidence seem to support this thesis: VMMs are specifically designed to enforce isolation, the technology is mature, and of the four TCSEC/A1 systems, the cheapest to build and validate was the VAX/VMM hypervisor.

The basic assumption underlying this opinion is that the trusted computing base (TCB) of applications running inside a virtual machine cannot be made any smaller, as VMMs add to the bare hardware little more than isolation, hardware multiplexing, and trusted drivers, which constitute the basic mechanisms needed for confinement.

In this paper, we object to this view. We assert that by adding kernel features originally meant for extensible systems, such as message passing and memory sharing, the overall size of the TCB (which includes all components an application relies on) can actually be reduced for a large class of applications.

Another assumption we address in this paper is that all components on which an application has operational dependencies must be in this application's TCB. This presumption leads to the unnecessary inclusion of many (protocol and device) drivers into the TCB.

The basic idea for reducing TCB size is to extract system components from the TCB and consider them as untrusted without violating the security requirements of user applications. There are two basic techniques that facilitate this goal: trusted wrappers and inter-process communication (IPC), which comprises message passing and shared memory. Trusted wrappers encapsulate isolated untrusted system components and provide additional security properties. Memory sharing and IPC are microkernel-like mechanisms that enable efficient and controlled communication between isolated system components. Adding microkernel-like features (IPC) to VMMs may make the kernel more complex (at first increasing TCB size), but it enables using untrusted components outside the TCB, leading to an overall decrease in TCB size.

Additionally, by adding IPC system calls, the emulation of communication devices (such as Ethernet emulation) for inter-task communication can be removed from the TCB.

Allowing untrusted system components enables the safe reuse of existing operating-system code. Reuse is generally desirable because it provides the functionality of a large body of known-to-work legacy code, and because it helps support backward compatibility with existing applications, data, or network protocols. Reuse is especially attractive for device and protocol drivers, as these components often make up the largest part of the operating system [1]. For example, device and protocol drivers make up more than 88 % of the code lines of the Linux 2.6 kernel [2]. However, reused code is usually untrusted — the point of reuse is not to look at (and not to worry about) the code. This motive is particularly present when reusing binary code for which source code is not available.

[1] Of course, drivers for devices that have unlimited DMA access are always a member of each subsystem's TCB. We return to device drivers in Section 3.3.
[2] This figure only counts lines that contribute to an x86 configuration. Irrelevant headers and architecture-dependent code are not considered.

This work further develops a theme we introduced in our European SIGOPS 2002 paper "Security Architectures Revisited," in which we argued that secure systems should be based on small isolated components, and briefly outlined the reuse of untrusted legacy components through trusted wrappers (which we somewhat confusingly called tunneling) [7]. Since then, several authors have proposed conventional virtual-machine technology without support for using untrusted components. The authors of Terra [4] even denounced microkernel technology as "exotic," implying that nothing can be learned from it. Therefore, in this paper we analyze the misconceptions that lead to this view and provide a blueprint for reducing TCB size.

This paper is organized as follows. In Section 2, we revisit the term "trusted computing base" to point out common problems in its use and to define it precisely. In Section 3, we explain how trusted wrappers work and in which scenarios they can help reduce TCB size. Section 4 compares VMMs and microkernels and identifies the kernel services needed for the efficient support of untrusted components. We discuss related work in Section 5 and conclude the paper in Section 6.

2 "Trusted computing base": a revisitation

The term "trusted computing base" (TCB) has become an imprecise, often misused term. For example, its users often assume that there is only one TCB in a system. This is wrong because the word "trust" refers to relationships among components, each of which relies on a different set of components for its correct function. We refer to the set of components on which a subsystem S depends as the TCB of S.

We assume that adversaries can compromise untrusted system components, but not trusted ones.

It is illuminating to define precisely the meaning of the word "trust" in TCB. It refers to the assertion that the TCB fulfills certain specifications such as security, functional, and timing requirements. The security requirements fall into three mostly orthogonal main categories: confidentiality, integrity, and availability. In this paper, without loss of generality we subsume all functional and nonfunctional (timing,

Confidentiality: Only authorized users (entities, principals, etc.) can access information (data, programs, etc.).

Integrity: Either information is current, correct, and complete, or it is possible to detect that these properties do not hold.

Availability: Data is available when and where an authorized user needs it.

Especially our definition of integrity is inconsistent with that of some prominent authors, including Gasser's [5]: these authors define integrity to imply that data cannot be modified or destroyed without authorization. We refer to this property, which implies both integrity and availability according to our definition, as recoverability.

We deviate from the alternative integrity definition for two reasons. First, it creates overlap between integrity and availability, rendering the two categories nonorthogonal. Second, it is useful to reason about our (weaker) definition of integrity and about availability in isolation from recoverability.

The tools we have at our disposal for ensuring integrity are very different from the tools we use to ensure availability: In general, we can secure integrity (and confidentiality) using cryptographic means, whereas we establish availability (and recoverability) using software assessment and verification and — especially when establishing trust through software assessment is impossible or impractical (e. g., when using untrusted networks), or if hardware failures are part of the threat model — by introducing redundancy and using trusted backup media.

3 Trusted wrappers: Reusing untrusted components

For many applications, data confidentiality and integrity are vastly more important than availability; Gasser [5] conveys this observation as "I don't care if it works, as long as it is secure." Here are some examples: Remote-file-system users are happy with not trusting networks and disks as long as their data is backed up regularly (or permanently, in a redundant disk array) and integrity and confidentiality are not at risk. Users of laptops and personal digital assistants (PDAs) are more ready to take the risk of having their mobile device stolen (rendering all data on it unavailable) if data confidentiality and integrity are ensured. People use legacy applications inside VMware on top of Linux, hoping that VMware
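Section 2's point that there is no single TCB — each subsystem S has its own, namely the set of components S transitively relies on — can be made concrete with a small sketch. The dependency graph and all component names below are invented purely for illustration; they do not come from the paper:

```python
# Hypothetical reliance graph: component -> components it directly relies on.
# Names (app_mail, nic_driver, ...) are invented for illustration.
deps = {
    "app_mail":    {"gui", "net_stack"},
    "app_backup":  {"crypto", "disk_driver"},
    "gui":         {"kernel"},
    "net_stack":   {"kernel", "nic_driver"},
    "crypto":      {"kernel"},
    "disk_driver": {"kernel"},
    "nic_driver":  {"kernel"},
    "kernel":      set(),
}

def tcb(subsystem, graph):
    """TCB of `subsystem`: the transitive closure of its reliance edges."""
    seen, stack = set(), [subsystem]
    while stack:
        for d in graph[stack.pop()]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# Two applications on the same system have different TCBs: the network
# driver burdens the mail client's TCB but not the backup tool's.
assert tcb("app_mail", deps)   == {"gui", "net_stack", "kernel", "nic_driver"}
assert tcb("app_backup", deps) == {"crypto", "disk_driver", "kernel"}
```

The sketch also shows why extracting a component from one subsystem's reliance set shrinks that subsystem's TCB without affecting others.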
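The cryptographic route to integrity and confidentiality mentioned in Section 2 is what makes the trusted-wrapper scenarios of Section 3 work: a small trusted wrapper can detect tampering by an untrusted storage or network component, at the cost of availability rather than integrity. The following is a minimal toy sketch, not the paper's design; the class names are invented, and a real wrapper would also encrypt for confidentiality using vetted cryptography rather than a bare MAC:

```python
import hashlib
import hmac
import os

class UntrustedStore:
    """Stands in for an untrusted driver or medium; an adversary may rewrite it."""
    def __init__(self):
        self.blocks = {}
    def write(self, name, blob):
        self.blocks[name] = blob
    def read(self, name):
        return self.blocks[name]

class TrustedWrapper:
    """Seals each block with HMAC-SHA256 so tampering is detectable.
    Only the wrapper and its key are in the application's TCB; the store is not."""
    def __init__(self, store):
        self.store = store
        self.key = os.urandom(32)           # secret never leaves the TCB
    def put(self, name, data):
        tag = hmac.new(self.key, name.encode() + data, hashlib.sha256).digest()
        self.store.write(name, tag + data)  # tag travels with the data
    def get(self, name):
        blob = self.store.read(name)
        tag, data = blob[:32], blob[32:]
        expect = hmac.new(self.key, name.encode() + data, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("integrity violation detected")
        return data

store = UntrustedStore()
wrapper = TrustedWrapper(store)
wrapper.put("report", b"quarterly numbers")
assert wrapper.get("report") == b"quarterly numbers"

# A compromised store alters the data; the wrapper detects it (integrity in the
# paper's weaker sense), though the data itself is now lost (availability).
store.blocks["report"] = store.blocks["report"][:-1] + b"X"
detected = False
try:
    wrapper.get("report")
except ValueError:
    detected = True
assert detected
```

The wrapper turns a silent corruption into a detected failure, which is exactly the integrity guarantee the paper's definition asks for, while leaving availability to separate means such as redundancy and backups.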