
An Overview of Trusted Computing

A Project Report

submitted by Saran Neti Maruti Ramanarayana (100841098)

as Project 2 for COMP 5407F, Fall 2010

Carleton University

Abstract

This project report gives a brief overview of the challenges in establishing trust among a network of computers potentially running malicious software. We look at some key features of Trusted Computing, including the hardware modifications, novel protocols and key management ideas, while highlighting various privacy concerns and open technical challenges responsible for its slow adoption. We conclude with a few remarks about its social aspects.

1 Introduction

The widespread use of desktop applications processing sensitive data (e.g. banking or health records) has resulted in an ecosystem based on exploiting vulnerabilities in software and compromising millions of computers. Commercial security software, like anti-virus, exists to detect and remove infected files, but new vulnerabilities are constantly discovered and exploited. While the cycle of discovering, exploiting and fixing security bugs is never ending, billions of dollars are lost every year by companies (e.g. financial institutions) providing services through the Internet, due to malicious activity.

A major concern among such service providers is their inability to determine if a client machine using their service can be trusted with critical information. In other words, even if the end user and the network can be trusted, there is no easy way to tell if a computer contains malicious software, and hence no way to refuse to serve a request originating from such a computer.

Trusted Computing is an umbrella term used to describe security related enhancements to a modern computer that assist in establishing a trust relationship under the assumption that it is potentially running compromised software. This process can be broadly described by Attestation - securely measuring the state of a computer - and Verification - remotely determining if a computer can be trusted using its state information.

It is important to observe that Trusted Computing does not aim to reduce vulnerabilities in software or prevent attacks against individual software components. Instead it aims to make the entire software ecosystem more secure by regulating interaction between networked computers and facilitating a universally secure computation of a trust metric. Naturally, many privacy and ownership concerns arise when access to services is granted or denied based on judgment calls of whether a particular piece of software is trusted. Moreover, since the root of trust is located in hardware, running an alternate piece of software can be made impossible if desired, as noted vociferously by [7]. Some of these issues have been cleverly addressed [12], but a lot remains to be done for widespread adoption by the community at large.

We now look in detail at some of the fundamental problems, existing solutions and possible improvements to establish trust under the Trusted Computing model.

2 Assumptions about Hardware and Software

Algorithms to process data can be implemented in hardware, in software, or in a combination of both. Functionally, the primitives or building blocks of algorithms are designed in hardware and are invariant for the lifetime of a physical computer, as opposed to the higher layer logic which can be modified, loaded into memory and executed. For example, in a PC the instruction set serves as a layer of abstraction between hardware and software. A modern computer consists of many peripheral devices connected to the CPU, where each device itself has evolved into a complex computational unit. The software running on these devices, e.g. network cards, graphics cards and storage drives, is called firmware, and its update cycle can be more frequent than the lifetime of the hardware. In order to make clear assumptions about what is trusted and what isn't, it is important to understand the functionality and security that each layer offers.

Hardware

Microprocessor, memory, display, keyboard, mice, etc. are all types of hardware devices which are assumed to be trusted for the following reasons:

1. The chips controlling them are manufactured in gigantic quantities; duplicate copies of them are found on millions of machines, and modification of their functionality is not typically possible without damaging the device.

2. The chips serve the sole purpose of managing the device, making it almost impossible for malware to make use of them for general purpose computation.

3. The code complexity in the chips is not too high, and hence the chips are not reprogrammable, although they may internally have a clear distinction between firmware and hardware.

Software

All code executed by the microprocessor, from the bootloader and kernel to userspace applications, is untrusted. This layer implements most of the functionality of modern computers, and its contents are dynamic and constantly changing.

Firmware

Network and graphics cards are examples of devices which contain reprogrammable firmware. Trust in this region is a grey area, because firmware updates are usually delivered through software, which is untrusted. Nevertheless, firmware is often specialized and has access only to its own device, making writing malware for it difficult. The BIOS is an example of firmware that is untrusted, as it contains a lot of code [2] to initialize devices before handing over control to the bootloader. Attacks have been devised that install backdoors resilient even to disk wipes [6].

In the Trusted Computing model, a specialized piece of hardware called the TPM chip is introduced in the hardware layer, and it provides a way to measure every piece of software (and some firmware, like the BIOS) that loads into the computer.

Fig. 2.1: TPM chip. The figure shows the components of a Trusted Platform Module: a cryptographic processor (random number generator, RSA key generator, SHA-1 hash generator, encryption-decryption-signature engine), persistent memory (Endorsement Key (EK), Storage Root Key (SRK)), versatile memory (Platform Configuration Registers (PCR), Attestation Identity Keys (AIK), storage keys), and a secured input-output interface.

The TPM chip includes:

1. Endorsement Key (EK) - A public/private keypair embedded into the TPM that uniquely identifies it. The private key never leaves the chip.

2. Storage Root Key (SRK) - Used to protect other keys. This can be erased by wiping the TPM clean to its factory state.

3. Platform Configuration Registers (PCR) - Used to store the platform state.

4. Attestation Identity Key (AIK) - Used to validate the TPM chip during remote attestation.

It also includes hardware implementations of common cryptographic functions. The key idea is that in any operation output = CryptoFunction(data, key), the key, the function and the data all reside in the same chip. If we can somehow generate data that meaningfully reflects the state of the software, output can be used to convey this information to remote parties securely.
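Below is a minimal Python sketch of this idea (not of a real TPM interface): the key is generated inside the "chip" object and never crosses its boundary, so callers only ever see the output. HMAC-SHA1 is used here only as a stand-in for the TPM's RSA signature engine.

    import hashlib
    import hmac
    import os

    class ToyTPM:
        def __init__(self):
            # Analogue of a key that is created inside the chip and never leaves it.
            self._key = os.urandom(32)

        def crypto_function(self, data: bytes) -> bytes:
            # key, function and data are all used inside this object only.
            return hmac.new(self._key, data, hashlib.sha1).digest()

    chip = ToyTPM()
    state = b"measurement of the loaded software"   # illustrative data
    output = chip.crypto_function(state)             # only the output leaves the chip
    print(output.hex())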

3 Measuring the state of a computer

In order to be able to convey state information about a computer, we need to know the hardware configuration of the computer and also all the security relevant software that boots up on it, in a way that takes into account our trust model: all software is untrusted and can potentially contain malware. The hardware configuration can be vouched for by the vendor in the form of an Endorsement Key in the TPM. TPM chips have been hacked into using electron microscopes [5], but our threat model assumes that the adversary does not have physical access to the computer.

Measuring software for signs of malware infestation is tricky. The design of software is also layered, but the abstractions are not necessarily made with strong security in mind. When the process isolation guaranteed by an OS kernel doesn't hold up against attacks, lightweight Virtual Machines (VMs) equipped with better isolation provided by hypervisors are employed [11]. And when an application is unable to operate on a collection of VMs, software APIs are provided in the hypervisor layer as an efficient means of communication [9, 8]. This constant partitioning of software into isolated compartments, and then creating channels between them for communication in order to find the optimum balance between security and efficiency, is a never ending process. But this is of interest to us: we want to determine the critical layer of software which can guarantee isolation of the layers above it, enabling a reduction in the total amount of code that needs to be trusted, i.e. the Trusted Computing Base (TCB).

The objective is to split our software stack into layers L0, L1, ..., Ln, and find a k such that we:

1. Minimize the amount of trusted code - the total |L0| + |L1| + ... + |Lk| is minimized, where |Li| is the number of lines of code running at layer Li.

2. Isolate higher layer applications - For all i > k, all applications A ∈ Li must be able to run in secure isolated execution environments.

3. Maximize efficiency - The performance penalty that security imposes on applications running in a higher layer must be minimized.

Finding this layer Lk is the big challenge. The older Flicker project uses only 250 lines of code in the TCB and makes heavy use of the inefficient late launch instructions (AMD SVM or Intel TXT) on newer commodity processors [17]. The TrustVisor project attempts to minimize the TCB while maintaining reasonable efficiency, by using an easily verifiable special-purpose hypervisor with about 6k lines of code [16].

3.1 What software to measure?

We now look at a simple mechanism using the TPM that ensures that the identity of all software loaded on a computer gets recorded. Let S1, S2, S3, ..., Sn be the list of software binaries that get loaded into a computer, in that order. Let H be a collision resistant cryptographic hash function and I be a function which computes the identity of a piece of software. Before loading Si+1 into memory, Si computes D = I(Si+1) and sends it to the TPM using a hardware API. Using one of the PCRs, the TPM cumulatively stores the hashes, i.e. PCR0 = H(PCR0 || D). This append-only way of computing identities of software ensures that malware cannot erase its identity from PCR0, which is a secure hardware register.
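The following Python sketch illustrates this extend operation under the assumptions above; the byte strings standing in for software images, and the use of plain SHA-1 over the image as the identity function I, are illustrative only.

    import hashlib

    def identity(binary: bytes) -> bytes:
        # I(S): here simply the SHA-1 hash of the software image.
        return hashlib.sha1(binary).digest()

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # PCR_new = H(PCR_old || D), the append-only extend operation.
        return hashlib.sha1(pcr + measurement).digest()

    pcr0 = b"\x00" * 20                               # PCRs start at a known value after reset
    boot_chain = [b"BIOS image", b"bootloader image", b"kernel image"]
    for software in boot_chain:                       # S_i measures S_{i+1} before handing over control
        pcr0 = extend(pcr0, identity(software))
    print(pcr0.hex())                                 # final value reflects the whole ordered chain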

Our assumption that Si will perform a measurement of Si+1 is valid only if every piece of software we use honestly performs this measurement and we trust the first software that loads up. In a typical example, S1 would be the BIOS code, S2 would be the bootloader, S3 would be the hypervisor or directly the OS kernel, and so on until we've measured up to the required TCB. And the first software S0 to boot up would be the trusted TPM code, which would measure and execute the BIOS, establishing the chain of trust.

3.2 When should a software be measured?

Before software is executed, it exists as a binary file and a set of configuration files on disk. Upon execution its existence gets fragmented: some code resides in memory, some of it gets paged out to disk, and it writes intermediate data to files or accepts input from the network. Even if the code identity, i.e. the binary and its configuration files, can be trusted in the beginning, upon execution its security state can be altered by external inputs. Thus, to perfectly record all the code paths that an application has followed, it might be necessary to record dynamic properties of the program. Projects like XFI and CFI do inline binary modifications to ensure stack and control-flow integrity. Java and .NET runtimes can dynamically enforce certain security properties. However, Parno et al. argue that dynamic code properties do not contribute to an increase in trustworthiness, the rationale being that just by observing program behavior (like I/O access, or system calls), it is not possible to algorithmically categorize a particular application as malware [20]. Systems like pH try to do precisely that, but under different threat models [14]. Moreover, their results are heuristic and contain an unacceptable number of false positives and false negatives for the requirements of Trusted Computing.

So, to deterministically compute and store identities of software, a cryptographic hash of the binary and the configuration files is used. Figuring out whether the binary actually contains malware is left for the remote verifier to decide. This forms a basis for the privacy and ownership issues which have plagued the reputation of Trusted Computing.
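A minimal sketch of such a measurement, assuming the identity of a (binary, configuration) pair is simply a SHA-1 hash over the binary and its configuration files; the file paths in the commented example are hypothetical.

    import hashlib

    def measure(binary_path: str, config_paths: list[str]) -> bytes:
        # Identity of a (binary, configuration) pair: hash of the binary
        # followed by each configuration file, in a fixed order.
        h = hashlib.sha1()
        for path in [binary_path] + sorted(config_paths):
            with open(path, "rb") as f:
                h.update(hashlib.sha1(f.read()).digest())
        return h.digest()

    # Hypothetical paths, for illustration only:
    # print(measure("/usr/sbin/sshd", ["/etc/ssh/sshd_config"]).hex())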

3.3 Late Launch

Assume that a computer is running in a trusted state, i.e. all software that is running is in the TCB and the cumulative identities of the (binary, configuration) pairs (B, C) are stored in the TPM. Now suppose you happen to load a piece of software, like a game or a custom program, that is not in the approved list of the verifier; the entire state of the platform becomes untrusted. Furthermore, the verifier would want to exclude as much non-security-relevant code from the TCB as possible - e.g. a lot of variants of the BIOS exist, and most of the code that runs may not be security relevant - in order to maintain a smaller set of chains of (B, C) pairs. So, the need arises to be able to switch between trusted and untrusted states.

Intel and AMD introduced hardware support in their commodity processors under the names of Trusted eXecution Technology (TXT) and Secure Virtual Machine (SVM) respectively. Basically, they add a hardware instruction which takes a single memory location as a parameter, resets the hardware platform including the TPM chip, atomically measures the software located at that memory location, extends it into one of the PCRs of the TPM and begins executing it in a hardware protected environment. This process is referred to as Late Launch, and we shall briefly look at what AMD's version of the instruction (SKINIT *SLB) does:

• Disables DMA access to physical memory pages containing the Secure Loader Block (SLB).

• Disables interrupts and hardware debuggers to prevent previously executing code from regaining control.

• Resets the dynamic registers of the TPM (PCR 17-23) to 0.

• Measures the executable (usually a VMM) at *SLB and executes it in a flat 32-bit protected mode.

The SKINIT instruction can be issued only from the hardware ring-0 privilege mode, and is the only way to reset these TPM registers. Late launch provides many of the security benefits of rebooting the computer (e.g., starting from a clean slate), while bypassing the overhead of a full reboot (i.e., devices remain enabled, the BIOS and bootloader are not invoked, memory contents remain intact, etc.) [1].

The Flicker project [17] noted that the late launch instruction was not designed to be invoked repeatedly. It attempted to minimize the absolute amount of code required to be trusted by frequently transitioning between the smaller security-relevant trusted code base and the much larger base of untrusted applications. This incurred severe performance penalties. The TrustVisor project [16] attempts to optimize the performance by adding more code to the trusted base, thereby reducing the number of Late Launch transitions needed to run a high-level application.
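A toy Python model of the late launch sequence described above, showing only the PCR bookkeeping; the DMA, interrupt and debugger protections and the actual transfer of control to the SLB are deliberately omitted, and the images are placeholder byte strings.

    import hashlib

    def sha1(data: bytes) -> bytes:
        return hashlib.sha1(data).digest()

    def skinit(pcrs: dict, slb: bytes) -> dict:
        # Toy late launch: reset the dynamic PCRs, measure the Secure Loader
        # Block, extend the measurement into PCR 17, then "run" the SLB.
        for i in range(17, 24):
            pcrs[i] = b"\x00" * 20        # dynamic PCRs are reset, unlike PCRs 0-16
        pcrs[17] = sha1(pcrs[17] + sha1(slb))
        # ...real hardware would now jump to the SLB in a protected environment.
        return pcrs

    pcrs = {i: b"\xff" * 20 for i in range(24)}   # pretend values left by earlier boot code
    pcrs = skinit(pcrs, b"secure loader block / small hypervisor image")
    print(pcrs[17].hex())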

4 Verifying the state of a computer

Attestation is the process of securely measuring all the software, including the TCB, that gets executed on a computer. Verification is the process of granting or denying access to services by checking if that measurement falls within the list of trusted chains of measurements of software. It is possible that the verification happens on the same computer as the attestation, in which case only software that is in the pre-determined list of trusted software is allowed to execute. This is called Secure Boot [15]. We will look at the more general case where a remote entity desires to obtain the state of a local computer.

4.1 A simple protocol for verification

Let Rhost be a remote computer. It contains a set of trusted software measurements in the form of hashes, H = {H0, H1, ..., Hn}, where each Hi represents the cumulative hash of an ordered set of (binary, configuration) pairs. Let Lhost be a local computer running a software stack Lsoft, containing a TPM which has measured Lsoft in one of its PCRs. Let us call that measurement V. Here is a protocol that enables Rhost to verify whether V ∈ H even if Lsoft behaves maliciously.

1. Lhost ←− Rhost : nonce. The remote host sends a nonce to the local computer over the network.

2. LTPM ←− Lsoft : nonce. The local software receives the nonce and forwards it to the TPM chip.

3. LTPM : S = Sign(nonce || V) under its private key. The TPM chip uses its built-in private key to sign the stored measurement along with the nonce.

4. LTPM −→ Lsoft : (V, S). The TPM chip sends the measurement and its computed digital signature to the local software.

5. Lsoft −→ Rhost : (V, S). The local software sends the measurement and the digital signature to the remote host over the network.

6. Rhost : VerifySignature(S, V) under the public key. Using the publicly available public key of the local computer, the remote host verifies the signature.

7. If the signature verifies and V ∈ H, then Rhost can be sure that Lhost is running trusted software.

In the above protocol, it is worth noting that valid signatures can only be produced by the TPM chip, and cannot be forged by the software running on the local machine, even if that software is malicious. This critical property is achieved primarily because all three components of a signature, i.e. the data, the key and the signature algorithm, reside in trusted hardware. The nonce is used to protect against replay attacks. The public/private keypair can theoretically be the Endorsement Key (EK) that is baked into the TPM hardware and whose public key should be known to the verifier, but in practice an Attestation Identity Keypair (AIK), which is negotiated and derived from the EK, is used for the signature.
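Here is a minimal sketch of the protocol in Python, assuming the third-party cryptography package is installed; an Ed25519 key stands in for the TPM's RSA-based AIK, and the verifier is assumed to already possess the corresponding public key (the bootstrapping problem discussed in 4.4). The hash values are placeholders.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    class LocalTPM:
        def __init__(self, measurement: bytes):
            self._aik = Ed25519PrivateKey.generate()   # private part never leaves the "chip"
            self.aik_public = self._aik.public_key()
            self._V = measurement                      # PCR value, already extended

        def quote(self, nonce: bytes):
            # Step 3: sign nonce || V with the key held inside the chip.
            return self._V, self._aik.sign(nonce + self._V)

    # Remote host R_host
    trusted_hashes = {b"hash-of-trusted-stack"}        # the set H (illustrative values)
    tpm = LocalTPM(b"hash-of-trusted-stack")           # L_host's TPM
    nonce = os.urandom(16)                             # step 1: R_host -> L_host

    V, S = tpm.quote(nonce)                            # steps 2-5: relayed by L_soft
    try:
        tpm.aik_public.verify(S, nonce + V)            # step 6: signature check
        trusted = V in trusted_hashes                  # step 7: is V in H?
    except InvalidSignature:
        trusted = False
    print("L_host trusted" if trusted else "L_host NOT trusted")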

4.2 Key Management

In general, the verification process involves keeping track of numerous software configurations, which are quite frequently updated, resulting in enormous datasets of hash values, as well as keeping track of many Endorsement Certificates from various TPM chip providers. In addition to the overhead of verification that every service provider (e.g. an online bank) must undertake to ensure that an attester is running trusted software, the fact that a verifier must necessarily know exactly what software is running on a particular computer gives rise to privacy concerns. The TCG has designed methods to offload some of the responsibility of verification to trusted third parties.

Privacy CAs

A Privacy CA is an intermediate trusted third party that both the attester (e.g. an end user) and the verifier (e.g. an online bank) trust. Instead of using the Endorsement Key for signatures, the attester uses it to convince the Privacy CA to generate intermediate keys (AIKs), the private parts of which are known only to the real standards-compliant TPM. The Privacy CA makes the corresponding public keys available to the verifier. The verification process continues as before, with the only difference that the identity of the attester is obscured by the Privacy CA in the form of the generated AIKs, and trust is now transitive, with the Privacy CA sitting in the middle.

Privacy CA adoption has faced many real-world challenges, due to the absence of a single trusted Privacy CA that most or all users trust, and due to the hard problem of making servers both highly secure and highly available at the same time [20].
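The message flow can be sketched as follows; the key types and the "certificate" format are placeholders (Ed25519 signatures via the third-party cryptography package), not the TCG protocol, and the CA is assumed to already know the genuine EK public key (e.g. via the vendor's Endorsement Certificate).

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def pub_bytes(pub):
        return pub.public_bytes(serialization.Encoding.Raw,
                                serialization.PublicFormat.Raw)

    ca_key = Ed25519PrivateKey.generate()     # the Privacy CA, trusted by both sides
    ek = Ed25519PrivateKey.generate()         # endorsement key baked into the TPM
    aik = Ed25519PrivateKey.generate()        # fresh attestation identity key

    # Attester -> Privacy CA: "this AIK belongs to a TPM holding a genuine EK".
    request = pub_bytes(aik.public_key())
    proof = ek.sign(request)                  # stand-in for the real EK-based proof

    # Privacy CA: check the proof against the EK it knows, then certify the AIK
    # without revealing the EK (and hence the attester's identity) to the verifier.
    ek.public_key().verify(proof, request)    # raises InvalidSignature on failure
    aik_certificate = ca_key.sign(request)

    # Verifier: trusts the CA's public key, so it accepts quotes signed with the AIK.
    ca_key.public_key().verify(aik_certificate, request)
    print("AIK certified; the verifier never learns which EK/TPM it belongs to")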

Direct Anonymous Attestation (DAA)

In order to address the shortcomings of Privacy CAs, a decentralized solution for anonymizing the identity of an attester was introduced and incorporated into TPM spec v1.2. DAA uses group signatures without a privileged group manager to achieve anonymity. In essence, a group signature scheme is a method for allowing a member of a group to anonymously sign a message on behalf of the group, satisfying some basic constraints [4]. Using zero-knowledge proofs, a TPM equipped platform can convince an Issuer that it possesses a legitimate TPM chip and obtain a group membership certificate. The DAA algorithm is quite complex, as it offloads expensive group signature computations to the system's primarily untrusted CPU [20]. Hence, currently no available hardware supports DAA.

4.3 Post Verification Dilemma

After the verification process, suppose the verifier (e.g. an online banking service) comes to the conclusion that the end user's computer cannot be trusted, perhaps because it is running known buggy code or just unknown software; what is it supposed to do? It cannot simply send a reply saying that the transaction failed because of untrusted software. Malware sitting on the end user's computer can intercept such messages and present the user with a fake banking login screen, stealing important user credentials. In other words, the protocols we've seen so far aren't enough to establish a secure channel of communication between a consumer and a service provider. McCune et al. discuss a possible solution using a trusted external device called an iTurtle, which is responsible for letting the user know if the verification process has succeeded [18].

In essence, the problem arises primarily because the entire user interface, via the computer monitor, is under the direct control of logic based in software, which under our assumption is untrusted. One possible solution is a hardware-override capable video buffer, where some designated pixels on the screen (like the bottom line) can be overridden by devices in the trusted hardware layer, e.g. the TPM chip. If we can somehow leverage the existing Public Key Infrastructure used for SSL and extend it into hardware, then network cards can be designed to process in hardware a custom TPM-over-TCP protocol, where a reply from a verifier is routed to the TPM chip, which checks if the message corresponds to the previous attestation request and is genuine, i.e. its signature can be chained up to the trusted root CAs. The TPM chip can then notify the user by directly writing to the bottom line of the display buffer. If the transaction is critical, the TPM chip can potentially pause the entire execution context (e.g. the VM communicating with the bank) until the verification result is ready.

A major concern with the above solution, and with solutions in general which transfer functionality from software to the hardware layer to operate with increased security, is managing updates. Software updates over the network are proving to be a non-trivial problem; methods for hardware or firmware updates can only be more complex.

4.4 Bootstrapping trust in TPM

In the above protocols, we've assumed that the verifier, or the Privacy CA, or the Issuer who manages group memberships in the case of DAA, somehow possesses the exact public key of the entity that is being verified. This is a pretty big assumption. Every TPM ships with an Endorsement Certificate (EC) guaranteeing that the chip is genuine and standards-compliant. Even if a manufacturer makes all public keys available via a website, the EC only assures the possessor of a public key that it belongs to some TPM, not a specific TPM. This opens up our verification protocol to cuckoo attacks, where malicious software intercepts a verification request and, instead of forwarding it to the local TPM, sends it over the network to a malicious TPM that has been tampered with at the hardware level to sign, using its private key, any data an attacker chooses. In this way, attestation can always be faked to reflect a genuine software configuration.

Parno discusses this issue by modeling the attestation and verification process using formal methods [19]. He proposes several solutions, including the use of a special purpose interface that talks directly to a TPM, modifying existing interfaces like USB, and using barcode stickers that encode a factory-set trusted software configuration. A common observation is that most of the viable methods push functionality from the untrusted software layer to the trusted hardware layer. For instance, the TPM-over-TCP protocol mentioned in 4.3 can be extended to receive all verification requests, enabling hardware based routing from the network card to the TPM chip.

5 Remarks and Conclusion

Many interesting hardware and software technologies have been introduced under the broad title of Trusted Computing in order to improve the overall security of networked computers. Some of these technologies have been put to good use, like Windows BitLocker using the TPM's secure registers to store drive encryption keys so that they can only be retrieved by a trusted kernel [10], but the core purpose, attestation and verification, has met with very limited success despite a large number of commodity computers (over 200 million) shipping with the requisite hardware. We now look at three broad categories of issues associated with Trusted Computing.

Bloatware

The basic premise, that all software running on a computer is untrusted, has proven to be too much to deal with. Solutions proposed by the TCG have included some basic hardware support and a few novel protocol designs, but the answer to some fundamental problems, including those in 4.3 and 4.4, seems to be moving security-critical functionality from software to hardware. One has to be very careful in doing that, because fixing mass-produced hardware and bloated firmware is much harder than patching up software.

The objective would be to identify a complete set of critical security sensitive code paths, while ensuring that most software functionality can be implemented using those primitives, and then to implement them in hardware. And without formal verification, one can never be too sure about what is secure and what isn't. Some efforts, like the Logic for Secure Systems and its Application to Trusted Computing [13], have been made in that direction, but as with any formal methods, progress is slow and often doesn't reveal much about how to modify our assumptions. Once we are intuitively sure about our assumptions, formal methods can help, but not vice versa.

Privacy and Ownership

One very visible criticism that skeptics of Trusted Computing make is with respect to privacy and ownership. In the commercial sector, when people buy personal computers, they like to believe that they have the power to do anything with them. This is obviously true for hardware, but issues with modification of software have been heavily debated time and again, giving rise to the Free and Open Source Software (FOSS) movement - an ideology where a user is given the freedom to modify the software running on her computer. With freedom comes responsibility, but due to the sheer complexity of modern day computers, it is impossible for any user to verify, by looking at every source code file, whether she wants to run a particular piece of software. So, users tend to offload some trust to vendors, but maintain the notional ability to write and run custom software. Trusted Computing requires more than that - it completely relieves a user of the responsibility of maintaining her software while stripping away her freedom to arbitrarily change it. Opponents of Trusted Computing remind us that it is not just an additional "warranty void if the software is broken" virtual seal, but a "you will be unable to use any service if you modify your software" ultimatum - especially if Trusted Computing usage becomes prevalent and the financial sector, free email providers and social networking sites mandate that every software stack connecting to them must run trusted code.

That said, it is worth noting that most users already trust their banks and their email providers with all their sensitive data. What they are contesting is their freedom to retrieve and use their data in whatever way they please, while still retaining the ability to blame a service provider (e.g. a bank) in case of a fraudulent transaction due to malware sitting on their computers. What we need for a higher adoption of Trusted Computing technologies is a social change.

A social change

Maintaining a balance between the perception of freedom and the appropriation of responsibility among users is a tricky business. In the software world, despite its name, Trusted Computing has acquired a bad reputation, as it possesses too many totalitarian qualities, including the fear of vendor lock-in. However, adoption in the military has been undeterred, with the US Department of Defense mandating the use of TPM chips in all its computers deployed since 2007 [3].

For adoption in the commercial sector to succeed, there has to be a monetary advantage to using Trusted Computing technologies. If Trusted Computing indeed reduces malware, then part of the more than a billion dollars lost every year to fraudulent transactions should find its way back to the users who helped fight malware. For example, a bank might be able to reduce interest rates for customers running code it trusts. This could be done in the following way (a sketch follows the list):

• A user boots up a trusted hypervisor (like TrustVisor), and then a trusted VM, either provided by a bank or by a trusted third party. Let's call this the TrustedBaseVM.

• When a user wants to transact with a bank, he goes into the TrustedBaseVM and clicks on a Bank icon.

• An application launches and, over the network, downloads a patch P from the bank's website.

• A BankVM is created by patching the TrustedBaseVM with P. This VM is launched and used for banking, and all transactions happening through it are trusted.
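A toy of the measurement logic behind this list, with purely illustrative byte strings and with "patching" reduced to concatenation: the bank publishes the expected hash of the TrustedBaseVM patched with P, and only a BankVM whose measurement matches is treated as trusted.

    import hashlib

    def apply_patch(base_image: bytes, patch: bytes) -> bytes:
        # Stand-in for real VM patching: just append the patch to the base image.
        return base_image + patch

    base_vm = b"TrustedBaseVM disk image"
    patch_p = b"bank-specific browser and configuration"
    expected = hashlib.sha1(apply_patch(base_vm, patch_p)).hexdigest()  # published by the bank

    bank_vm = apply_patch(base_vm, patch_p)            # built on the user's machine
    measurement = hashlib.sha1(bank_vm).hexdigest()    # what the TPM would attest to
    print("BankVM trusted" if measurement == expected else "BankVM NOT trusted")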

The motivation comes from looking at existing systems, where we already execute a tremendous amount of JavaScript code downloaded from bank and other web servers, and where critical browser vulnerabilities are a major concern. So, why not just download the browser and other code required to access the banking services directly from the banks themselves? The TrustedBaseVM and the hypervisor can be shared by many web services, and can be shipped with all new computers along with their regular operating systems. This approach assists in a gradual change by providing an economic impetus to the users of Trusted Computing.

References

[1] Bryan Parno, PhD thesis. http://research.microsoft.com/pubs/138307/final-thesis-parno.pdf.

[2] Coreboot. http://www.coreboot.org/FAQ#Why_do_we_need_coreboot.3F.

[3] DoD. http://iase.disa.mil/policy-guidance/dod-dar-tpm-decree07-03-07.pdf.

[4] Group signatures. http://en.wikipedia.org/wiki/Group_signature.

[5] Hardware TPM hacking. http://abcnews.go.com/Technology/wireStory?id=9780148.

[6] Hosting backdoors in hardware. http://blog.ksplice.com/2010/10/hosting-backdoors-in-hardware/.

[7] Privacy concerns. http://www.gnu.org/philosophy/can-you-trust.html.

[8] Trend Micro Core Protection for Virtual Machines. http://us.trendmicro.com/imperia/md/content/us/flv/enterprise/endpointsecurity/ds01cpvm_090622us.pdf.

[9] VMware ESX. http://www.vmware.com/files/pdf/VMware-ESX-and-VMware-ESXi-DS-EN.pdf.

[10] Windows BitLocker. http://en.wikipedia.org/wiki/BitLocker_Drive_Encryption.

[11] Xen. http://en.wikipedia.org/wiki/Xen.

[12] Brickell, E., Camenisch, J., and Chen, L. Direct anonymous attestation. In ACM CCS (2004).

[13] Datta, A., Franklin, J., Garg, D., and Kaynar, D. A logic of secure systems and its application to trusted computing. In IEEE Symposium on Security and Privacy (2009).

[14] Forrest, S., Somayaji, A., and Longstaff, T. A. A sense of self for Unix processes. In IEEE Symposium on Security and Privacy (1996).

[15] Itoi, N., Arbaugh, W. A., Pollack, S. J., and Reynolds, D. M. Personal secure booting. In Proceedings of the Australasian Conference on Information Security and Privacy (ACISP) (2001).

[16] McCune, J., Li, Y., Qu, N., Zhou, Z., Datta, A., Gligor, V., and Perrig, A. TrustVisor: Efficient TCB reduction and attestation. In Oakland (2010).

[17] McCune, J. M., Parno, B., Perrig, A., Reiter, M. K., and Isozaki, H. Flicker: An execution infrastructure for TCB minimization. In Proc. of the ACM European Conference on Computer Systems (2008).

[18] McCune, J. M., Perrig, A., Seshadri, A., and van Doorn, L. Turtles all the way down: Research challenges in user-based attestation. In HotSec (2007).

[19] Parno, B. Bootstrapping trust in a trusted platform. In HotSec (2008).

[20] Parno, B., McCune, J. M., and Perrig, A. Bootstrapping trust in commodity computers. In Oakland (2010).