Random Fill Cache Architecture
2014 47th Annual IEEE/ACM International Symposium on Microarchitecture

Fangfei Liu and Ruby B. Lee
Princeton Architecture Laboratory for Multimedia and Security (PALMS)
Department of Electrical Engineering, Princeton University
Princeton, NJ 08544, USA
{fangfeil, rblee}@princeton.edu

1072-4451/14 $31.00 © 2014 IEEE. DOI 10.1109/MICRO.2014.28

Abstract—Correctly functioning caches have been shown to leak critical secrets like encryption keys, through various types of cache side-channel attacks. This nullifies the security provided by strong encryption and allows confidentiality breaches, impersonation attacks and fake services. Hence, future cache designs must consider security, ideally without degrading performance and power efficiency. We introduce a new classification of cache side channel attacks: contention based attacks and reuse based attacks. Previous secure cache designs target only contention based attacks, and we show that they cannot defend against reuse based attacks. We show the surprising insight that the fundamental demand fetch policy of a cache is a security vulnerability that causes the success of reuse based attacks. We propose a novel random fill cache architecture that replaces demand fetch with random cache fill within a configurable neighborhood window. We show that our random fill cache does not degrade performance, and in fact improves the performance for some types of applications. We also show that it provides information-theoretic security against reuse based attacks.

Keywords-cache; security; side channel attacks; cache collision attacks; secure caches; computer architecture.

I. INTRODUCTION

Recent findings on cache side channel attacks [1]–[7] have shown that correctly functioning caches may leak critical secrets like cryptographic keys, nullifying any protection provided by strong cryptography. These attacks are easy to perform and are effective on all platforms that use hardware caches, from embedded systems to cloud servers. Therefore, future cache designs must take into account security, ideally without degrading performance and power efficiency.

In cache side channel attacks, an attacker exploits the large timing difference between cache hits and cache misses to infer the key-dependent (i.e., security-critical) memory addresses, and hence the secret information, during the execution of cryptographic programs. We introduce a new classification of cache side channel attacks, depending on how the attacker infers memory addresses: contention based attacks versus reuse based attacks. In contention based attacks [2], [3], [5], the key-dependent memory accesses may contend for the same cache set with the attacker's memory accesses and result in eviction of one by the other, in a deterministic way. This enables the attacker to infer the memory address according to which cache set it maps to. In contrast, the reuse based attacks [4], [6], [8], [9] do not rely on any resource contention. Instead, they only exploit the reuse of previously accessed (and cached) security-critical data to correlate the addresses of two memory accesses. We point out that reuse of cached data is exactly the purpose of a cache; therefore, reuse based attacks strike at the heart of a cache and are much harder to defend against.

Several recent works [8], [10]–[14] investigated how to design secure caches that provide built-in defenses against cache side channel attacks. Wang and Lee proposed two general design approaches [11]: the partition-based approach [8], [11], [13], [14], which eliminates cache contention, and the randomization-based approach [10]–[12], which randomizes cache contention. However, these approaches target only contention based attacks and are not effective in defeating reuse based attacks. There are also some efforts that try to achieve constant execution time, either by not loading security-critical data into the cache at all, or by trying to ensure cache hits whenever security-critical data is accessed, through frequently preloading or reloading all security-critical data [8], [14], [15]. This approach may potentially defeat reuse based attacks, but at the cost of significant performance degradation, and it sometimes enables other types of attacks.

In this paper, we try to find a general approach against reuse based attacks, as a complement to existing secure cache design approaches. We show that, contrary to conventional wisdom, constant execution time is not a necessary condition for defeating reuse based attacks. Surprisingly, we find that the fundamental demand fetch policy of a cache is a security vulnerability that causes the success of reuse based attacks. With the demand fetch policy, a cache fill is always correlated with a demand memory access; hence the state of the cache reveals information about previous memory accesses. We therefore propose a general approach against reuse based attacks: re-design the cache fill strategy so that it is de-correlated from the demand memory access.

We propose a novel random fill cache architecture with a new security-aware cache fill strategy. The random fill cache architecture takes advantage of the random access pattern found in cryptographic algorithms; hence, it does not degrade performance. In fact, it is more general and flexible than the demand fetch strategy, and even enables performance improvements for some types of applications.

Our main contributions are:

• A new classification of cache side channel attacks as contention based and reuse based attacks,
• A new general approach for securing caches against reuse based attacks: the cache fill strategy must be re-designed to de-correlate the cache fill and demand memory accesses,
• A novel random fill cache architecture with a flexible cache fill strategy, which replaces the demand fetch with random cache fill within a configurable neighborhood window,
• An information-theoretic proof of the security provided by our random fill cache architecture,
• Performance evaluation of the proposed cache architecture and a study of the broader performance implications of the random cache fill strategy for programs that are not demand-fetch amenable.

The rest of the paper is organized as follows: Section II gives some background on cache side channel attacks and Section III discusses past work. We introduce our new random fill cache architecture in Section IV. We provide an information-theoretic proof of the security provided by our random fill cache architecture in Section V. We evaluate the performance of our cache architecture in Section VI. We discuss the broader performance implications of our cache architecture in Section VII. We compare security and performance with past work in Section VIII and conclude in Section IX.

II. BACKGROUND

A. Overview of Cache Side Channel Attacks

The majority of cache side channel attacks exploit the interaction of the key-dependent data flow in a program with the underlying cache (mostly the L1 data cache) to learn the secret information. We primarily consider the information flow in which the secret information is directly modulated onto the memory address, in the form of key-dependent table lookups. This is commonly found in software implementations of cryptographic algorithms. For example, the substitution box (S-box) in block ciphers (e.g., Data Encryption Standard (DES), Advanced Encryption Standard (AES), Blowfish) and the multiplier tables in public-key algorithms (e.g., RSA) are all implemented as lookup tables indexed by a linear function of the secret key. The attacker is an unprivileged user-level process that aims to infer the key-dependent memory addresses, indirectly through the cache behavior.

B. Classification of Cache Side Channel Attacks

Table I summarizes the classification of all known cache side channel attacks. Cache side channel attacks have been conventionally classified as access-driven attacks and timing-driven attacks. In the access-driven attacks, the attacker learns the victim's cache usage by measuring the impact of the victim's cache accesses on the attacker's own accesses. In the timing-driven attacks, the attacker can measure the execution time of the victim process. However, this classification is not helpful in identifying root causes and potential countermeasures. We introduce a new classification: contention based attacks and reuse based attacks, based on how the attacker infers the memory address.

Table I
CLASSIFICATION OF CACHE SIDE CHANNEL ATTACKS

                        | Contention based Attacks | Reuse based Attacks
Access-driven Attacks   | Prime-Probe Attacks      | Flush-Reload Attacks
Timing-driven Attacks   | Evict-Time Attacks       | Cache collision Attacks

1) Contention based Attacks: The attacker may contend for the same cache set with the victim process, and the contention results in eviction of one's cache line by the other. If the contention and eviction are deterministic, the attacker can infer the memory address of the victim according to which cache set it maps to. Figure 1 illustrates how this works. There are two variations of contention based attacks:

Prime-Probe Attack [3], [5]: The attacker repeats the following operations: 1) Prime: the attacker fills one or more cache sets with his own data. 2) Idle: the attacker waits for a pre-specified Prime-Probe interval while the victim process is running and utilizing the cache. 3) Probe: the attacker process runs again and measures the time to load each set of his data. The Probe phase primes the cache for subsequent observations. If the victim process uses some cache sets during the Prime-Probe interval, some of the attacker's cache lines in these cache sets will be evicted, which causes cache misses and thus a longer load time during the Probe phase.

Evict-Time Attack [5]: The attacker repeats the following operations: 1) Evict: the attacker fills one specific cache set with his own data, and hence evicts the victim's data in that cache set. 2) Time: the attacker triggers the victim process to perform a cryptographic operation and measures the total execution time. If the victim accesses the evicted data, his execution time tends to be statistically higher, due to the victim having a cache miss.

Figure 1. [Figure residue: the victim's key-dependent lookup table (indexed by K = 0..4) in memory maps into a 6-set, 3-way cache, where it contends with the attacker's memory lines in the same cache set.]
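To make the key-dependent table lookup concrete, the following toy Python sketch (our own illustration, not from the paper; the XOR indexing mimics a first-round AES-style lookup and the line size is an assumption) shows how a secret key byte determines which cache line of an S-box table is touched:

```python
# Toy model of a key-dependent table lookup (e.g., an S-box in a block cipher).
# The cache line touched is a function of the secret key byte, which is
# exactly what a cache side channel attacker tries to observe.

LINE_SIZE = 64          # assumed bytes per cache line (typical L1 D-cache)
SBOX_ENTRY_SIZE = 1     # 1-byte table entries
ENTRIES_PER_LINE = LINE_SIZE // SBOX_ENTRY_SIZE

def sbox_line_touched(key_byte: int, plaintext_byte: int) -> int:
    """Return which cache line of the S-box table the lookup touches.

    The table index is a simple function of key and plaintext
    (here index = plaintext XOR key, as in a first AES round), so
    observing the touched line leaks the high bits of key ^ plaintext.
    """
    index = plaintext_byte ^ key_byte     # key-dependent table index
    return index // ENTRIES_PER_LINE      # cache line within the table

# Knowing the plaintext and observing the touched line, the attacker
# recovers the top bits of (plaintext ^ key), and hence of the key:
line = sbox_line_touched(key_byte=0xA7, plaintext_byte=0x12)
# line == (0x12 ^ 0xA7) // 64 == 2
```

With 64-byte lines and 1-byte entries, each observation narrows the key byte to one of 64 candidates; repeating over many plaintexts recovers the key.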
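The three Prime-Probe steps described above can be sketched against a toy direct-mapped cache model (a simulation we wrote for illustration; the cache model and set count are simplifying assumptions, not the paper's):

```python
# Toy Prime-Probe attack against a simulated direct-mapped cache.
NUM_SETS = 8

class ToyCache:
    """Direct-mapped cache: each set remembers only its last access."""
    def __init__(self):
        self.sets = [None] * NUM_SETS
    def access(self, addr, owner):
        s = addr % NUM_SETS
        hit = self.sets[s] == (owner, addr)
        self.sets[s] = (owner, addr)   # fill on miss / refresh on hit
        return hit

def prime_probe(cache, victim_addr):
    # 1) Prime: the attacker fills every cache set with his own data.
    for s in range(NUM_SETS):
        cache.access(s, "attacker")
    # 2) Idle: the victim makes one key-dependent access.
    cache.access(victim_addr, "victim")
    # 3) Probe: sets where the attacker now misses reveal victim activity.
    return [s for s in range(NUM_SETS) if not cache.access(s, "attacker")]

evicted = prime_probe(ToyCache(), victim_addr=13)
# 13 % 8 == 5, so the attacker learns the victim touched set 5.
```

The Probe misses directly expose which set the victim's key-dependent address maps to, which is the deterministic contention the classification above describes.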
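The Evict-Time attack can be sketched in the same toy style (again an illustrative simulation of ours, not the paper's; the victim's miss count stands in for its measured execution time):

```python
# Toy Evict-Time attack: the attacker evicts one set, triggers the victim,
# and uses the victim's miss count as a stand-in for execution time.
NUM_SETS = 8

def victim_run(cache_sets, key):
    """Victim makes one key-dependent access; returns 1 on a miss (slower)."""
    s = key % NUM_SETS                  # key-dependent cache set
    miss = cache_sets[s] != "victim"
    cache_sets[s] = "victim"
    return 1 if miss else 0

def evict_time(key, target_set):
    cache_sets = ["victim"] * NUM_SETS  # victim's data starts cached
    cache_sets[target_set] = "attacker" # 1) Evict: overwrite one chosen set
    return victim_run(cache_sets, key)  # 2) Time: longer iff victim used it

# Sweeping target_set, only the set the key maps to shows the extra miss:
timings = [evict_time(key=13, target_set=s) for s in range(NUM_SETS)]
# timings == [0, 0, 0, 0, 0, 1, 0, 0]  ->  key % 8 == 5
```

In practice the timing signal is noisy, so each eviction is repeated and the statistically higher average reveals the victim's set.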
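The random fill idea motivated in the Introduction, replacing the demand fetch with a random cache fill drawn from a configurable neighborhood window, can be sketched as follows. This is a minimal behavioral sketch under our own assumptions (a symmetric window and a `random_fill` interface we invented), not the paper's hardware design:

```python
import random

# Sketch of the random fill strategy: on a miss to security-critical data,
# the demanded line is sent to the CPU but NOT cached; instead a random
# line from a neighborhood window [addr - a, addr + b] is cached, so the
# line brought into the cache is de-correlated from the demanded address.

def random_fill(cached_lines, addr, a=4, b=4, rng=random):
    """Service a demand miss at line `addr` with a random neighborhood fill."""
    fill = rng.randint(addr - a, addr + b)  # uniformly random line in window
    cached_lines.add(fill)                  # fill the randomly chosen line...
    return addr                             # ...while the data goes to the CPU

cached = set()
demanded = random_fill(cached, addr=100)
# `demanded` is always 100, but the line actually cached is any of 96..104,
# so observing the cache state reveals only the window, not the address.
```

Because the filled line is uniform over the window regardless of which line in it was demanded, a later cache hit no longer proves the victim previously accessed that exact address, which is the correlation reuse based attacks depend on.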