
Cold Boot Attack and Countermeasures on Systems with Non-Volatile Caches


THESIS

Presented in Partial Fulfillment of the Requirements for the Degree Master of Science in the Graduate School of The Ohio State University

By

Spencer Alan Rudolph

Graduate Program in Computer Science and Engineering

The Ohio State University

2016

Master's Examination Committee:

Mircea-Radu Teodorescu, Advisor

Yinqian Zhang

Copyrighted by

Spencer Alan Rudolph

2016

Abstract

Non-Volatile Memory is an emerging technology that has already found its way into mainstream devices in the form of storage products (SSDs, flash drives, etc.) and has recently proven to be an effective replacement for traditionally volatile memory components as well. While non-volatile replacements for RAM are still too expensive to bring to market, Non-Volatile Cache Memory is a viable candidate to emerge in consumer CPUs: caches require far less capacity than RAM, keeping the overall cost down. In addition, Non-Volatile Cache Memory has higher density than today's caches, which is desirable for the smaller CPUs being manufactured today, and it does not need a constant power supply to retain data, which can reduce power usage. These properties give Non-Volatile Cache Memory many advantages for consumers. However, using memory that retains information without a power supply leaves devices open to Cold Boot Attacks.

Cold Boot Attacks steal information from the memory of a target device by freezing the state of the system and dumping its memory contents for examination. They are particularly effective for capturing the master key of an encrypted hard drive, rendering the security of the drive compromised. In this paper we examine how such attacks are carried out, how vulnerable systems with Non-Volatile Cache Memory are, and offer effective solutions to counteract these attacks.


Acknowledgments

I would like to extend my sincerest gratitude to Xiang Pan, Anys Bacha, and Professor Radu Teodorescu for all of their help and combined efforts to advance this project. Their expertise and previous experience have been instrumental to the progress of the project as a whole, which would not have been possible without them. I would also like to thank Yinqian Zhang for agreeing to serve on my committee; he offered great feedback and viewpoints in my final examination for me to continue to review after the fact.


Vita

June 2012 ...... Mayfield High School

May 2016 ...... B.S. Computer Science and Engineering,

The Ohio State University

December 2016 ...... M.S. Computer Science and Engineering,

The Ohio State University

Fields of Study

Major Field: Computer Science and Engineering


Table of Contents

Abstract

Acknowledgments

Vita

Fields of Study

Table of Contents

List of Figures

Chapter 1: Introduction

Chapter 2: Cold Boot Attack

Chapter 3: Disk Encryption

3.1 Types of Encryption

3.2 Disk Encryption Ciphers

3.3 Other Disk Encryption Related Devices

Chapter 4: Countermeasure

4.1 Software Based Countermeasure

4.2 Design

4.3 Implementation

Chapter 5: Experimental Setup and Methodology

5.1 Environment

5.2 Simulation Strategy

5.3 Gem5 Simulation Issues

Chapter 6: Results

References


List of Figures

Figure 1. ECB Weakness

Figure 2. ECB Algorithm

Figure 3. XTS Algorithm

Figure 4. Poweroff Flags

Figure 5. Key Found After Poweroff

Figure 6. 2 MB Cache

Figure 7. 4 MB Cache

Figure 8. 8 MB Cache


Chapter 1: Introduction

In today's world, where security exploits are constantly emerging, some of the most damaging exploits are ones that have evolved from previous work. While there have been fixes for previously documented Cold Boot Attacks, the underlying theory of stealing a computer's memory to access secured information remains an open problem; what differs in many cases is how the concept is applied to the situation. Given that fast non-volatile memory chips are an emerging market in the computer industry, cold-boot-like attacks on these devices need to be documented and fixed.

Personal information, corporate sensitive data, and government classified secrets are all at risk of Cold Boot Attacks given this new type of memory.

Cold Boot Attacks are a class of security exploits that steal information stored in the main memory of computers by way of physical manipulation of the system. The end goal in most cases is to gain unfettered access to a system's main memory, either by restarting the system or by moving its RAM sticks to another device, without losing their contents. There have been variations in how this has been achieved, from writing custom OSes to freezing memory sticks, but the end results have been the same. Now, with new non-volatile memory devices coming to consumers, this attack becomes easier to accomplish and its scope expands to cache memory.


Non-Volatile Cache Memory is a major improvement to an existing hardware component of almost every CPU today. CPU caches have largely gone unchanged since their introduction in the 1960s, but non-volatile chip technology is moving them toward a major enhancement: non-volatile chips retain information without the need for a constant power supply. This is dangerous, however, because the previously discussed Cold Boot Attacks can take advantage of this feature and use it to steal information off the device (given physical access to it).

The combination of this new technology and the existing Cold Boot Attack concept should allow for a new attack vector targeting Non-Volatile Cache Memory. Our goal was to prove that an attacker could steal the computer's hard drive master key from the cache after the drive has been unlocked by the user. From there, the security of the hard drive is considered compromised and the attack is considered effective. In order to test this we needed a system that could simulate the effects of Non-Volatile Cache Memory, or a cache whose contents we could examine at any time, which would produce similar results.

The solution was to use a simulator whose cache contents we could examine at will, so we utilized the gem5 simulator for the project. Once set up, we installed a disk encryption application on the simulated system and tested it to find the encryption keys needed to break the security of the device.

The tests were successful in finding the key in our simulations, and the next step was to create a countermeasure that would render this attack ineffective. The solution was an algorithm embedded in the kernel code of the system that would programmatically remove the disk encryption keys from the cache immediately after they were no longer in use by the CPU. This creates a much smaller window for attackers to try to steal the keys from the cache. We found this countermeasure to be effective on systems running a single SPEC 2006 benchmark. Simulations with a mixture of activities taking place still showed improvement with the countermeasure, but it was not fully effective.


Chapter 2: Cold Boot Attack

The goal of a Cold Boot Attack is to compromise the encryption of local disk drives such as HDDs or SSDs by obtaining the AES master key that is used to encrypt and decrypt the drive. This key is stored in the system's memory during normal operation and is typically inaccessible to an end user. However, by manipulating a system, an attacker can force it to reveal its memory contents, exposing the master key.

The first type of Cold Boot Attack was more of a fast-restart attack. The concept was to interrupt power just long enough that execution on the CPU would halt, causing the system and CPU to believe they had shut down. The goal was then to restore power quickly so that the information stored in RAM did not have time to dissipate [2]. The system would then reboot, restarting execution on the CPU and loading the BIOS into memory. At this point an attacker could force the system to boot from a live OS and dump the memory contents from the previous state of the system, thereby revealing the key of the hard drive [3]. This can easily be prevented by building hardware into the system that clears the contents of RAM on system initialization (before allowing the BIOS to load an OS), causing the master key to be wiped before an attacker can get to it.


The next version of the Cold Boot Attack to emerge was a process that exploited a temperature-dependent property of memory. It was discovered that volatile memory, which typically needs a constant power supply to store data, retains its information significantly longer at -50 degrees Celsius: up to 10 minutes, with typically less than 0.05 percent of bits in error [3, 5]. This means that instead of dumping the contents of memory on the system it was installed in, an attacker could freeze the RAM, disconnect power, and remove the memory sticks from the target system. The attacker could then place the memory sticks into custom hardware to read out their contents, once again exposing the master key [3, 1].

Ways to prevent this type of attack are to solder the memory sticks to the system and/or encrypt memory on the system. Both would render this type of attack far less effective, if possible at all.

These types of attacks apply to Non-Volatile Cache Memory by using the same core concept of a Cold Boot Attack on a system with persistent cache memory. Given persistent memory in the cache, an attacker can steal a device that uses disk encryption, search the persistent cache contents for the master key, and in turn compromise the security of the drive on the system. The main difference here is that, since the persistent memory is a cache, the key is not guaranteed to always be present in it.


Chapter 3: Disk Encryption

Disk Encryption refers to the process of securing data stored on a hard drive by applying standard AES encryption to a hard drive's partition. While Disk Encryption and Full Disk Encryption are distinctly different, they are fundamentally the same in the mechanisms that secure the data and differ only in the scope of data that they secure. For the purposes of this paper we will refer specifically to Disk Encryption (although Full Disk Encryption can be substituted in most cases).

The main use cases that disk encryption protects against are circumstances where the hard drive is compromised (stolen, physical access to a computer, discarded and recovered) but, given the state of the hard drive, the key to decrypt the stored data is inaccessible. In today's software-based disk encryption, the key is typically stored on the hard drive itself but requires a password to unlock it before it can be used to decrypt the rest of the drive. (Systems with TPM modules will also store part of the key exclusively in their trusted secure area, so the key on the hard drive and the TPM device must both be present in order to decrypt the drive's data with the master key.)
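The passphrase-unlock step can be sketched as follows. This is a simplified illustration of the general scheme, not the actual LUKS key-wrapping code: the PBKDF2 derivation mirrors what password-based disk encryption actually does, but the XOR "unwrap" is a stand-in for the real AES-based key unwrapping, and all function names here are ours.

```python
import hashlib

def unlock_master_key(passphrase: bytes, salt: bytes, wrapped_key: bytes) -> bytes:
    """Derive a key-encryption key from the passphrase, then unwrap the
    on-disk master key. The XOR unwrap is a toy stand-in for AES unwrapping."""
    kek = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000,
                              dklen=len(wrapped_key))
    return bytes(a ^ b for a, b in zip(wrapped_key, kek))

def wrap_master_key(passphrase: bytes, salt: bytes, master_key: bytes) -> bytes:
    """Inverse of unlock_master_key (XOR wrapping is its own inverse)."""
    return unlock_master_key(passphrase, salt, master_key)
```

Only the wrapped key ever rests on disk; the unwrapped master key exists solely in memory, which is exactly what a Cold Boot Attack targets.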

One of the key points to understand about disk encryption is that data on the hard drive is never stored in a decrypted state; decrypted information is only ever held in memory, to safeguard against unexpected power loss. This means that any time the computer reads from or writes to the hard drive, the operation involves decryption or encryption, respectively. Whenever the computer encrypts or decrypts, it must use the same master key mentioned before. This key is stored in main memory, and when the CPU uses it to encrypt or decrypt disk data, it is moved into the CPU's cache as well.

3.1 Types of Encryption

By today's standards, there are generally two major forms of Disk Encryption available to the end user. The first is Filesystem Stacked Encryption. This style encrypts the folder structure and its contents as a file and can be unlocked dynamically by the user. This type of system has the advantage that its encrypted portion scales to only the size that is needed, but it generally incurs a little more overhead because the encryption happens in user space rather than kernel space. It is also helpful when there are multiple users on a system, because it is then much easier to encrypt a directory for just one user; other users on the system cannot access the encrypted folder without the password, even if they have privileged/superuser access on the system's OS.

The second style that is supported is Block Device level encryption. Block encryption works by dedicating a full device, or a partition of a device, to be fully encrypted. This means anything written to the device is encrypted immediately, and it is generally more efficient than Filesystem Stacked Encryption because it is handled in kernel space. It can also be accelerated with processor-specific enhancements, but it is limited by the fixed size of the block device it was installed on, versus the scalable size of a Filesystem Stacked Encryption directory. Because Block Device encryption is what is used for Full Disk Encryption, this is the standard we used for our experiments.

3.2 Disk Encryption Ciphers

The main reason Electronic Code Book (ECB) mode is no longer supported is its primary security flaw: information bleeding. A real-world representation of information bleeding through encryption is shown in Figure 1, while Figure 2 shows the ECB encryption process [8]. While all the information is in fact encrypted, the main flaw in ECB is that identical pieces of plain text end up as identical pieces of cipher text after encryption. This means that correlations can be drawn from the cipher text about the plain text. For instance, if two pieces of plain text both represent the color black, both cipher texts come out the same after encryption. Even though the cipher text does not explicitly represent black, the conclusion can be drawn that they represent the same color. When an entire image is subjected to this process, the weakness shown in Figure 1 becomes apparent.
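The identical-plaintext flaw can be demonstrated with a toy block "cipher" (a keyed XOR standing in for AES; any deterministic block cipher exhibits the same ECB weakness, so the structure of the mode is what matters here, not the cipher):

```python
def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    """Toy stand-in for AES: a keyed XOR. Deterministic per (block, key)."""
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(plaintext: bytes, key: bytes, block_size: int = 16) -> bytes:
    """ECB mode: each block is encrypted independently with the same key,
    so equal plaintext blocks always yield equal ciphertext blocks."""
    blocks = [plaintext[i:i + block_size]
              for i in range(0, len(plaintext), block_size)]
    return b"".join(toy_block_encrypt(b, key) for b in blocks)
```

Encrypting two identical "black pixel" blocks produces two identical ciphertext blocks, which is precisely the correlation an observer can exploit in Figure 1.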


Figure 1. ECB Weakness

Figure 2. ECB Algorithm

The XTS standard corrects this issue by adding a salt of sorts to the encryption process. A salt is something added to plain text before it is encrypted so that the cipher text looks different for two identical pieces of plain text. In XTS, the encrypted sector number of the drive, combined via a Galois-field polynomial function, is used as the salt for each sector. The sectors are encrypted separately with the same master key and then written to the disk, usually in bulk, when the disk write buffer in memory is flushed to the hard drive. XTS uses a dual encryption method: it encrypts a tweak (the sector number) with the first half of the master key and applies the result as a salt to the actual plain text being encrypted with the second half of the key, as shown in Figure 3. The data is then written to the hard drive in its encrypted state. This addresses the previous vulnerability of data bleeding through ECB encryption, shown in Figure 1, because two pieces of identical plain text encrypt into completely different cipher text, properly obscuring the data. The relevant part of this process is that there are actually two AES keys in use here and, in turn, two AES expanded keys that need to be found as part of our experiments in order to properly compromise the disk encryption.

Figure 3. XTS Algorithm

3.3 Other Disk Encryption Related Devices

The Trusted Platform Module (TPM) is a device installed on the motherboard that provides a variety of cryptographic functions. It is primarily used for random number generation, key generation, secure storage, and detecting platform tampering. While it is a common belief that TPM devices enhance the security of disk encryption, they are ineffective as a defense against Cold Boot Attacks [3]. This is because a TPM device primarily guards against an insecure platform stealing the master key after the user unlocks it. Because the device will not release its part of the master key if it detects an issue with the system, it can protect a user from unintentionally unlocking the hard drive on a system that has been tampered with. However, once the master key is unlocked and stored in memory to decrypt the hard drive on the user's regular system, the TPM device does not perform any encryption or decryption itself; that must take place on the CPU, forcing the key into the cache and leaving the system vulnerable to our Cold Boot Attack [7].

On the other hand, Apple's iOS devices are not vulnerable to our described attack because they have a dedicated "Secure Enclave" on the CPU that stores the disk encryption key, like a TPM device, but instead releases the key directly to a dedicated crypto engine built into the memory path between the hard drive and RAM. Because of this dedicated crypto engine, the CPU does not encrypt or decrypt data for the hard drive, making it invulnerable to a Cold Boot Attack involving Non-Volatile Cache Memory: the key is never brought into the CPU's cache [4].


Chapter 4: Countermeasure

In order to render the Cold Boot Attack ineffective against Non-Volatile Cache Memory, we need a hardware- or software-based solution that detects AES keys as they are brought into the cache and removes them by means of eviction/flushing. While hardware and software countermeasures each have attractive features regarding cost to implement and runtime overhead on a system, we decided to pursue a software-based countermeasure in our experiments.

4.1 Software Based Countermeasure

The advantage of coding a solution in software (in our case, in the kernel) is that it translates to other software implementations on other platforms and does not require special hardware modifications. Compared to a hardware-based solution, it is cheaper to deploy on existing or future hardware and can easily be upgraded via a software patch rather than by fabricating new custom hardware. The main disadvantage of a software-based solution is that it requires power, time, and resources (general overhead) to compute and evict the key being held in cache. In addition, the software solution we are proposing is built specifically for hard drive encryption on a Linux kernel, so developers would need to rewrite the countermeasure's algorithm in every software application that supports AES encryption.

Using a software-based solution presented an opportunity to effectively remove the AES keys from the cache using logical triggers embedded in the core code of the system. For countermeasures specific to software disk encryption, we altered the kernel and leveraged the existing code to trigger our countermeasure at every point where encryption or decryption occurred on the device. This allowed us to keep the countermeasure as effective as possible while keeping the burden on the system low.

4.2 Design

The design of the countermeasure took place in two steps. The first was a detailed analysis of the existing kernel code to isolate where the master key would be stored for encryption and decryption. The second was creating an algorithm that would take the address of a key and write to addresses that map to the same location in the CPU's cache, causing irrelevant data to forcibly evict the cached key. When accomplished, this leaves a much smaller window for attackers to try to steal the key from the cache.

To implement our key eviction algorithm, we first needed to allocate a contiguous space of memory that we could guarantee contained a cache-aligned portion large enough to reach every set. The solution was to allocate a contiguous space exactly twice the size of the cache: as a user, we are not given the privilege to choose where an allocation starts, so it may or may not be cache aligned. However, given any contiguous region twice the size of the cache, it is guaranteed that at least one cache-aligned, cache-sized portion is contained in that region. (In the case where the starting point happens to already be cache aligned, two cache-aligned chunks are available, but we only utilize one.)

The next step, once we know our memory space contains a cache-aligned portion, is to compute the address where the cache alignment begins. To do this, take the base address given by the system and zero out the lower bits that are not part of the tag. This ensures the address is aligned to the first set/index of the cache, but the resulting address falls below the legally accessible range we were given. To find the next cache-aligned address, add the size of the cache to it; this guarantees the address is both cache aligned and inside the memory range we were granted.

Now that we have a cache-aligned space, we could simply write immediate values to every address in it, ensuring the entire cache is wiped out (assuming an LRU eviction policy). However, since the expected performance hit would be too great, the goal is instead to target exactly the cache lines that conflict with the key: we write to the corresponding addresses in our memory space, causing conflicts that result in the cache evicting the key. To compute the addresses to write to, we need to know the size of the cache, the number of ways of the cache, and the cache line size. Once this information is gathered, we compute the target addresses in the following way.

For caches that are direct mapped, the computation is easiest. Since there is only one location the key can occupy, we simply take the index from the key's address, attach it to the tag we computed for our memory space, and write immediate data to the resulting address, storing our irrelevant data in the cache in place of the key.

For caches that are X-way associative, we take the same approach, except that the key can now be held in any of X ways within the same set, creating the need to overwrite X set-aligned addresses. We again compute the index, which tells us the set the key is held in, and add that index to the address. Then, to make sure every way is accounted for and written to, we iterate over the number of ways of the cache and write immediate data to each unique address computed for each way, causing all ways in the set to be overwritten and ensuring the key has been evicted from the cache altogether.
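A sketch of that set-targeted computation (names are ours; a real implementation would issue stores to these addresses from kernel code):

```python
def eviction_addresses(key_addr: int, aligned_base: int,
                       cache_size: int, num_ways: int, line_size: int) -> list:
    """Compute one address per way that maps to the key's cache set.

    Within a cache-aligned, cache-sized buffer, addresses sharing the key's
    set index recur every num_sets * line_size bytes; writing to all
    num_ways of them fills the set and evicts the key (assuming LRU).
    A direct-mapped cache is simply the num_ways == 1 case.
    """
    num_sets = cache_size // (num_ways * line_size)
    set_index = (key_addr // line_size) % num_sets
    way_stride = num_sets * line_size  # distance between same-set lines
    return [aligned_base + set_index * line_size + w * way_stride
            for w in range(num_ways)]
```

For example, with a hypothetical 1 KB, 4-way cache with 64-byte lines, a key anywhere in set 0 is evicted by four stores, one per way, rather than by rewriting the whole buffer.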

4.3 Implementation

When testing our original countermeasure, we encountered many issues; in some tests we actually increased the likelihood of finding the key compared to baseline results. Debugging the kernel was extremely difficult: first, there is no GNU debugger or other real-time debugger for testing a kernel, and second, having no prior experience in kernel space, we ran into many issues from not completely understanding the kernel's inner processes, such as correctly allocating large amounts of space. Eventually, we were able to break through and understand that the AES key was being copied to individual kworker threads and was not actually held in the dm-crypt module. This allowed us to redeploy the countermeasure at other locations in the kernel code, producing a correct eviction of the AES keys. While we found this latest breakthrough to be the most effective in many cases, there are still bugs that have not been resolved.


Chapter 5: Experimental Setup and Methodology

5.1 Environment

In order to run our experiments, we either had to order specialized hardware that would allow examination of the CPU's cache during normal operation, or use a simulator that models a CPU cache and allows us to view the cache contents. This decision led to using the gem5 simulator, which models the testing needs of our experiments for timing, memory, and CPU architectures [6]. While gem5 is very modular and allows the user to control almost every aspect of the simulation, it is a difficult system to set up and runs fairly slowly by today's standards.

The configuration for gem5 was an 8-core ARM 64-bit CPU with 2GB of main memory and varying cache sizes based on our experiments' needs. The system ran a minimally altered version of headless Ubuntu 14.04 64-bit, with the only extra application installed being cryptsetup. Cryptsetup is a front end for the dm-crypt kernel module. The major issue here is that, since the gem5 simulator is extremely slow and allows many customizations, the typical kernel shipped with gem5 has been heavily stripped down to only the critical modules required to run the system. Since cryptsetup requires the dm-crypt module, we had to alter the preconfigured gem5 setup and recompile the kernel binary to statically enable dm-crypt and all its supporting modules.

For our experiments, we used a 2GB hard drive partition and ran tests via the cryptsetup application using LUKS (Linux Unified Key Setup) and the XTS-format block cipher. The XTS format is the standard for Linux systems, since plain text or ECB format has known security flaws and CBC format has drawbacks that XTS resolves.

Once established, the last step was creating a new partition on the .img disk file and formatting it inside of gem5 with cryptsetup's LUKS format and the XTS block cipher.

For initial testing, we manually specified the master key in a preconfigured format to test the key detection algorithm against the full-system simulation. Once it was found to be effective, we moved on to create a larger partition with a randomly generated key, on which we ran the SPEC 2006 benchmarks as well as different attack scenarios.
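For the controlled tests above, where the master key bytes are known in advance, the key detection step amounts to a scan over each cache snapshot; a sketch (function names are ours for illustration):

```python
def find_key_offsets(cache_dump: bytes, key: bytes) -> list:
    """Return every offset at which the key bytes appear in a cache snapshot."""
    offsets = []
    start = 0
    while (idx := cache_dump.find(key, start)) != -1:
        offsets.append(idx)
        start = idx + 1
    return offsets

def both_keys_present(cache_dump: bytes, key1: bytes, key2: bytes) -> bool:
    """An XTS master key is only compromised if both key halves are found."""
    return bool(find_key_offsets(cache_dump, key1)) and \
           bool(find_key_offsets(cache_dump, key2))
```

With a randomly generated key, the same idea applies, but the detector must instead recognize the structure of an AES expanded key rather than match known bytes.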

5.2 Simulation Strategy

The approach we took to test a key's persistence and probability of being found was to take snapshots of the simulator's cache, examine the cache contents for the keys using a key detection algorithm, and log how often both keys composing the master key of the hard drive were found. Our initial tests found that, with an encrypted drive, the keys appeared in cache during tasks requiring immediate communication with the storage device. Tasks such as mounting and unmounting the drive from the filesystem, new reads or large writes, and syncing the write buffer with the hard drive all caused the key to be brought into cache, because the CPU needed to encrypt or decrypt new information to complete the task. In addition, powering off the device caused many of these operations to take place, and at the time the CPU powered off, we examined the cache to see whether our keys persisted afterwards. The results of whether we found the key in the cache are shown in Chapter 6, Figure 5, and the flags we used to test different power-off methods are described below in Figure 4.

Figure 4. Poweroff Flags

We collected results from these tests by scripting the test execution before simulation to guarantee consistent results. After a simulation started, we programmed the simulator to pause every 1 billion instructions. At that point, an algorithm examined the contents of the cache and logged whether or not it was able to locate both AES keys needed to compromise the hard drive. Once completed, the simulation resumed where it left off; the system did not even recognize it had been suspended.
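Aggregating those per-snapshot logs into the hit rates reported in Chapter 6 amounts to a simple fraction over the snapshot series (a sketch; the name is ours):

```python
def key_found_rate(snapshot_hits: list) -> float:
    """Fraction of cache snapshots in which both AES keys were located.

    snapshot_hits holds one boolean per 1-billion-instruction pause.
    """
    if not snapshot_hits:
        return 0.0
    return sum(snapshot_hits) / len(snapshot_hits)
```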


For the SPEC 2006 benchmarks, we ran each individual benchmark in its own isolated simulation so that it had no chance of impacting other tests' results. In addition to running these tests individually, we also ran mixtures of tests together in a single simulation to observe how they would alter our results and interact with one another. For the mixtures, we ran a combination of computationally intensive tests (mixC), a combination of memory-intensive tests (mixM), and a combination of both computationally and memory-intensive tests (mixCM). As shown in Chapter 6, without a countermeasure, the larger the cache, the more persistently the keys were found in it. We examine these results further in Chapter 6.

5.3 Gem5 Simulation Issues

A range of issues came with using dm-crypt disk encryption on gem5. The first was that gem5 does not support network connections. In order to install cryptsetup, we needed to manually download all the packages and dependencies for cryptsetup in the ARM architecture format, transfer them to the gem5 system, and then install them. This did not work, mainly because the system we were using did not support the multiple dependencies we needed to transfer to the target simulation system. The next step was to use chroot with cross-architecture support to download and install the required packages for cryptsetup via apt-get, which was successful.


The second major issue we found was that gem5 did not support the dm-crypt module by default. This required downloading the gem5 kernel repository, recompiling with dm-crypt support statically compiled into the binary, and re-running the simulator with this new kernel binary. This eventually gave a correct setup to run disk encryption, but it was a very slow process due to the nature of gem5.


Chapter 6: Results

Our simulations found that in many cases the key remained in the cache even after a task was completed. Keys being left behind in the cache was one of the more vulnerable behaviors we found in our testing, and it was very likely to occur in the poweroff experiments and the SPEC 2006 tests. The results show that the larger the cache capacity, the more likely the key was to persist, due to the lower likelihood that the key would be evicted by space constraints. Because it is inevitable that the key will be brought into cache in order to be used by the CPU, we focused our countermeasure on programmatically evicting the key as fast as possible, using the method described in Chapter 4. Figure 5 shows the results of experiments where we simulated the poweroff command with the various flags listed in Chapter 5, Figure 4.

Idle prior to shutdown:

benchmark    2MB  4MB  8MB  128MB
poweroff-p    0    0    1    1
poweroff-n    0    0    1    1
poweroff-d    0    0    1    1
poweroff-f    1    1    1    1
poweroff-i    0    0    1    1
poweroff-h    0    0    0    1

Specrand_f prior to shutdown:

benchmark    2MB  4MB  8MB  128MB
poweroff-p    0    1    1    1
poweroff-n    0    1    1    1
poweroff-d    0    0    1    1
poweroff-f    0    1    1    1
poweroff-i    0    0    1    1
poweroff-h    0    0    1    1

Figure 5. Key Found After Poweroff


In Figures 6, 7, and 8, the individual tests showed much more variance at smaller cache sizes when using the original kernel. This variance from test to test arose because some tests were computationally intensive while others were memory intensive. Computational tests did not need to access much memory, or to access it often, so the key used to decrypt the hard drive was more likely to stay in the cache without being evicted. This is supported by our results for the mixC benchmark without the countermeasure, where the key was very likely to be found in the cache. Memory-intensive tests showed the opposite: the key did not persist in the cache for long, because these tests were constantly loading and storing across a large set of memory, making it far more likely that a naturally occurring cache conflict would evict the key on its own. Compared with mixC, mixM generally showed about half the chance of finding the key in the cache without the countermeasure. This also explains why mixM shows very little difference between the regular kernel and the countermeasure in Figure 6: given the small cache size and the high volume of memory operations in mixM, cache conflicts were frequent enough to remove the key from the cache almost as often as the countermeasure did, so the countermeasure showed little additional effect for that category of tests.

After the tests with a regular "vanilla" Linux kernel were conducted, we reran the exact same test sets with the kernel containing the software countermeasure. We found that in all single-test simulations, the likelihood of finding the key was under 3% of the total time tested. The mixed-test results also showed an improvement, but still had up to a 26% chance of the keys being found in the cache with the countermeasure in place. We believe the countermeasure works so well in the single-test cases because it accurately targets the places and times in the kernel at which to evict the key, reducing the window during which the key lingers in the cache. The countermeasure was effective in all configurations, even as the probability grew with larger caches in the regular tests, because the key is still evicted immediately regardless of cache size. While these results are promising and clearly show progress toward our goal, more work remains before the solution is 100% effective.
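The direction of the countermeasure can be illustrated with a small user-space sketch that flushes every cache line occupied by the key buffer and then wipes it. The sketch below uses the x86 clflush intrinsic purely for illustration (our gem5 experiments target ARM, where the analogous operation is a cache-maintenance instruction such as DC CIVAC on ARMv8); the function name, key size, and line size are assumptions, not the kernel patch itself:

```c
#include <stdint.h>
#include <stddef.h>
#include <emmintrin.h>      /* _mm_clflush, _mm_mfence (SSE2) */

#define CACHE_LINE 64       /* assumed line size */

/* Evict every cache line holding the key, then zero the key in RAM.
 * clflush writes the line back and invalidates it, so after the final
 * flush neither the key nor its remnants reside in the (non-volatile)
 * cache. */
static void evict_and_clear_key(volatile uint8_t *key, size_t len)
{
    uintptr_t p   = (uintptr_t)key & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)key + len;

    for (; p < end; p += CACHE_LINE)
        _mm_clflush((const void *)p);
    _mm_mfence();                        /* order flushes before the wipe */

    for (size_t i = 0; i < len; i++)
        key[i] = 0;                      /* volatile: not optimized away */

    for (p = (uintptr_t)key & ~(uintptr_t)(CACHE_LINE - 1); p < end;
         p += CACHE_LINE)
        _mm_clflush((const void *)p);    /* push the zeros out of cache too */
    _mm_mfence();
}
```

The in-kernel countermeasure of Chapter 4 applies the same idea at the points where dm-crypt finishes using the key, which is why its effectiveness does not degrade as the cache grows.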

Figure 6. 2 MB Cache


Figure 7. 4 MB Cache

Figure 8. 8 MB Cache


