A Complete Key Recovery Timing Attack on a GPU

Zhen Hang Jiang, Yunsi Fei, David Kaeli
Electrical & Computer Engineering Department, Northeastern University
Boston, MA 02115 USA

ABSTRACT

Graphics Processing Units (GPUs) have become mainstream parallel computing devices. They are deployed on diverse platforms, and an increasing number of applications have been moved to GPUs to exploit their massive parallel computational resources. GPUs are starting to be used for security services, where high-volume data is encrypted to ensure integrity and confidentiality. However, the security of GPUs has only begun to receive attention; issues such as side-channel vulnerability have not been addressed. The goal of this paper is to evaluate the side-channel security of GPUs and demonstrate a complete AES (Advanced Encryption Standard) key recovery using known ciphertext through a timing channel. To the best of our knowledge, this is the first work that clearly demonstrates the vulnerability of a commercial GPU architecture to side-channel timing attacks. Specifically, for AES-128, we have been able to recover all key bytes utilizing a timing side channel in under 30 minutes.

1. INTRODUCTION

With the introduction of programmable shader cores and high-level programming frameworks [1, 2], GPUs have become fully programmable parallel computing devices. Compared to modern multi-core CPUs, a GPU can deliver significantly higher throughput by executing workloads in parallel over thousands of cores. As a result, GPUs have quickly become the accelerator of choice for a large number of applications, including physics simulation, biomedical analytics, and signal processing. Given their ability to provide high throughput and efficiency, GPUs are now being leveraged to offload cryptographic workloads from CPUs [3, 4, 5, 6, 7, 8], achieving up to 28X higher throughput [3].

While an increasing number of security systems are deploying GPUs, the security of GPU execution has not been well studied. Pietro et al. identified that information leakage can occur throughout the memory hierarchy due to the lack of memory-zeroing operations on a GPU [9]. Previous work has also identified vulnerabilities of GPUs using software methods [10, 11]. While there have been a large number of studies on side-channel security on other platforms, such as CPUs and FPGAs, little attention has been paid to the side-channel vulnerability of GPU devices.

Timing attacks have been demonstrated to be one of the most powerful classes of side-channel attacks [12, 13, 14, 15, 16, 17]. Timing attacks exploit the relationship between input data and the time (i.e., number of cycles) for the system to process/access the data. For example, in a cache collision attack [13], the attacker exploits the difference in terms of CPU cycles to serve a cache miss versus a hit, and considers the cache locality produced by a unique input data set. There is no prior work evaluating timing attacks on GPUs. To the best of our knowledge, our work is the first to consider timing attacks deployed at the architecture level on a GPU.

The GPU's Single Instruction Multiple Threads (SIMT) execution model prevents us from simply leveraging prior timing attack methods adopted for CPUs. A GPU can perform multiple encryptions concurrently, and each encryption competes for hardware resources with other threads, providing the attacker with confusing timing information. Also, under SIMT, the attacker is not able to time-stamp each encryption individually; the timing information that the attacker obtains is dominated by the longest-running encryption. Given these challenges in GPU architectures, most existing timing attack methods become infeasible.

In this paper, we demonstrate that information leakage can be extracted when executing on an SIMT-based GPU to fully recover the encryption secret key. Specifically, we first observe that the kernel execution time is linearly proportional to the number of unique cache line requests generated during a kernel execution. In the L1-cache memory controller of a GPU, memory requests are queued and processed in First-In-First-Out (FIFO) order, so the time to process all of the memory requests depends on the number of memory requests. As AES encryption generates memory requests to load its S-box/T-table entries, the addresses of the memory requests depend on the input data and the encryption key. Thus, the execution time of an encryption kernel is correlated with the key. By leveraging this relationship, we can recover all 16 AES secret key bytes on an Nvidia Kepler GPU. Although we demonstrate this attack on a specific Nvidia GPU, other GPUs exhibit the same exploitable leakage.

We have set up the client-server infrastructure shown in Figure 1. In this setting, the attacker (client) sends messages to the victim (encryption server) through the internet; the server employs its GPU for encryption and sends the encrypted messages back to the attacker. For each message, the ciphertext is known to the attacker, as well as the timing information. If the timing data measured is clean (mostly attributable to the GPU kernel computation), we are able to recover all 16 key bytes using one million timing samples. In a more practical attack setting where there is CPU noise in our timing data, we are still able to fully recover all the key bytes by collecting a larger number of samples and filtering out the noise. Our attack results show that modern SIMT-based GPU architectures are vulnerable to timing side-channel attacks.

[Figure 1: The attack environment]

The rest of the paper is organized as follows. In Section 2, we discuss related work. In Section 3, we provide an overview of our target GPU memory architecture and our AES GPU implementation. In Section 4, the architecture leakage model is first presented, followed by our attack method that exploits the leakage for complete AES key recovery. We discuss potential countermeasures in Section 5. Finally, the paper is concluded in Section 6.

2. RELATED WORK

Timing attacks utilize the relationship between data and the time taken by a system to access/process the data. Multiple attacks have been demonstrated successfully by exploiting cache access latencies, which leak secrets through generating either cache contention or cache reuse [13, 14, 15]. In order to create cache contention or cache reuse, attackers need to have their own process (spy process) coexisting with the targeted process (victim process) on the same physical machine. This way, the spy process can evict or reuse cache contents created by the victim process to introduce different cache access latencies. We refer to this class of attacks as offensive attacks. Another kind, the non-offensive attack, has also been demonstrated successfully by Bernstein [12]. Unlike offensive attacks, Bernstein's timing attack does not interfere with the victim process. His attack exploits the relationship between the time for an array lookup and the array index.

The data samples used in these timing attack methods consist of one block of ciphertext and the associated time to process that block. However, on a GPU it would be highly inefficient to perform only one block encryption and produce one block of ciphertext at a time, given the GPU's massive computational resources. In a real-world scenario, the encryption workload would contain multi-block messages, and for each data sample the GPU would produce many blocks of ciphertext. The key difference is that the GPU scenario yields only a single timing value for the multiple blocks. Although many successful attack methods have been demonstrated on CPU platforms, these methods cannot be directly applied to the GPU platform due to the lack of accurate timing information and the nondeterminism in thread scheduling. Our timing attack method targets a GPU and is non-offensive, like Bernstein's. We exploit the inherent parallelism present on the GPU, as well as its memory behavior, in order to recover the secret key.

3. BACKGROUND

In this section, we discuss the memory hierarchy and memory handling features of Nvidia's Kepler GPU architecture [18]. Note that not all details of the Kepler memory system are publicly available; we leverage information that has been provided by Nvidia, as well as many details of the microarchitecture that we have been able to reverse engineer. We also describe the AES implementation we have evaluated on the Kepler and the configuration of the target hardware platform used in this work.

3.1 GPU Memory Architecture

3.1.1 Target Hardware Platform

Our encryption server is equipped with an Nvidia Tesla K40 GPU. This Kepler-family device includes 15 streaming multiprocessors (SMXs). Each SMX has 192 single-precision CUDA cores, 64 double-precision units, 32 special function units, and 32 load/store units (LSUs). In CUDA terminology, a group of 32 threads is called a warp. Each SMX has four warp schedulers and eight instruction dispatch units, which means four warps can be scheduled and executed concurrently, and each warp can issue two independent instructions in one GPU cycle [18].

3.1.2 GPU Memory Hierarchy

Kepler provides an integrated off-chip DRAM memory, called device memory. The CPU transfers data to and from the device memory before and after it launches the kernel. Global memory, texture memory, and constant memory reside in the device memory, and data residing in the device memory is shared among all of the SMXs. Each SMX has L1/shared, texture, and constant caches, which are used to cache data from global memory, texture memory, and constant memory, respectively.
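The leakage channel described in the Introduction — kernel time scaling linearly with the number of unique cache line requests, with lookup addresses tied to ciphertext and key — can be illustrated with a small simulation. This is a sketch of the principle only, not the paper's attack code: the table geometry (16 four-byte entries per 64-byte line), the per-warp linear timing model, and the random byte permutation standing in for the AES S-box are all assumptions made for illustration.

```python
# Toy model of a unique-cache-line timing channel -- NOT the authors' code.
# Assumptions: one last-round table lookup per block, 16 table entries per
# cache line, kernel time linear in the warp's unique line count, and a
# random fixed byte permutation (hypothetical) standing in for the S-box.
import random

LINE_ENTRIES = 16   # table entries per cache line
WARP = 32           # blocks encrypted per timing sample

rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)                      # stand-in nonlinear permutation
INV_SBOX = [0] * 256
for i, v in enumerate(SBOX):
    INV_SBOX[v] = i

def unique_lines(indices):
    """Distinct cache lines touched by a warp's table lookups."""
    return len({i // LINE_ENTRIES for i in indices})

def encrypt_and_time(key):
    """One 'kernel launch': the attacker sees the ciphertext bytes and a
    noisy timing value that grows linearly with the unique line count."""
    states = [rng.randrange(256) for _ in range(WARP)]
    cts = [SBOX[s] ^ key for s in states]          # last-round output
    return cts, unique_lines(states) + rng.gauss(0, 0.5)

def corr(xs, ys):
    """Pearson correlation, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def recover_key(samples):
    """For each guess, predict per-sample unique line counts from the
    ciphertext; the guess whose predictions track the timings is the key."""
    times = [t for _, t in samples]
    return max(range(256), key=lambda g: corr(
        [unique_lines([INV_SBOX[c ^ g] for c in cts]) for cts, _ in samples],
        times))

SECRET = 0x3C
samples = [encrypt_and_time(SECRET) for _ in range(1000)]
best = recover_key(samples)
print(hex(best))
```

With the correct guess, the predicted counts equal the true counts exactly, so their correlation with the noisy timings stands far above that of any wrong guess, whose predictions are scrambled by the nonlinear inverse permutation. This also shows why a unique-line-count attack must target a round with an S-box between the known data and the key guess: XOR with a wrong constant alone would merely relabel line indices and leave the count unchanged.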
