2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)

Live Container Migration via Pre-restore and Random Access Memory

1st Zhixing Yu
Guangdong Key Laboratory of Computer Network
School of Computer Science and Engineering
South China University of Technology
Guangzhou, China
[email protected]

2nd Kejing He
Guangdong Key Laboratory of Computer Network
School of Computer Science and Engineering
South China University of Technology
Guangzhou, China
[email protected]

3rd Chao Chen
School of Computer Science and Engineering
South China University of Technology
Guangzhou, China
[email protected]

4th Jian Wang
School of Computer Science and Engineering
South China University of Technology
Guangzhou, China
cs [email protected]

Abstract—Container technology is increasingly used for virtualization due to its ability to isolate the operating environment of a program. In cloud computing environments, containers need to be migrated between hosts for load balancing or downtime maintenance. However, during the migration process the container is temporarily shut down and its service becomes unavailable. The time cost is therefore an essential indicator for measuring the quality of the migration process. To achieve live container migration, we propose a pre-restore method and a complete random access memory (RAM) based method for migrating containers. Extensive experiments validate the effectiveness of our methods in reducing downtime and improving the efficiency of container migration.

Index Terms—Container, Live migration, Downtime, Pre-restore, RAM

I. INTRODUCTION

Virtual machine migration is a method of moving a virtual machine on a source host to a target host and re-running it there. After years of research, virtual machine migration technology has been widely applied in practice [1], with some work focusing on Service Level Agreements (SLA) [2], some on energy efficiency [3], etc. In recent years, container technology, also known as the lightweight virtual machine, has attracted great attention because containers are lightweight and easy to deploy. Studies have shown that, in most cases, the performance of a container is better than or equal to that of a virtual machine [4]. In real production environments, container migration is generally driven by the need for load balancing or downtime maintenance.

In this paper, we focus on live migration (or real-time migration) of a container, in which a running process is transmitted from a source host to a destination host. In this case, there is no need to migrate the files used by the container process to the target host; only the container's runtime memory and other metadata need to be migrated. As shown in Fig. 1, the conventional container migration process includes three steps. First, the migration tool freezes the running container process in order to collect memory page information and metadata (e.g., file descriptors, pipe parameters), and then dumps this information locally as image files. Second, all the local image files are transported from the source host to the destination host over the network. Finally, the migration tool re-runs the container by extracting the information from the image files on the target host.

There are four main metrics for measuring the performance of live migration, including the three key time metrics for container migration shown in Fig. 2:
• total amount of data transferred, including the size of the container memory image and the metadata size.
• total migration time, which refers to the time from the start of the migration until the container is running again on the target host.
• downtime, which is the time from when the migrated container pauses on the source host until it re-runs on the target host.
• absolute downtime, which is the difference between downtime and network transmission time. Absolute downtime thus eliminates the network quality factor from the downtime.

This work was supported by the Science and Technology Planning Project of Guangdong Province, China (2017B030306016), and the Special Support Program of Guangdong Province (201528004).

Fig. 1. Conventional container migration process.
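To make the three steps of Fig. 1 concrete, the sketch below drives CRIU's command-line interface from Python. This is our illustration, not the paper's tooling: the function name, the image directory, and the rsync/ssh transfer are assumptions, and a real deployment needs root privileges plus additional CRIU options.

```python
import subprocess

def conventional_migrate(pid: int, img_dir: str, dest: str) -> None:
    """Three-step conventional migration (Fig. 1), sketched with the CRIU CLI.

    `dest` is a hypothetical "user@host" SSH target; error handling and the
    extra flags a real container would need are omitted for brevity.
    """
    # Step 1: freeze the process tree rooted at `pid` and dump its memory
    # pages and metadata (file descriptors, pipes, ...) as local image files.
    subprocess.run(["criu", "dump", "-t", str(pid), "-D", img_dir], check=True)

    # Step 2: transport all local image files to the destination host.
    subprocess.run(["rsync", "-a", f"{img_dir}/", f"{dest}:{img_dir}/"], check=True)

    # Step 3: re-run the container on the destination from the image files.
    subprocess.run(["ssh", dest, "criu", "restore", "-D", img_dir], check=True)
```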

Fig. 2. Time metrics for container migration.

However, the larger the image files being transferred, the longer the transmission time and downtime, which makes the migrated container's service unavailable for a long time. Various methods have therefore been proposed to improve migration performance from the perspective of how the migrated files are transferred. Dynamic self-ballooning [1] and compression-based methods [5] were proposed to reduce the amount of data to be transmitted. On the other hand, pre-copy [6] and post-copy [7] were proposed to effectively reduce downtime and accelerate the transport of image files by dividing the migration into multiple phases.

In this paper, we propose two approaches to improve the performance of live container migration: the pre-restore method and the complete Random Access Memory (RAM) based method. The pre-restore method leverages the idea of mapping and merging memory pages; the RAM-based method adopts random access memory instead of disks to store the collection of image files during the migration process. The main contributions are summarized as follows:
• We design and implement a novel pre-restore method for live container migration, which processes the image files before transmission and reconstruction, reducing the downtime of the migrated container.
• We propose a novel complete RAM-based method for migrating all image files, which alleviates the disk I/O overhead.
• We implement the above two methods as middleware running on the destination host and carry out experiments. Experimental results demonstrate that the proposed methods effectively reduce the total migration time and absolute downtime.

The rest of this paper is organized as follows. Section 2 provides the background and problem statement of live container migration. Section 3 elaborates on our proposed pre-restore method and RAM-based method. Section 4 presents the experiments and experimental results. Section 5 concludes our work.

II. BACKGROUND AND PROBLEM STATEMENT

A. Background

1) Live migration: Live migration, or real-time migration, of a virtual machine refers to the process of moving applications between different physical computers or cloud platforms while ensuring that client access is not interrupted [8]. Live container migration is a similar process. In [6], Andrey Mirkin, Alexey Kuznetsov, and Kir Kolyshkin first presented the checkpointing and restart mechanism for live container migration implemented in OpenVZ. During live container migration, the memory, file systems, and network connections required to run the container on bare hardware can be moved from the source host to the target host while keeping the state consistent. Live container migration is useful for maintaining high availability and fault tolerance of applications in the cloud environment, as well as for dynamic load balancing in a cluster server.

2) Pre-copy: A container needs to be dumped to disk before it can be migrated to another server. Therefore, if the container to be migrated consumes a lot of memory, it will take a long time to dump, resulting in extra downtime. Long downtime means that the service is interrupted for a long time, which violates the SLA. Consequently, the main requirement for live container migration is to minimize application downtime.

Clark, Fraser, et al. proposed the pre-copy method to solve this problem [8]. As shown in Fig. 3, pre-copy mainly includes three phases. In the first phase, all the memory pages of the container are dumped into an image file and transferred to the destination host while the container keeps running. In the second phase, memory page tracking is used to record the pages modified since the last iteration and dump them into an image file for migration. In the final phase, once the iteration stop condition is reached, the container is stopped, and all remaining memory pages and metadata are copied and transferred to the target host.
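The iterative structure of pre-copy can be sketched in a few lines of self-contained Python. The Container class and the dirty-page bookkeeping below are a toy model of ours, not the authors' code; real implementations track dirty pages with kernel support (e.g., soft-dirty bits), and the workload dirties pages concurrently while the phases run.

```python
from typing import Dict, Set

class Container:
    """Toy model of a running container's memory (our illustration)."""
    def __init__(self, pages: Dict[int, bytes]):
        self.pages = pages                  # page address -> content
        self.dirty: Set[int] = set(pages)   # pages not yet transferred
        self.frozen = False

    def touch(self, addr: int, data: bytes) -> None:
        """Model the running workload writing a page."""
        self.pages[addr] = data
        self.dirty.add(addr)

def pre_copy_migrate(src: Container, target: Dict[int, bytes],
                     max_iters: int = 10, threshold: int = 2) -> None:
    """The three pre-copy phases; `target` models the destination's copy."""
    # Phase 1: dump and transfer all memory pages; the container keeps running.
    target.update(src.pages)
    src.dirty.clear()

    # Phase 2: each iteration re-sends only the pages modified since the
    # previous one, until the dirty set is small enough (the stop condition).
    for _ in range(max_iters):
        if len(src.dirty) <= threshold:
            break
        dirty, src.dirty = src.dirty, set()
        target.update({addr: src.pages[addr] for addr in dirty})

    # Phase 3: stop the container, then copy the remaining dirty pages
    # (and, in the real system, the metadata) to the target host.
    src.frozen = True
    target.update({addr: src.pages[addr] for addr in src.dirty})
    src.dirty.clear()
```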

Fig. 3. Pre-copy process.

3) Post-copy: Michael R. Hines, Umesh Deshpande, and Kartik Gopalan proposed the post-copy method [7] for virtual machine migration. Post-copy is quite different from pre-copy, although both are iterative migration algorithms. Post-copy first migrates the virtual machine's smallest working set from the source host to the target host and then immediately resumes the virtual machine on the target host. After the recovery operation, a page fault occurs whenever a memory page that needs to be accessed is not present, and the corresponding page is fetched from the source host over the network. Hirofuchi, Nakada, et al. [9] proposed a virtual machine migration method based on post-copy, which enables the virtual machine to be migrated automatically according to changes in resource usage, thus providing a higher performance guarantee for the system.

Although the post-copy method can effectively shorten the total migration time and enable the virtual machine to run quickly on the destination host, page faults may often occur in the subsequent stage, which increases the response time of services running in the virtual machine. Therefore, this approach is usually not practical enough.

4) Linux Containers (LXC): Linux Containers (LXC) [10] is an operating-system-level virtualization technology. LXC provides a virtual execution environment for processes at the operating system level, and therefore incurs less virtualization overhead and allows faster deployment than the traditional hardware abstraction layer (HAL). In LXC, programmers can bind specific CPU and memory nodes to a container, or can allocate it a specific proportion of CPU time and I/O time. Besides, LXC provides device access control and limits the amount of memory that can be used (including memory and swap space). For resource management, LXC relies on the cgroup subsystem, a process-group-based resource management framework provided by the Linux kernel that limits the resources available to a particular process group. For isolation control, LXC relies on the namespace feature of the Linux kernel; specifically, the corresponding flags (CLONE_NEWNS, CLONE_NEWPID, etc.) are set when a container is created.

5) CRIU: Checkpoint/Restore In Userspace (CRIU) is a software tool for the Linux operating system. CRIU can freeze a running application (or part of it) and checkpoint it as a collection of files on disk. These files can then be used to restore the application so that it re-runs exactly as it was at the time of freezing. Because of this capability, CRIU is often used for live container migration. Yahya Al-Dhuraibi, Fawaz Paraiso, et al. used CRIU to implement live container migration in their ELASTICDOCKER system [11]. Radostin Stoyanov and Martin J. Kollingbaum also used CRIU to migrate container applications [12].

B. Problem statement

Current live container migration techniques have several drawbacks that have not been addressed well in previous studies, including a lack of parallel processing, high disk I/O overhead, and excess read/write time. We briefly illustrate these points as follows.

1) Lack of parallel processing: While the image files of the container are being transmitted over the network, the destination host is idle, which wastes computing resources and lowers efficiency. The efficiency of live container image transmission can be improved by parallel processing; in other words, live container migration can be processed as a pipeline. As shown in Fig. 4, the pipeline method divides a container migration into n tasks with k iterations, and the tasks can be executed in parallel. In this way, the destination host can recover the files completed by the last iteration while an execution unit of the current iteration is being transferred, which greatly improves the utilization of computing resources.

Fig. 4. The parallel process by pipeline method.

The parameter τ in Equation 1 denotes the pipeline cycle time; τ_i is the time overhead of the i-th execution unit, where i ∈ [0, k]; and μ denotes the remaining time overhead in the pipeline process, such as the time needed to store the results of each execution unit in the pipeline registers.

τ = max_{i ∈ [0,k]}(τ_i) + μ (1)

T_k in Equation 2 gives the total time of n tasks on a pipeline with k execution units, which can be read directly from Fig. 4. If the n tasks are executed sequentially instead, the total time overhead is T_1 in Equation 3.

T_k = (k + n − 1) · τ (2)

T_1 = n · k · τ (3)

The ratio between the time overhead of the serial mode and that of the pipeline mode is then given by Equation 4, where S_k represents the parallel execution efficiency of splitting the migration task into k iterations. Equation 5 shows that, as long as there are always enough tasks to feed the pipeline, the speedup of the pipeline method approaches k. This explains the necessity of the pipeline method.

S_k = T_1 / T_k = (n · k · τ) / ((k + n − 1) · τ) = n · k / (k + n − 1) (4)

lim_{n→∞} S_k = lim_{n→∞} n · k / (k + n − 1) = k (5)
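To make the speedup concrete, a few lines of Python (ours, for illustration only) evaluate S_k from Equation 4; as Equation 5 predicts, the value approaches k as the number of tasks n grows.

```python
def pipeline_speedup(n: int, k: int) -> float:
    """S_k = T_1 / T_k = (n * k) / (k + n - 1); the cycle time τ cancels."""
    return (n * k) / (k + n - 1)

print(pipeline_speedup(10, 5))    # 50/14 ≈ 3.57 for a short run of tasks
print(pipeline_speedup(1000, 5))  # ≈ 4.98, close to the limit k = 5
```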

2) High disk I/O overhead: In traditional migration methods, the container image is first dumped to disk before being transferred over the network to the destination host, and the overhead of disk I/O is usually high. In contrast, the overhead of RAM access is much lower than that of disk I/O [13]. Therefore, the proposed RAM-based method uses RAM storage media instead of the disk to store the image files, further accelerating live container migration.

3) Excess read/write time: All image files are written to disk and read from disk twice, causing excess time consumption on reads and writes. For writes, the first occurs when the container process is dumped on the source host, and the second when the container image is received on the destination host. For reads, the first occurs when an image file is sent to the target host, and the second when the recovery operation is performed [12]. Our proposed RAM-based method solves this problem by dumping the container image files directly to the destination host instead of to the local disk.

III. METHODOLOGY

To address the shortcomings mentioned above, we propose two methods and design an image transport service middleware as their implementation. First, we propose a pre-restore method that processes image transfers in parallel and saves image restore time. Second, we propose a complete RAM-based migration process to accelerate the migration. We design both methods as middleware and adopt the CRIU tool to implement our live container migration system.

A. Pre-restore

1) Overview: Pre-restore is essentially the reverse process of pre-copy. As demonstrated in Fig. 5, the source host transports the container images iteratively to reduce downtime. To resume the operation of the container on the destination host, the container images from these iterations must be merged, which takes a certain amount of time. The process of migrating a container image from the source host to the destination host over the network also consumes considerable time. These two processes can be performed simultaneously. Absorbing the idea of the pipeline method, the transfer process can be divided into two processing units: a network transport unit and a recovery unit. After an image has been transferred from the source host to the destination host over the network, it can be processed by the recovery unit while the network transport unit transmits the image of the next iteration. With the pre-restore approach, the efficiency of image transmission is improved and downtime declines.

2) Implementation: Concretely, the implementation of pre-restore is demonstrated in Fig. 6. Assume that page1.img and pagemap1.img are the image file of the first iteration and the corresponding memory mapping file, respectively. The image file starts with the page address, and each page stores the actual memory content; the mapping file starts with the page-map address, and each page-map entry stores the memory mapping information. Similarly, page2.img and pagemap2.img are the image file of the second iteration and its corresponding memory mapping file. The pre-restore method merges the page-map files of all iterations into one page-map file. For example, the entry {0x1000000, 4, in_parent} in pagemap2.img indicates that the content of this memory block actually resides in its parent image file (the file generated in the previous iteration); thus the pre-restore method first merges pagemap2.img and pagemap1.img into one mapping file, then merges the corresponding page image files into one image file. The image files and mapping files of subsequent iterations are merged in the same way. After the pre-restore process, we only need to re-run the container from the merged image. During the container recovery procedure, there is no difference between pre-restore and the conventional migration method (one-time migration).
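The following self-contained sketch shows this merge logic on a toy in-memory representation. The {address, number of pages, in_parent} entry layout follows Fig. 6, but the data structures are ours; CRIU's actual image files are binary and are parsed rather than stored as Python dictionaries.

```python
PAGE = 4096

def merge_iteration(parent_pages, child_map, child_pages):
    """Merge one iteration's pages into its parent's (toy model of Fig. 6).

    parent_pages: {page address: content} accumulated from earlier iterations.
    child_map:    [(start address, number of pages, in_parent), ...].
    child_pages:  {page address: content} dumped in this iteration.
    Entries flagged in_parent keep the parent's content; all other entries
    are overwritten with the newer pages of the child iteration.
    """
    merged = dict(parent_pages)
    for start, nr_pages, in_parent in child_map:
        if in_parent:
            continue  # content is already present from the parent image
        for off in range(nr_pages):
            addr = start + off * PAGE
            merged[addr] = child_pages[addr]
    return merged

# Example following Fig. 6: iteration 2 keeps the first 4-page block in its
# parent ({0x1000000, 4, in_parent}) and re-dumps the next 4-page block.
it1 = {0x1000000 + i * PAGE: b"old" for i in range(8)}
map2 = [(0x1000000, 4, True), (0x1000000 + 4 * PAGE, 4, False)]
pg2 = {0x1000000 + (4 + i) * PAGE: b"new" for i in range(4)}
merged = merge_iteration(it1, map2, pg2)  # 4 "old" pages + 4 "new" pages
```

Merging iteration i with the already-merged result of iterations 1..i−1 in this way, while iteration i+1 is still on the wire, is exactly the recovery-unit work that pre-restore overlaps with network transport.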

Fig. 5. Pre-restore process.

Fig. 6. Pre-restore implementation.

Fig. 8. Middleware workflow.

B. Complete RAM-based method

1) Overview: To reduce I/O overhead and read/write time, we propose a complete RAM-based migration method, which uses memory instead of disks to store the collection of image files during the migration process. As shown in Fig. 7, image files are dumped from the source host and transported directly to the destination host; the image files are therefore stored only on the destination host and not on the source host, reducing the number of reads and writes to half of that of conventional migration methods.

Fig. 7. Improved RAM-based migration process.

2) Implementation: The complete RAM-based method is implemented as middleware. As shown in Fig. 8, the middleware mainly performs two tasks. First, the memory page image files of the container are transmitted from the source host through the middleware in a streaming workflow. Second, the transferred metadata image files are stored directly in memory by the middleware. Therefore, when the middleware is used, the container memory pages on the source host are not dumped to disk but are dumped directly to the network, eliminating any disk read or write on the source host. On the destination host, the middleware receives the container memory image from the network and, after performing the pre-restore process, writes it to a memory-based file system (tmpfs).
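A minimal sketch of the receiving side of such middleware is shown below, assuming a simple length-prefixed framing of (file name, payload) pairs; the framing, the port, and the directory are our assumptions, and /dev/shm is used as a commonly available memory-backed (tmpfs) location.

```python
import os
import socket
import struct

TMPFS_DIR = "/dev/shm/criu-images"  # memory-backed: image files never hit disk

def receive_images(port: int = 9527) -> None:
    """Receive framed image files from the source host and keep them in tmpfs."""
    os.makedirs(TMPFS_DIR, exist_ok=True)
    server = socket.create_server(("", port))
    conn, _ = server.accept()
    with conn, conn.makefile("rb") as stream:
        while True:
            header = stream.read(8)       # 4-byte name length + 4-byte size
            if len(header) < 8:
                break                     # source closed the connection
            name_len, data_len = struct.unpack("!II", header)
            name = os.path.basename(stream.read(name_len).decode())
            payload = stream.read(data_len)
            with open(os.path.join(TMPFS_DIR, name), "wb") as f:
                f.write(payload)          # the write lands in RAM, not on disk
```

After the files of each iteration arrive, the pre-restore merge of Section III-A can run over this tmpfs directory before the final restore.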

IV. EXPERIMENTS

To evaluate our work, we design an experiment that migrates containers between two hosts. In the experiment, we compare the original pre-copy migration method with the two proposed methods for live container migration. As mentioned in Section 1, the most important metric of live container migration is downtime; the experiment therefore also focuses on the effect of the methods on downtime. Experimental results show that the proposed methods improve the efficiency of live container migration and outperform the pre-copy method on the selected time indicators. Moreover, the combination of the two methods achieves even better results.
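For concreteness, the time metrics (including the absolute downtime defined in Section 1) can be derived from a handful of timestamps recorded around one migration run. The harness below is our illustration only; run_migration is a hypothetical callable, since the paper does not publish its measurement scripts.

```python
import time

def measure(run_migration):
    """Compute the evaluation metrics from timestamps around one migration.

    `run_migration` (hypothetical) performs the migration and returns the
    moments at which the container stopped, the network transfer started
    and ended, and the container re-ran on the destination.
    """
    t_start = time.monotonic()
    t_stop, t_net_begin, t_net_end, t_rerun = run_migration()
    t_finish = time.monotonic()

    total_migration_time = t_finish - t_start
    downtime = t_rerun - t_stop                   # t_down
    network_time = t_net_end - t_net_begin        # t_net
    absolute_downtime = downtime - network_time   # t_abs = t_down - t_net
    return total_migration_time, downtime, absolute_downtime
```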

A. Experimental environment and configuration

We use two Cloud Virtual Machine (CVM)¹ nodes of Tencent Cloud to simulate two hosts. Both hosts run the Ubuntu 16.04 operating system with the same versions of LXC and CRIU deployed. We use a memory-intensive program, memhog, to load the container, which means that the memory pages of the container are modified frequently. Meanwhile, memhog can be told how much memory to occupy, allowing us to easily migrate containers with different loads. This program is therefore suitable for testing the performance of the pre-copy method and the pre-restore method. We vary the load size running in the container over {8, 16, 32, 64, 128, 256} MB. To evaluate the performance of the different methods, we vary the number of iterations over {5, 10}. We repeat each group of experiments 10 times and take the averaged results as the final results.

We adopt the two evaluation metrics mentioned in Section 1: total migration time and absolute downtime. The total migration time refers to the total time taken from the start of the migration to its end. The absolute downtime t_abs is given by:

t_abs = t_down − t_net (6)

where t_down is the downtime, i.e., the time from the stopping of the container on the source host to its re-running on the destination host, and t_net denotes the network transmission time from the source host to the destination host, as shown in Fig. 2. The absolute downtime eliminates the impact of the network, making the comparison between the different methods fairer.

B. Experimental Result

1) Absolute downtime: Fig. 9 and Fig. 10 show the absolute downtime of the different methods at 5 iterations and 10 iterations, respectively. For all load configurations and both iteration configurations, our proposed methods are superior to the original pre-copy approach. As shown in Fig. 9, when the load is low, our methods achieve absolute downtime similar to the original pre-copy method: the absolute downtime of the RAM-based method is second only to the pre-copy method, followed by the pre-restore method. However, both the RAM-based method and the pre-restore method achieve much lower time cost as the load increases. In addition, the ensemble of the RAM-based and pre-restore methods achieves the lowest time cost, as expected, because the ensemble not only absorbs the efficient image transmission brought by the pre-restore method but also benefits from the RAM-based method's reduced I/O overhead and read/write time.

The experimental results shown in Fig. 10 are similar to those in Fig. 9 and further validate the superiority of our methods when more iterations are taken. This is in line with expectations: as discussed in Section 2 regarding the necessity of the pipeline approach, the more iterations we carry out, the greater the advantage of the proposed methods. In addition, as the number of iterations increases, the absolute downtime of the pre-restore method diverges further from that of the pre-copy and RAM-based methods. Therefore, our proposed RAM-based and pre-restore methods effectively reduce absolute downtime during live container migration, and the pre-restore method is the more effective of the two when more iterations are taken.

2) Total migration time: The total migration times of the different methods at 5 iterations and 10 iterations are presented in Fig. 11 and Fig. 12, respectively. Unlike the absolute downtime, the total migration time includes the time taken to transfer data from the source host to the destination host; i.e., the total migration time accounts for the additional I/O operations. This means that in this experiment the RAM-based container migration method can better reflect its performance advantages. As shown in Fig. 11 and Fig. 12, the pre-copy method takes the longest total migration time, followed by the pre-restore method and then the RAM-based method.

Similar to the previous experiment, when the load is low, both the RAM-based method and the pre-restore method cost migration time similar to the original pre-copy method, but they achieve much lower time cost as the load increases. Besides, the ensemble of the RAM-based and pre-restore methods achieves the lowest migration time. The experimental results illustrated in Fig. 12 show the same trend as Fig. 11. In summary, our proposed RAM-based method and pre-restore method can effectively improve the efficiency of live container migration, and in terms of total migration time, the RAM-based method is better than the pre-restore method.

V. CONCLUSION & FUTURE WORK

Live container migration is widely used in cloud computing and plays an important role in maintaining the high availability and load balancing of applications. The main concern of live container migration is downtime. In this paper, we therefore propose a pre-restore method and a complete RAM-based method to improve the efficiency of live container migration, where pre-restore leverages the idea of mapping and merging memory pages, and the RAM-based method uses memory instead of disks to store the collection of image files during the migration process. Experiments demonstrate that our methods achieve significantly lower absolute downtime and migration time, validating their effectiveness.

However, the methods we proposed can still be improved, mainly in the following aspects:

• We did not consider resource control for live container migration. During container migration, the two hosts transfer pages frequently and consume a lot of bandwidth, which is likely to affect other services on the hosts. In future work, we can consider limiting the bandwidth (and even other resources) used by live container migration, to reduce the impact on other services on the host while preserving migration performance.

¹ https://intl.cloud.tencent.com/product/cvm

Fig. 9. Absolute downtime of different methods at 5 iterations.
Fig. 10. Absolute downtime of different methods at 10 iterations.

Fig. 11. Total migration time of different methods at 5 iterations.
Fig. 12. Total migration time of different methods at 10 iterations.

• Data compression and encryption are not considered during live container migration, which can cause security problems. Moreover, if data compression and encryption were applied, each iteration might require a round of compression/decompression and encryption/decryption, so the computational load on the hosts would grow with the number of iterations, resulting in additional computing time. How to balance data encryption and compression against the number of iterations to obtain better performance is therefore also a direction for future research.

REFERENCES

[1] Michael R. Hines and Kartik Gopalan. Post-copy based live virtual machine migration using adaptive pre-paging and dynamic self-ballooning. In Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, pages 51–60. ACM, 2009.
[2] Lianpeng Li, Jian Dong, Decheng Zuo, and Jiaxi Liu. SLA-aware and energy-efficient VM consolidation in cloud data centers using host states naive Bayesian prediction model. In 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom), pages 80–87. IEEE, 2018.
[3] Najet Hamdi, Walid Chainbi, and Mohamed Ali Mahjoub. A transition state cost sensitive virtual machines consolidation. In 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom), pages 279–286. IEEE, 2018.
[4] Wes Felter, Alexandre Ferreira, Ram Rajamony, and Juan Rubio. An updated performance comparison of virtual machines and Linux containers. In 2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), pages 171–172. IEEE, 2015.
[5] Hai Jin, Li Deng, Song Wu, Xuanhua Shi, and Xiaodong Pan. Live virtual machine migration with adaptive, memory compression. In 2009 IEEE International Conference on Cluster Computing and Workshops, pages 1–10. IEEE, 2009.
[6] Andrey Mirkin, Alexey Kuznetsov, and Kir Kolyshkin. Containers checkpointing and live migration. In Proceedings of the Linux Symposium, volume 2, pages 85–90, 2008.
[7] Michael R. Hines, Umesh Deshpande, and Kartik Gopalan. Post-copy live migration of virtual machines. ACM SIGOPS Operating Systems Review, 43(3):14–26, 2009.
[8] Christopher Clark, Keir Fraser, Steven Hand, Jacob Gorm Hansen, Eric Jul, Christian Limpach, Ian Pratt, and Andrew Warfield. Live migration of virtual machines. In Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation - Volume 2, pages 273–286. USENIX Association, 2005.

[9] Takahiro Hirofuchi, Hidemoto Nakada, Satoshi Itoh, and Satoshi Sekiguchi. Reactive consolidation of virtual machines enabled by post-copy live migration. In Proceedings of the 5th International Workshop on Virtualization Technologies in Distributed Computing, pages 11–18, 2011.
[10] David Bernstein. Containers and cloud: From LXC to Docker to Kubernetes. IEEE Cloud Computing, 1(3):81–84, 2014.
[11] Yahya Al-Dhuraibi, Fawaz Paraiso, Nabil Djarallah, and Philippe Merle. Autonomic vertical elasticity of Docker containers with ElasticDocker. In 2017 IEEE 10th International Conference on Cloud Computing (CLOUD), pages 472–479. IEEE, 2017.
[12] Radostin Stoyanov and Martin J. Kollingbaum. Efficient live migration of Linux containers. In International Conference on High Performance Computing, pages 184–193. Springer, 2018.
[13] Bruce Jacob, Spencer Ng, and David Wang. Memory Systems: Cache, DRAM, Disk. Morgan Kaufmann, 2010.
