NVMe-Based Caching in Video-Delivery CDNs

Thomas Colonna, 1901942
Double Degree INSA Rennes
Supervisor: Sébastien Lafond
Faculty of Science and Engineering
Åbo Akademi University
2020

Abstract

In HTTP-based video-delivery CDNs (content delivery networks), a critical component is the caching servers that serve clients with content obtained from an origin server. These caches store the content they obtain in RAM or on disks, so that additional clients can be served without fetching the content from the origin again. For most use cases, access to the disk remains the limiting factor; achieving good performance therefore requires a significant amount of RAM to avoid these accesses, which increases the cost.

In this master's thesis, we benchmark various approaches to providing storage, such as regular disks and NVMe-based SSDs. Based on these insights, we design a caching module for a web server relying on kernel bypass, implemented using the reference framework SPDK.

The outcome of the master's thesis is a caching module leveraging specific properties of NVMe disks, and benchmark results for the various types of disks with the two approaches to caching (i.e., regular filesystem-based or NVMe-specific).

Contents

1 Introduction
2 Background
  2.1 Caching in the context of CDNs
  2.2 Performance of the different disk models
    2.2.1 Hard-Disk Drive
    2.2.2 Random-Access Memory
    2.2.3 Solid-State Drive
    2.2.4 Non-Volatile Main Memory
    2.2.5 Performance comparison of 2019-2020 storage devices
  2.3 Analysing Nginx
    2.3.1 Event processing
    2.3.2 Caching with Nginx
    2.3.3 Overhead
    2.3.4 Kernel bypasses
3 Related Work
  3.1 Kernel bypass for network
  3.2 Kernel bypass for storage
  3.3 Optimizing the existing access to the storage
  3.4 Optimizing the application
  3.5 Optimizing the CDN
4 Technical Choices
  4.1 First measurements
  4.2 Data visualization
  4.3 Disks performance comparison
5 NVMe-specific caching module
  5.1 Architecture of the module
    5.1.1 Asynchronous programming
  5.2 Integration with Nginx
  5.3 Experimental protocol
  5.4 Results
6 Conclusion
  6.1 Future work
1. Introduction

Broadpeak is a company created in 2010, designing and providing components for content delivery networks. The company focuses on content delivery networks for video content, such as IPTV, cable, and on-demand video. Broadpeak provides its services to content providers and internet service providers such as Orange.

The classical ways of consuming videos and films, broadcasts and DVDs, have become less popular since the emergence of services providing on-demand video via the internet. To provide this kind of service, technologies such as the HLS [1] and DASH [2] protocols have been created. These protocols were built on norms and protocols already existing for the web, allowing video streaming to reuse a large part of the infrastructure created for the web. HLS and DASH contributed to popularizing on-demand video streaming by allowing a large number of devices (smartphones, laptops, televisions...) to access the system and by creating a standard ensuring interoperability between all the devices and servers in the network.

Now, the focus is on creating systems able to manage the increasing number of users switching from traditional television to OTT (Over The Top) streaming because of the advantages of this technology: choice, replay, time-shifting... The COVID-19 lockdowns showed the importance of having optimized and scalable systems able to handle the huge demand for content. In the context of video distribution, one key element is the Content Delivery Network (CDN). The role of the CDN is to deliver the content once the user is connected to it. Because video content is large (e.g., 1.8 GB for an hour of video delivered at 4 Mbps), the pressure on CDN throughput is higher for video than for any other content. Note that this trend is strengthening, as the shift to large screens and Ultra HD doubles the video bitrate. In addition, video CDNs have stringent performance requirements: a CDN must be able to store this huge amount of data and at the same time answer user requests at a constant rate. The constant rate is important because even the slightest fluctuation can pause playback for one or more users, which has a strongly negative impact on the user experience.
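As a quick check of the figure above: 4 Mb/s × 3600 s = 14,400 Mb; at 8 bits per byte this is 1,800 MB, i.e., roughly 1.8 GB per hour of video.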
Video streaming relies heavily on I/O operations, and I/O operations are the bottleneck of a Content Delivery Network (CDN). To increase the performance of the system, we can increase the number of servers in the CDN and, thus, its cost and energy consumption. Improving I/O performance instead allows reducing the number of servers without losing performance, leading to lower cost and lower energy consumption.

Concerning I/O, with the evolution of storage-device technology, the software overhead, negligible until now, is becoming huge compared to the time spent in the disk, especially the overhead related to the isolation between the kernel and user space. This isolation requires a context switch every time a storage system is accessed [3]. The POSIX API used to access files is another element that slows down the process.

In this master's thesis, we analyze the performance of a streaming server using standard modules of Nginx. We show the time distribution between the kernel and the disk, and compare it to a configuration where an NVMe disk is used, to illustrate the impact of the overhead (Chapter 4). We present a new Nginx caching module using SPDK [4], aiming to eliminate this overhead when using NVMe disks (Chapter 5).

The design of our module is similar to EvFS [5], but we specialized it for caching purposes, and we do not expose a POSIX API. We moved the API that interacts with the storage device outside the kernel, reducing the overhead due to the file-system API. We also designed it with the specific workload of a caching server in mind, to maximize the lifetime of the disks' flash memory. Several projects aim at a similar objective: Bluestore [6] and NVMeDirect [7] are two examples.

2. Background

2.1 Caching in the context of CDNs

A Content Delivery Network (CDN) is a network composed of servers located in various geographical places [8]. Its purpose is to quickly deliver content such as web pages, files, videos... The infrastructure of a CDN is conceived to reduce the network load and the server load, leading to an improvement in the performance of the applications using it [9].

The infrastructure of a CDN relies heavily on the principle of caching. Web caching is a technology developed to allow websites and applications to cope with the increasing number of internet users [10, 11, 12, 13]. Web caching can reduce bandwidth consumption, improve load balancing, reduce network latency, and provide higher availability.

To achieve caching we need at least two servers: the first one is the origin and the second one the caching server. When a user requests content from the server, the request first goes through the caching server, which checks whether the requested content is already in the cache. If it is, the caching server provides the content to the user. If not, the caching server requests the content from the origin server that stores the data, provides it to the user, and keeps a copy of the content for the next time someone requests it, as illustrated in figure 2.1 and sketched in the code below.

[Figure 2.1: caching diagram.]

Using the same principle, a CDN is composed of one or several origin servers, which store the content to be delivered, and multiple caching servers spread across the geographical area it serves, as shown in figure 2.2. When a user requests content from the CDN, the request goes through the closest caching server. Thus, the latency is greatly reduced, because the data does not need to travel all around the world, and the workload is spread between all the servers.

[Figure 2.2: CDN diagram.]
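To summarize the hit/miss flow just described, here is a small self-contained sketch. The fixed-size table, the key/body sizes, and the fake origin_fetch are illustrative stand-ins, not part of any real caching server.

    #include <stdio.h>
    #include <string.h>

    /* Toy in-memory cache illustrating the hit/miss flow described above. */
    #define CACHE_SLOTS 8

    struct entry { char key[64]; char body[64]; int used; };
    static struct entry cache[CACHE_SLOTS];

    static const char *origin_fetch(const char *key)
    {
        /* Stand-in for an HTTP request to the origin server. */
        printf("MISS %s -> fetching from origin\n", key);
        return "segment-bytes";
    }

    static const char *serve(const char *key)
    {
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (cache[i].used && strcmp(cache[i].key, key) == 0) {
                printf("HIT  %s\n", key);
                return cache[i].body;          /* cache hit: serve directly */
            }

        const char *body = origin_fetch(key);  /* cache miss: ask the origin */
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (!cache[i].used) {              /* keep a copy for next time */
                snprintf(cache[i].key, sizeof cache[i].key, "%s", key);
                snprintf(cache[i].body, sizeof cache[i].body, "%s", body);
                cache[i].used = 1;
                break;
            }
        return body;
    }

    int main(void)
    {
        serve("/video/seg1.ts");  /* first request: miss, goes to the origin */
        serve("/video/seg1.ts");  /* second request: hit, served from cache  */
        return 0;
    }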
This infrastructure is widely used by the industry because of its efficiency, its reliability, and its performance. However, in some cases this is not enough, as in a video CDN. In the context of a video CDN, objects are large, so the request payload is larger and the number of requests is reduced. Since there are fewer requests to serve, the load on the CPU is reduced. Because the content to deliver is larger, a video CDN requires more storage capacity. To improve the performance of these systems, we can leverage the technology used to store the data.

2.2 Performance of the different disk models

2.2.1 Hard-Disk Drive

The Hard-Disk Drive (HDD) is a permanent storage technology made of rotating magnetic disks that store the information. To read or write data with this kind of storage, the disk needs to place the read head on the correct location before it can start to transfer the data.

The main advantage of this kind of storage is that it is a well-known technology, and devices can store a huge amount of information. HDDs have the best capacity/price ratio, making them appealing for use in caching servers. However, due to the multiple steps required to locate the data before reading or writing, this technology has high latency. The latency is even greater when accessing small files; in a video-CDN context these could be the video manifests or the subtitle files. The limited number of read heads installed on an HDD also increases the latency of concurrent accesses to the data stored on it.

Before the popularization of the technologies presented below, some techniques were developed to improve the performance of HDDs. Some writing strategies perform better than others: it is more efficient to split a write across several disks than to write everything to one disk [14]. The main drawback of this technique is that it requires more disks while the storage capacity is not increased. This principle is used in RAID (Redundant Array of Independent Disks) technology, sketched below. It requires a larger number of disks and greatly improves all aspects of a storage system's performance: speed, reliability, and availability. However, such a system still performs worse than the other systems presented below.
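As a toy illustration of the striping idea (RAID 0 style; the redundancy that gives RAID its reliability benefits is omitted here), the following self-contained sketch maps logical blocks onto disks round-robin. NUM_DISKS and the block/offset layout are illustrative assumptions.

    #include <stdio.h>

    /* RAID-0-style striping: logical block i of a write goes to disk
     * (i % N) at stripe row (i / N), so consecutive blocks can be
     * written to different disks in parallel. Purely illustrative. */
    #define NUM_DISKS 4

    int main(void)
    {
        for (int block = 0; block < 8; block++) {
            int disk   = block % NUM_DISKS;  /* round-robin across disks */
            int offset = block / NUM_DISKS;  /* stripe row on that disk  */
            printf("logical block %d -> disk %d, offset %d\n",
                   block, disk, offset);
        }
        return 0;
    }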
2.2.2 Random-Access Memory

Random-Access Memory (RAM) is a kind of data storage used to store "working data". It is mainly used for this purpose because it is volatile memory: when the power supply of the system is shut off, all the data stored in RAM is lost. In the context of a caching server, since the server is supposed to be constantly running, this is not a real problem, and a backup policy can be implemented to periodically copy all the data stored in RAM onto a more reliable storage device.

The appeal of storing data in RAM is the extremely low latency: it is the fastest memory technology (excluding CPU caches). On a caching server delivering video to a large number of clients, the performance gained by storing the data in RAM instead of on HDDs is crucial. This is why some server models have terabytes of RAM. Although RAM performs well, it is also orders of magnitude more expensive than HDDs, as shown in table 2.1.

2.2.3 Solid-State Drive

Solid-State Drives (SSDs) are a newer type of storage device, built from electronic components (NAND flash memory). Unlike RAM, SSDs are non-volatile memory storage. They are still slower to access than RAM, but they are orders of magnitude faster than HDDs. SSDs are now so fast that the protocol used to interact with the disk has become the bottleneck.

To interact with storage disks, a computer must use a protocol supported by the disks. Today there are multiple protocols; the most used are Serial AT Attachment (SATA) and Serial Attached SCSI (SAS). These protocols were designed with slow Hard-Disk Drives (HDDs) in mind. To use the full potential of SSDs, a new protocol has been created: the Non-Volatile Memory Host Controller Interface Specification, or NVM Express (NVMe). This protocol requires the disk to be connected to the computer through PCI Express (PCIe). It allows more efficient access to the data: higher speed and better parallel access. With this new protocol, SSDs offer great performance and are cheaper than RAM.

Since then, some hardware manufacturers have made significant progress and created disks that can be considered Ultra-Low Latency SSDs [15, 16]. These disks are designed to exploit the NVMe protocol to the maximum.

2.2.4 Non-Volatile Main Memory

Non-Volatile Main Memory (NVMM) is a storage-device design composed of flash memory, like SSDs, connected to the computer through the DIMM ports. One example of this kind of storage is the Intel Optane DIMM [17].

2.2.5 Performance comparison of 2019-2020 storage devices

Table 2.1 shows the characteristics and performance of eight NVMe SSDs compared to standard RAM. Even though RAM has the best performance, it comes at a high cost. To build a caching server with a large capacity, NVMe SSDs are a viable solution thanks to their price and capacity.

2.3 Analysing Nginx

Nginx is an asynchronous web server. Two large CDN-related operators, Cloudflare and Netflix, have publicly claimed that they use Nginx. It is also used as the load balancer for HTTP services in Kubernetes. It can fulfill multiple roles: web server, reverse proxy, load balancer, mail proxy, and HTTP cache.

2.3.1 Event processing

Nginx uses a multi-process pattern in which a master process controls worker processes. This allows reloading the configuration of the server without interrupting the service. Nginx is asynchronous: each request made to the server creates an event that is posted to the list of events, and the event loop picks an event and starts processing the associated request. The pseudo-code of the event loop is presented in figure 2.3.

    while (true) {
        next_timer = get_next_timer();
        wait_for_io_events(XX, next_timer - now);
        process_timer_event();
        process_list_event();
    }

Figure 2.3: Pseudo-code of the event loop of Nginx

The main advantage of this programming model is that requests are non-blocking. Whenever a request is waiting for further I/O (reading or writing from disk or network), it does not block the process, so other requests can be processed in the meantime, minimizing the CPU time spent doing nothing.
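For concreteness, below is a minimal, self-contained loop of the same shape built on Linux epoll. It is purely illustrative and not Nginx's actual implementation: a real server would register sockets with epoll_ctl and maintain a timer queue instead of the fixed timeout used here.

    #include <stdio.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    /* Minimal epoll-based loop with the same shape as figure 2.3. */
    int main(void)
    {
        int ep = epoll_create1(0);
        struct epoll_event events[64];

        /* A real server would register listening/connection sockets here
         * with epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev). */

        for (int iter = 0; iter < 3; iter++) {   /* while (true) in a server */
            int timeout_ms = 100;                /* next_timer - now */
            int n = epoll_wait(ep, events, 64, timeout_ms);
            if (n == 0)
                printf("timer expired: run timer handlers\n");
            for (int i = 0; i < n; i++)          /* I/O events, if any */
                printf("fd %d ready: run its handler\n", events[i].data.fd);
        }
        close(ep);
        return 0;
    }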
2.3.2 Caching with Nginx

Nginx is often used to build a caching server. The standard cache implemented in Nginx uses the file-system abstraction to store the cached data. The cache can therefore be stored on any connected storage device, even in RAM via a RAMFS (a file system located in the RAM of the computer). To access the cache, Nginx must use a system call to ask the kernel to retrieve the data. The goal of this cache is to be deployable everywhere; to achieve that, it does not use any special properties of the storage device's hardware.

Since Nginx is modular, some modules implement different caching mechanisms. For instance, "nginx_tcache" is a module developed for tengine [18], a fork of Nginx. This module uses a cache located in RAM: it stores the data to cache in a large RAM segment allocated at the start of the server. Since the cache is stored in RAM, whenever the server is shut down, all the data stored in the cache is lost. The same drawback applies to the standard cache of Nginx if we choose to use a RAMFS.

2.3.3 Overhead

The role of the operating system is to provide an abstraction of the hardware, allowing applications to run without worrying about the hardware they run on. This abstraction provides an API usable by any program launched on the computer. On UNIX-based systems this API is called POSIX, and it is used by almost every piece of software developed for UNIX systems.

However, when software needs to use a specific feature of a hardware component, or needs faster access to storage or to the network, this API is not efficient, mostly because it is too generic to expose special features.

Each time we want to access a file on a disk or send a request over the network, the program executes a system call, i.e., it calls a function provided by the kernel of the operating system. This call requires a context switch from user space, the area of memory applications can use, to kernel space, the area of memory used by the kernel of the operating system and its extensions. This context switch is heavy on the CPU, especially if it is done frequently: a large number of cycles are lost because of it.

When a program executes a system call, the calling thread initializes some registers and is then suspended to allow the kernel to execute the system call. Once the call is finished, the kernel restores the context of the suspended thread where it was stopped and then resumes the execution of the thread. If the system call is an I/O operation, the suspension of the thread can be long, leading to a significant loss of performance.
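The per-call cost of this user/kernel round trip can be observed directly. The self-contained sketch below times a large number of minimal system calls; the exact figure depends on the CPU, kernel version, and speculative-execution mitigations, so treat it as a rough illustration rather than a benchmark.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Time N minimal syscalls to estimate the per-call overhead. */
    int main(void)
    {
        const long N = 1000000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++)
            syscall(SYS_getpid);        /* user -> kernel -> user round trip */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.0f ns per syscall (avg over %ld calls)\n", ns / N, N);
        return 0;
    }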
2.3.4 Kernel bypasses

Some use cases do not need the abstraction of the operating system and are considerably slowed down by the many context switches, for instance when an application makes intensive use of the network. To improve performance, one available solution is to bypass the kernel: the application directly uses the hardware it needs without asking the kernel, so there is no context switch, leading to a significant improvement in the application's performance. Software that makes intensive use of the network frequently uses this solution; Open vSwitch [19], a software switch, uses DPDK to improve its performance.

Without the kernel API for accessing the network or the storage disks, the application performs better, but it is more difficult to develop, because everything must be handled by the application itself, even the security normally handled by the kernel.

3. Related Work

3.1 Kernel bypass for network

One of the first uses of kernel bypass is in networking. The DPDK (Data Plane Development Kit) project is a popular project used to achieve kernel bypass for networking. The F-Stack project is also frequently used for kernel bypass for networking.

3.2 Kernel bypass for storage

Ours is not the first project to use kernel bypass for storage. Ceph's Bluestore [6] is one example; however, its main goal is not to act as a memory cache, but to store data permanently.

EvFS [5] is another kernel-bypass storage project. Its goal is to provide a POSIX API in user space, meaning that a program that needs data on an NVMe drive does not need to make a system call to retrieve it, eliminating the overhead. Exposing a POSIX API makes it easier to integrate into pre-existing systems.

The NVMeDirect project [7] aims to give applications direct access to the NVMe SSD, to improve application performance. With this project, other applications can still use the default I/O stack to access the storage device, whereas the SPDK framework claims the whole device, making it impossible to access the storage device without going through SPDK (a minimal sketch of the SPDK access model is given after section 3.3). The project also provides several features intended to make interaction with the storage device easier and more flexible.

3.3 Optimizing the existing access to the storage

The I/O stack and the abstractions developed with it were created when storage devices were slow compared to the CPU. Some projects are trying to modernize the kernel's existing I/O stacks to use the full potential of the new storage technologies.

Enberg, Rao, and Tarkoma proposed a new structure for the operating system, which they call the parakernel [20]. The role of this structure is to partition the hardware of the computer to allow different applications to use it at the same time. Thanks to this separation, it is easier to parallelize access to I/O devices. With this structure, a program that needs some resources asks the parakernel for access; the parakernel isolates a part of the device and gives the program access to it. The program can then interact with the device directly, without involving the kernel.

Lee, Shin, Song, Ham, Lee, and Jeong are creating a new I/O stack for NVMe devices, optimized for low-latency NVMe SSDs such as the Intel Optane [16] or the Samsung Z-SSD [15]. They are not creating a new API; they change the implementation of some functions of the POSIX standard to take advantage of the hardware. To improve performance, they developed a special block I/O layer, lighter and faster but only usable with NVMe SSDs. They also overlap the creation of the structures needed to store a request's result with the data transfer from the device to memory, and they implement several lazy mechanisms.
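To make the kernel-bypass storage path discussed in sections 2.3.4 and 3.2 concrete, below is a sketch of a single asynchronous NVMe read with SPDK, modeled on the shape of SPDK's hello_world example. It assumes one NVMe controller with a valid first namespace; exact function signatures and initialization options vary between SPDK releases, so treat this as an outline rather than a drop-in implementation.

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;
    static struct spdk_nvme_ns *g_ns;
    static volatile bool g_done;

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;                    /* attach to every controller found */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        g_ctrlr = ctrlr;
        g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);   /* first namespace */
    }

    static void read_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_done = true;                  /* data has landed in our DMA buffer */
    }

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "cache_sketch";     /* illustrative app name */
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* Enumerate NVMe controllers on the PCIe bus: no kernel block layer. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || !g_ns)
            return 1;

        struct spdk_nvme_qpair *qp =
            spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
        void *buf = spdk_zmalloc(4096, 4096, NULL, SPDK_ENV_SOCKET_ID_ANY,
                                 SPDK_MALLOC_DMA);

        /* Asynchronous read of one block at LBA 0; read_done fires on completion. */
        spdk_nvme_ns_cmd_read(g_ns, qp, buf, 0, 1, read_done, NULL, 0);
        while (!g_done)                 /* poll from user space, never sleep */
            spdk_nvme_qpair_process_completions(qp, 0);

        printf("read completed via SPDK; no system call on the I/O path\n");
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qp);
        spdk_nvme_detach(g_ctrlr);
        return 0;
    }

Note the polling loop at the end: instead of sleeping inside a system call, the application polls the completion queue from user space, which is exactly what removes the context-switch overhead described in section 2.3.3.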
