(12) Patent Application Publication (10) Pub. No.: US 2014/0188870 A1, Borthakur et al.
(19) United States
(12) Patent Application Publication (10) Pub. No.: US 2014/0188870 A1
Borthakur et al. (43) Pub. Date: Jul. 3, 2014

(54) LSM CACHE

(71) Applicants: Dhrubajyoti Borthakur, Sunnyvale, CA (US); Nagavamsi Ponnekanti, Fremont, CA (US); Jeffrey Rothschild, Los Altos, CA (US)

(72) Inventors: Dhrubajyoti Borthakur, Sunnyvale, CA (US); Nagavamsi Ponnekanti, Fremont, CA (US); Jeffrey Rothschild, Los Altos, CA (US)

(21) Appl. No.: 13/730,698

(22) Filed: Dec. 28, 2012

Publication Classification

(51) Int. Cl. G06F 17/30 (2006.01)
(52) U.S. Cl. CPC G06F 17/30312 (2013.01); USPC 707/736

(57) ABSTRACT

A variety of methods for improving efficiency in a database system are provided. In one embodiment, a method may comprise: generating multiple levels of data according to how recently the data have been updated, whereby the most recently updated data are assigned to the newest level; storing each level of data in a specific storage tier; splitting data stored in a particular storage tier into two or more groups according to access statistics of each specific data; during compaction, storing data from different groups in separate data blocks of the particular storage tier; and, when a particular data in a specific data block is requested, reading the specific data block into a low-latency storage tier.

[Drawing Sheets 1-4 of 6: FIG. 1 (data storage system including Web Servers, a Database Application, and a Database Engine), FIG. 2 (data storage example), FIG. 3 (data organized in levels), FIG. 4 (Bloom Filter); remaining drawing labels not recoverable from the text extraction.]
[Drawing Sheet 5 of 6, FIG. 5 — flow chart of operations 500:
510: Generate multiple levels of data according to how recently the data have been updated, by using a Log Structured Merge Tree (LSM).
520: Store each level of data in a particular storage tier.
530: Split data stored in the particular storage tier into two or more groups according to how frequently and/or recently the data have been accessed.
540: During compaction, store data from different groups in separate data blocks of the particular storage tier.
550: Read a specific data block into memory when a particular data in the specific data block is requested.]

[Drawing Sheet 6 of 6, FIG. 6 — diagram of a computer system; drawing content not recoverable from the text extraction.]

LSM CACHE

FIELD OF INVENTION

[0001] At least one embodiment of the present disclosure pertains to databases, and in particular to data storage with caching.

BACKGROUND

[0002] The explosion of social networking has led to extensive sharing of information including items such as videos, photos, blogs, links, etc. Existing social networking databases face growing challenges to support highly random writing and reading operations (e.g., new insertion, update and deletion).

[0003] The Log Structured Merge Tree (LSM) has become a popular solution in many write-optimized databases. An LSM database typically organizes the data in the storage media in the form of blocks and uses a cache for faster access to recently accessed data. The cache is typically smaller than the entire dataset but has lower latency than the storage media. The LSM database, upon a read request for a data record in a particular data block, pulls in the particular block from the storage media and caches it in memory. However, the cached block may include many data records that are rarely requested. A significant portion of the cache may be wasted in storing these rarely requested data records, resulting in a low cache hit ratio.

[0004] Thus, there is a need to improve the efficiency of the cache in an LSM database, and value in doing so. Solutions to the problem have been long sought but thus far have eluded those skilled in the art.

SUMMARY

[0005] Embodiments of the present disclosure provide a variety of methods for improving efficiency in a database system. In some embodiments, a database may be implemented by using a multi-tiered caching architecture, where multiple levels of data are created by use of LSM trees according to how recently the data have been updated. In some embodiments, storage media may be split into multiple tiers according to the latency of each individual storage medium. Each level of data can be assigned to an appropriate storage tier. For example, data in the newer levels may be stored in lower-latency storage tiers, while data in the older levels may be stored in higher-latency storage tiers.

[0006] In some embodiments, data may be organized in the form of data blocks. Each block may contain a number of separate data. The separate data stored within a particular storage tier may be split into two or more groups according to how recently or frequently the data have been accessed (i.e., "hot" and "cold" groups). The two or more data groups may be stored in separate data blocks of a given storage tier. The particular storage tier may be a high-latency, an intermediate-latency, or a low-latency storage tier.

[0007] When a specific data is read into memory or a low-latency storage tier from one of those data blocks, all other "hot" data in that data block may also be read into the memory or the low-latency storage tier at the same time. This may reduce the likelihood that a future read request requires access to higher-latency tiers. Further, by splitting data into hot data and cold data, data blocks having hot data may be cached more frequently than data blocks having cold data. More hot data can be cached in a given memory or low-latency storage tier, which may reduce the overall read latency.

[0008] In some embodiments, a web server and/or a database system may monitor access statistics of each data and categorize data into a plurality of groups according to access frequencies (i.e., update or read frequencies). Data with similar access frequencies may be assigned to the same group and stored together in one or more data blocks. In some embodiments, data in a particular data group may be stored according to the frequencies with which the data have been accessed together in the past. Data that were accessed together frequently in the past may be stored together in a data block.

[0009] In some embodiments, characteristics of each data may be monitored and analyzed. Related data with certain commonalities may be more likely to be accessed together in the future. The database system may store these related data together in a data block.

[0010] Some embodiments of the present disclosure have other aspects, elements, features, and steps in addition to or in place of what is described above. These potential additions and replacements are described throughout the rest of the specification.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] One or more embodiments of the present disclosure are illustrated by way of example and not limited to the figures of the accompanying drawings, in which like references indicate similar elements. One skilled in the art will readily recognize that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.

[0012] FIG. 1 illustrates a schematic block diagram of a system for data storage over a network, according to one embodiment of the present disclosure.

[0013] FIG. 2 illustrates an example of data storage, according to another embodiment of the present disclosure.

[0014] FIG. 3 illustrates an example of organized data in the form of tiers, in accordance with yet another embodiment of the invention.

[0015] FIG. 4 illustrates an example of a Bloom Filter used for data storage in a system, in accordance with yet another embodiment of the invention.

[0016] FIG. 5 illustrates a flow chart showing a set of operations 500 that may be used for improving efficiency in an LSM database, in accordance with yet another embodiment of the invention.

[0017] FIG. 6 illustrates a diagram of a computer system, in accordance with yet another embodiment of the invention.

DETAILED DESCRIPTION

[0018] The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be, but not necessarily are, references to the same embodiment. Such references mean at least one of the embodiments.

[0019] Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be ...

... a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, an iPhone(R), an iPad(R), a computational device, a web-browsing device, a television (e.g., with a set-top box and/or a remote control), a vehicle-based device, and/or any suitable portable, mobile, stationary, and handheld communication device.
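The level-and-tier scheme of paragraph [0005] and steps 510-520 of FIG. 5 can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the patented implementation: the tier names, the three-level layout, and the `LsmStore` class are invented for the example.

```python
# Hypothetical storage tiers, ordered from lowest to highest latency.
TIERS = ["ram", "ssd", "hdd"]

class LsmStore:
    """Toy LSM-style store: level 0 holds the most recently updated
    records; compaction pushes data into older levels, which map to
    higher-latency storage tiers (steps 510-520 of FIG. 5)."""

    def __init__(self, num_levels=3):
        self.levels = [dict() for _ in range(num_levels)]

    def tier_for_level(self, level):
        # Newer (lower-numbered) levels are assigned lower-latency tiers.
        return TIERS[min(level, len(TIERS) - 1)]

    def put(self, key, value):
        # New or updated data always lands in the newest level.
        self.levels[0][key] = value

    def compact(self, level):
        # Merge one level down into the next (older) level.
        self.levels[level + 1].update(self.levels[level])
        self.levels[level].clear()
```

For instance, a record written via `put` sits in level 0 (the "ram" tier here) until `compact(0)` migrates it to level 1 (the "ssd" tier).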
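Paragraphs [0006] and [0008] and steps 530-540 of FIG. 5 describe partitioning records into "hot" and "cold" groups by access statistics, so that each group can be written to separate data blocks during compaction. A hedged sketch of such a partition; the function name and the threshold-based grouping rule are assumptions made for illustration:

```python
def split_hot_cold(records, access_counts, hot_threshold=2):
    """Partition records into a 'hot' and a 'cold' group by access
    frequency. During compaction, each group would be written to its
    own data blocks so that frequently accessed records cluster
    together (steps 530-540 of FIG. 5)."""
    hot, cold = {}, {}
    for key, value in records.items():
        group = hot if access_counts.get(key, 0) >= hot_threshold else cold
        group[key] = value
    return hot, cold
```

A record with no recorded accesses defaults to the cold group, matching the intuition that untouched data should not crowd the cache.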
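Paragraphs [0003] and [0007] and step 550 of FIG. 5 rest on block-granular reads: fetching one record pulls its whole block into the low-latency tier, so any other hot records stored in the same block become cache hits. A toy sketch of that behavior, assuming an invented `BlockCache` class and dict-based "blocks":

```python
class BlockCache:
    """On a read miss, pull the entire data block holding the requested
    key into a low-latency cache (step 550 of FIG. 5), so other records
    co-located in that block are then served from cache."""

    def __init__(self, blocks):
        # blocks: mapping block_id -> {key: value}, standing in for
        # data blocks on a higher-latency storage tier.
        self.blocks = blocks
        self.key_to_block = {k: b for b, kv in blocks.items() for k in kv}
        self.cache = {}          # cached blocks (the low-latency tier)
        self.slow_reads = 0      # reads that had to touch slow storage

    def get(self, key):
        block_id = self.key_to_block[key]
        if block_id not in self.cache:
            self.slow_reads += 1
            self.cache[block_id] = dict(self.blocks[block_id])  # whole block
        return self.cache[block_id][key]
```

This also shows the failure mode motivating the disclosure: if a block mixes hot and cold records, the cold ones occupy cache space for free, which is exactly what the hot/cold block separation avoids.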
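FIG. 4 (paragraph [0015]) refers to a Bloom Filter used for data storage; this excerpt does not detail its role, but LSM databases commonly consult a Bloom filter to skip data blocks that definitely cannot contain a key. A minimal sketch under that common usage; the bit count, hash construction, and class shape are all assumptions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a compact probabilistic set that never
    yields false negatives, so a 'no' answer lets the database skip
    reading a block entirely."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, key):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key):
        # False means definitely absent; True means possibly present.
        return all(self.bits[pos] for pos in self._positions(key))
```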